Asadpour-Zeynali, Karim; Saeb, Elhameh
2016-01-01
Three antituberculosis medications, rifampicin, isoniazid and pyrazinamide, are investigated in this work. The ultraviolet (UV) spectra of these compounds overlap, so suitable chemometric methods are helpful for their simultaneous spectrophotometric determination. A generalized version of the net analyte signal standard addition method (GNASSAM) was used for the determination of the three antituberculosis medications as a model system. In the generalized net analyte signal standard addition method, only one standard solution is prepared for all analytes. This standard solution contains a mixture of all analytes of interest, and its addition to the sample increases the net analyte signal of each analyte in proportion to that analyte's concentration in the added standard solution. To determine the concentration of each analyte in several synthetic mixtures, the UV spectra of the pure analytes and of each sample were recorded in the range 210-550 nm. The standard addition procedure was performed for each sample, the UV spectrum was recorded after each addition, and the results were finally analyzed by the net analyte signal method. The obtained concentrations show acceptable performance of GNASSAM in these cases. PMID:28243267
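As an aside on the standard-addition logic described in this abstract, the sketch below (not the authors' code) shows the core calculation: a signal standing in for one analyte's net analyte signal is regressed against the concentration added from the mixed standard, and the sample concentration is recovered from the x-intercept. All numbers are hypothetical.

```python
import numpy as np

# Minimal standard-addition sketch (illustrative, not the GNASSAM implementation).
# 'signal' stands in for the net analyte signal (NAS) magnitude of one analyte
# measured after each addition of the mixed standard solution.
added_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])    # analyte added, ug/mL (hypothetical)
signal = np.array([0.21, 0.35, 0.49, 0.62, 0.77])   # NAS magnitude (hypothetical)

# Fit signal = slope * added_conc + intercept; the unknown concentration in the
# sample corresponds to the magnitude of the x-axis intercept.
slope, intercept = np.polyfit(added_conc, signal, 1)
c_sample = intercept / slope
print(f"estimated sample concentration: {c_sample:.2f} ug/mL")
```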
Anticipating Surprise: Analysis for Strategic Warning
2002-12-01
Contents excerpt: Intentions versus Capabilities (p. 17); Introduction to the Analytical Method; ...Analysis (p. 32); Specifics of the Analytical Method (p. 42). "Why is it that 'no one'—a slight but not great exaggeration—believes in the indications method, despite its demonstrably good record in these..."
Werner, S.L.; Johnson, S.M.
1994-01-01
As part of its primary responsibility concerning water as a national resource, the U.S. Geological Survey collects and analyzes samples of ground water and surface water to determine water quality. This report describes the method used since June 1987 to determine selected total-recoverable carbamate pesticides present in water samples. High-performance liquid chromatography is used to separate N-methyl carbamates, N-methyl carbamoyloximes, and an N-phenyl carbamate which have been extracted from water and concentrated in dichloromethane. Analytes, surrogate compounds, and reference compounds are eluted from the analytical column within 25 minutes. Two modes of analyte detection are used: (1) a photodiode-array detector measures and records ultraviolet-absorbance profiles, and (2) a fluorescence detector measures and records fluorescence from an analyte derivative produced when analyte hydrolysis is combined with chemical derivatization. Analytes are identified and confirmed in a three-stage process by use of chromatographic retention time, ultraviolet (UV) spectral comparison, and derivatization/fluorescence detection. Quantitative results are based on the integration of single-wavelength UV-absorbance chromatograms and on comparison with calibration curves derived from external analyte standards that are run with samples as part of an instrumental analytical sequence. Estimated method detection limits vary for each analyte, depending on the sample matrix conditions, and range from 0.5 microgram per liter to as low as 0.01 microgram per liter. Reporting levels for all analytes have been set at 0.5 microgram per liter for this method. Corrections on the basis of percentage recoveries of analytes spiked into distilled water are not applied to values calculated for analyte concentration in samples. These values for analyte concentrations instead indicate the quantities recovered by the method from a particular sample matrix.
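The external-standard quantitation step described above can be pictured with a small sketch; the concentrations and peak areas below are invented, and the simple linear calibration is an assumption for illustration only.

```python
import numpy as np

# Hypothetical external-standard calibration for one carbamate analyte:
# integrated single-wavelength UV-absorbance peak areas versus standard levels.
std_conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0])               # ug/L (assumed levels)
std_area = np.array([120.0, 245.0, 1230.0, 2480.0, 4950.0])   # peak areas (assumed)

slope, intercept = np.polyfit(std_conc, std_area, 1)

def quantify(sample_area):
    """Convert an integrated peak area to a concentration via the calibration curve."""
    return (sample_area - intercept) / slope

print(f"sample with area 800 -> {quantify(800.0):.2f} ug/L")
```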
A Novel Method of Enhancing Grounded Theory Memos with Voice Recording
ERIC Educational Resources Information Center
Stocker, Rachel; Close, Helen
2013-01-01
In this article the authors present the recent discovery of a novel method of supplementing written grounded theory memos with voice recording, the combination of which may provide significant analytical advantages over solely the traditional written method. Memo writing is an essential component of a grounded theory study, however it is often…
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, Norwood B.; Walker, J.F.
1992-01-01
Seventeen methods for estimating ice-affected streamflow are evaluated for potential use with the U.S. Geological Survey streamflow-gaging station network. The methods evaluated were identified by written responses from U.S. Geological Survey field offices and by a comprehensive literature search. The methods selected and techniques used for applying the methods are described in this report. The methods are evaluated by comparing estimated results with data collected at three streamflow-gaging stations in Iowa during the winter of 1987-88. Discharge measurements were obtained at 1- to 5-day intervals during the ice-affected periods at the three stations to define an accurate baseline record. Discharge records were compiled for each method based on data available, assuming a 6-week field schedule. The methods are classified into two general categories, subjective and analytical, depending on whether individual judgment is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used at streamflow-gaging stations, where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice-adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge-ratio and multiple-regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
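For readers unfamiliar with the discharge-ratio method mentioned above, the following sketch outlines one plausible implementation under simplifying assumptions (a synthetic open-water rating and made-up field measurements); it is not the report's procedure verbatim.

```python
import numpy as np

# Discharge-ratio sketch: ratios of measured ice-affected discharge to the
# open-water rating discharge are computed on measurement days and linearly
# interpolated to correct the rating on the days in between.
days = np.arange(0, 43)                        # a six-week ice-affected period
rating_q = 50 + 5 * np.sin(days / 7.0)         # open-water rating discharge, ft3/s (synthetic)

meas_days = np.array([0, 14, 28, 42])          # field-visit measurement days
meas_q = np.array([48.0, 30.0, 25.0, 40.0])    # measured discharge, ft3/s (hypothetical)

ratio = meas_q / np.interp(meas_days, days, rating_q)   # ice-effect ratios at visits
daily_ratio = np.interp(days, meas_days, ratio)         # interpolated daily ratios
estimated_q = rating_q * daily_ratio                    # corrected daily discharge
print(estimated_q[:7].round(1))
```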
-Omic and Electronic Health Records Big Data Analytics for Precision Medicine
Wu, Po-Yen; Cheng, Chih-Wen; Kaddi, Chanchala D.; Venugopalan, Janani; Hoffman, Ryan; Wang, May D.
2017-01-01
Objective: Rapid advances of high-throughput technologies and wide adoption of electronic health records (EHRs) have led to fast accumulation of -omic and EHR data. These voluminous complex data contain abundant information for precision medicine, and big data analytics can extract such knowledge to improve the quality of health care. Methods: In this article, we present -omic and EHR data characteristics, associated challenges, and data analytics including data pre-processing, mining, and modeling. Results: To demonstrate how big data analytics enables precision medicine, we provide two case studies, including identifying disease biomarkers from multi-omic data and incorporating -omic information into EHR. Conclusion: Big data analytics is able to address -omic and EHR data challenges for a paradigm shift towards precision medicine. Significance: Big data analytics makes sense of -omic and EHR data to improve healthcare outcomes and has a long-lasting societal impact. PMID:27740470
Review of levoglucosan in glacier snow and ice studies: Recent progress and future perspectives.
You, Chao; Xu, Chao
2018-03-01
Levoglucosan (LEV) in glacier snow and ice layers provides a fingerprint of fire activity, ranging from modern air pollution to ancient fire emissions. In this study, we review recent progress in our understanding and application of LEV in glaciers, including analytical methods, transport and post-depositional processes, and historical records. We firstly summarize progress in analytical methods for determination of LEV in glacier snow and ice. Then, we discuss the processes influencing the records of LEV in snow and ice layers. Finally, we make some recommendations for future work, such as assessing the stability of LEV and obtaining continuous records, to increase reliability of the reconstructed ancient fire activity. This review provides an update for researchers working with LEV and will facilitate the further use of LEV as a biomarker in paleo-fire studies based on ice core records. Copyright © 2017 Elsevier B.V. All rights reserved.
Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A
2007-01-01
The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.
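The screening idea in this abstract, projecting 14-analyte records to two dimensions and inspecting outliers, can be sketched as follows. Generative topographic mapping is not available in scikit-learn, so PCA stands in here purely to illustrate the workflow; the records are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative 14-D -> 2-D projection for visual anomaly screening.
# PCA is a stand-in for the generative topographic mapping used in the study.
rng = np.random.default_rng(0)
records = rng.normal(size=(13670, 14))     # synthetic stand-in for 14 analytes per patient
records[0] += 8.0                          # one deliberately anomalous record

coords = PCA(n_components=2).fit_transform(records)

# Flag records far from the bulk of the 2-D cloud (simple distance rule).
dist = np.linalg.norm(coords - coords.mean(axis=0), axis=1)
suspect = np.where(dist > dist.mean() + 4 * dist.std())[0]
print("potentially anomalous records:", suspect)
```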
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cartas, Raul; Mimendia, Aitor; Valle, Manel del
2009-05-23
Calibration models for multi-analyte electronic tongues have commonly been built using a set of sensors, at least one per analyte under study. Complex signals recorded with these systems are formed by the sensors' responses to the analytes of interest plus interferents, from which a multivariate response model is then developed. This work describes a data treatment method for the simultaneous quantification of two species in solution employing the signal from a single sensor. The approach used here takes advantage of the complex information recorded in one electrode's transient after insertion of the sample to build the calibration models for both analytes. The signal from the electrode was first processed by a discrete wavelet transform to extract useful information and reduce its length, and then by artificial neural networks to fit a model. Two different potentiometric sensors were used as a case study to corroborate the effectiveness of the approach.
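A rough sketch of the wavelet-compression-plus-ANN pipeline described above is given below; the transients, wavelet choice, decomposition level, and network size are all assumptions, not the authors' settings.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

# Single-sensor, two-analyte calibration sketch: compress each transient with a
# discrete wavelet transform, then fit a small neural network to the features.
rng = np.random.default_rng(1)
n_samples, n_points = 60, 256
conc = rng.uniform(0.1, 1.0, size=(n_samples, 2))            # two analytes

t = np.linspace(0, 1, n_points)
signals = (conc[:, [0]] * np.exp(-3 * t) +                    # synthetic transients
           conc[:, [1]] * np.exp(-8 * t) +
           0.01 * rng.normal(size=(n_samples, n_points)))

# Keep only the coarse approximation coefficients as compact features.
features = np.array([pywt.wavedec(s, "db4", level=4)[0] for s in signals])

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(features[:50], conc[:50])
print("predicted vs true:", model.predict(features[50:51]).round(2), conc[50].round(2))
```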
A Performance-Based Method of Student Evaluation
ERIC Educational Resources Information Center
Nelson, G. E.; And Others
1976-01-01
The Problem Oriented Medical Record (which allows practical definition of the behavioral terms thoroughness, reliability, sound analytical sense, and efficiency as they apply to the identification and management of patient problems) provides a vehicle for performance-based evaluation. A test-run use of the record is reported. (JT)
Perspectives on Using Video Recordings in Conversation Analytical Studies on Learning in Interaction
ERIC Educational Resources Information Center
Rusk, Fredrik; Pörn, Michaela; Sahlström, Fritjof; Slotte-Lüttge, Anna
2015-01-01
Video is currently used in many studies to document the interaction in conversation analytical (CA) studies on learning. The discussion on the method used in these studies has primarily focused on the analysis or the data construction, whereas the relation between data construction and analysis is rarely brought to attention. The aim of this…
Ben-Assuli, Ofir; Leshno, Moshe
2016-09-01
In the last decade, health providers have implemented information systems to improve accuracy in medical diagnosis and decision-making. This article evaluates the impact of an electronic health record on emergency department physicians' diagnosis and admission decisions. A decision analytic approach using a decision tree was constructed to model the admission decision process to assess the added value of medical information retrieved from the electronic health record. Using a Bayesian statistical model, this method was evaluated on two coronary artery disease scenarios. The results show that the cases of coronary artery disease were better diagnosed when the electronic health record was consulted and led to more informed admission decisions. Furthermore, the value of medical information required for a specific admission decision in emergency departments could be quantified. The findings support the notion that physicians and patient healthcare can benefit from implementing electronic health record systems in emergency departments. © The Author(s) 2015.
Accuracy of selected techniques for estimating ice-affected streamflow
Walker, John F.
1991-01-01
This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements have been made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques are used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers have independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
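The performance measures and the Kruskal-Wallis comparison named in this abstract are straightforward to compute; the sketch below uses synthetic daily discharges and three hypothetical hydrographers rather than the paper's data.

```python
import numpy as np
from scipy.stats import kruskal

# Mean and standard deviation of daily errors for each estimated record,
# plus a Kruskal-Wallis test across three hypothetical hydrographers.
rng = np.random.default_rng(2)
baseline = rng.uniform(20, 60, size=90)                      # daily baseline discharge

estimates = {name: baseline + rng.normal(0, 3 + i, size=90)  # per-hydrographer records
             for i, name in enumerate(["A", "B", "C"])}

for name, est in estimates.items():
    err = est - baseline
    print(name, "mean error:", round(err.mean(), 2), "std of error:", round(err.std(), 2))

stat, p = kruskal(*[est - baseline for est in estimates.values()])
print("Kruskal-Wallis p-value:", round(p, 3))
```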
NASA Technical Reports Server (NTRS)
Mukhopadhyay, A. K.
1975-01-01
Linear frequency domain methods are inadequate in analyzing the 1975 Viking Orbiter (VO75) digital tape recorder servo due to dominant nonlinear effects such as servo signal limiting, unidirectional servo control, and static/dynamic Coulomb friction. The frequency loop (speed control) servo of the VO75 tape recorder is used to illustrate the analytical tools and methodology of system redundancy elimination and high order transfer function verification. The paper compares time-domain performance parameters derived from a series of nonlinear time responses with the available experimental data in order to select the best possible analytical transfer function representation of the tape transport (mechanical segment of the tape recorder) from several possible candidates. The study also shows how an analytical time-response simulation taking into account most system nonlinearities can pinpoint system redundancy and overdesign stemming from a strictly empirical design approach. System order reduction is achieved through truncation of individual transfer functions and elimination of redundant blocks.
Quantitative Evaluation of the Use of Actigraphy for Neurological and Psychiatric Disorders
Song, Yu; Kwak, Shin; Yoshida, Sohei; Yamamoto, Yoshiharu
2014-01-01
Quantitative and objective evaluation of disease severity and/or drug effect is necessary in clinical practice. Wearable accelerometers such as an actigraph enable long-term recording of a patient's movement during activities, and they can be used for quantitative assessment of symptoms due to various diseases. We reviewed some applications of actigraphy with analytical methods that are sufficiently sensitive and reliable to determine the severity of diseases and disorders such as motor and nonmotor disorders like Parkinson's disease, sleep disorders, depression, behavioral and psychological symptoms of dementia (BPSD) for vascular dementia (VD), seasonal affective disorder (SAD), and stroke, as well as the effects of drugs used to treat them. We believe it is possible to develop analytical methods to assess additional neurological or psychiatric disorders using actigraphy records. PMID:25214709
40 CFR 141.33 - Record maintenance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... laboratory reports may be kept, or data may be transferred to tabular summaries, provided that the following...) Laboratory and person responsible for performing analysis; (5) The analytical technique/method used; and (6...
NASA Technical Reports Server (NTRS)
Ghista, D. N.; Sandler, H.
1974-01-01
An analytical method is presented for determining the oxygen consumption rate of the intact, working (as opposed to empty but beating) human left ventricle (LV). Use is made of experimental recordings of the chamber pressure and the associated dimensions of the LV. LV dimensions are determined by cineangiocardiography, and the chamber pressure is obtained by means of fluid-filled catheters during retrograde or transeptal catheterization. An analytical method incorporating these data is then employed for the evaluation of the LV coronary oxygen consumption in five subjects. Oxygen consumption for these subjects was also obtained by the conventional clinical method in order to evaluate the reliability of the proposed method.
Towards Personalized Medicine: Leveraging Patient Similarity and Drug Similarity Analytics
Zhang, Ping; Wang, Fei; Hu, Jianying; Sorrentino, Robert
2014-01-01
The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytics to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood of responding to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied to a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance. PMID:25717413
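The label-propagation idea sketched in this abstract can be illustrated with a toy patient-drug example. The update rule below (scores spread through row-normalized patient and drug similarity matrices) is a generic formulation, not necessarily the authors' exact algorithm, and all matrices are random placeholders.

```python
import numpy as np

# Toy label propagation over a heterogeneous patient-drug graph.
rng = np.random.default_rng(3)
n_pat, n_drug = 6, 3

def row_normalize(m):
    return m / m.sum(axis=1, keepdims=True)

Sp = row_normalize(rng.uniform(0, 1, (n_pat, n_pat)))     # patient-patient similarity
Sd = row_normalize(rng.uniform(0, 1, (n_drug, n_drug)))   # drug-drug similarity
Y = np.zeros((n_pat, n_drug))                             # known patient-drug responses
Y[0, 0] = Y[2, 1] = 1.0

alpha, F = 0.6, Y.copy()
for _ in range(50):                                       # iterate toward a fixed point
    F = alpha * Sp @ F @ Sd.T + (1 - alpha) * Y

print(F.round(2))   # propagated effectiveness scores for every patient-drug pair
```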
Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention
Fisher, Brian; Smith, Jennifer; Pike, Ian
2017-01-01
Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology—group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders’ observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of ‘common ground’ among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders’ verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve ‘common ground’ among diverse stakeholders about health data and their implications. PMID:28895928
Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention.
Al-Hajj, Samar; Fisher, Brian; Smith, Jennifer; Pike, Ian
2017-09-12
Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology, group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders' observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of 'common ground' among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders' verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve 'common ground' among diverse stakeholders about health data and their implications.
2017-01-01
Background: Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Materials and Methods: Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and to examine the types of platforms used in the selected diagnostic labs. Results: The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results. PMID:28107395
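The proportion test mentioned above compares rejection rates between sites; a generic two-proportion z test looks like the sketch below, with made-up counts standing in for the study's data.

```python
import math
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for H0: the two proportions are equal."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return z, 2 * (1 - norm.cdf(abs(z)))

# Hypothetical example: 60 hemolyzed samples of 550 in lab A vs 35 of 540 in lab B.
z, p = two_proportion_z(60, 550, 35, 540)
print(f"z = {z:.2f}, p = {p:.4f}")
```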
Live load test and failure analysis for the steel deck truss bridge over the New River in Virginia.
DOT National Transportation Integrated Search
2009-01-01
This report presents the methods used to model a steel deck truss bridge over the New River in Hillsville, Virginia. These methods were evaluated by comparing analytical results with data recorded from 14 members during live load testing. The researc...
Research strategies that result in optimal data collection from the patient medical record
Gregory, Katherine E.; Radovinsky, Lucy
2010-01-01
Data obtained from the patient medical record are often a component of clinical research led by nurse investigators. The rigor of the data collection methods correlates to the reliability of the data and, ultimately, the analytical outcome of the study. Research strategies for reliable data collection from the patient medical record include the development of a precise data collection tool, the use of a coding manual, and ongoing communication with research staff. PMID:20974093
A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.
Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema
2016-01-01
A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analyzing, improving and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the pre-analytic, analytic and post-analytic phases was analysed. Improvement strategies were proposed at the monthly intradepartmental meetings, and control of the units with high error rates was provided. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory enabled a reduction of the error rates mainly in the pre-analytic and analytic phases.
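The error accounting in this abstract reduces to a few lines of arithmetic. The phase counts below are taken from the abstract itself; the opportunity count used for the defects-per-million (DPMO) figure is an assumption added for illustration.

```python
# Six Sigma bookkeeping sketch: phase breakdown and a DPMO-style rate.
errors = {"pre-analytic": 56, "analytic": 45, "post-analytic": 6}   # from the abstract
total_errors = sum(errors.values())

for phase, n in errors.items():
    print(f"{phase}: {n} errors ({100 * n / total_errors:.1f}%)")

opportunities = 8_000_000            # assumed: specimens x error opportunities each
dpmo = total_errors / opportunities * 1_000_000
print(f"defects per million opportunities: {dpmo:.1f}")
```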
Models and methods to characterize site amplification from a pair of records
Safak, E.
1997-01-01
The paper presents a tutorial review of the models and methods that are used to characterize site amplification from pairs of rock- and soil-site records, and introduces some new techniques with better theoretical foundations. The models and methods discussed include spectral and cross-spectral ratios, spectral ratios for downhole records, response spectral ratios, constant amplification factors, parametric models, physical models, and time-varying filters. An extensive analytical and numerical error analysis of spectral and cross-spectral ratios shows that, probabilistically, cross-spectral ratios give more reliable estimates of site amplification. Spectral ratios should not be used to determine site amplification from downhole-surface recording pairs because of the feedback in the downhole sensor. Response spectral ratios are appropriate for low frequencies, but overestimate the amplification at high frequencies. The best method to be used depends on how much precision is required in the estimates.
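As a concrete illustration of the simplest method reviewed above, the spectral ratio, the sketch below estimates amplification as the square root of the ratio of soil-site to rock-site power spectra; both "records" are synthetic, not real strong-motion data.

```python
import numpy as np
from scipy.signal import welch

# Spectral-ratio sketch: soil-site over rock-site amplitude spectra.
fs = 100.0                                   # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)
rock = rng.normal(size=t.size)                               # reference (rock) motion
soil = 1.2 * rock + 0.3 * np.sin(2 * np.pi * 2.0 * t)        # "amplified" soil motion

f, p_rock = welch(rock, fs=fs, nperseg=1024)
_, p_soil = welch(soil, fs=fs, nperseg=1024)
amplification = np.sqrt(p_soil / p_rock)     # amplitude spectral ratio vs frequency
print("median amplification:", round(float(np.median(amplification)), 2))
```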
2017-06-01
...prescription practices, the Standard Inpatient Data Record (SIDR) to determine healthcare-associated exposures, and Defense Manpower Data Center (DMDC) records. No new methods or limitations were applied to this annual summary. As such, this report presents analytical results and discussion of CY 2016 data.
Vandermause, Roxanne; Barbosa-Leiker, Celestina; Fritz, Roschelle
2014-12-01
This multimethod, qualitative study provides results for educators of nursing doctoral students to consider. By combining the expertise of an empirical analytical researcher (who uses statistical methods) and an interpretive phenomenological researcher (who uses hermeneutic methods), a course was designed that would place doctoral students in the midst of multiparadigmatic discussions while learning fundamental research methods. Field notes and iterative analytical discussions led to patterns and themes that highlight the value of this innovative pedagogical application. Together with one of the students, and using content analysis and interpretive phenomenological approaches, we analyzed data from field notes recorded in real time over the period the course was offered. This article describes the course and the study analysis, and offers the pedagogical experience as transformative. A link to a sample syllabus is included in the article. The results encourage nurse educators of doctoral nursing students to focus educational practice on multiple methodological perspectives. Copyright 2014, SLACK Incorporated.
AlDeeb, Omar A A; Mahgoub, Hoda; Foda, Nagwa H
2013-01-01
Sucralose is a nonnutritive, zero-calorie artificial sweetener. It is a chlorinated sugar substitute that is about 600 times as sweet as sucrose. It is produced from sucrose when three chlorine atoms replace three hydroxyl groups. It is consumed as tablets (Blendy) by diabetic and obese patients. It is also used as an excipient in drug manufacturing. Unlike other artificial sweeteners, it is stable when heated and can, therefore, be used in baked and fried foods. The FDA approved sucralose in 1998. This review presents a comprehensive profile for sucralose including physical, analytical, and ADME profiles and methods of its synthesis. Spectral data for X-ray powder diffraction and DSC of sucralose are recorded and presented. The authors also recorded FT-IR, (1)H- and (13)C NMR, and ESI-MS spectra. Interpretation with detailed spectral assignments is provided. The analytical profile of sucralose covered the compendial methods, spectroscopic, and different chromatographic analytical techniques. The ADME profile covered all absorption, distribution, metabolism, and elimination data in addition to pharmacokinetics and pharmacological effects of sucralose. Some nutritional aspects for sucralose in obesity and diabetes are also presented. Both chemical and microbiological synthesis schemes for sucralose are reviewed and included. Copyright © 2013 Elsevier Inc. All rights reserved.
Fang, Jun; Park, Se-Chul; Schlag, Leslie; Stauden, Thomas; Pezoldt, Jörg; Jacobs, Heiko O
2014-12-03
In the field of sensors that target the detection of airborne analytes, Corona/lens-based-collection provides a new path to achieve a high sensitivity. An active-matrix-based analyte collection approach referred to as "airborne analyte memory chip/recorder" is demonstrated, which takes and stores airborne analytes in a matrix to provide an exposure history for off-site analysis. © 2014 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The use of plant-specific pyrolysis products as biomarkers in peat deposits
NASA Astrophysics Data System (ADS)
Schellekens, Judith; Bradley, Jonathan A.; Kuyper, Thomas W.; Fraga, Isabel; Pontevedra-Pombal, Xabier; Vidal-Torrado, Pablo; Abbott, Geoffrey D.; Buurman, Peter
2015-09-01
Peatlands are archives of environmental change that can be driven by climate and human activity. Proxies for peatland vegetation composition provide records of (local) environmental conditions that can be linked to both autogenic and allogenic factors. Analytical pyrolysis offers a molecular fingerprint of peat, and thereby a suite of environmental proxies. Here we investigate analytical pyrolysis as a method for biomarker analysis. Pyrolysates of 48 peatland plant species were compared, comprising seventeen lichens, three Sphagnum species, four non-Sphagnum mosses, eleven graminoids (Cyperaceae, Juncaceae, Poaceae), five Ericaceae and six species from other families. This resulted in twenty-one potential biomarkers, including new markers for lichens (3-methoxy-5-methylphenol) and graminoids (ferulic acid methyl ester). The potential of the identified biomarkers to reconstruct vegetation composition is discussed according to their depth records in cores from six peatlands from boreal, temperate and tropical biomes. The occurrence of markers for Sphagnum, graminoids and lichens in all six studied peat deposits indicates that they persist in peat of thousands of years old, in different vegetation types and under different conditions. In order to facilitate the quantification of biomarkers from pyrolysates, typically expressed as proportion (%) of the total quantified pyrolysis products, an internal standard (5-α-androstane) was introduced. Depth records of the Sphagnum marker 4-isopropenylphenol from the upper 3 m of a Sphagnum-dominated peat, from samples analysed with and without internal standard showed a strong positive correlation (r2 = 0.72, P < 0.0005, n = 12). This indicates that application of an internal standard is a reliable method to assess biomarker depth records, which enormously facilitates the use of analytical pyrolysis in biomarker research by avoiding quantification of a high number of products.
40 CFR 141.131 - Analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Disinfectant Residuals, Disinfection Byproducts, and... Constitution Avenue, NW., EPA West, Room B102, Washington, DC 20460, or at the National Archives and Records...: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr_locations.html. EPA Method 552...
Interim reliability evaluation program, Browns Ferry fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, M.E.
1981-01-01
An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation programs, simplifying the recording and displaying of events, yet maintaining the system of identifying faults. The level of investigation is not changed. The analytical thought process inherent in the conventional method is not compromised. But the abbreviated method takes less time, and the fault modes are much more visible.
Jessen, Torben E; Höskuldsson, Agnar T; Bjerrum, Poul J; Verder, Henrik; Sørensen, Lars; Bratholm, Palle S; Christensen, Bo; Jensen, Lene S; Jensen, Maria A B
2014-09-01
Direct measurement of chemical constituents in complex biologic matrices without the use of analyte specific reagents could be a step forward toward the simplification of clinical biochemistry. Problems related to reagents such as production errors, improper handling, and lot-to-lot variations would be eliminated as well as errors occurring during assay execution. We describe and validate a reagent free method for direct measurement of six analytes in human plasma based on Fourier-transform infrared spectroscopy (FTIR). Blood plasma is analyzed without any sample preparation. FTIR spectrum of the raw plasma is recorded in a sampling cuvette specially designed for measurement of aqueous solutions. For each analyte, a mathematical calibration process is performed by a stepwise selection of wavelengths giving the optimal least-squares correlation between the measured FTIR signal and the analyte concentration measured by conventional clinical reference methods. The developed calibration algorithms are subsequently evaluated for their capability to predict the concentration of the six analytes in blinded patient samples. The correlation between the six FTIR methods and corresponding reference methods were 0.87
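The stepwise wavelength-selection step described above can be sketched as a greedy least-squares search; the synthetic "spectra" and the five-wavelength cap below are assumptions for illustration, not the study's settings.

```python
import numpy as np

# Greedy stepwise wavelength selection for a reagent-free calibration model:
# repeatedly add the wavelength that most reduces the least-squares error
# between spectra and the reference concentrations.
rng = np.random.default_rng(5)
n_samples, n_wavelengths = 80, 300
spectra = rng.normal(size=(n_samples, n_wavelengths))
true_coef = np.zeros(n_wavelengths)
true_coef[[40, 120, 250]] = [0.8, -0.5, 0.3]
conc = spectra @ true_coef + 0.05 * rng.normal(size=n_samples)   # reference values

selected = []
for _ in range(5):                                   # pick up to five wavelengths
    best_j, best_sse = None, np.inf
    for j in range(n_wavelengths):
        if j in selected:
            continue
        cols = spectra[:, selected + [j]]
        coef, *_ = np.linalg.lstsq(cols, conc, rcond=None)
        sse = np.sum((conc - cols @ coef) ** 2)
        if sse < best_sse:
            best_j, best_sse = j, sse
    selected.append(best_j)

print("selected wavelength indices:", selected)
```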
Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F
2017-01-01
Two new, simple, and specific green analytical methods are proposed: zero-crossing first-derivative and chemometric-based spectrophotometric artificial neural network (ANN). The proposed methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, coenzyme Q10 (Q10) and vitamin E, in their mixtures and pharmaceutical preparations. The first method is based on the handling of spectrophotometric data with the first-derivative technique, in which both nutraceuticals were determined in ethanol, each at the zero crossing of the other. The amplitudes of the first-derivative spectra for Q10 and vitamin E were recorded at 285 and 235 nm respectively, and correlated with their concentrations. The linearity ranges of Q10 and vitamin E were 10-60 and 5.6-70 μg⋅mL-1, respectively. The second method, ANN, is a multivariate calibration method and it was developed and applied for the simultaneous determination of both analytes. A training set of 90 different synthetic mixtures containing Q10 and vitamin E in the ranges of 0-100 and 0-556 μg⋅mL-1, respectively, was prepared in ethanol. The absorption spectra of the training set were recorded in the spectral region of 230-300 nm. By relating the concentration sets (x-block) with their corresponding absorption data (y-block), gradient-descent back-propagation ANN calibration could be computed. To validate the proposed network, a set of 45 synthetic mixtures of the two drugs was used. Both proposed methods were successfully applied for the assay of Q10 and vitamin E in their laboratory-prepared mixtures and in their pharmaceutical tablets with excellent recovery. These methods offer advantages over other methods because of low-cost equipment, time-saving measures, and environmentally friendly materials. In addition, no chemical separation prior to analysis was needed. The ANN method was superior to the derivative technique because ANN can determine both drugs under nonlinear experimental conditions. Consequently, ANN would be the method of choice in the routine analysis of Q10 and vitamin E tablets. No interference from common pharmaceutical additives was observed. Student's t-test and the F-test were used to compare the two methods. No significant difference was recorded.
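The zero-crossing first-derivative idea in this abstract can be illustrated with synthetic Gaussian bands standing in for the Q10 and vitamin E spectra; the band positions and widths below are invented, so only the mechanics of the method carry over.

```python
import numpy as np

# First-derivative, zero-crossing sketch: differentiate the mixture spectrum and
# read its amplitude at the wavelength where the interferent's derivative is zero.
wl = np.linspace(230, 300, 701)                        # wavelength axis, nm

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

spec_a, spec_b = band(275, 12), band(255, 15)          # stand-ins for the two analytes
mixture = 0.6 * spec_a + 0.3 * spec_b

d_mix = np.gradient(mixture, wl)                       # first derivative of the mixture
d_b = np.gradient(spec_b, wl)

zc = wl[np.argmin(np.abs(d_b))]                        # zero-crossing of analyte B
amp = np.interp(zc, wl, d_mix)                         # amplitude attributable to A
print(f"zero-crossing at {zc:.1f} nm, derivative amplitude {amp:.4f}")
```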
Wasike, Chrilukovian B; Magothe, Thomas M; Kahi, Alexander K; Peters, Kurt J
2011-01-01
Animal recording in Kenya is characterised by erratic producer participation and high drop-out rates from the national recording scheme. This study evaluates factors influencing the efficiency of the beef and dairy cattle recording system. Factors influencing the efficiency of animal identification and registration, pedigree and performance recording, and genetic evaluation and information utilisation were generated using qualitative and participatory methods. Pairwise comparison of factors was done by strengths, weaknesses, opportunities and threats (SWOT)-analytical hierarchy process (AHP) analysis, and priority scores determining their relative importance to the system were calculated using the eigenvalue method. For identification and registration, and evaluation and information utilisation, external factors had high priority scores. For pedigree and performance recording, threats and weaknesses had the highest priority scores. Strengths factors could not sustain the required efficiency of the system. Weaknesses of the system predisposed it to threats. Available opportunities could be explored as interventions to restore efficiency in the system. Defensive strategies such as reorienting the system to offer utility benefits to recording, forming symbiotic and binding collaboration between recording organisations and NARS, and development of institutions to support recording were feasible.
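The eigenvalue priority step referenced in this abstract comes from the analytic hierarchy process: priorities are the normalized principal eigenvector of a pairwise-comparison matrix. The Saaty-style judgments below are invented for illustration.

```python
import numpy as np

# AHP priority sketch: principal eigenvector of a reciprocal comparison matrix.
# Rows/columns: strengths, weaknesses, opportunities, threats (hypothetical judgments).
A = np.array([
    [1,   1/3, 1/2, 1/5],
    [3,   1,   2,   1/2],
    [2,   1/2, 1,   1/3],
    [5,   2,   3,   1  ],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
priorities = principal / principal.sum()

for name, w in zip(["strengths", "weaknesses", "opportunities", "threats"], priorities):
    print(f"{name}: {w:.3f}")
```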
IDENTIFICATION OF COMPOUNDS IN SOUTH AFRICAN STREAM SAMPLES USING ION COMPOSITION ELUCIDATION (ICE)
Analytical methods for target compounds usually employ clean-up procedures to remove potential mass interferences and utilize selected ion recording (SIR) to provide low detection limits. Such an approach, however, could overlook non-target compounds that might be present and tha...
User's guide to the Radiometric Age Data Bank (RADB)
Zartman, Robert Eugene; Cole, James C.; Marvin, Richard F.
1976-01-01
The Radiometric Age Data Bank (RADB) has been established by the U.S. Geological Survey, as a means for collecting and organizing the estimated 100,000 radiometric ages presently published for the United States. RADB has been constructed such that a complete sample description (location, rock type, etc.), literature citation, and extensive analytical data are linked to form an independent record for each sample reported in a published work. Analytical data pertinent to the potassium-argon, rubidium-strontium, uranium-thorium-lead, lead-alpha, and fission-track methods can be accommodated, singly or in combinations, for each record. Data processing is achieved using the GIPSY program (University of Oklahoma) which maintains the data file and builds, updates, searches, and prints the records using simple yet versatile command statements. Searching and selecting records is accomplished by specifying the presence, absence, or (numeric or alphabetic) value of any element of information in the data bank, and these specifications can be logically linked to develop sophisticated searching strategies. Output is available in the form of complete data records, abbreviated tests, or columnar tabulations. Samples of data-reporting forms, GIPSY command statements, output formats, and data records are presented to illustrate the comprehensive nature and versatility of the Radiometric Age Data Bank.
Diving into the analysis of time-depth recorder and behavioural data records: A workshop summary
NASA Astrophysics Data System (ADS)
Womble, Jamie N.; Horning, Markus; Lea, Mary-Anne; Rehberg, Michael J.
2013-04-01
Directly observing the foraging behavior of animals in the marine environment can be extremely challenging, if not impossible, as such behavior often takes place beneath the surface of the ocean and in extremely remote areas. In lieu of directly observing foraging behavior, data from time-depth recorders and other types of behavioral data recording devices are commonly used to describe and quantify the behavior of fish, squid, seabirds, sea turtles, pinnipeds, and cetaceans. Often the definitions of actual behavioral units and analytical approaches may vary substantially which may influence results and limit our ability to compare behaviors of interest across taxonomic groups and geographic regions. A workshop was convened in association with the Fourth International Symposium on Bio-logging in Hobart, Tasmania on 8 March 2011, with the goal of providing a forum for the presentation, review, and discussion of various methods and approaches that are used to describe and analyze time-depth recorder and associated behavioral data records. The international meeting brought together 36 participants from 14 countries from a diversity of backgrounds including scientists from academia and government, graduate students, post-doctoral fellows, and developers of electronic tagging technology and analysis software. The specific objectives of the workshop were to host a series of invited presentations followed by discussion sessions focused on (1) identifying behavioral units and metrics that are suitable for empirical studies, (2) reviewing analytical approaches and techniques that can be used to objectively classify behavior, and (3) identifying cases when temporal autocorrelation structure is useful for identifying behaviors of interest. Outcomes of the workshop included highlighting the need to better define behavioral units and to devise more standardized processing and analytical techniques in order to ensure that results are comparable across studies and taxonomic groups.
Approximate analytical modeling of leptospirosis infection
NASA Astrophysics Data System (ADS)
Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani
2017-11-01
Leptospirosis is an infectious disease carried by rodents which can cause death in humans. The disease spreads directly through contact with feces, urine or through bites of infected rodents and indirectly via water contaminated with urine and droppings from them. A significant increase in the number of leptospirosis cases in Malaysia, caused by the recent severe floods, was recorded during the heavy rainfall season. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations has been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, has been used to conduct numerical simulations so as to gain insight into the spread of leptospirosis infection.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-10
... Protection, DHS/CBP--017 Analytical Framework for Intelligence (AFI) System of Records AGENCY: Privacy Office... Homeland Security/U.S. Customs and Border Protection, DHS/CBP--017 Analytical Framework for Intelligence... Analytical Framework for Intelligence (AFI) System of Records'' from one or more provisions of the Privacy...
An Interoperable Electronic Medical Record-Based Platform for Personalized Predictive Analytics
ERIC Educational Resources Information Center
Abedtash, Hamed
2017-01-01
Precision medicine refers to the delivering of customized treatment to patients based on their individual characteristics, and aims to reduce adverse events, improve diagnostic methods, and enhance the efficacy of therapies. Among efforts to achieve the goals of precision medicine, researchers have used observational data for developing predictive…
Corrected Four-Sphere Head Model for EEG Signals.
Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K
2017-01-01
The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.
Corrected Four-Sphere Head Model for EEG Signals
Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V.; Dale, Anders M.; Einevoll, Gaute T.; Wójcik, Daniel K.
2017-01-01
The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations. PMID:29093671
Trujillo-Rodríguez, María J; Pino, Verónica; Psillakis, Elefteria; Anderson, Jared L; Ayala, Juan H; Yiantzi, Evangelia; Afonso, Ana M
2017-04-15
This work proposes a new vacuum headspace solid-phase microextraction (Vac-HSSPME) method combined with gas chromatography-flame ionization detection for the determination of free fatty acids (FFAs) and phenols. All target analytes of the multicomponent solution were volatile, but their low Henry's Law constants rendered them amenable to Vac-HSSPME. The ability of a new and easy to construct Vac-HSSPME sampler to maintain low-pressure conditions for extended sampling times was concurrently demonstrated. Vac-HSSPME and regular HSSPME methods were independently optimized and the results were compared at all times. The performances of four commercial SPME fibers and two polymeric ionic liquid (PIL)-based SPME fibers were evaluated and the best overall results were obtained with the adsorbent-type CAR/PDMS fiber. For the concentrations used here, competitive displacement became more intense for the smaller and more volatile analytes of the multi-component solution when lowering the sampling pressure. The extraction time profiles showed that Vac-HSSPME had a dramatic positive effect on extraction kinetics. The local maxima of adsorbed analytes recorded with Vac-HSSPME occurred faster, but were always lower than those with regular HSSPME due to the faster analyte-loading from the multicomponent solution. Increasing the sampling temperature during Vac-HSSPME reduced the extraction efficiency of smaller analytes due to the enhancement in water molecule collisions with the fiber. This effect was not recorded for the larger phenolic compounds. Based on the optimum values selected, Vac-HSSPME required a shorter extraction time and milder sampling conditions than regular HSSPME: 20 min and 35 °C for Vac-HSSPME versus 40 min and 45 °C for regular HSSPME. The performances of the optimized Vac-HSSPME and regular HSSPME procedures were assessed, and the Vac-HSSPME method proved to be more sensitive, with lower limits of detection (from 0.14 to 13 μg L-1), and better intra-day precision (relative standard deviation values < 10% at the lowest spiked level) than regular HSSPME for almost all target analytes. The proposed Vac-HSSPME method was successfully applied to quantify FFAs and phenols in milk and milk derivative samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe
2014-05-01
The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are pointless. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance recording is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L(-1), limit of detection 20 mg L(-1), limit of quantification 61 mg L(-1). The method was applied to 43 unifloral honey samples from the Marche region, Italy. Copyright © 2013 Elsevier Ltd. All rights reserved.
Investigation of low-loss spectra and near-edge fine structure of polymers by PEELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heckmann, W.
Transmission electron microscopy has changed from a purely imaging method to an analytical method. This has been facilitated particularly by equipping electron microscopes with energy filters and with parallel electron energy loss spectrometers (PEELS). Because of their relatively high energy resolution (1 to 2 eV) they provide information not only on the elements present but also on the type of bonds between the molecular groups. Polymers are radiation sensitive and the molecular bonds change as the spectrum is being recorded. This can be observed with PEEL spectrometers that are able to record spectra with high sensitivity and in rapid succession.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-07
... Border Protection, DHS/CBP--017 Analytical Framework for Intelligence (AFI) System of Records AGENCY... Framework for Intelligence (AFI) System of Records'' and this proposed rulemaking. In this proposed... Protection, DHS/CBP--017 Analytical Framework for Intelligence (AFI) System of Records.'' AFI enhances DHS's...
A Method for Continuous (239)Pu Determinations in Arctic and Antarctic Ice Cores.
Arienzo, M M; McConnell, J R; Chellman, N; Criscitiello, A S; Curran, M; Fritzsche, D; Kipfstuhl, S; Mulvaney, R; Nolan, M; Opel, T; Sigl, M; Steffensen, J P
2016-07-05
Atmospheric nuclear weapons testing (NWT) resulted in the injection of plutonium (Pu) into the atmosphere and subsequent global deposition. We present a new method for continuous semiquantitative measurement of (239)Pu in ice cores, which was used to develop annual records of fallout from NWT in ten ice cores from Greenland and Antarctica. The (239)Pu was measured directly using an inductively coupled plasma-sector field mass spectrometer, thereby reducing analysis time and increasing depth-resolution with respect to previous methods. To validate this method, we compared our one year averaged results to published (239)Pu records and other records of NWT. The (239)Pu profiles from the Arctic ice cores reflected global trends in NWT and were in agreement with discrete Pu profiles from lower latitude ice cores. The (239)Pu measurements in the Antarctic ice cores tracked low latitude NWT, consistent with previously published discrete records from Antarctica. Advantages of the continuous (239)Pu measurement method are (1) reduced sample preparation and analysis time; (2) no requirement for additional ice samples for NWT fallout determinations; (3) measurements are exactly coregistered with all other chemical, elemental, isotopic, and gas measurements from the continuous analytical system; and (4) the long half-life means the (239)Pu record is stable through time.
40 CFR 141.40 - Monitoring requirements for unregulated contaminants.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Contaminant List, in paragraph (a)(3) of this section. EPA will provide sample containers, provide pre-paid... Testing to be Sampled After Notice of Analytical Methods Availability] 1—Contaminant 2—CAS registry number... Records Administration (NARA). For information on availability of this material at NARA, call 202-741-6030...
Talking about Brine Shrimps: Three Ways of Analysing Pupil Conversations.
ERIC Educational Resources Information Center
Tunnicliffe, Sue Dale; Reiss, Michael J.
1999-01-01
Applies three distinct analyses to recorded and transcribed student conversations (n=240) about brine shrimps. The complementary analytic methods provide information on the content of pupils' conversations in terms of the observations made, the ways in which pupils make sense of their observations, and the ways in which students use conversation…
Exploring Middle School Students' Use of Inscriptions in Project-Based Science Classrooms
ERIC Educational Resources Information Center
Wu, Hsin-Kai; Krajcik, Joseph S.
2006-01-01
This study explores seventh graders' use of inscriptions in a teacher-designed project-based science unit. To investigate students' learning practices during the 8-month water quality unit, we collected multiple sources of data (e.g., classroom video recordings, student artifacts, and teacher interviews) and employed analytical methods that drew…
Simmons, Blake A [San Francisco, CA; Talin, Albert Alec [Livermore, CA
2009-11-27
A method for producing metal nanoparticles that when associated with an analyte material will generate an amplified SERS spectrum when the analyte material is illuminated by a light source and a spectrum is recorded. The method for preparing the metal nanoparticles comprises the steps of (i) forming a water-in-oil microemulsion comprising a bulk oil phase, a dilute water phase, and one or more surfactants, wherein the water phase comprises a transition metal ion; (ii) adding an aqueous solution comprising a mild reducing agent to the water-in-oil microemulsion; (iii) stirring the water-in-oil microemulsion and aqueous solution to initiate a reduction reaction resulting in the formation of a fine precipitate dispersed in the water-in-oil microemulsion; and (iv) separating the precipitate from the water-in-oil microemulsion.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Classified Information (DOE-44). (I) Administrative and Analytical Records and Reports (DOE-81). (J) Law... result of learning the identities of confidential informants; to prevent disclosure of investigative... DOE employees). (K) Administrative and Analytical Records and Reports (DOE-81). (L) Law Enforcement...
Lack of grading agreement among international hemostasis external quality assessment programs
Olson, John D.; Jennings, Ian; Meijer, Piet; Bon, Chantal; Bonar, Roslyn; Favaloro, Emmanuel J.; Higgins, Russell A.; Keeney, Michael; Mammen, Joy; Marlar, Richard A.; Meley, Roland; Nair, Sukesh C.; Nichols, William L.; Raby, Anne; Reverter, Joan C.; Srivastava, Alok; Walker, Isobel
2018-01-01
Laboratory quality programs rely on internal quality control and external quality assessment (EQA). EQA programs provide unknown specimens for the laboratory to test. The laboratory's result is compared with other (peer) laboratories performing the same test. EQA programs assign target values using a variety of statistical tools, and a performance assessment of ‘pass’ or ‘fail’ is made. EQA provider members of the international organization, external quality assurance in thrombosis and hemostasis, took part in a study to compare the outcome of performance analysis using the same data set of laboratory results. Eleven EQA organizations using eight different analytical approaches participated. Data for a normal and prolonged activated partial thromboplastin time (aPTT) and a normal and reduced factor VIII (FVIII) from 218 laboratories were sent to the EQA providers who analyzed the data set using their method of evaluation for aPTT and FVIII, determining the performance for each laboratory record in the data set. Providers also summarized their statistical approach to assignment of target values and laboratory performance. Each laboratory record in the data set was graded pass/fail by all EQA providers for each of the four analytes. There was a lack of agreement in pass/fail grading among EQA programs. Discordance in the grading was 17.9 and 11% of normal and prolonged aPTT results, respectively, and 20.2 and 17.4% of normal and reduced FVIII results, respectively. All EQA programs in this study employed statistical methods compliant with the International Organization for Standardization (ISO) standard ISO 13528, yet the evaluation of laboratory results for all four analytes showed remarkable grading discordance. PMID:29232255
Analysis of general-aviation accidents using ATC radar records
NASA Technical Reports Server (NTRS)
Wingrove, R. C.; Bach, R. E., Jr.
1982-01-01
It is pointed out that general aviation aircraft usually do not carry flight recorders, and in accident investigations the only available data may come from the Air Traffic Control (ATC) records. A description is presented of a technique for deriving time-histories of aircraft motions from ATC radar records. The employed procedure involves a smoothing of the raw radar data. The smoothed results, in combination with other available information (meteorological data and aircraft aerodynamic data) are used to derive the expanded set of motion time-histories. Applications of the considered analytical methods are related to different types of aircraft, such as light piston-props, executive jets, and commuter turboprops, as well as different accident situations, such as takeoff, climb-out, icing, and deep stall.
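As a rough illustration of the smoothing-and-differentiation step described above (not the authors' actual algorithm), noisy radar position fixes can be smoothed with a moving average and numerically differentiated to obtain ground-speed and track-angle time-histories; the synthetic track, noise level, and 4 s update interval below are assumptions.

    import numpy as np

    def smooth(series, window=5):
        # Centered moving-average smoothing of one raw radar track coordinate.
        kernel = np.ones(window) / window
        return np.convolve(series, kernel, mode="same")

    # Hypothetical ATC radar fixes: east/north positions (m) at a 4 s update interval.
    t = np.arange(0.0, 200.0, 4.0)
    rng = np.random.default_rng(0)
    x_raw = 80.0 * t + rng.normal(0.0, 60.0, t.size)   # noisy east position
    y_raw = 20.0 * t + rng.normal(0.0, 60.0, t.size)   # noisy north position

    x_s, y_s = smooth(x_raw), smooth(y_raw)

    # Differentiate the smoothed track to obtain velocity time-histories.
    vx, vy = np.gradient(x_s, t), np.gradient(y_s, t)
    ground_speed = np.hypot(vx, vy)                    # m/s
    track_angle = np.degrees(np.arctan2(vx, vy))       # heading east of north

In practice the smoothed kinematics would then be combined with meteorological and aerodynamic data, as the abstract notes, to reconstruct the full set of motion time-histories.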
van Lierop, B; Castle, L; Feigenbaum, A; Ehlert, K; Boenke, A
1998-10-01
A collection has been made of additives that are required as analytical standards for enforcement of European Union legislation on food contact plastics. The 100 additives have been characterized by mass spectrometry, infra-red spectroscopy and proton nuclear magnetic resonance spectroscopy to provide reference spectra. Gas chromatographic retention times have been recorded to facilitate identification by retention index. This information has been further supplemented by physico-chemical data. Finally, chromatographic methods have been used to indicate the presence of any impurities in the commercial chemicals. Samples of the reference substances are available on request and the collection of spectra and other information will be made available in printed format and on-line through the Internet. This paper gives an overview of the work done to establish the reference collection and the spectral atlas, which together will assist enforcement laboratories in the characterization of plastics and the selection of analytical methods for additives that may migrate.
Reddy, Bhargava K; Delen, Dursun; Agrawal, Rupesh K
2018-01-01
Crohn's disease is among the chronic inflammatory bowel diseases that impact the gastrointestinal tract. Understanding and predicting the severity of inflammation in real-time settings is critical to disease management. Extant literature has primarily focused on studies that are conducted in clinical trial settings to investigate the impact of a drug treatment on the remission status of the disease. This research proposes an analytics methodology where three different types of prediction models are developed to predict and to explain the severity of inflammation in patients diagnosed with Crohn's disease. The results show that machine-learning-based analytic methods such as gradient boosting machines can predict the inflammation severity with a very high accuracy (area under the curve = 92.82%), followed by regularized regression and logistic regression. According to the findings, a combination of baseline laboratory parameters, patient demographic characteristics, and disease location are among the strongest predictors of inflammation severity in Crohn's disease patients.
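As an informal sketch of how such a gradient-boosting classifier could be set up with a standard library (the feature matrix, labels, and train/test split below are hypothetical, not the study's data):

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Hypothetical features: baseline labs, demographics, disease-location indicators.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 12))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"Hold-out AUC: {auc:.3f}")   # the study reports AUC = 92.82% on its own data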
NASA Astrophysics Data System (ADS)
Brookman, T. H.; Whittaker, T. E.; King, P. L.; Horton, T. W.
2011-12-01
Stable isotope dendroclimatology is a burgeoning field in palaeoclimate science due to its unique potential to contribute (sub)annually resolved climate records, over millennial timescales, to the terrestrial palaeoclimate record. Until recently, time-intensive methods precluded long-term climate reconstructions. Advances in continuous-flow mass spectrometry and isolation methods for α-cellulose (ideal for palaeoclimate studies as, unlike other wood components, it retains its initial isotopic composition) have made long-term, calendar-dated palaeoclimate reconstructions a viable proposition. The Modified Brendel (mBrendel) α-cellulose extraction method is a fast, cost-effective way of preparing whole-wood samples for stable oxygen and carbon isotope analysis. However, resinous woods often yield incompletely processed α-cellulose using the standard mBrendel approach. As climate signals may be recorded by small (<1%) isotopic shifts, it is important to investigate whether incomplete processing affects the accuracy and precision of tree-ring isotopic records. In an effort to address this methodological issue, we investigated three highly resinous woods: kauri (Agathis australis), ponderosa pine (Pinus ponderosa) and Huon pine (Lagarastrobus franklinii). Samples of each species were treated with 16 iterations of the mBrendel, varying reaction temperature, time and reagent volumes. Products were investigated using microscopic and bulk transmission Fourier transform infrared spectroscopy (FTIR) to reveal variations in the level of processing; poorly digested fibres display a peak at 1520 cm-1, suggesting residual lignin, and a peak at ~1600 cm-1 in some samples suggests retained resin. Despite the different levels of purity, replicate analyses of samples processed by high-temperature digestion yielded consistent δ18O within and between experiments. All α-cellulose samples were 5-7% enriched compared to the whole-wood, suggesting that even incomplete processing at high temperature can provide acceptable δ18O analytical external precision. For kauri, short, lower-temperature extractions produced α-cellulose with δ18O consistently ~1% lower than longer, higher-temperature kauri experiments. These findings suggest that temperature and time are significant variables that influence the analytical precision of α-cellulose stable isotope analysis and that resinous hardwoods (e.g. kauri) may require longer and/or hotter digestions than softwoods. The effects of mBrendel variants on the carbon isotope ratio precision of α-cellulose extracts will also be presented. Our findings indicate that the standard mBrendel α-cellulose extraction method may not fully remove lignins and resins, depending on the type of wood being analysed. Residual impurities can decrease analytical precision and accuracy. Fortunately, FTIR analysis prior to isotopic analysis is a relatively fast and cost-effective way to determine α-cellulose extract purity, ultimately improving the data quality, accuracy and utility of tree-ring based stable isotopic climate records.
Highlights from CPTAC Scientific Symposium | Office of Cancer Clinical Proteomics Research
Dear Colleagues and Friends, The first CPTAC Public Scientific Symposium was recently held on November 13, 2013 at the National Institutes of Health in Bethesda, MD. The symposium brought together a record number of registrants, 450 scientists, who shared and discussed novel biological discoveries, analytical methods, and translational approaches using CPTAC data.
Advances in spatial epidemiology and geographic information systems.
Kirby, Russell S; Delmelle, Eric; Eberth, Jan M
2017-01-01
The field of spatial epidemiology has evolved rapidly in the past 2 decades. This study serves as a brief introduction to spatial epidemiology and the use of geographic information systems in applied research in epidemiology. We highlight technical developments and opportunities to apply spatial analytic methods in epidemiologic research, focusing on methodologies involving geocoding, distance estimation, residential mobility, record linkage and data integration, spatial and spatio-temporal clustering, small area estimation, and Bayesian applications to disease mapping. The articles included in this issue incorporate many of these methods into their study designs and analytical frameworks. It is our hope that these studies will spur further development and utilization of spatial analysis and geographic information systems in epidemiologic research. Copyright © 2016 Elsevier Inc. All rights reserved.
Health Analytics and Vital Records
Lippincott, Christine; Foronda, Cynthia; Zdanowicz, Martin; McCabe, Brian E; Ambrosia, Todd
2017-08-01
The objective of this study was to examine the relationship between nursing excellence and electronic health record adoption. Of 6582 US hospitals, 4939 were eligible for the Medicare Electronic Health Record Incentive Program, and 6419 were eligible for evaluation on the HIMSS Analytics Electronic Medical Record Adoption Model. Of 399 Magnet hospitals, 330 were eligible for the Medicare Electronic Health Record Incentive Program, and 393 were eligible for evaluation in the HIMSS Analytics Electronic Medical Record Adoption Model. Meaningful use attestation was defined as receipt of a Medicare Electronic Health Record Incentive Program payment. Electronic health record adoption was defined as Level 6 and/or 7 on the HIMSS Analytics Electronic Medical Record Adoption Model. Logistic regression showed that Magnet-designated hospitals were more likely to attest to Meaningful Use than non-Magnet hospitals (odds ratio = 3.58, P < .001) and were more likely to adopt electronic health records than non-Magnet hospitals (Level 6 only: odds ratio = 3.68, P < .001; Level 6 or 7: odds ratio = 4.02, P < .001). This study suggested a positive relationship between Magnet status and electronic health record use, which involves earning financial incentives for successful adoption. Continued investigation is needed to examine the relationships between the quality of nursing care, electronic health record usage, financial implications, and patient outcomes.
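For readers who want to see how such odds ratios are obtained, a minimal sketch using statsmodels and entirely simulated hospital-level data (Magnet status predicting Meaningful Use attestation) follows; the effect size is an assumption chosen only to mimic the reported magnitude.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    magnet = rng.integers(0, 2, size=2000)                    # 1 = Magnet-designated hospital
    logit_p = -0.5 + 1.28 * magnet                            # assumed true effect (OR ~ 3.6)
    attested = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

    fit = sm.Logit(attested, sm.add_constant(magnet)).fit(disp=False)
    print(f"Odds ratio for Magnet status: {np.exp(fit.params[1]):.2f}, p = {fit.pvalues[1]:.3g}")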
Insausti, Matías; Fernández Band, Beatriz S
2015-04-05
A highly sensitive spectrofluorimetric method has been developed for the determination of 2-ethylhexyl nitrate in diesel fuel. This compound is commonly used as an additive to improve the cetane number. The analytical method consists of building the chemometric model as a first step. The analyte can then be quantified by recording only a single excitation-emission fluorescence spectrum (EEF), whose data are introduced into the chemometric model mentioned above. Another important characteristic of this method is that the fuel sample was used without any pre-treatment for EEF. This work provides an interesting improvement to fluorescence techniques, using the rapid and easily applicable EEF approach to analyze such complex matrices. Exploiting the EEF was the key to a successful determination, yielding a detection limit of 0.00434% (v/v) and a limit of quantification of 0.01446% (v/v). Copyright © 2015 Elsevier B.V. All rights reserved.
Evaluation of Analytical Modeling Functions for the Phonation Onset Process.
Petermann, Simon; Kniesburges, Stefan; Ziethe, Anke; Schützenberger, Anne; Döllinger, Michael
2016-01-01
The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator for correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method to quantify automatically the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest the computation of VOT by a fourth-order polynomial approximation in the interval between 32.2 and 67.8% of the saturation amplitude of the filtered GAW.
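A minimal sketch of the selected approach (a fourth-order polynomial fit to the glottal area waveform envelope, with the VOT read off between 32.2% and 67.8% of the saturation amplitude); the synthetic onset envelope and frame rate are placeholders, not data from the study.

    import numpy as np

    fs = 8000.0                                           # camera frame rate used in the study
    t = np.arange(0.0, 0.25, 1.0 / fs)
    envelope = 1.0 / (1.0 + np.exp(-(t - 0.1) / 0.015))   # hypothetical GAW onset envelope

    coeffs = np.polyfit(t, envelope, 4)                   # fourth-order polynomial approximation
    fit = np.polyval(coeffs, t)

    # VOT: time spent between 32.2% and 67.8% of the saturation amplitude.
    saturation = fit.max()
    t_lo = t[np.argmax(fit >= 0.322 * saturation)]
    t_hi = t[np.argmax(fit >= 0.678 * saturation)]
    print(f"VOT = {(t_hi - t_lo) * 1000:.1f} ms")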
Student and Faculty Member Perspectives on Lecture Capture in Pharmacy Education
Pearson, Marion L.; Albon, Simon P.
2014-01-01
Objectives. To examine faculty members’ and students’ use and perceptions of lecture recordings in a previously implemented lecture-capture initiative. Methods. Patterns of using lecture recordings were determined from software analytics, and surveys were conducted to determine awareness and usage, effect on attendance and other behaviors, and learning impact. Results. Most students and faculty members were aware of and appreciated the recordings. Students’ patterns of use changed as the novelty wore off. Students felt that the recordings enhanced their learning, improved their in-class engagement, and had little effect on their attendance. Faculty members saw little difference in students’ grades or in-class engagement but noted increased absenteeism. Conclusion. Students made appropriate use of recordings to support their learning, but faculty members generally did not make active educational use of the recordings. Further investigation is needed to understand the effects of lecture recordings on attendance. Professional development activities for both students and faculty members would help maximize the learning benefits of the recordings. PMID:24850936
Big Data and Predictive Analytics: Applications in the Care of Children.
Suresh, Srinivasan
2016-04-01
Emerging changes in the United States' healthcare delivery model have led to renewed interest in data-driven methods for managing quality of care. Analytics (Data plus Information) plays a key role in predictive risk assessment, clinical decision support, and various patient throughput measures. This article reviews the application of a pediatric risk score, which is integrated into our hospital's electronic medical record, and provides an early warning sign for clinical deterioration. Dashboards that are a part of disease management systems, are a vital tool in peer benchmarking, and can help in reducing unnecessary variations in care. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Chambers, Jeffrey A.
1994-01-01
Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of analytical results provided. Results from dynamic testing provide one means for performing this correlation. One of the most common methods of measuring accuracy is by classical modal testing, whereby vibratory mode shapes are compared to mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross orthogonality check. A great deal of time and effort can be exhausted in generating the set of test acquired mode shapes needed for the cross orthogonality check. In most situations response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test acquired mode shapes can be achieved without conducting the modal survey. Instead a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that once normalized can be used to represent the test acquired mode shapes in the cross orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model as well as a complex space flight structure.
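The cross-orthogonality check mentioned above has a compact matrix form, C = Phi_test^T M Phi_FEM; a small numerical sketch with a hypothetical reduced mass matrix and mode-shape sets (not the structures analyzed in the report) is:

    import numpy as np

    def cross_orthogonality(phi_test, phi_fem, mass):
        # Diagonal terms near 1 and off-diagonal terms near 0 indicate good correlation.
        return phi_test.T @ mass @ phi_fem

    rng = np.random.default_rng(3)
    mass = np.diag(rng.uniform(1.0, 2.0, 6))                    # hypothetical 6-DOF reduced mass matrix
    phi_fem = rng.normal(size=(6, 3))                           # three analytical mode shapes
    phi_fem /= np.sqrt(np.diag(phi_fem.T @ mass @ phi_fem))     # mass-normalize
    phi_test = phi_fem + 0.02 * rng.normal(size=phi_fem.shape)  # stand-in "test" shapes

    print(np.round(cross_orthogonality(phi_test, phi_fem, mass), 3))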
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
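The central idea, an averaging window that widens with migration time so that slower, lower-frequency peaks receive heavier smoothing, can be sketched as below; the window-scaling rule and synthetic electropherogram are illustrative assumptions, not the authors' exact algorithm.

    import numpy as np

    def adaptive_moving_average(signal, base_window=5, growth=0.01):
        # Smooth each point with a window that grows with its position (migration time).
        out = np.empty_like(signal, dtype=float)
        n = signal.size
        for i in range(n):
            half = int(base_window + growth * i) // 2
            lo, hi = max(0, i - half), min(n, i + half + 1)
            out[i] = signal[lo:hi].mean()
        return out

    # Hypothetical electropherogram sampled at 25 Hz: a sharp early peak, a broad late peak, noise.
    t = np.arange(0.0, 600.0, 1.0 / 25.0)
    rng = np.random.default_rng(4)
    raw = (np.exp(-0.5 * ((t - 120.0) / 2.0) ** 2)
           + np.exp(-0.5 * ((t - 450.0) / 8.0) ** 2)
           + rng.normal(0.0, 0.05, t.size))
    smoothed = adaptive_moving_average(raw)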
-Omic and Electronic Health Record Big Data Analytics for Precision Medicine.
Wu, Po-Yen; Cheng, Chih-Wen; Kaddi, Chanchala D; Venugopalan, Janani; Hoffman, Ryan; Wang, May D
2017-02-01
Rapid advances of high-throughput technologies and wide adoption of electronic health records (EHRs) have led to fast accumulation of -omic and EHR data. These voluminous complex data contain abundant information for precision medicine, and big data analytics can extract such knowledge to improve the quality of healthcare. In this paper, we present -omic and EHR data characteristics, associated challenges, and data analytics including data preprocessing, mining, and modeling. To demonstrate how big data analytics enables precision medicine, we provide two case studies, including identifying disease biomarkers from multi-omic data and incorporating -omic information into EHR. Big data analytics is able to address -omic and EHR data challenges for a paradigm shift toward precision medicine. Big data analytics makes sense of -omic and EHR data to improve healthcare outcomes. It has a long-lasting societal impact.
Ajami, Sima; Ketabi, Saeedeh
2012-06-01
The Medical Records Department (MRD) is an important unit for evaluating and planning care services. The goal of this study was to evaluate the performance of the Medical Records Departments (MRDs) of selected hospitals in Isfahan, Iran, using the Analytical Hierarchy Process (AHP). This was an analytical, cross-sectional study conducted in spring 2008 in Isfahan, Iran. The statistical population consisted of the MRDs of the Alzahra, Kashani and Khorshid Hospitals in Isfahan. Data were collected with forms and through a brainstorming technique. Expert Choice software was used to analyze the data and perform the AHP. Results showed that the archiving unit received the largest importance weight with respect to information management, whereas on the customer aspect the admission unit received the largest weight. The overall weights of the Medical Records Departments of the Alzahra, Kashani and Khorshid Hospitals in Isfahan were 0.394, 0.342 and 0.264, respectively. It is useful for managers to allocate and prioritize resources according to the AHP ranking of the Medical Records Departments.
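For readers unfamiliar with AHP, the reported weights come from priority vectors of pairwise-comparison matrices; a generic sketch (the 3 x 3 comparison matrix below is hypothetical, not the study's data) is:

    import numpy as np

    def ahp_priorities(pairwise):
        # Priority weights = normalized principal eigenvector of the pairwise-comparison matrix.
        eigvals, eigvecs = np.linalg.eig(pairwise)
        principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
        return principal / principal.sum()

    # Hypothetical pairwise comparisons of three departments on a single criterion.
    A = np.array([[1.0, 2.0, 3.0],
                  [1.0 / 2.0, 1.0, 2.0],
                  [1.0 / 3.0, 1.0 / 2.0, 1.0]])
    print(np.round(ahp_priorities(A), 3))   # roughly [0.54, 0.30, 0.16]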
NASA Astrophysics Data System (ADS)
Sun, Biao; Zhao, Wenfeng; Zhu, Xinshan
2017-06-01
Objective. Data compression is crucial for resource-constrained wireless neural recording applications with limited data bandwidth, and compressed sensing (CS) theory has successfully demonstrated its potential in neural recording applications. In this paper, an analytical, training-free CS recovery method, termed group weighted analysis ℓ1-minimization (GWALM), is proposed for wireless neural recording. Approach. The GWALM method consists of three parts: (1) the analysis model is adopted to enforce sparsity of the neural signals, therefore overcoming the drawbacks of conventional synthesis models and enhancing the recovery performance. (2) A multi-fractional-order difference matrix is constructed as the analysis operator, thus avoiding the dictionary learning procedure and reducing the need for previously acquired data and computational complexities. (3) By exploiting the statistical properties of the analysis coefficients, a group weighting approach is developed to enhance the performance of analysis ℓ1-minimization. Main results. Experimental results on synthetic and real datasets reveal that the proposed approach outperforms state-of-the-art CS-based methods in terms of both spike recovery quality and classification accuracy. Significance. Energy and area efficiency of the GWALM make it an ideal candidate for resource-constrained, large scale wireless neural recording applications. The training-free feature of the GWALM further improves its robustness to spike shape variation, thus making it more practical for long term wireless neural recording.
Sun, Biao; Zhao, Wenfeng; Zhu, Xinshan
2017-06-01
Data compression is crucial for resource-constrained wireless neural recording applications with limited data bandwidth, and compressed sensing (CS) theory has successfully demonstrated its potential in neural recording applications. In this paper, an analytical, training-free CS recovery method, termed group weighted analysis ℓ1-minimization (GWALM), is proposed for wireless neural recording. The GWALM method consists of three parts: (1) the analysis model is adopted to enforce sparsity of the neural signals, therefore overcoming the drawbacks of conventional synthesis models and enhancing the recovery performance. (2) A multi-fractional-order difference matrix is constructed as the analysis operator, thus avoiding the dictionary learning procedure and reducing the need for previously acquired data and computational complexities. (3) By exploiting the statistical properties of the analysis coefficients, a group weighting approach is developed to enhance the performance of analysis ℓ1-minimization. Experimental results on synthetic and real datasets reveal that the proposed approach outperforms state-of-the-art CS-based methods in terms of both spike recovery quality and classification accuracy. Energy and area efficiency of the GWALM make it an ideal candidate for resource-constrained, large scale wireless neural recording applications. The training-free feature of the GWALM further improves its robustness to spike shape variation, thus making it more practical for long term wireless neural recording.
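As a rough, simplified illustration of the analysis-model recovery underlying GWALM (without the group weighting or the multi-fractional-order operator of the paper), a basic analysis ℓ1-minimization can be posed with cvxpy; the first-order difference operator, sensing matrix, and synthetic signal below are assumptions for the sketch.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(5)
    n, m = 200, 80                                   # signal length, number of CS measurements
    x_true = np.cumsum(rng.standard_normal(n) * (rng.random(n) < 0.05))   # piecewise-constant signal
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = Phi @ x_true + 0.01 * rng.standard_normal(m)

    D = np.eye(n) - np.eye(n, k=1)                   # first-order difference analysis operator

    x = cp.Variable(n)
    problem = cp.Problem(cp.Minimize(cp.norm(D @ x, 1)),
                         [cp.norm(y - Phi @ x, 2) <= 0.05])
    problem.solve()
    x_rec = x.value                                  # recovered signal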
Post, Andrew R.; Kurc, Tahsin; Cholleti, Sharath; Gao, Jingjing; Lin, Xia; Bornstein, William; Cantrell, Dedra; Levine, David; Hohmann, Sam; Saltz, Joel H.
2013-01-01
Objective To create an analytics platform for specifying and detecting clinical phenotypes and other derived variables in electronic health record (EHR) data for quality improvement investigations. Materials and Methods We have developed an architecture for an Analytic Information Warehouse (AIW). It supports transforming data represented in different physical schemas into a common data model, specifying derived variables in terms of the common model to enable their reuse, computing derived variables while enforcing invariants and ensuring correctness and consistency of data transformations, long-term curation of derived data, and export of derived data into standard analysis tools. It includes software that implements these features and a computing environment that enables secure high-performance access to and processing of large datasets extracted from EHRs. Results We have implemented and deployed the architecture in production locally. The software is available as open source. We have used it as part of hospital operations in a project to reduce rates of hospital readmission within 30 days. The project examined the association of over 100 derived variables representing disease and co-morbidity phenotypes with readmissions in five years of data from our institution’s clinical data warehouse and the UHC Clinical Database (CDB). The CDB contains administrative data from over 200 hospitals that are in academic medical centers or affiliated with such centers. Discussion and Conclusion A widely available platform for managing and detecting phenotypes in EHR data could accelerate the use of such data in quality improvement and comparative effectiveness studies. PMID:23402960
Occupational exposure to silica in construction workers: a literature-based exposure database.
Beaudry, Charles; Lavoué, Jérôme; Sauvé, Jean-François; Bégin, Denis; Senhaji Rhazi, Mounia; Perrault, Guy; Dion, Chantal; Gérin, Michel
2013-01-01
We created an exposure database of respirable crystalline silica levels in the construction industry from the literature. We extracted silica and dust exposure levels in publications reporting silica exposure levels or quantitative evaluations of control effectiveness published in or after 1990. The database contains 6118 records (2858 of respirable crystalline silica) extracted from 115 sources, summarizing 11,845 measurements. Four hundred and eighty-eight records represent summarized exposure levels instead of individual values. For these records, the reported summary parameters were standardized into a geometric mean and a geometric standard deviation. Each record is associated with 80 characteristics, including information on trade, task, materials, tools, sampling strategy, analytical methods, and control measures. Although the database was constructed in French, 38 essential variables were standardized and translated into English. The data span the period 1974-2009, with 92% of the records corresponding to personal measurements. Thirteen standardized trades and 25 different standardized tasks are associated with at least five individual silica measurements. Trade-specific respirable crystalline silica geometric means vary from 0.01 (plumber) to 0.30 mg/m³ (tunnel construction skilled labor), while tasks vary from 0.01 (six categories, including sanding and electrical maintenance) to 1.59 mg/m³ (abrasive blasting). Despite limitations associated with the use of literature data, this database can be analyzed using meta-analytical and multivariate techniques and currently represents the most important source of exposure information about silica exposure in the construction industry. It is available on request to the research community.
Nanopore tweezers: voltage-controlled trapping and releasing of analytes.
Chinappi, Mauro; Luchian, Tudor; Cecconi, Fabio
2015-09-01
Several devices for single-molecule detection and analysis employ biological and artificial nanopores as core elements. The performance of such devices strongly depends on the amount of time the analytes spend in the pore. This residence time needs to be long enough to allow the recording of a high signal-to-noise ratio analyte-induced blockade. We propose a simple approach, dubbed nanopore tweezing, for enhancing the trapping time of molecules inside the pore via a proper tuning of the applied voltage. This method requires the creation of a strong dipole that can be generated by adding a positive and a negative tail at the two ends of the molecules to be analyzed. Capture rate is shown to increase with the applied voltage while escape rate decreases. In this paper we rationalize the essential ingredients needed to control the residence time and provide a proof of principle based on atomistic simulations.
Analytic study of a rolling sphere on a rough surface
NASA Astrophysics Data System (ADS)
Florea, Olivia A.; Rosca, Ileana C.
2016-11-01
This paper presents an analytical study of a sphere rolling on a rough horizontal plane under the action of its own gravity. The need to integrate the system of dynamical equations of motion led us to seek a reference frame in which the equations of motion take simpler forms and in which, under certain significant hypotheses, original methods of analytical integration can be applied. In technical applications, bodies may roll freely or move under geometrical constraints in assemblies of parts and machine components. This study draws on investigations in tribology and applied dynamics, accompanied by experiments. Multiple recordings of several sphere trajectories, together with image processing and statistical treatment of the experimental data, show very good agreement between the theoretical findings and the experimental results.
Study of a vibrating plate: comparison between experimental (ESPI) and analytical results
NASA Astrophysics Data System (ADS)
Romero, G.; Alvarez, L.; Alanís, E.; Nallim, L.; Grossi, R.
2003-07-01
Real-time electronic speckle pattern interferometry (ESPI) was used for tuning and visualization of natural frequencies of a trapezoidal plate. The plate was excited to resonant vibration by a sinusoidal acoustical source, which provided a continuous range of audio frequencies. Fringe patterns produced during the time-average recording of the vibrating plate—corresponding to several resonant frequencies—were registered. From these interferograms, calculations of vibrational amplitudes by means of zero-order Bessel functions were performed in some particular cases. The system was also studied analytically. The analytical approach developed is based on the Rayleigh-Ritz method and on the use of non-orthogonal right triangular co-ordinates. The deflection of the plate is approximated by a set of beam characteristic orthogonal polynomials generated by using the Gram-Schmidt procedure. A high degree of correlation between computational analysis and experimental results was observed.
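A one-dimensional sketch of the Gram-Schmidt step used to build the Rayleigh-Ritz trial functions is given below; the clamped-clamped seed function, unit weight, and interval [0, 1] are simplifying assumptions, whereas the paper works in non-orthogonal right triangular coordinates for the trapezoidal plate.

    import numpy as np

    x = np.linspace(0.0, 1.0, 2001)
    dx = x[1] - x[0]

    def inner(u, v):
        # Approximate integral of u*v over [0, 1] (unit weight function assumed).
        return float(np.sum(u * v) * dx)

    def gram_schmidt_basis(n_terms):
        # Orthonormal trial functions built from a clamped-clamped seed x^2 (1-x)^2
        # multiplied by increasing powers of x.
        basis = []
        seed = x ** 2 * (1.0 - x) ** 2
        for k in range(n_terms):
            p = seed * x ** k
            for q in basis:
                p = p - inner(p, q) * q      # remove the component along earlier members
            basis.append(p / np.sqrt(inner(p, p)))
        return basis

    trial = gram_schmidt_basis(4)
    print(round(inner(trial[0], trial[1]), 8))   # ~0: the functions are orthogonal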
Tagliaro, F; Manetto, G; Crivellente, F; Scarcella, D; Marigo, M
1998-04-05
The present paper describes the methodological optimisation and validation of a capillary zone electrophoresis method for the determination of morphine, cocaine and 3,4-methylenedioxymethamphetamine (MDMA) in hair, with injection based on field-amplified sample stacking. Diode array UV absorption detection was used to improve analytical selectivity and identification power. Analytical conditions: running buffer 100 mM potassium phosphate adjusted to pH 2.5 with phosphoric acid, applied potential 10 kV, temperature 20 °C, injection by electromigration at 10 kV for 10 s, detection by UV absorption at the fixed wavelength of 200 nm or by recording the full spectrum between 190 and 400 nm. Injection conditions: the dried hair extracts were reconstituted with a low-conductivity solvent (0.1 mM formic acid), the injection end of the capillary was dipped in water for 5 s without applying pressure (external rinse step), then a plug of 0.1 mM phosphoric acid was loaded by applying 0.5 psi for 10 s and, finally, the sample was injected electrokinetically at 10 kV for 10 s. Under the described conditions, the limit of detection was 2 ng/ml for MDMA, 8 ng/ml for cocaine and 6 ng/ml for morphine (with a signal-to-noise ratio of 5). The lowest concentration suitable for recording interpretable spectra was about 10-20 times the limit of detection of each analyte. The intraday and day-to-day reproducibility of migration times (n = 6), with internal standardisation, was characterised by R.S.D. values ≤ 0.6%; peak area R.S.D.s were better than 10% in intraday and than 15% in day-to-day experiments. Analytical linearity was good with R2 better than 0.9990 for all the analytes.
Nanometric depth resolution from multi-focal images in microscopy.
Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H
2011-07-06
We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.
Kellerhals, Thomas; Tobler, Leonhard; Brütsch, Sabina; Sigl, Michael; Wacker, Lukas; Gäggeler, Heinz W; Schwikowski, Margit
2010-02-01
Trace element records from glacier and ice sheet archives provide insights into biogeochemical cycles, atmospheric circulation changes, and anthropogenic pollution history. We present the first continuous high-resolution thallium (Tl) record, derived from an accurately dated ice core from tropical South America, and discuss Tl as a tracer for volcanic eruptions. We identify four prominent Tl peaks and propose that they represent signals from the massive explosive eruptions of the "unknown 1258" A.D. volcano, of Kuwae ( approximately 1450 A.D.), Tambora (1815 A.D.), and Krakatoa (1883 A.D.). The highly resolved record was obtained with an improved setup for the continuous analysis of trace elements in ice with inductively coupled plasma sector field mass spectrometry (ICP-SFMS). The new setup allowed for a stronger initial acidification of the meltwater and shorter tubing length, thereby reducing the risk of memory effects and losses of analytes to the capillary walls. With a comparison of the continuous method to the established conventional decontamination and analysis procedure for discrete samples, we demonstrate the accuracy of the continuous method for Tl analyses.
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as the substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment of analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the predictive power difference between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss and its predictive power is comparable to that of PLS or PCR.
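A compact sketch of the classical least squares step that ACLS builds on (plain CLS, without the augmentation of the concentration matrix described above); the simulated pure spectra and concentrations are placeholders.

    import numpy as np

    rng = np.random.default_rng(6)
    n_samples, n_channels, n_components = 30, 200, 3

    K_true = np.abs(rng.normal(size=(n_components, n_channels)))       # pure-component spectra
    C_train = rng.uniform(0.1, 1.0, size=(n_samples, n_components))    # known concentrations
    S_train = C_train @ K_true + 0.01 * rng.normal(size=(n_samples, n_channels))

    # CLS calibration: estimate pure spectra from S = C K.
    K_hat = np.linalg.pinv(C_train) @ S_train

    # Prediction for a new spectrum: c = s K^+ (pseudo-inverse of the estimated K).
    c_new = np.array([0.3, 0.6, 0.2])
    s_new = c_new @ K_true + 0.01 * rng.normal(size=n_channels)
    print(np.round(s_new @ np.linalg.pinv(K_hat), 3))   # close to [0.3, 0.6, 0.2]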
Taguchi, Katsuyuki; Zhang, Mengxi; Frey, Eric C.; Wang, Xiaolan; Iwanczyk, Jan S.; Nygard, Einar; Hartsough, Neal E.; Tsui, Benjamin M. W.; Barber, William C.
2011-01-01
Purpose: Recently, photon counting x-ray detectors (PCXDs) with energy discrimination capabilities have been developed for potential use in clinical computed tomography (CT) scanners. These PCXDs have great potential to improve the quality of CT images due to the absence of electronic noise and weights applied to the counts and the additional spectral information. With high count rates encountered in clinical CT, however, coincident photons are recorded as one event with a higher or lower energy due to the finite speed of the PCXD. This phenomenon is called a “pulse pileup event” and results in both a loss of counts (called “deadtime losses”) and distortion of the recorded energy spectrum. Even though the performance of PCXDs is being improved, it is essential to develop algorithmic methods based on accurate models of the properties of detectors to compensate for these effects. To date, only one PCXD (model DXMCT-1, DxRay, Inc., Northridge, CA) has been used for clinical CT studies. The aim of that study was to evaluate the agreement between data measured by DXMCT-1 and those predicted by analytical models for the energy response, the deadtime losses, and the distorted recorded spectrum caused by pulse pileup effects. Methods: An energy calibration was performed using 99mTc (140 keV), 57Co (122 keV), and an x-ray beam obtained with four x-ray tube voltages (35, 50, 65, and 80 kVp). The DXMCT-1 was placed 150 mm from the x-ray focal spot; the count rates and the spectra were recorded at various tube current values from 10 to 500 μA for a tube voltage of 80 kVp. Using these measurements, for each pulse height comparator we estimated three parameters describing the photon energy-pulse height curve, the detector deadtime τ, a coefficient k that relates the x-ray tube current I to an incident count rate a by a=k×I, and the incident spectrum. The mean pulse shape of all comparators was acquired in a separate study and was used in the model to estimate the distorted recorded spectrum. The agreement between data measured by the DXMCT-1 and those predicted by the models was quantified by the coefficient of variation (COV), i.e., the root mean square difference divided by the mean of the measurement. Results: Photon energy versus pulse height curves calculated with an analytical model and those measured using the DXMCT-1 were in agreement within 0.2% in terms of the COV. The COV between the output count rates measured and those predicted by analytical models was 2.5% for deadtime losses of up to 60%. The COVs between spectra measured and those predicted by the detector model were within 3.7%–7.2% with deadtime losses of 19%–46%. Conclusions: It has been demonstrated that the performance of the DXMCT-1 agreed exceptionally well with the analytical models regarding the energy response, the count rate, and the recorded spectrum with pulse pileup effects. These models will be useful in developing methods to compensate for these effects in PCXD-based clinical CT systems. PMID:21452746
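As a generic illustration of how deadtime losses grow with incident flux (the study fits its own, more detailed detector and pileup model, so the paralyzable formula and numbers below are assumptions only):

    import numpy as np

    def recorded_rate_paralyzable(a, tau):
        # Paralyzable deadtime model: observed rate m = a * exp(-a * tau).
        return a * np.exp(-a * tau)

    tau = 200e-9                                           # assumed deadtime, 200 ns
    k = 4.0e9                                              # assumed counts per second per ampere (a = k * I)
    currents = np.array([10, 50, 100, 250, 500]) * 1e-6    # tube current, A
    a = k * currents
    losses = 1.0 - recorded_rate_paralyzable(a, tau) / a
    for I, loss in zip(currents * 1e6, losses):
        print(f"I = {I:5.0f} uA  deadtime loss = {100 * loss:5.1f} %")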
Microemulsification: an approach for analytical determinations.
Lima, Renato S; Shiroma, Leandro Y; Teixeira, Alvaro V N C; de Toledo, José R; do Couto, Bruno C; de Carvalho, Rogério M; Carrilho, Emanuel; Kubota, Lauro T; Gobbi, Angelo L
2014-09-16
We address a novel method for analytical determinations that combines simplicity, rapidity, low consumption of chemicals, and portability with high analytical performance taking into account parameters such as precision, linearity, robustness, and accuracy. This approach relies on the effect of the analyte content over the Gibbs free energy of dispersions, affecting the thermodynamic stabilization of emulsions or Winsor systems to form microemulsions (MEs). Such phenomenon was expressed by the minimum volume fraction of amphiphile required to form microemulsion (Φ(ME)), which was the analytical signal of the method. Thus, the measurements can be taken by visually monitoring the transition of the dispersions from cloudy to transparent during the microemulsification, like a titration. It bypasses the employment of electric energy. The performed studies were: phase behavior, droplet dimension by dynamic light scattering, analytical curve, and robustness tests. The reliability of the method was evaluated by determining water in ethanol fuels and monoethylene glycol in complex samples of liquefied natural gas. The dispersions were composed of water-chlorobenzene (water analysis) and water-oleic acid (monoethylene glycol analysis) with ethanol as the hydrotrope phase. The mean hydrodynamic diameter values for the nanostructures in the droplet-based water-chlorobenzene MEs were in the range of 1 to 11 nm. The procedures of microemulsification were conducted by adding ethanol to water-oleic acid (W-O) mixtures with the aid of micropipette and shaking. The Φ(ME) measurements were performed in a thermostatic water bath at 23 °C by direct observation that is based on the visual analyses of the media. The experiments to determine water demonstrated that the analytical performance depends on the composition of ME. It shows flexibility in the developed method. The linear range was fairly broad with limits of linearity up to 70.00% water in ethanol. For monoethylene glycol in water, in turn, the linear range was observed throughout the volume fraction of analyte. The best limits of detection were 0.32% v/v water to ethanol and 0.30% v/v monoethylene glycol to water. Furthermore, the accuracy was highly satisfactory. The natural gas samples provided by the Petrobras exhibited color, particulate material, high ionic strength, and diverse compounds as metals, carboxylic acids, and anions. These samples had a conductivity of up to 2630 μS cm(-1); the conductivity of pure monoethylene glycol was only 0.30 μS cm(-1). Despite such downsides, the method allowed accurate measures bypassing steps such as extraction, preconcentration, and dilution of the sample. In addition, the levels of robustness were promising. This parameter was evaluated by investigating the effect of (i) deviations in volumetric preparation of the dispersions and (ii) changes in temperature over the analyte contents recorded by the method.
van Stee, Leo L P; Brinkman, Udo A Th
2011-10-28
A method is presented to facilitate the non-target analysis of data obtained in temperature-programmed comprehensive two-dimensional (2D) gas chromatography coupled to time-of-flight mass spectrometry (GC×GC-ToF-MS). One main difficulty of GC×GC data analysis is that each peak is usually modulated several times and therefore appears as a series of peaks (or peaklets) in the one-dimensionally recorded data. The proposed method, 2DAid, uses basic chromatographic laws to calculate the theoretical shape of a 2D peak (a cluster of peaklets originating from the same analyte) in order to define the area in which the peaklets of each individual compound can be expected to show up. Based on analyte-identity information obtained by means of mass spectral library searching, the individual peaklets are then combined into a single 2D peak. The method is applied, amongst others, to a complex mixture containing 362 analytes. It is demonstrated that the 2D peak shapes can be accurately predicted and that clustering and further processing can reduce the final peak list to a manageable size. Copyright © 2011 Elsevier B.V. All rights reserved.
A standard methodology for the analysis, recording, and control of verbal behavior
Drash, Philip W.; Tudor, Roger M.
1991-01-01
Lack of a standard methodology has been one of the major obstacles preventing advancement of behavior analytic research in verbal behavior. This article presents a standard method for the analysis, recording, and control of verbal behavior that overcomes several major methodological problems that have hindered operant research in verbal behavior. The system divides all verbal behavior into four functional response classes, correct, error, no response, and inappropriate behavior, from which all vocal responses of a subject may be classified and consequated. The effects of contingencies of reinforcement on verbal operants within each category are made immediately visible to the researcher as changes in frequency of response. Incorporating frequency of response within each category as the unit of response allows both rate and probability of verbal response to be utilized as basic dependent variables. This method makes it possible to record and consequate verbal behavior in essentially the same way as any other operant response. It may also facilitate an experimental investigation of Skinner's verbal response categories. PMID:22477629
Łęski, Szymon; Pettersen, Klas H; Tunstall, Beth; Einevoll, Gaute T; Gigg, John; Wójcik, Daniel K
2011-12-01
The recent development of large multielectrode recording arrays has made it affordable for an increasing number of laboratories to record from multiple brain regions simultaneously. The development of analytical tools for array data, however, lags behind these technological advances in hardware. In this paper, we present a method based on forward modeling for estimating current source density from electrophysiological signals recorded on a two-dimensional grid using multi-electrode rectangular arrays. This new method, which we call two-dimensional inverse Current Source Density (iCSD 2D), is based upon and extends our previous one- and three-dimensional techniques. We test several variants of our method, both on surrogate data generated from a collection of Gaussian sources, and on model data from a population of layer 5 neocortical pyramidal neurons. We also apply the method to experimental data from the rat subiculum. The main advantages of the proposed method are the explicit specification of its assumptions, the possibility to include system-specific information as it becomes available, the ability to estimate CSD at the grid boundaries, and lower reconstruction errors when compared to the traditional approach. These features make iCSD 2D a substantial improvement over the approaches used so far and a powerful new tool for the analysis of multielectrode array data. We also provide a free GUI-based MATLAB toolbox to analyze and visualize our test data as well as user datasets.
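The core of any inverse CSD scheme, building a forward matrix from model sources to electrode potentials and inverting it, can be shown in toy form; the point-source forward model, arbitrary units, and small source-plane offset below are simplifications and are not the source model used in iCSD 2D.

    import numpy as np

    nx = ny = 8                                       # toy 8 x 8 electrode/source grid
    xs, ys = np.meshgrid(np.arange(nx) * 0.1, np.arange(ny) * 0.1)
    grid = np.column_stack([xs.ravel(), ys.ravel()])  # arbitrary, consistent units

    sigma, h = 0.3, 0.05                              # assumed conductivity and source-plane offset

    # Forward matrix: potential at electrode i from a unit point source at node j.
    diff = grid[:, None, :] - grid[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1) + h ** 2)
    F = 1.0 / (4.0 * np.pi * sigma * dist)

    csd_true = np.zeros(nx * ny)
    csd_true[18], csd_true[45] = 1.0, -1.0            # a synthetic source-sink pair
    potentials = F @ csd_true                         # forward-modelled recordings
    csd_est = np.linalg.solve(F, potentials)          # inverse CSD estimate
    print(np.allclose(csd_est, csd_true))             # True in this noise-free toy case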
Najat, Dereen
2017-01-01
Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and to examine the types of platforms used in the selected diagnostic labs. The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results.
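The "proportional Z test" referred to above is the standard two-proportion z-test; a sketch with hypothetical rejection counts from two labs (not the study's figures) is:

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical: lab A rejected 120 of 550 samples, lab B rejected 80 of 560.
    rejected = [120, 80]
    collected = [550, 560]
    z_stat, p_value = proportions_ztest(count=rejected, nobs=collected)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")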
Using predictive analytics and big data to optimize pharmaceutical outcomes.
Hernandez, Inmaculada; Zhang, Yuting
2017-09-15
The steps involved, the resources needed, and the challenges associated with applying predictive analytics in healthcare are described, with a review of successful applications of predictive analytics in implementing population health management interventions that target medication-related patient outcomes. In healthcare, the term big data typically refers to large quantities of electronic health record, administrative claims, and clinical trial data as well as data collected from smartphone applications, wearable devices, social media, and personal genomics services; predictive analytics refers to innovative methods of analysis developed to overcome challenges associated with big data, including a variety of statistical techniques ranging from predictive modeling to machine learning to data mining. Predictive analytics using big data have been applied successfully in several areas of medication management, such as in the identification of complex patients or those at highest risk for medication noncompliance or adverse effects. Because predictive analytics can be used in predicting different outcomes, they can provide pharmacists with a better understanding of the risks for specific medication-related problems that each patient faces. This information will enable pharmacists to deliver interventions tailored to patients' needs. In order to take full advantage of these benefits, however, clinicians will have to understand the basics of big data and predictive analytics. Predictive analytics that leverage big data will become an indispensable tool for clinicians in mapping interventions and improving patient outcomes. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Van Damme, T.
2015-04-01
Computer Vision Photogrammetry allows archaeologists to accurately record underwater sites in three dimensions using simple twodimensional picture or video sequences, automatically processed in dedicated software. In this article, I share my experience in working with one such software package, namely PhotoScan, to record a Dutch shipwreck site. In order to demonstrate the method's reliability and flexibility, the site in question is reconstructed from simple GoPro footage, captured in low-visibility conditions. Based on the results of this case study, Computer Vision Photogrammetry compares very favourably to manual recording methods both in recording efficiency, and in the quality of the final results. In a final section, the significance of Computer Vision Photogrammetry is then assessed from a historical perspective, by placing the current research in the wider context of about half a century of successful use of Analytical and later Digital photogrammetry in the field of underwater archaeology. I conclude that while photogrammetry has been used in our discipline for several decades now, for various reasons the method was only ever used by a relatively small percentage of projects. This is likely to change in the near future since, compared to the `traditional' photogrammetry approaches employed in the past, today Computer Vision Photogrammetry is easier to use, more reliable and more affordable than ever before, while at the same time producing more accurate and more detailed three-dimensional results.
NASA Astrophysics Data System (ADS)
Babaie Mahani, A.; Eaton, D. W.
2013-12-01
Ground Motion Prediction Equations (GMPEs) are widely used in Probabilistic Seismic Hazard Assessment (PSHA) to estimate ground-motion amplitudes at Earth's surface as a function of magnitude and distance. Certain applications, such as hazard assessment for caprock integrity in the case of underground storage of CO2, waste disposal sites, and underground pipelines, require subsurface estimates of ground motion; at present, such estimates depend upon theoretical modeling and simulations. The objective of this study is to derive correction factors for GMPEs to enable estimation of amplitudes in the subsurface. We use a semi-analytic approach along with finite-difference simulations of ground-motion amplitudes for surface and underground motions. Spectral ratios of underground to surface motions are used to calculate the correction factors. Two predictive methods are used. The first is a semi-analytic approach based on a quarter-wavelength method that is widely used for earthquake site-response investigations; the second is a numerical approach based on elastic finite-difference simulations of wave propagation. Both methods are evaluated using recordings of regional earthquakes by broadband seismometers installed at the surface and at depths of 1400 m and 2100 m in the Sudbury Neutrino Observatory, Canada. Overall, both methods provide a reasonable fit to the peaks and troughs observed in the ratios of real data. The finite-difference method, however, has the capability to simulate ground motion ratios more accurately than the semi-analytic approach.
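The correction factors described here are built from spectral ratios of underground to surface motion. A minimal sketch of that ratio computation under stated assumptions (synthetic traces, a hypothetical sampling rate, and Welch spectra standing in for the authors' processing):

```python
# Spectral ratio of underground to surface motion as a frequency-dependent
# correction factor. Input records are hypothetical synthetic traces.
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
surface = rng.standard_normal(2**14)         # placeholder surface record
underground = 0.5 * surface + 0.1 * rng.standard_normal(2**14)  # placeholder borehole record

# Smoothed power spectra via Welch's method, then an amplitude ratio
f, p_surf = welch(surface, fs=fs, nperseg=1024)
_, p_under = welch(underground, fs=fs, nperseg=1024)
correction = np.sqrt(p_under / p_surf)       # underground/surface amplitude ratio

for fi, ci in zip(f[:5], correction[:5]):
    print(f"{fi:6.2f} Hz  factor = {ci:.2f}")
```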
Recent and emerging applications of holographic photopolymers and nanocomposites
NASA Astrophysics Data System (ADS)
Naydenova, Izabela; Kotakonda, Pavani; Jallapuram, Raghavendra; Babeva, Tsvetanka; Mintova, S.; Bade, Denis; Martin, Suzanne; Toal, Vincent
2010-11-01
Sensing applications of holograms may be based on effects such as a change in the spacing of the recorded fringes in a holographic diffraction grating in the presence of an analyte, so that the direction of the diffracted laser light changes, or, in the case of a white-light reflection grating, the wavelength of the diffracted light changes. An example is a reflection grating which swells in the presence of atmospheric moisture to indicate relative humidity by a change in the colour of the diffracted light. These devices make use of the photopolymer's ability to absorb moisture. In a more versatile approach one can add inorganic nanoparticles to the photopolymer composition. These nanoparticles have refractive indices that differ from that of the bulk photopolymer. During the holographic recording of diffraction gratings, the polymerisation and accompanying diffusion processes cause redistribution of the nanoparticles, enhancing the holographic diffraction efficiency. Zeolite nanoparticles have the form of hollow cages, enabling them to trap analyte molecules of appropriate sizes. The refractive index of the nanoparticle-analyte combination is normally different from that of the nanoparticles alone, and this alters the refractive index modulation of the recorded grating, leading to a change in diffraction efficiency and hence in the strength of the diffracted light signal. Yet another approach makes use of a principle which we call dye deposition holography. The analyte is labelled using a dye which acts as a photosensitiser for the polymerisation process. When the labelled analyte is deposited on a layer containing the other photopolymer components, photopolymerisation can take place. If the illumination is in the form of an interference pattern, a diffraction grating is formed in the region where dye has been deposited. In this way the formation of a holographic diffraction grating itself becomes a sensing action with the potential for an extremely high signal-to-noise ratio. The method also allows fabrication of photonic devices by direct writing, using photosensitising dye, of structures such as Fresnel zone plate lenses and waveguides onto the photopolymer layer, followed by exposure to spatially uniform light. Our work on holographic data storage (HDS) is concerned with enhancing the diffraction efficiency of user-selected very weak diffraction gratings by illumination with a single beam at the Bragg angle. Light in the illuminating beam is coupled into the diffracted beam and the two interfere to enhance the grating strength. In this way grating diffraction efficiency can be raised above a threshold so that a binary zero can be changed to a binary one. A large number of identical weak holographic gratings may be multiplexed into the recording medium at the manufacturing stage, for user selection at the data recording stage. In this way consumer HDS systems could be made much more simply and cheaply than at present.
Barnes, Rebecca K; Jepson, Marcus; Thomas, Clare; Jackson, Sue; Metcalfe, Chris; Kessler, David; Cramer, Helen
2018-06-01
The study aim was to assess implementation fidelity (i.e., adherence) to a talk-based primary care intervention using Conversation Analytic (CA) methods. The context was a UK feasibility trial where General Practitioners (GPs) were trained to use "BATHE" (Background, Affect, Trouble, Handling, Empathy), a technique to screen for psychosocial issues during consultations, with frequently attending patients. 35 GPs received BATHE training between July and October 2015. 15 GPs across six practices self-selected to record a sample of their consultations with study patients at three and six months. 31 consultations were recorded. 21/26 patients in four intervention practices gave permission for analysis. The recordings were transcribed and initially coded for the presence or absence of the five BATHE components. CA methods were applied to assess delivery, focusing on the position and composition of each component, and patients' responses. Initial coding showed most of the BATHE components to be present in most contacts. However, the CA analysis revealed unplanned deviations in position and adaptations in composition. Frequently the intervention was initiated too early in the consultation, and the BATHE questions were misunderstood by patients as pertaining to their presenting problems rather than the psychosocial context for their problems. Often these deviations reduced the theoretical fidelity of the intervention as a whole. A CA approach enabled a dynamic assessment of the delivery and receipt of BATHE in situ, revealing common pitfalls in delivery and providing valuable examples of more and less efficacious implementations. During the trial this evidence was used in top-up trainings to address problems in delivery and to improve GP engagement. Using CA methods enabled a more accurate assessment of implementation fidelity, a fuller description of the intervention itself, and enhanced resources for future training. When positioned appropriately, BATHE can be a useful tool for eliciting information about the wider context of the medical visit. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Poggi, Valerio; Ermert, Laura; Burjanek, Jan; Michel, Clotaire; Fäh, Donat
2015-01-01
Frequency domain decomposition (FDD) is a well-established spectral technique used in civil engineering to analyse and monitor the modal response of buildings and structures. The method is based on singular value decomposition of the cross-power spectral density matrix from simultaneous array recordings of ambient vibrations. The method has the advantage of retrieving not only the resonance frequencies of the investigated structure but also the corresponding modal shapes, without the need for an absolute reference. This is an important piece of information, which can be used to validate the consistency of numerical models and analytical solutions. We apply this approach using advanced signal processing to evaluate the resonance characteristics of 2-D Alpine sedimentary valleys. In this study, we present the results obtained at Martigny, in the Rhône valley (Switzerland). For the analysis, we use 2 hr of ambient vibration recordings from a linear seismic array deployed perpendicularly to the valley axis. Only the horizontal-axial direction (SH) of the ground motion is considered. Using the FDD method, six separate resonant frequencies are retrieved together with their corresponding modal shapes. We compare the mode shapes with results from classical standard spectral ratios and numerical simulations of ambient vibration recordings.
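A minimal sketch of the frequency domain decomposition step, assuming simultaneous ambient-vibration channels stacked in a NumPy array; the array geometry, sampling rate and segment length are placeholders, not the Martigny configuration:

```python
# Frequency domain decomposition: SVD of the cross-power spectral density
# matrix at each frequency; peaks of the first singular value indicate
# resonances, and the corresponding singular vectors approximate mode shapes.
import numpy as np
from scipy.signal import csd

def fdd(records, fs, nperseg=4096):
    """records: (n_channels, n_samples) array of simultaneous recordings."""
    n_ch = records.shape[0]
    f, _ = csd(records[0], records[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(records[i], records[j], fs=fs, nperseg=nperseg)
    # Singular value decomposition, frequency by frequency
    s1 = np.empty(len(f))
    shapes = np.empty((len(f), n_ch), dtype=complex)
    for k in range(len(f)):
        u, s, _ = np.linalg.svd(G[k])
        s1[k] = s[0]
        shapes[k] = u[:, 0]
    return f, s1, shapes   # pick peaks of s1 to read off modal frequencies and shapes
```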
Park, Jung In; Pruinelli, Lisiane; Westra, Bonnie L; Delaney, Connie W
2014-01-01
With the pervasive implementation of electronic health records (EHR), new opportunities arise for nursing research through the use of EHR data. Increasingly, comparative effectiveness research within and across health systems is conducted to identify the impact of nursing on improving health and health care and on lowering costs of care. Use of EHR data for this type of research requires the use of nationally and internationally recognized nursing terminologies to normalize data. Research methods are evolving as large data sets become available through EHRs. Little is known about the types of research and analytic methods applied to nursing research using EHR data normalized with nursing terminologies. The purpose of this paper is to report on a subset of a systematic review of peer-reviewed studies related to applied nursing informatics research involving EHR data using standardized nursing terminologies.
Compressive Detection of Highly Overlapped Spectra Using Walsh-Hadamard-Based Filter Functions.
Corcoran, Timothy C
2018-03-01
In the chemometric context in which spectral loadings of the analytes are already known, spectral filter functions may be constructed which allow the scores of mixtures of analytes to be determined in on-the-fly fashion directly, by applying a compressive detection strategy. Rather than collecting the entire spectrum over the relevant region for the mixture, a filter function may be applied within the spectrometer itself so that only the scores are recorded. Consequently, compressive detection shrinks data sets tremendously. The Walsh functions, the binary basis used in Walsh-Hadamard transform spectroscopy, form a complete orthonormal set well suited to compressive detection. A method for constructing filter functions using binary fourfold linear combinations of Walsh functions is detailed using mathematics borrowed from genetic algorithm work, as a means of optimizing said functions for a specific set of analytes. These filter functions can be constructed to automatically strip the baseline from analysis. Monte Carlo simulations were performed with a mixture of four highly overlapped Raman loadings and with ten excitation-emission matrix loadings; both sets showed a very high degree of spectral overlap. Reasonable estimates of the true scores were obtained in both simulations using noisy data sets, proving the linearity of the method.
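A minimal sketch of the compressive detection idea with Walsh-derived binary filters; the loadings, scores and filter choice are synthetic placeholders, and the paper's genetic-algorithm optimization of fourfold Walsh combinations is not reproduced here:

```python
# Compressive detection sketch: instead of recording a full spectrum, record a
# few inner products of the spectrum with binary (Walsh-derived) filter
# functions, then solve for the analyte scores. Loadings and scores here are
# synthetic placeholders, not the Raman or EEM data of the paper.
import numpy as np
from scipy.linalg import hadamard

n_channels = 64                       # spectral channels (power of two for Hadamard)
n_analytes = 4
rng = np.random.default_rng(1)

loadings = np.abs(rng.standard_normal((n_channels, n_analytes)))   # known pure-component spectra
true_scores = np.array([1.0, 0.5, 0.2, 0.8])
spectrum = loadings @ true_scores + 0.01 * rng.standard_normal(n_channels)

H = hadamard(n_channels)              # rows are Walsh-type +/-1 functions
filters = (H[1:9] + 1) // 2           # 8 binary (0/1) filter functions

measurements = filters @ spectrum     # what a compressive detector would record
# Recover the scores from the reduced measurements by least squares
A = filters @ loadings
scores, *_ = np.linalg.lstsq(A, measurements, rcond=None)
print(np.round(scores, 3))
```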
Recording 2-D Nutation NQR Spectra by Random Sampling Method
Sinyavsky, Nikolaj; Jadzyn, Maciej; Ostafin, Michal; Nogaj, Boleslaw
2010-01-01
The method of random sampling was introduced for the first time in the nutation nuclear quadrupole resonance (NQR) spectroscopy where the nutation spectra show characteristic singularities in the form of shoulders. The analytic formulae for complex two-dimensional (2-D) nutation NQR spectra (I = 3/2) were obtained and the condition for resolving the spectral singularities for small values of an asymmetry parameter η was determined. Our results show that the method of random sampling of a nutation interferogram allows significant reduction of time required to perform a 2-D nutation experiment and does not worsen the spectral resolution. PMID:20949121
Calibration and accuracy analysis of a focused plenoptic camera
NASA Astrophysics Data System (ADS)
Zeller, N.; Quint, F.; Stilla, U.
2014-08-01
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method that is already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach, which is based on a Taylor-series approximation. Both model-based methods show significant advantages compared to the curve-fitting method. They need fewer reference points for calibration than the curve-fitting method and, moreover, supply a function which remains valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.
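A minimal sketch of the curve-fitting style of depth calibration that the model-based methods are compared against, assuming hypothetical reference points; as the abstract notes, such a fit is only trustworthy inside the calibrated range:

```python
# Curve-fitting style depth calibration: map the camera's raw (virtual) depth
# values to metric object distances with a low-order polynomial, as a stand-in
# for the Taylor-series approach the paper compares against. Reference points
# below are hypothetical.
import numpy as np

virtual_depth = np.array([2.1, 2.6, 3.2, 3.9, 4.7, 5.6])   # raw plenoptic depth (arbitrary units)
object_dist_m = np.array([0.5, 0.7, 1.0, 1.5, 2.2, 3.0])   # measured reference distances (m)

coeffs = np.polyfit(virtual_depth, object_dist_m, deg=3)    # 3rd-order fit
calib = np.poly1d(coeffs)

print(calib(3.5))    # estimated metric distance for a new raw depth value
```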
Taguchi, Katsuyuki; Zhang, Mengxi; Frey, Eric C; Wang, Xiaolan; Iwanczyk, Jan S; Nygard, Einar; Hartsough, Neal E; Tsui, Benjamin M W; Barber, William C
2011-02-01
Recently, photon counting x-ray detectors (PCXDs) with energy discrimination capabilities have been developed for potential use in clinical computed tomography (CT) scanners. These PCXDs have great potential to improve the quality of CT images due to the absence of electronic noise, the weights applied to the counts, and the additional spectral information. With the high count rates encountered in clinical CT, however, coincident photons are recorded as one event with a higher or lower energy due to the finite speed of the PCXD. This phenomenon is called a "pulse pileup event" and results in both a loss of counts (called "deadtime losses") and distortion of the recorded energy spectrum. Even though the performance of PCXDs is being improved, it is essential to develop algorithmic methods based on accurate models of the detector properties to compensate for these effects. To date, only one PCXD (model DXMCT-1, DxRay, Inc., Northridge, CA) has been used for clinical CT studies. The aim of this study was to evaluate the agreement between data measured by the DXMCT-1 and those predicted by analytical models for the energy response, the deadtime losses, and the distorted recorded spectrum caused by pulse pileup effects. An energy calibration was performed using 99mTc (140 keV), 57Co (122 keV), and an x-ray beam obtained with four x-ray tube voltages (35, 50, 65, and 80 kVp). The DXMCT-1 was placed 150 mm from the x-ray focal spot; the count rates and the spectra were recorded at various tube current values from 10 to 500 μA for a tube voltage of 80 kVp. Using these measurements, for each pulse height comparator we estimated three parameters describing the photon energy-pulse height curve, the detector deadtime τ, a coefficient k that relates the x-ray tube current I to an incident count rate a by a = k × I, and the incident spectrum. The mean pulse shape of all comparators was acquired in a separate study and was used in the model to estimate the distorted recorded spectrum. The agreement between data measured by the DXMCT-1 and those predicted by the models was quantified by the coefficient of variation (COV), i.e., the root mean square difference divided by the mean of the measurements. Photon energy versus pulse height curves calculated with an analytical model and those measured using the DXMCT-1 were in agreement within 0.2% in terms of the COV. The COV between the output count rates measured and those predicted by analytical models was 2.5% for deadtime losses of up to 60%. The COVs between spectra measured and those predicted by the detector model were within 3.7%-7.2% for deadtime losses of 19%-46%. It has been demonstrated that the performance of the DXMCT-1 agreed exceptionally well with the analytical models regarding the energy response, the count rate, and the recorded spectrum with pulse pileup effects. These models will be useful in developing methods to compensate for these effects in PCXD-based clinical CT systems.
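A minimal sketch of the nonparalyzable count-rate model and the coefficient-of-variation comparison described above; the deadtime, the tube-current coefficient and the simulated noise are illustrative values, not the DXMCT-1 calibration results:

```python
# Nonparalyzable count-rate model and the coefficient of variation (COV) used
# to compare measured and predicted values. tau, k and the currents are
# illustrative numbers only.
import numpy as np

tau = 2e-7            # detector deadtime (s), assumed
k = 5e3               # counts per second per microampere, assumed
current_uA = np.array([10, 50, 100, 250, 500], dtype=float)

incident = k * current_uA                    # a = k * I
recorded = incident / (1 + incident * tau)   # nonparalyzable deadtime model
loss = 1 - recorded / incident               # fractional deadtime loss

# Pretend "measured" rates with 1% noise, then quantify agreement by the COV
measured = recorded * (1 + 0.01 * np.random.default_rng(2).standard_normal(recorded.size))
cov = np.sqrt(np.mean((measured - recorded) ** 2)) / np.mean(measured)
print(loss, cov)
```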
Contact-coupled impact of slender rods: analysis and experimental validation
Tibbitts, Ira B.; Kakarla, Deepika; Siskey, Stephanie; Ochoa, Jorge A.; Ong, Kevin L.; Brannon, Rebecca M.
2013-01-01
To validate models of contact mechanics in low speed structural impact, slender rods were impacted in a drop tower, and measurements of the contact and vibration were compared to analytical and finite element (FE) models. The contact area was recorded using a novel thin-film transfer technique, and the contact duration was measured using electrical continuity. Strain gages recorded the vibratory strain in one rod, and a laser Doppler vibrometer measured speed. The experiment was modeled analytically on a one-dimensional spatial domain using a quasi-static Hertzian contact law and a system of delay differential equations. The three-dimensional FE model used hexahedral elements, a penalty contact algorithm, and explicit time integration. A small submodel taken from the initial global FE model economically refined the analysis in the small contact region. Measured contact areas were within 6% of both models’ predictions, peak speeds within 2%, cyclic strains within 12 με (RMS value), and contact durations within 2 μs. The global FE model and the measurements revealed small disturbances, not predicted by the analytical model, believed to be caused by interactions of the non-planar stress wavefront with the rod’s ends. The accuracy of the predictions for this simple test, as well as the versatility of the diagnostic tools, validates the theoretical and computational models, corroborates instrument calibration, and establishes confidence that the same methods may be used in experimental and computational study of contact mechanics during impact of more complicated structures. Recommendations are made for applying the methods to a particular biomechanical problem: the edge-loading of a loose prosthetic hip joint which can lead to premature wear and prosthesis failure. PMID:24729630
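The analytical rod-impact model couples wave propagation to a quasi-static Hertzian contact law. A minimal sketch of such a contact law of the form F = (4/3)·E*·√R·δ^(3/2), with illustrative material and geometry values rather than those of the tested rods:

```python
# Quasi-static Hertzian contact force for two elastic bodies in point contact.
# Geometry and material values are illustrative placeholders.
import numpy as np

def hertz_force(delta, E1, nu1, E2, nu2, R1, R2=np.inf):
    """Contact force (N) for interpenetration delta (m)."""
    E_eff = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)          # effective modulus
    R_eff = R1 if np.isinf(R2) else 1.0 / (1.0 / R1 + 1.0 / R2)    # effective radius
    return (4.0 / 3.0) * E_eff * np.sqrt(R_eff) * np.maximum(delta, 0.0) ** 1.5

# steel on steel, 10 mm tip radius, 5 micrometres of interpenetration
print(hertz_force(5e-6, 210e9, 0.3, 210e9, 0.3, 0.010))
```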
Moore, Jeffrey C; Spink, John; Lipp, Markus
2012-04-01
Food ingredient fraud and economically motivated adulteration are emerging risks, but a comprehensive compilation of information about known problematic ingredients and detection methods does not currently exist. The objectives of this research were to collect such information from publicly available articles in scholarly journals and general media, organize it into a database, and review and analyze the data to identify trends. The result is a database that will be published in the US Pharmacopeial Convention's Food Chemicals Codex, 8th edition, and includes 1305 records, including 1000 records with analytical methods collected from 677 references. Olive oil, milk, honey, and saffron were the most common targets for adulteration reported in scholarly journals, and potentially harmful issues identified include spices diluted with lead chromate and lead tetraoxide, substitution of Chinese star anise with toxic Japanese star anise, and melamine adulteration of high-protein-content foods. High-performance liquid chromatography and infrared spectroscopy were the most common analytical detection procedures, and chemometric data analysis was used in a large number of reports. Future expansion of this database will include additional publicly available articles published before 1980 and in other languages, as well as data outside the public domain. The authors recommend in-depth analyses of individual incidents. This report describes the development and application of a database of food ingredient fraud issues from publicly available references. The database provides baseline information and data useful to governments, agencies, and individual companies assessing the risks of specific products produced in specific regions as well as products distributed and sold in other regions. In addition, the report describes current analytical technologies for detecting food fraud and identifies trends and developments. © 2012 US Pharmacopeia. Journal of Food Science © 2012 Institute of Food Technologists®
Wavelet-based analysis of circadian behavioral rhythms.
Leise, Tanya L
2015-01-01
The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. © 2015 Elsevier Inc. All rights reserved.
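A minimal sketch of the analytic (complex Morlet) wavelet analysis of an activity record, assuming the PyWavelets library and a synthetic two-week record; the modulus ridge tracks instantaneous period and amplitude, and the angle gives phase:

```python
# Analytic wavelet transform of a circadian activity record: the modulus gives
# instantaneous amplitude and the ridge of maximal modulus tracks the
# instantaneous period. The record below is synthetic.
import numpy as np
import pywt

dt_hours = 0.25                                   # 15-min bins, assumed
t = np.arange(0, 14 * 24, dt_hours)               # two weeks of data (hours)
period = 24 + 0.5 * np.sin(2 * np.pi * t / (7 * 24))   # slowly drifting period
activity = 1 + np.cos(2 * np.pi * t / period) \
           + 0.3 * np.random.default_rng(3).standard_normal(t.size)

scales = np.arange(60, 140)                       # covers roughly 15-35 h periods here
coefs, freqs = pywt.cwt(activity, scales, 'cmor1.5-1.0', sampling_period=dt_hours)

ridge = np.abs(coefs).argmax(axis=0)              # strongest scale at each time point
inst_period = 1.0 / freqs[ridge]                  # instantaneous period (hours)
inst_amp = np.abs(coefs)[ridge, np.arange(t.size)]
inst_phase = np.angle(coefs)[ridge, np.arange(t.size)]
print(inst_period[::96])                          # one value per day
```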
Two-condition within-participant statistical mediation analysis: A path-analytic framework.
Montoya, Amanda K; Hayes, Andrew F
2017-03-01
Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in both of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths-the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
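A minimal sketch of the two-condition within-participant indirect effect with a percentile bootstrap confidence interval; the data are simulated, and the computation is a simplified stand-in for the authors' macros, using difference scores and the mean-centred mediator sum as a covariate:

```python
# Within-participant mediation as a product of paths: a = mean difference in M
# between conditions, b = slope of the Y difference on the M difference.
# Percentile bootstrap CI for a*b. Data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(4)
n = 80
m1 = rng.normal(0, 1, n);  m2 = m1 + 0.6 + rng.normal(0, 1, n)   # mediator, conditions 1 and 2
y1 = 0.5 * m1 + rng.normal(0, 1, n);  y2 = 0.5 * m2 + 0.3 + rng.normal(0, 1, n)

def indirect_effect(idx):
    md = (m2 - m1)[idx]                       # mediator difference scores
    msum = ((m1 + m2) / 2)[idx]
    msum = msum - msum.mean()                 # mean-centred mediator average as covariate
    yd = (y2 - y1)[idx]                       # outcome difference scores
    a = md.mean()
    X = np.column_stack([np.ones(idx.size), md, msum])
    b = np.linalg.lstsq(X, yd, rcond=None)[0][1]
    return a * b

point = indirect_effect(np.arange(n))
boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(5000)])
ci = np.percentile(boot, [2.5, 97.5])
print(point, ci)
```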
Bizzi, Cezar A; Cruz, Sandra M; Schmidt, Lucas; Burrow, Robert A; Barin, Juliano S; Paniz, Jose N G; Flores, Erico M M
2018-04-03
A new method for analytical applications based on the Maxwell-Wagner effect is proposed. Owing to the interaction of carbonaceous materials with an electromagnetic field in the microwave frequency range, very fast heating is observed due to interfacial polarization, which results in localized microplasma formation. This effect was evaluated in this work using a monomode microwave system, and temperature was recorded using an infrared camera. For analytical applications, a closed reactor under oxygen pressure was evaluated. The combination of high temperature and an oxidant atmosphere resulted in a very effective self-ignition reaction of the sample, allowing its use as a sample preparation procedure for further elemental analysis. After optimization, a high sample mass (up to 600 mg of coal and graphite) was efficiently digested using only 4 mol L⁻¹ HNO₃ as the absorbing solution. Several elements (Ba, Ca, Fe, K, Li, Mg, Na, and Zn) were determined by inductively coupled plasma optical emission spectrometry (ICP-OES). Accuracy was evaluated by using a certified reference material (NIST 1632b). Blanks were negligible, and only a diluted solution was required for analyte absorption, preventing residue generation and bringing the proposed method into agreement with green chemistry recommendations. The feasibility of the proposed method for hard-to-digest materials, the minimization of reagent consumption, and the possibility of multi-elemental analysis with lower blanks and better limits of detection can be considered the main advantages of this method.
Oyaert, Matthijs; Van Maerken, Tom; Bridts, Silke; Van Loon, Silvi; Laverge, Heleen; Stove, Veronique
2018-03-01
Point-of-care blood gas test results may benefit therapeutic decision making by their immediate impact on patient care. We evaluated the (pre-)analytical performance of a novel cartridge-type blood gas analyzer, the GEM Premier 5000 (Werfen), for the determination of pH, partial carbon dioxide pressure (pCO2), partial oxygen pressure (pO2), sodium (Na+), potassium (K+), chloride (Cl-), ionized calcium (iCa2+), glucose, lactate, and total hemoglobin (tHb). Total imprecision was estimated according to the CLSI EP5-A2 protocol. The estimated total error was calculated based on the mean of the range claimed by the manufacturer. Based on the CLSI EP9-A2 evaluation protocol, a method comparison with the Siemens RapidPoint 500 and Abbott i-STAT CG8+ was performed. The obtained data were compared against preset quality specifications. Interference of potential pre-analytical confounders on co-oximetry and electrolyte concentrations was studied. The analytical performance was acceptable for all parameters tested. Method comparison demonstrated good agreement with the RapidPoint 500 and i-STAT CG8+, except for some parameters (RapidPoint 500: pCO2, K+, lactate and tHb; i-STAT CG8+: pO2, Na+, iCa2+ and tHb) for which significant differences between analyzers were recorded. No interference of lipemia or methylene blue on co-oximetry results was found. On the contrary, significant interference of benzalkonium and hemolysis on electrolyte measurements was found, for which the user is notified by an interferent-specific flag. Identification of sample errors from pre-analytical sources, such as interferences, and automatic corrective actions, along with the analytical performance, ease of use and low maintenance time of the instrument, make the evaluated instrument a suitable blood gas analyzer for both POCT and laboratory use. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
von Gunten, Lucien; D'Andrea, William J.; Bradley, Raymond S.; Huang, Yongsong
2012-01-01
High-resolution paleoclimate reconstructions are often restricted by the difficulties of sampling geologic archives in great detail and the analytical costs of processing large numbers of samples. Using sediments from Lake Braya Sø, Greenland, we introduce a new method that provides a quantitative high-resolution paleoclimate record by combining measurements of the alkenone unsaturation index with non-destructive scanning reflectance spectroscopic measurements in the visible range (VIS-RS). The proxy-to-proxy (PTP) method exploits two distinct calibrations: the in situ calibration of the alkenone unsaturation index to lake water temperature and the calibration of scanning VIS-RS data to down-core alkenone data. Using this approach, we produced a quantitative temperature record that is longer and has 5 times higher sampling resolution than the original alkenone time series, thereby allowing detection of temperature variability in frequency bands characteristic of the AMO over the past 7,000 years. PMID:22934132
Friese, K C; Grobecker, K H; Wätjen, U
2001-07-01
A method has been developed for measurement of the homogeneity of analyte distribution in powdered materials by use of electrothermal vaporization with inductively coupled plasma mass spectrometric (ETV-ICP-MS) detection. The method enabled the simultaneous determination of As, Cd, Cu, Fe, Mn, Pb, and Zn in milligram amounts of samples of biological origin. The optimized conditions comprised a high plasma power of 1,500 W, reduced aerosol transport flow, and heating ramps below 300 °C s⁻¹. A temperature ramp to 550 °C ensured effective pyrolysis of approximately 70% of the organic compounds without losses of analyte. An additional hold stage at 700 °C led to separation of most of the analyte signals from the evaporation of carbonaceous matrix compounds. The effect of time resolution of signal acquisition on the precision of the ETV measurements was investigated. An increase in the number of masses monitored up to 20 is possible with not more than 1% additional relative standard deviation of results caused by limited temporal resolution of the transient signals. Recording of signals from the nebulization of aqueous standards in each sample run enabled correction for drift of the sensitivity of the ETV-ICP-MS instrument. The applicability of the developed method to homogeneity studies was assessed by use of four certified reference materials. According to the best repeatability observed in these sample runs, the maximum contribution of the method to the standard deviation is approximately 5% to 6% for all the elements investigated.
Schweitzer, Mary Higby; Schroeter, Elena R; Goshe, Michael B
2014-07-15
Advances in resolution and sensitivity of analytical techniques have provided novel applications, including the analyses of fossil material. However, the recovery of original proteinaceous components from very old fossil samples (defined as >1 million years (1 Ma) from previously named limits in the literature) is far from trivial. Here, we discuss the challenges to recovery of proteinaceous components from fossils, and the need for new sample preparation techniques, analytical methods, and bioinformatics to optimize and fully utilize the great potential of information locked in the fossil record. We present evidence for survival of original components across geological time, and discuss the potential benefits of recovery, analyses, and interpretation of fossil materials older than 1 Ma, both within and outside of the fields of evolutionary biology.
AbdelRahman, Samir E; Zhang, Mingyuan; Bray, Bruce E; Kawamoto, Kensaku
2014-05-27
The aim of this study was to propose an analytical approach to develop high-performing predictive models for congestive heart failure (CHF) readmission using an operational dataset with incomplete records and changing data over time. Our analytical approach involves three steps: pre-processing, systematic model development, and risk factor analysis. For pre-processing, variables that were absent in >50% of records were removed. Moreover, the dataset was divided into a validation dataset and derivation datasets, which were separated into three temporal subsets based on changes to the data over time. For systematic model development, using the different temporal datasets and the remaining explanatory variables, the models were developed by combining the use of various (i) statistical analyses to explore the relationships between the validation and the derivation datasets; (ii) adjustment methods for handling missing values; (iii) classifiers; (iv) feature selection methods; and (v) discretization methods. We then selected the best derivation dataset and the models with the highest predictive performance. For risk factor analysis, factors in the highest-performing predictive models were analyzed and ranked using (i) statistical analyses of the best derivation dataset, (ii) feature rankers, and (iii) a newly developed algorithm to categorize risk factors as strong, regular, or weak. The analysis dataset consisted of 2,787 CHF hospitalizations at University of Utah Health Care from January 2003 to June 2013. In this study, we used the complete-case analysis and mean-based imputation adjustment methods; the wrapper subset feature selection method; and four ranking strategies based on information gain, gain ratio, symmetrical uncertainty, and wrapper subset feature evaluators. The best-performing models resulted from the use of a complete-case analysis derivation dataset combined with the Class-Attribute Contingency Coefficient discretization method and a voting classifier which averaged the results of multinomial logistic regression and voting feature intervals classifiers. Of 42 final model risk factors, discharge disposition, discretized age, and indicators of anemia were the most significant. This model achieved a c-statistic of 86.8%. The proposed three-step analytical approach enhanced predictive model performance for CHF readmissions. It could potentially be leveraged to improve predictive model performance in other areas of clinical medicine.
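A minimal sketch of the probability-averaging ("voting") idea evaluated with the c-statistic under cross-validation; scikit-learn classifiers stand in for the tools used in the study, and the features and readmission labels are simulated placeholders:

```python
# Soft-voting ensemble: average predicted probabilities from two classifiers
# and score with ROC AUC (the c-statistic) under 5-fold cross-validation.
# Predictors and labels are simulated; this is not the study's model.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.standard_normal((500, 12))                                          # placeholder predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(500) > 0).astype(int)    # placeholder readmission flag

model = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)), ("nb", GaussianNB())],
    voting="soft",                       # average predicted probabilities
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(auc.mean())
```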
Quantitative Characterization of Tissue Microstructure with Temporal Diffusion Spectroscopy
Xu, Junzhong; Does, Mark D.; Gore, John C.
2009-01-01
The signals recorded by diffusion-weighted magnetic resonance imaging (DWI) are dependent on the micro-structural properties of biological tissues, so it is possible to obtain quantitative structural information non-invasively from such measurements. Oscillating gradient spin echo (OGSE) methods have the ability to probe the behavior of water diffusion over different time scales and the potential to detect variations in intracellular structure. To assist in the interpretation of OGSE data, analytical expressions have been derived for diffusion-weighted signals with OGSE methods for restricted diffusion in some typical structures, including parallel planes, cylinders and spheres, using the theory of temporal diffusion spectroscopy. These analytical predictions have been confirmed with computer simulations. These expressions suggest how OGSE signals from biological tissues should be analyzed to characterize tissue microstructure, including how to estimate cell nuclear sizes. This approach provides a model to interpret diffusion data obtained from OGSE measurements that can be used for applications such as monitoring tumor response to treatment in vivo. PMID:19616979
Taguchi, Katsuyuki; Frey, Eric C.; Wang, Xiaolan; Iwanczyk, Jan S.; Barber, William C.
2010-01-01
Purpose: Recently, novel CdTe photon counting x-ray detectors (PCXDs) with energy discrimination capabilities have been developed. When such detectors are operated under a high x-ray flux, however, coincident pulses distort the recorded energy spectrum. These distortions are called pulse pileup effects. It is essential to compensate for these effects on the recorded energy spectrum in order to take full advantage of spectral information PCXDs provide. Such compensation can be achieved by incorporating a pileup model into the image reconstruction process for computed tomography, that is, as a part of the forward imaging process, and iteratively estimating either the imaged object or the line integrals using, e.g., a maximum likelihood approach. The aim of this study was to develop a new analytical pulse pileup model for both peak and tail pileup effects for nonparalyzable detectors. Methods: The model takes into account the following factors: The bipolar shape of the pulse, the distribution function of time intervals between random events, and the input probability density function of photon energies. The authors used Monte Carlo simulations to evaluate the model. Results: The recorded spectra estimated by the model were in an excellent agreement with those obtained by Monte Carlo simulations for various levels of pulse pileup effects. The coefficients of variation (i.e., the root mean square difference divided by the mean of measurements) were 5.3%–10.0% for deadtime losses of 1%–50% with a polychromatic incident x-ray spectrum. Conclusions: The proposed pulse pileup model can predict recorded spectrum with relatively good accuracy. PMID:20879558
Soltwisch, Jens; Jaskolla, Thorsten W; Dreisewerd, Klaus
2013-10-01
The success of matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) as a widely employed analytical tool in the biomolecular sciences builds strongly on an effective laser-material interaction that results in a soft co-desorption and ionization of matrix and embedded biomolecules. To obtain a maximized ion yield for the analyte(s) of interest, in general both wavelength and fluence need to be tuned to match the specific optical absorption profile of the matrix used. However, commonly only lasers with fixed emission wavelengths of either 337 or 355 nm are used for MALDI-MS. Here, we employed a wavelength-tunable dye laser and recorded both the neutral material ejection and the MS ion data in a wide wavelength and fluence range between 280 and 377.5 nm. α-Cyano-4-hydroxycinnamic acid (HCCA), 4-chloro-α-cyanocinnamic acid (ClCCA), α-cyano-2,4-difluorocinnamic acid (DiFCCA), and 2,5-dihydroxybenzoic acid (DHB) were investigated as matrices, and several peptides as analytes. Recording of the material ejection was achieved by adopting a photoacoustic approach. Relative ion yields were derived by dividing the ion signals by the photoacoustic signals. In this way, distinct wavelength/fluence regions can be identified for which maximum ion yields were obtained. For the tested matrices, optimal results were achieved for wavelengths corresponding to areas of high optical absorption of the respective matrix and at fluences about a factor of 2-3 above the matrix- and wavelength-dependent ion detection threshold fluences. The material ejection as probed by the photoacoustic method is excellently fitted by the quasithermal model, while a sigmoidal function allows for an empirical description of the ion signal-fluence relationship.
Ryan, Patrick B.; Schuemie, Martijn
2013-01-01
Background: Clinical studies that use observational databases, such as administrative claims and electronic health records, to evaluate the effects of medical products have become commonplace. These studies begin by selecting a particular study design, such as a case control, cohort, or self-controlled design, and different authors can and do choose different designs for the same clinical question. Furthermore, published papers invariably report the study design but do not discuss the rationale for the specific choice. Studies of the same clinical question with different designs, however, can generate different results, sometimes with strikingly different implications. Even within a specific study design, authors make many different analytic choices and these too can profoundly impact results. In this paper, we systematically study heterogeneity due to the type of study design and due to analytic choices within study design. Methods and findings: We conducted our analysis in 10 observational healthcare databases but mostly present our results in the context of the GE Centricity EMR database, an electronic health record database containing data for 11.2 million lives. We considered the impact of three different study design choices on estimates of associations between bisphosphonates and four particular health outcomes for which there is no evidence of an association. We show that applying alternative study designs can yield discrepant results, in terms of direction and significance of association. We also highlight that while traditional univariate sensitivity analysis may not show substantial variation, systematic assessment of all analytical choices within a study design can yield inconsistent results ranging from statistically significant decreased risk to statistically significant increased risk. Our findings show that clinical studies using observational databases can be sensitive both to study design choices and to specific analytic choices within study design. Conclusion: More attention is needed to consider how design choices may be impacting results and, when possible, investigators should examine a wide array of possible choices to confirm that significant findings are consistently identified. PMID:25083251
A privacy-preserved analytical method for ehealth database with minimized information loss.
Chen, Ya-Ling; Cheng, Bo-Chao; Chen, Hsueh-Lin; Lin, Chia-I; Liao, Guo-Tan; Hou, Bo-Yu; Hsu, Shih-Chun
2012-01-01
Digitizing medical information is an emerging trend that employs information and communication technology (ICT) to manage health records, diagnostic reports, and other medical data more effectively, in order to improve the overall quality of medical services. However, medical information is highly confidential and involves private information, and even legitimate access to data raises privacy concerns. Medical records provide health information on an as-needed basis for diagnosis and treatment, and the information is also important for medical research and other health management applications. Traditional privacy risk management systems have focused on reducing re-identification risk, and they do not consider information loss. In addition, such systems cannot identify and isolate data that carry a high risk of privacy violations. This paper proposes the Hiatus Tailor (HT) system, which ensures low re-identification risk for medical records, while providing more authenticated information to database users and identifying high-risk data in the database for better system management. The experimental results demonstrate that the HT system achieves much lower information loss than traditional risk management methods, with the same risk of re-identification.
Capturing commemoration: Using mobile recordings within memory research
Birdsall, Carolyn; Drozdzewski, Danielle
2017-01-01
This paper details the contribution of mobile devices to capturing commemoration in action. It investigates the incorporation of audio and sound recording devices, observation, and note-taking into a mobile (auto)ethnographic research methodology, to research a large-scale commemorative event in Amsterdam, the Netherlands. On May 4, 2016, the sounds of a Silent March—through the streets of Amsterdam to Dam Square—were recorded and complemented by video grabs of the march’s participants and onlookers. We discuss how the mixed method enabled a multilevel analysis across visual, textual, and aural layers of the commemorative atmosphere. Our visual data aided in our evaluation of the construction of collective spectacle, while the audio data necessitated that we venture into new analytic territory. Using Sonic Visualiser, we uncovered alternative methods of “reading” landscape by identifying different sound signatures in the acoustic environment. Together, this aural and visual representation of the May 4 events enabled the identification of spatial markers and the temporal unfolding of the Silent March and the national 2 minutes’ silence in Amsterdam’s Dam Square. PMID:29780585
Visual Analytics in Public Safety: Example Capabilities for Example Government Agencies
2011-10-01
is not limited to: the Police Records Information Management Environment for British Columbia (PRIME-BC), the Police Reporting and Occurrence System...and filtering for rapid identification of relevant documents - Graphical environment for visual evidence marshaling - Interactive linking and...analytical reasoning facilitated by interactive visual interfaces and integration with computational analytics. Indeed, a wide variety of technologies
Ad-Hoc Queries over Document Collections - A Case Study
NASA Astrophysics Data System (ADS)
Löser, Alexander; Lutter, Steffen; Düssel, Patrick; Markl, Volker
We discuss the novel problem of supporting analytical business intelligence queries over web-based textual content, e.g., BI-style reports based on 100,000s of documents from an ad-hoc web search result. Neither conventional search engines nor conventional Business Intelligence and ETL tools address this problem, which lies at the intersection of their capabilities. "Google Squared" and our system GOOLAP.info are examples of this kind of system. They execute information extraction methods over one or several document collections at query time and integrate extracted records into a common view or tabular structure. Frequent extraction and object resolution failures cause incomplete records that cannot be joined into a record answering the query. Our focus is the identification of join-reordering heuristics that maximize the number of complete records answering a structured query. With respect to given costs for document extraction, we propose two novel join operations: the multi-way CJ operator joins records from multiple relationships extracted from a single document, and the two-way join operator DJ ensures data density by removing incomplete records from results. In a preliminary case study we observe that our join-reordering heuristics positively impact result size and record density and lower execution costs.
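A minimal sketch of the density-preserving join idea (the DJ operator's behaviour of discarding incomplete records); the relation and field names are invented for illustration, and this is not the GOOLAP.info implementation:

```python
# Join per-document extractions on a shared key and drop records that remain
# incomplete, so only complete records contribute to the structured answer.
from typing import Dict, List

def dense_join(left: List[Dict], right: List[Dict], key: str, required: List[str]) -> List[Dict]:
    index = {}
    for r in right:
        if key in r:
            index.setdefault(r[key], []).append(r)
    joined = []
    for l in left:
        for r in index.get(l.get(key), []):
            rec = {**l, **r}
            if all(rec.get(f) not in (None, "") for f in required):   # density check
                joined.append(rec)
    return joined

companies = [{"company": "Acme", "ceo": "J. Doe"}, {"company": "Foo Corp"}]
financials = [{"company": "Acme", "revenue": "1.2B"}, {"company": "Foo Corp", "revenue": None}]
print(dense_join(companies, financials, "company", ["ceo", "revenue"]))
```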
Abdul-Karim, Nadia; Blackman, Christopher S; Gill, Philip P; Karu, Kersti
2016-10-05
The continued use of explosive devices, as well as the ever-growing threat of 'dirty' bombs, necessitates a comprehensive understanding of particle dispersal during detonation events in order to develop effective methods for targeting explosive and/or additive remediation efforts. Herein, the distribution of explosive analytes from controlled detonations of aluminised ammonium nitrate and an RDX-based explosive composition was established by systematically sampling sites positioned around each firing. This is the first experimental study to produce evidence that the post-blast residue mass can distribute according to an approximate inverse-square law model, while also demonstrating for the first time that distribution trends can vary depending on the individual analytes. Furthermore, by incorporating blast-wave overpressure measurements, high-speed imaging for fireball volume recordings, and monitoring of environmental conditions, it was determined that the principal factor affecting all analyte dispersals was the wind direction, with other factors affecting specific analytes to varying degrees. The dispersal mechanism for explosive residue is primarily the smoke cloud, a finding which in itself has wider implications for the environment and for fundamental detonation theory. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
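A minimal sketch of fitting residue mass against distance with an inverse-square model m(r) = k/r², as reported in the abstract; the sampled masses below are invented to illustrate the fit, not the published data:

```python
# Fit recovered residue mass versus distance with an inverse-square model.
# Distances and masses are hypothetical illustration values.
import numpy as np
from scipy.optimize import curve_fit

r = np.array([2.0, 4.0, 6.0, 8.0, 10.0])        # sampling distance from charge (m)
mass = np.array([25.0, 6.7, 2.9, 1.5, 1.1])     # recovered residue mass (arbitrary units)

def inverse_square(r, k):
    return k / r**2

(k_fit,), _ = curve_fit(inverse_square, r, mass)
print(k_fit, inverse_square(r, k_fit))           # fitted constant and predicted masses
```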
Code of Federal Regulations, 2011 CFR
2011-04-01
... authorizations, records of authorized positions, and terminations 6 years. (g) Comparative or analytical..., detail drawings, and records of engineering studies that are part of or performed by the company within...
Code of Federal Regulations, 2010 CFR
2010-04-01
... authorizations, records of authorized positions, and terminations 6 years. (g) Comparative or analytical..., detail drawings, and records of engineering studies that are part of or performed by the company within...
42 CFR 493.1283 - Standard: Test records.
Code of Federal Regulations, 2010 CFR
2010-10-01
... (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Analytic Systems § 493.1283 Standard: Test records. (a) The laboratory must maintain an information or record system that includes the following: (1) The positive identification of the specimen. (2) The date and...
Nestić, Marina; Babić, Sandra; Pavlović, Dragana Mutavdžić; Sutlović, Davorka
2013-09-10
In the present paper, an analytical method based on solid-phase extraction using a molecularly imprinted polymer and gas chromatography-mass spectrometry was developed and validated for the confirmation of THC, THC-OH and THC-COOH in urine samples. Non-covalent molecularly imprinted polymers of THC-OH were prepared using different functional monomers (methacrylic acid, 4-vinylpyridine, and 2-hydroxyethyl methacrylate), ethylene glycol dimethacrylate as a cross-linker and 2,2'-azobis-isobutyronitrile as an initiator of radical polymerization. Analytes were extracted from urine samples using the prepared polymer sorbent with the highest binding selectivity and capacity. Before extraction, urine samples were hydrolyzed under alkaline conditions. Elution was performed with chloroform:ethyl acetate (60:40, v/v). Dry extracts were silylated with BSTFA + 1% TMCS. Detection and quantification were performed using gas chromatography-mass spectrometry in single-ion recording mode. The developed method was linear over the range from the LOQ to 150 ng mL⁻¹ for all three analytes. For THC, THC-OH and THC-COOH the LODs were 2.5, 1 and 1 ng mL⁻¹ and the LOQs were 3, 2 and 2 ng mL⁻¹, respectively. The precision, accuracy, recovery and matrix effect were investigated at 5, 25 and 50 ng mL⁻¹. In the investigated concentration range, recoveries were 71.9% for THC, 78.6% for THC-OH and 75.2% for THC-COOH. The matrix effect was not significant (<10%) for all analytes in the concentration range from 5 ng mL⁻¹ to 50 ng mL⁻¹. Extraction recovery on the non-imprinted polymer was relatively high, indicating high non-specific binding. The optimized and validated method was applied to 15 post-mortem urine samples. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Encounter Detection Using Visual Analytics to Improve Maritime Domain Awareness
2015-06-01
assigned to be processed in a record set consisting of all the records within a one degree of latitude by one degree of longitude square box. For the case ... a degree of latitude by a tenth of a degree of longitude. This prototype further reduces the processing ... A visual analytics process
The recalculation of the original pulse produced by a partial discharge
NASA Technical Reports Server (NTRS)
Tanasescu, F.
1978-01-01
The loads on a dielectric or an insulation arrangement cannot be precisely rated without properly assessing the manner in which a pulse produced by a partial discharge is transmitted from the point of the event to the point where it is recorded. A number of analytical and graphic methods are presented, and computer simulations are used for specific cases of a few measurement circuits. It turns out to be possible to determine the effect of each circuit element and thus make some valid corrections.
Comparison Between Sea Surface Wind Speed Estimates From Reflected GPS Signals and Buoy Measurements
NASA Technical Reports Server (NTRS)
Garrison, James L.; Katzberg, Steven J.; Zavorotny, Valery U.
2000-01-01
Reflected signals from the Global Positioning System (GPS) have been collected from an aircraft at approximately 3.7 km altitude on 5 different days. Estimation of surface wind speed by matching the shape of the reflected signal correlation function against analytical models was demonstrated. Wind speed obtained from this method agreed with that recorded by buoys, with a bias of less than 0.1 m/s and a standard deviation of 1.3 m/s.
Water quality measurements in San Francisco Bay by the U.S. Geological Survey, 1969–2015
Schraga, Tara; Cloern, James E.
2017-01-01
The U.S. Geological Survey (USGS) maintains a place-based research program in San Francisco Bay (USA) that began in 1969 and continues, providing one of the longest records of water-quality measurements in a North American estuary. Constituents include salinity, temperature, light extinction coefficient, and concentrations of chlorophyll-a, dissolved oxygen, suspended particulate matter, nitrate, nitrite, ammonium, silicate, and phosphate. We describe the sampling program, analytical methods, structure of the data record, and how to access all measurements made from 1969 through 2015. We provide a summary of how these data have been used by USGS and other researchers to deepen understanding of how estuaries are structured and function differently from the river and ocean ecosystems they bridge.
How do gut feelings feature in tutorial dialogues on diagnostic reasoning in GP traineeship?
Stolper, C F; Van de Wiel, M W J; Hendriks, R H M; Van Royen, P; Van Bokhoven, M A; Van der Weijden, T; Dinant, G J
2015-05-01
Diagnostic reasoning is considered to be based on the interaction between analytical and non-analytical cognitive processes. Gut feelings, a specific form of non-analytical reasoning, play a substantial role in diagnostic reasoning by general practitioners (GPs) and may activate analytical reasoning. In GP traineeships in the Netherlands, trainees mostly see patients alone but regularly consult with their supervisors to discuss patients and problems, receive feedback, and improve their competencies. In the present study, we examined the discussions of supervisors and their trainees about diagnostic reasoning in these so-called tutorial dialogues and how gut feelings feature in these discussions. 17 tutorial dialogues focussing on diagnostic reasoning were video-recorded and transcribed and the protocols were analysed using a detailed bottom-up and iterative content analysis and coding procedure. The dialogues were segmented into quotes. Each quote received a content code and a participant code. The number of words per code was used as a unit of analysis to quantitatively compare the contributions to the dialogues made by supervisors and trainees, and the attention given to different topics. The dialogues were usually analytical reflections on a trainee's diagnostic reasoning. A hypothetico-deductive strategy was often used, by listing differential diagnoses and discussing what information guided the reasoning process and might confirm or exclude provisional hypotheses. Gut feelings were discussed in seven dialogues. They were used as a tool in diagnostic reasoning, inducing analytical reflection, sometimes on the entire diagnostic reasoning process. The emphasis in these tutorial dialogues was on analytical components of diagnostic reasoning. Discussing gut feelings in tutorial dialogues seems to be a good educational method to familiarize trainees with non-analytical reasoning. Supervisors need specialised knowledge about these aspects of diagnostic reasoning and how to deal with them in medical education.
NASA Astrophysics Data System (ADS)
Hawthorne, Donna; Mitchell, Fraser J. G.
2016-04-01
Globally, in recent years there has been an increase in the scale, intensity and level of destruction caused by wildfires. This can be seen in Ireland, where significant changes in vegetation, land use, agriculture and policy have promoted an increase in fires in the Irish landscape. This study looks at wildfire throughout the Holocene and draws on lacustrine charcoal records from seven study sites spread across Ireland to reconstruct the past fire regimes recorded at each site. This work utilises new and accepted methods of fire history reconstruction to provide a recommended analytical procedure for statistical charcoal analysis. Digital charcoal counting was used and fire regime reconstructions were carried out via the CharAnalysis program. To verify this record, new techniques are employed: an Ensemble-Member strategy to remove the subjectivity associated with parameter selection, a Signal-to-Noise Index to determine if the charcoal record is appropriate for peak detection, and a charcoal peak screening procedure to validate the identified fire events based on bootstrapped samples. This analysis represents the first study of its kind in Ireland, examining the past record of fire on a multi-site and paleoecological timescale, and will provide a baseline level of data which can be built on in the future, when the frequency and intensity of fire are predicted to increase.
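The peak-detection step described above can be illustrated with a generic sketch. The following Python fragment is not the CharAnalysis algorithm or the authors' Ensemble-Member and peak-screening procedures; it only shows, with hypothetical data and parameter values, the common pattern of separating a charcoal series into a slowly varying background and positive anomalies, then flagging anomalies that exceed a noise-based threshold.

    # Generic sketch of charcoal peak detection: background removal followed by a
    # noise-percentile threshold. Hypothetical data; not the CharAnalysis code.
    import numpy as np

    def detect_fire_peaks(char_acc, window=21, quantile=0.95):
        """char_acc: charcoal accumulation series on equal time steps."""
        char_acc = np.asarray(char_acc, dtype=float)
        half = window // 2
        background = np.array([
            np.median(char_acc[max(0, i - half):i + half + 1])
            for i in range(len(char_acc))
        ])
        residual = char_acc - background              # the "peak" component
        noise = residual[residual <= 0]               # treat negative residuals as noise
        threshold = np.quantile(np.abs(noise), quantile) if noise.size else 0.0
        return np.where(residual > threshold)[0]      # indices of candidate fire events

    rng = np.random.default_rng(1)
    series = rng.gamma(2.0, 1.0, size=500)
    series[[50, 200, 333]] += 15.0                    # injected synthetic "fire events"
    print(detect_fire_peaks(series))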
Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R
2016-12-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high-dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms (supervised principal components, regularization, and boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach (or perhaps because of them), SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
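As a minimal, hedged sketch of the cross-validation logic just described (hypothetical data, plain ridge regression, not the authors' analysis of the personality item pool), the Python fragment below chooses a regularization penalty by minimizing a k-fold estimate of expected prediction error rather than the within-sample fit.

    # Minimal sketch: choose a ridge penalty that minimizes an estimate of
    # expected prediction error (EPE) obtained by k-fold cross-validation.
    import numpy as np

    def kfold_mse(X, y, lam, k=5):
        idx = np.arange(len(y))
        errs = []
        for test in np.array_split(idx, k):
            train = np.setdiff1d(idx, test)
            # Ridge solution (X'X + lam*I)^{-1} X'y fitted on the training fold only
            beta = np.linalg.solve(
                X[train].T @ X[train] + lam * np.eye(X.shape[1]),
                X[train].T @ y[train],
            )
            errs.append(np.mean((y[test] - X[test] @ beta) ** 2))
        return np.mean(errs)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))                    # e.g. 50 candidate scale items
    y = X[:, :3] @ np.array([1.0, -0.5, 0.8]) + rng.normal(size=200)

    penalties = [0.01, 0.1, 1.0, 10.0, 100.0]
    best = min(penalties, key=lambda lam: kfold_mse(X, y, lam))
    print("penalty minimizing estimated EPE:", best)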
75 FR 53262 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-31
... a new Privacy Act system of records, JUSTICE/FBI- 021, the Data Integration and Visualization System... Act system of records, the Data Integration and Visualization System (DIVS), Justice/FBI-021. The... investigative mission by enabling access, search, integration, and analytics across multiple existing databases...
Wall, G.R.; Ingleston, H.H.; Litten, S.
2005-01-01
Total mercury (THg) load in rivers is often calculated from a site-specific "rating-curve" based on the relation between THg concentration and river discharge along with a continuous record of river discharge. However, there is no physical explanation as to why river discharge should consistently predict THg or any other suspended analyte. THg loads calculated by the rating-curve method were compared with those calculated by a "continuous surrogate concentration" (CSC) method in which a relation between THg concentration and suspended-sediment concentration (SSC) is constructed; THg loads then can be calculated from the continuous record of SSC and river discharge. The rating-curve and CSC methods, respectively, indicated annual THg loads of 46.4 and 75.1 kg for the Mohawk River, and 52.9 and 33.1 kg for the upper Hudson River. Differences between the results of the two methods are attributed to the inability of the rating-curve method to adequately characterize atypical high flows such as an ice-dam release, or to account for hysteresis, which typically degrades the strength of the relation between stream discharge and concentration of material in suspension. © Springer 2005.
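The two load estimates compared above reduce to a time integration of concentration times discharge; the Python sketch below contrasts a rating-curve load (THg predicted from discharge) with a surrogate-concentration load (THg predicted from SSC). The regression coefficients and synthetic records are hypothetical, not values from the study.

    # Hypothetical sketch of the two load calculations compared above.
    import numpy as np

    # Continuous 15-minute records (synthetic): discharge Q (m3/s) and SSC (mg/L)
    rng = np.random.default_rng(2)
    dt = 15 * 60                                    # seconds per time step
    Q = rng.lognormal(mean=4.0, sigma=0.5, size=35040)
    SSC = 0.05 * Q ** 1.2 * rng.lognormal(0.0, 0.3, size=Q.size)

    # Relations fitted from discrete THg samples (coefficients are made up)
    thg_from_Q = lambda q: 0.002 * q ** 0.9         # rating-curve relation, ng/L
    thg_from_ssc = lambda s: 0.15 * s               # THg-SSC relation, ng/L

    def load_kg(conc_ng_per_L, Q, dt):
        # ng/L * (m3/s * 1000 L/m3) = ng/s; integrate over dt and convert ng -> kg
        return np.sum(conc_ng_per_L * Q * 1000 * dt) * 1e-12

    print("rating-curve load (kg):", load_kg(thg_from_Q(Q), Q, dt))
    print("surrogate (CSC) load (kg):", load_kg(thg_from_ssc(SSC), Q, dt))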
von Oertzen, Timo; Brandmaier, Andreas M
2013-06-01
Structural equation models have become a broadly applied data-analytic framework. Among them, latent growth curve models have become a standard method in longitudinal research. However, researchers often rely solely on rules of thumb about statistical power in their study designs. The theory of power equivalence provides an analytical answer to the question of how design factors, for example, the number of observed indicators and the number of time points assessed in repeated measures, trade off against each other while holding the power for likelihood-ratio tests on the latent structure constant. In this article, we present applications of power-equivalent transformations on a model with data from a previously published study on cognitive aging, and highlight consequences of participant attrition on power. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Lindgren, Annie R; Anderson, Frank E
2018-01-01
Historically, deep-level relationships within the molluscan class Cephalopoda (squids, cuttlefishes, octopods and their relatives) have remained elusive due in part to the considerable morphological diversity of extant taxa, a limited fossil record for species that lack a calcareous shell and difficulties in sampling open ocean taxa. Many conflicts identified by morphologists in the early 1900s remain unresolved today in spite of advances in morphological, molecular and analytical methods. In this study we assess the utility of transcriptome data for resolving cephalopod phylogeny, with special focus on the orders of Decapodiformes (open-eye squids, bobtail squids, cuttlefishes and relatives). To do so, we took new and previously published transcriptome data and used a unique cephalopod core ortholog set to generate a dataset that was subjected to an array of filtering and analytical methods to assess the impacts of: taxon sampling, ortholog number, compositional and rate heterogeneity and incongruence across loci. Analyses indicated that datasets that maximized taxonomic coverage but included fewer orthologs were less stable than datasets that sacrificed taxon sampling to increase the number of orthologs. Clades recovered irrespective of dataset, filtering or analytical method included Octopodiformes (Vampyroteuthis infernalis + octopods), Decapodiformes (squids, cuttlefishes and their relatives), and orders Oegopsida (open-eyed squids) and Myopsida (e.g., loliginid squids). Ordinal-level relationships within Decapodiformes were the most susceptible to dataset perturbation, further emphasizing the challenges associated with uncovering relationships at deep nodes in the cephalopod tree of life. Copyright © 2017 Elsevier Inc. All rights reserved.
Rodriguez, Estrella Sanz; Poynter, Sam; Curran, Mark; Haddad, Paul R; Shellie, Robert A; Nesterenko, Pavel N; Paull, Brett
2015-08-28
Preservation of ionic species within Antarctic ice yields a unique proxy record of the Earth's climate history. Until now, studies have focused on two proxies: the ionic components of sea salt aerosol and methanesulfonic acid. Measurement of all of the major ionic species in ice core samples is typically carried out by ion chromatography. Former methods, whilst providing suitable detection limits, have been based upon off-column preconcentration techniques, requiring larger sample volumes, with potential for sample contamination and/or carryover. Here, a new capillary ion chromatography-based analytical method has been developed for quantitative analysis of limited-volume Antarctic ice core samples. The developed analytical protocol applies capillary ion chromatography (with suppressed conductivity detection) and direct on-column sample injection and focusing, thus eliminating the requirement for off-column sample preconcentration. This limits the total sample volume needed to 300 μL per analysis, allowing for triplicate sample analysis with <1 mL of sample. This new approach provides a reliable and robust analytical method for the simultaneous determination of organic and inorganic anions, including fluoride, methanesulfonate, chloride, sulfate and nitrate anions. Application to composite ice-core samples is demonstrated, with coupling of the capillary ion chromatograph to high-resolution mass spectrometry used to confirm the presence and purity of the observed methanesulfonate peak. Copyright © 2015 Elsevier B.V. All rights reserved.
Analytical quality by design: a tool for regulatory flexibility and robust analytics.
Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for a quality by design (QbD) based analytical approach. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the analytical quality by design (AQbD) approach reduces the number of out-of-trend (OOT) results and out-of-specification (OOS) results due to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement analytical quality by design (AQbD) in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper has been communicated to discuss different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also to correlate it with product quality by design and pharmaceutical analytical technology (PAT).
Analytical Quality by Design: A Tool for Regulatory Flexibility and Robust Analytics
Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for a quality by design (QbD) based analytical approach. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the analytical quality by design (AQbD) approach reduces the number of out-of-trend (OOT) results and out-of-specification (OOS) results due to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement analytical quality by design (AQbD) in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper has been communicated to discuss different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also to correlate it with product quality by design and pharmaceutical analytical technology (PAT). PMID:25722723
Prediction task guided representation learning of medical codes in EHR.
Cui, Liwen; Xie, Xiaolei; Shen, Zuojun
2018-06-18
There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent from prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require a lot of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpus for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in predictive capability of generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.
75 FR 41539 - Privacy Act Systems of Records Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-16
... statistics and analytical studies in support of the function for which the records are collected and maintained, or for related work force studies. While published statistics and studies do not contain...
77 FR 61791 - System of Records; Presidential Management Fellows Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-11
... program personnel for the following reasons: a. To determine basic program eligibility and to evaluate... descriptive statistics and analytical studies in support of the function for which the records are collected...
Age-of-Air, Tape Recorder, and Vertical Transport Schemes
NASA Technical Reports Server (NTRS)
Lin, S.-J.; Einaudi, Franco (Technical Monitor)
2000-01-01
A numerical-analytic investigation of the impacts of vertical transport schemes on the model simulated age-of-air and the so-called 'tape recorder' will be presented using an idealized 1-D column transport model as well as a more realistic 3-D dynamical model. By comparing to the 'exact' solutions of 'age-of-air' and the 'tape recorder' obtainable in the 1-D setting, useful insight is gained on the impacts of numerical diffusion and dispersion of numerical schemes used in global models. Advantages and disadvantages of Eulerian, semi-Lagrangian, and Lagrangian transport schemes will be discussed. Vertical resolution requirement for numerical schemes as well as observing systems for capturing the fine details of the 'tape recorder' or any upward propagating wave-like structures can potentially be derived from the 1-D analytic model.
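As a hedged, schematic companion to the 1-D column model mentioned above (a generic first-order upwind scheme with invented parameter values, not the schemes evaluated in the paper), the following Python fragment advects an annually varying tracer upward and shows how numerical diffusion damps the 'tape recorder' amplitude with height.

    # Schematic 1-D upwind advection of an annually varying tracer ("tape recorder").
    # First-order upwind is deliberately diffusive; higher-order or Lagrangian
    # schemes preserve the upward-propagating signal much better.
    import numpy as np

    nz, dz = 100, 250.0                   # levels and grid spacing (m); invented values
    w = 2e-4                              # constant ascent rate (m/s)
    dt = 6 * 3600.0                       # time step (s)
    courant = w * dt / dz                 # must stay <= 1 for stability
    year = 365.25 * 86400.0
    years = 10

    c = np.zeros(nz)
    amp = np.zeros(nz)
    for n in range(int(years * year / dt)):
        t = n * dt
        c[1:] -= courant * (c[1:] - c[:-1])               # upwind update
        c[0] = np.sin(2 * np.pi * t / year)               # annual cycle imposed at the base
        if t > (years - 1) * year:                        # record amplitude over the last year
            amp = np.maximum(amp, np.abs(c))

    # Amplitude decays with height because of numerical diffusion in the scheme.
    print(np.round(amp[::20], 3))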
A New Method of Obtaining High-Resolution Paleoclimate Records from Speleothem Fluid Inclusions
NASA Astrophysics Data System (ADS)
Logan, A. J.; Horton, T. W.
2010-12-01
We present a new method for stable hydrogen and oxygen isotope analysis of ancient drip water trapped within cave speleothems. Our method improves on existing fluid inclusion isotopic analytical techniques in that it decreases the sample size by a factor of ten or more, dramatically improving the spatial and temporal precision of fluid inclusion-based paleoclimatology. Published thermal extraction methods require large samples (c. 150 mg) and temperatures high enough (c. 500-900°C) to cause calcite decomposition, which is also associated with isotopic fractionation of the trapped fluids. Extraction by crushing faces similar challenges, where the failure to extract all the trapped fluid can result in isotopic fractionation, and samples in excess of 500 mg are required. Our new method combines the strengths of these published thermal and crushing methods using continuous-flow isotope ratio analytical techniques. Our method combines relatively low-temperature (~250°C) thermal decrepitation with cryogenic trapping across a switching valve sample loop. In brief, ~20 mg carbonate samples are dried (75°C for >1 hour) and heated (250°C for >1 hour) in a quartz sample chamber under a continuously flowing stream of ultra-high purity helium. Heating of the sample chamber is achieved by use of a tube furnace. Fluids released during the heating step are trapped in a coiled stainless steel cold trap (~ -98°C) serving as the sample loop in a 6-way switching valve. Trapped fluids are subsequently injected into a high-temperature conversion elemental analyzer by switching the valve and rapidly thawing the trap. This approach yielded accurate and precise measurements of injected liquid water IAEA reference materials (GISP; SMOW2; SLAP2) for both hydrogen and oxygen isotopic compositions. Blanking tests performed on the extraction line demonstrate extremely low line-blank peak heights (<50 mV). Our tests also demonstrate that complete recovery of liquid water is possible and that a minimum quantity of ~100 nL of water was required. In contrast to liquid water analyses, carbonate inclusion waters gave highly variable results. As plenty of signal was produced from relatively small sample sizes (~20 mg), the observed isotopic variation most likely reflects fractionation during fluid extraction or natural isotopic variability. Additional tests and modifications to the extraction procedure are in progress, using a recently collected New Zealand stalagmite from a West Coast cave (DOC collection permit WC-27462-GEO). U-Th age data will accompany a paleoclimate record from this stalagmite obtained using standard carbonate analytical techniques, and compared to the results from our new fluid inclusion analyses.
42 CFR 493.1283 - Standard: Test records.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 5 2014-10-01 2014-10-01 false Standard: Test records. 493.1283 Section 493.1283 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Analytic Systems § 493.1283 Standard: Test records. (...
A Microcomputer-Based Data Acquisition System for Use in Undergraduate Laboratories.
ERIC Educational Resources Information Center
Johnson, Ray L.
1982-01-01
A laboratory computer system based on the Commodore PET 2001 is described including three applications for the undergraduate analytical chemistry laboratory: (1) recording a UV-visible absorption spectrum; (2) recording and use of calibration curves; and (3) recording potentiometric data. Lists of data acquisition programs described are available…
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
SAM Radiochemical Methods Query
Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.
Custovic, Adnan; Ainsworth, John; Arshad, Hasan; Bishop, Christopher; Buchan, Iain; Cullinan, Paul; Devereux, Graham; Henderson, John; Holloway, John; Roberts, Graham; Turner, Steve; Woodcock, Ashley; Simpson, Angela
2015-01-01
We created Asthma e-Lab, a secure web-based research environment to support consistent recording, description and sharing of data, computational/statistical methods and emerging findings across the five UK birth cohorts. The e-Lab serves as a data repository for our unified dataset and provides the computational resources and a scientific social network to support collaborative research. All activities are transparent, and emerging findings are shared via the e-Lab, linked to explanations of analytical methods, thus enabling knowledge transfer. eLab facilitates the iterative interdisciplinary dialogue between clinicians, statisticians, computer scientists, mathematicians, geneticists and basic scientists, capturing collective thought behind the interpretations of findings. PMID:25805205
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture... Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
Hall, Damien
2010-03-15
Observations of the motion of individual molecules in the membrane of a number of different cell types have led to the suggestion that the outer membrane of many eukaryotic cells may be effectively partitioned into microdomains. A major cause of this suggested partitioning is believed to be due to the direct/indirect association of the cytosolic face of the cell membrane with the cortical cytoskeleton. Such intimate association is thought to introduce effective hydrodynamic barriers into the membrane that are capable of frustrating molecular Brownian motion over distance scales greater than the average size of the compartment. To date, the standard analytical method for deducing compartment characteristics has relied on observing the random walk behavior of a labeled lipid or protein at various temporal frequencies and different total lengths of time. Simple theoretical arguments suggest that the presence of restrictive barriers imparts a characteristic turnover to a plot of mean squared displacement versus sampling period that can be interpreted to yield the average dimensions of the compartment expressed as the respective side lengths of a rectangle. In the following series of articles, we used computer simulation methods to investigate how well the conventional analytical strategy coped with heterogeneity in size, shape, and barrier permeability of the cell membrane compartments. We also explored questions relating to the necessary extent of sampling required (with regard to both the recorded time of a single trajectory and the number of trajectories included in the measurement bin) for faithful representation of the actual distribution of compartment sizes found using the SPT technique. In the current investigation, we turned our attention to the analytical characterization of diffusion through cell membrane compartments having both a uniform size and permeability. For this ideal case, we found that (i) an optimum sampling time interval existed for the analysis and (ii) the total length of time for which a trajectory was recorded was a key factor. Copyright (c) 2009 Elsevier Inc. All rights reserved.
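A minimal sketch of the mean-squared-displacement calculation that underlies the analysis discussed above (generic single-particle-tracking code with synthetic free diffusion, not the authors' simulation framework): the MSD is computed as a function of lag time, and in hop-diffusion analyses the lag at which the MSD curve bends over is what gets interpreted in terms of compartment size.

    # Generic MSD computation for one trajectory of (x, y) positions at equal time steps.
    import numpy as np

    def mean_squared_displacement(xy, max_lag):
        xy = np.asarray(xy, dtype=float)
        msd = np.empty(max_lag)
        for lag in range(1, max_lag + 1):
            disp = xy[lag:] - xy[:-lag]
            msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
        return msd

    # Synthetic free (unconfined) diffusion as a check: MSD grows linearly with lag.
    rng = np.random.default_rng(3)
    steps = rng.normal(scale=0.05, size=(10000, 2))   # displacement per frame, hypothetical units
    trajectory = np.cumsum(steps, axis=0)
    print(mean_squared_displacement(trajectory, max_lag=5))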
Tidally induced residual current over the Malin Sea continental slope
NASA Astrophysics Data System (ADS)
Stashchuk, Nataliya; Vlasenko, Vasiliy; Hosegood, Phil; Nimmo-Smith, W. Alex M.
2017-05-01
Tidally induced residual currents generated over shelf-slope topography are investigated analytically and numerically using the Massachusetts Institute of Technology general circulation model. Observational support for the presence of such a slope current was recorded over the Malin Sea continental slope during the 88th cruise of the RRS 'James Cook' in July 2013. A simple analytical formula developed here in the framework of time-averaged shallow water equations has been validated against a fully nonlinear nonhydrostatic numerical solution. A good agreement between analytical and numerical solutions is found for a wide range of input parameters of the tidal flow and bottom topography. In application to the Malin Shelf area, both the numerical model and the analytical solution predicted a northward-moving current confined to the slope with its core located above the 400 m isobath and with vertically averaged maximum velocities up to 8 cm s⁻¹, which is consistent with the in-situ data recorded at three moorings and along cross-slope transects.
Studies in perpendicular magnetic recording
NASA Astrophysics Data System (ADS)
Valcu, Bogdan F.
This dissertation uses both micromagnetic simulation and analytical methods to analyze several aspects of a perpendicular recording system. To increase the head field amplitude, the recording layer is grown on top of a soft magnetic layer (keeper). There is concern about the ability of the keeper to conduct the magnetic flux from the head at high data rates. We compute numerically the magnetization motion of the soft underlayer during the reversal process. Generation of non-linear spin waves characterizes the magnetization dynamics in the keeper, the spins are oscillating with a frequency higher than that of the reversal current. However, the recording field applied to the data layer follows the time dependence of the input wave form. The written transition shape is determined by the competition between the head field gradient and the demagnetizing field gradient. An analytical slope model that takes into consideration the angular orientation of the applied field is used to estimate the transition parameter; agreement is shown with the micromagnetic results. On the playback side, the reciprocity principle is applied to calculate the read out signal from a single magnetic transition in the perpendicular medium. The pulse shape is close to an error-function, going through zero when the sensor is above the transition center and decaying from the peak to an asymptotic value when the transition center is far away. Analytical closed forms for both the slope in the origin and the asymptotic value show the dependence on the recording geometry parameters. The Signal-to-Noise Ratio is calculated assuming that the noise is dominated by the medium jitter. To keep the SNR at a readable level while increasing the areal density, the average magnetic grain diameter must decrease; consequently grain size fluctuations will affect the thermal decay. We performed Transmission Electron Microscopy measurements and observed differences in the grain size distribution between various types of media. Perpendicular media has more non-uniform grains than typical longitudinal media; the difference might appear due to the higher symmetry (related to the crystallographic orientation). The SNR is affected in great measure by the amount of exchange interaction between the grains. The intergranular coupling in CoCr alloys---typical for recording media---is reduced by Cr diffusion at the grain boundary. Micromagnetic modeling with an elementary discrete cell of atomic dimensions is used to calculate the magnetization variations through the grain boundary. An effective exchange interaction parameter is determined in terms of details of the chemical composition.
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180... DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must be...
2D-Visualization of metabolic activity with planar optical chemical sensors (optodes)
NASA Astrophysics Data System (ADS)
Meier, R. J.; Liebsch, G.
2015-12-01
Microbial communities play an outstandingly important role in many hydrologic compartments, for example the benthic community in sediments, or biologically active microorganisms in the capillary fringe, in groundwater, or in soil. Oxygen, pH, and CO2 are key factors and indicators for microbial activity. They can be measured using optical chemical sensors. These sensors record changing fluorescence properties of specific indicator dyes. The signals can be measured in a non-contact mode, even through transparent walls, which is important for many laboratory experiments. They can measure in closed (transparent) systems, without sampling or intruding into the sample. They do not consume the analytes while measuring, are fully reversible and able to measure in non-stirred solutions. These sensors can be applied as high-precision fiberoptic sensors (for profiling), robust sensor spots, or as planar sensors for 2D visualization (imaging). Imaging makes it possible to detect thousands of measurement spots at the same time and to generate 2D analyte maps over a region of interest. It allows for comparing different regions within one recorded image, visualizing spatial analyte gradients, or, more importantly, identifying hot spots of metabolic activity. We present ready-to-use portable imaging systems for the analytes oxygen, pH, and CO2. They consist of a detector unit, planar sensor foils and software for easy data recording and evaluation. Sensor foils for various analytes and measurement ranges enable visualizing metabolic activity or analyte changes in the desired range. Dynamics of metabolic activity can be detected in one shot or over long time periods. We demonstrate the potential of this analytical technique by presenting experiments on benthic disturbance-recovery dynamics in sediments and microbial degradation of organic material in the capillary fringe. We think this technique is a new tool for further understanding how microbial and geochemical processes are linked in hydrologic (and other) systems.
Chemicals identified in human biological media: a data base. Third annual report, October 1981
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cone, M.V.; Baldauf, M.F.; Martin, F.M.
1981-12-01
Part 2 contains the data base in tabular format. There are two sections, the first with records on nondrug substances, and the second with records on drugs. Chemicals in each section are arranged alphabetically by CAS preferred name, CAS registry number, formula, atomic weight, melting point, boiling point, and vapor pressure. Tissues are listed alphabetically with exposure route, analytical method, number of cases, range, and mean - when available in the source document. A variety of information may also be included that is pertinent to the range and mean as well as experimental design, demography, health effects, pathology, morphology, and toxicity. Review articles are included in the data base; however, no data have been extracted from such documents because the original research articles are included.
Water quality measurements in San Francisco Bay by the U.S. Geological Survey, 1969–2015
Schraga, Tara S.; Cloern, James E.
2017-01-01
The U.S. Geological Survey (USGS) maintains a place-based research program in San Francisco Bay (USA) that began in 1969 and continues, providing one of the longest records of water-quality measurements in a North American estuary. Constituents include salinity, temperature, light extinction coefficient, and concentrations of chlorophyll-a, dissolved oxygen, suspended particulate matter, nitrate, nitrite, ammonium, silicate, and phosphate. We describe the sampling program, analytical methods, structure of the data record, and how to access all measurements made from 1969 through 2015. We provide a summary of how these data have been used by USGS and other researchers to deepen understanding of how estuaries are structured and function differently from the river and ocean ecosystems they bridge. PMID:28786972
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355... DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An analytical method suitable for enforcement purposes must be provided for each active ingredient in the...
NASA Technical Reports Server (NTRS)
Mukhopadhyay, A. K.
1979-01-01
Design adequacy of the lead-lag compensator of the frequency loop, accuracy checking of the analytical expression for the electrical motor transfer function, and performance evaluation of the speed control servo of the digital tape recorder used on-board the 1976 Viking Mars Orbiters and Voyager 1977 Jupiter-Saturn flyby spacecraft are analyzed. The transfer functions of the most important parts of a simplified frequency loop used for test simulation are described and ten simulation cases are reported. The first four of these cases illustrate the method of selecting the most suitable transfer function for the hysteresis synchronous motor, while the rest verify and determine the servo performance parameters and alternative servo compensation schemes. It is concluded that the linear methods provide a starting point for the final verification/refinement of servo design by nonlinear time response simulation and that the variation of the parameters of the static/dynamic Coulomb friction is as expected in a long-life space mission environment.
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-01-01
Objectives: To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Materials and methods: Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Results: Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations excluded were due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. Discussion: The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol, and identified differences in patient characteristics and coding practices across databases. Conclusion: Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. PMID:25670757
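As a loose illustration of the vocabulary-mapping step described above (hypothetical tables and column names, not the OMOP CDM schema or the authors' ETL code), the Python sketch below maps source drug codes to standard concepts and reports the fraction of records that map successfully, which is the kind of information-loss check the study performed.

    # Hypothetical sketch: mapping source codes to a standard vocabulary and
    # measuring how much of the record is preserved by the mapping.
    import pandas as pd

    source_records = pd.DataFrame({
        "person_id": [1, 1, 2, 3, 3],
        "source_code": ["A10", "B05", "A10", "ZZZ", "C07"],   # made-up codes
    })
    vocabulary_map = pd.DataFrame({
        "source_code": ["A10", "B05", "C07"],
        "standard_concept_id": [101, 102, 103],
    })

    mapped = source_records.merge(vocabulary_map, on="source_code", how="left")
    mapped_fraction = mapped["standard_concept_id"].notna().mean()
    print(f"{mapped_fraction:.0%} of drug records mapped to the standard vocabulary")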
The Analytical Bibliographer and the Conservator.
ERIC Educational Resources Information Center
Koda, Paul S.
1979-01-01
Discusses areas where the work of analytical bibliographers and conservators overlaps and diverges in relation to their techniques and inquiries in handling physical books. Special attention is paid to their attitudes to physical details, the ways they record information, ethical questions, and the need for a common language. (Author)
NASA Technical Reports Server (NTRS)
1948-01-01
The conference on Turbojet-Engine Thrust-Augmentation Research was organized by the NACA to present in summarized form the results of the latest experimental and analytical investigations conducted at the Lewis Flight Propulsion Laboratory on methods of augmenting the thrust of turbojet engines. The technical discussions are reproduced herewith in the same form in which they were presented. The original presentations in this record are considered complementary to, rather than substitutes for, the committee's system of complete and formal reports.
Azzara, Alyson J; von Zharen, Wyndylyn M; Newcomb, Joal J
2013-12-01
The Gulf of Mexico is a center of marine activities from seismic exploration to shipping, drilling, platform installation, lightering, and construction, among others. This analysis explored whether sperm whales respond to the passage of vessels using changes in total number of clicks during vessel passages as a proxy for potential variation in behavior. The data for this analysis were collected in 2001 as part of a larger Littoral Acoustic Demonstration Center project using the Environmental Acoustics Recording System buoys. These buoys were bottom moored, autonomous, and self-recording systems consisting of an omni-directional hydrophone and instrument package. Data from 36 days of continuous acoustic monitoring were recorded at a sampling rate of 11.725 kHz, and produced reliable recordings from 5 Hz to ∼5.8 kHz. Multiple preparatory steps were executed including calibration of an automatic click detector. Results indicate a significant decrease (32%) in the number of clicks detected as a ship approached an area. There were also significantly fewer clicks detected after the vessel passed than before (23%).
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture... POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karakaya, Mahmut; Qi, Hairong
This paper addresses communication and energy efficiency in collaborative visual sensor networks (VSNs) for people localization, a challenging computer vision problem in its own right. We focus on the design of a lightweight and energy-efficient solution in which people are localized by distributed camera nodes integrating the so-called certainty map generated at each node, which records the target non-existence information within the camera's field of view. We first present a dynamic itinerary for certainty map integration in which not only does each sensor node transmit a very limited amount of data, but only a limited number of camera nodes is involved. Then, we perform a comprehensive analytical study to evaluate the communication and energy efficiency of different integration schemes, i.e., centralized and distributed integration. Based on results obtained from the analytical study and real experiments, the distributed method shows effectiveness in detection accuracy as well as energy and bandwidth efficiency.
Temporal abstraction-based clinical phenotyping with Eureka!
Post, Andrew R; Kurc, Tahsin; Willard, Richie; Rathod, Himanshu; Mansour, Michel; Pai, Akshatha Kalsanka; Torian, William M; Agravat, Sanjay; Sturm, Suzanne; Saltz, Joel H
2013-01-01
Temporal abstraction, a method for specifying and detecting temporal patterns in clinical databases, is very expressive and performs well, but it is difficult for clinical investigators and data analysts to understand. Such patterns are critical in phenotyping patients using their medical records in research and quality improvement. We have previously developed the Analytic Information Warehouse (AIW), which computes such phenotypes using temporal abstraction but requires software engineers to use. We have extended the AIW's web user interface, Eureka! Clinical Analytics, to support specifying phenotypes using an alternative model that we developed with clinical stakeholders. The software converts phenotypes from this model to that of temporal abstraction prior to data processing. The model can represent all phenotypes in a quality improvement project and a growing set of phenotypes in a multi-site research study. Phenotyping that is accessible to investigators and IT personnel may enable its broader adoption.
Whitman, Daniel S; Caleo, Suzette; Carpenter, Nichelle C; Horner, Margaret T; Bernerth, Jeremy B
2012-07-01
This article uses meta-analytic methods (k = 38) to examine the relationship between organizational justice climate and unit-level effectiveness. Overall, our results suggest that the relationship between justice and effectiveness is significant (ρ = .40) when both constructs are construed at the collective level. Our results also indicate that distributive justice climate was most strongly linked with unit-level performance (e.g., productivity, customer satisfaction), whereas interactional justice was most strongly related to unit-level processes (e.g., organizational citizenship behavior, cohesion). We also show that a number of factors moderate this relationship, including justice climate strength, the level of referent in the justice measure, the hierarchical level of the unit, and how criteria are classified. We elaborate on these findings and attempt to provide a clearer direction for future research in this area. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra
2018-02-01
The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma hydroxy butyric acid, human insulin and C-peptide, creatinine, postmortal clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances will be discussed, which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), matrix effects and recovery have to be approached differently. The highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
The Savannah River Site's Groundwater Monitoring Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted during the first quarter of 1992. It includes the analytical data, field data, data review, quality control, and other documentation for this program; provides a record of the program's activities; and serves as an official document of the analytical results.
Severi, Mirko; Becagli, Silvia; Traversi, Rita; Udisti, Roberto
2015-11-17
Recently, increasing interest in the understanding of global climatic changes and of the natural processes related to climate has driven the development and improvement of new analytical methods for the analysis of environmental samples. The determination of trace chemical species is a useful tool in paleoclimatology, and the techniques for the analysis of ice cores have evolved during the past few years from laborious measurements on discrete samples to continuous techniques allowing higher temporal resolution, higher sensitivity and, above all, higher throughput. Two fast ion chromatographic (FIC) methods are presented. The first method was able to measure Cl(-), NO3(-) and SO4(2-) in a melter-based continuous flow system, separating the three analytes in just 1 min. The second method (called Ultra-FIC) was able to perform a single chromatographic analysis in just 30 s, and the resulting sampling resolution was 1.0 cm with a typical melting rate of 4.0 cm min(-1). Both methods combine the accuracy, precision, and low detection limits of ion chromatography with the enhanced speed and high depth resolution of continuous melting systems. Both methods have been tested and validated with the analysis of several hundred meters of different ice cores. In particular, the Ultra-FIC method was used to reconstruct the high-resolution SO4(2-) profile of the last 10,000 years for the EDML ice core, allowing the counting of the annual layers, which represents a key point in dating these kinds of natural archives.
NASA Astrophysics Data System (ADS)
Quiers, M.; Perrette, Y.; Etienne, D.; Develle, A. L.; Jacq, K.
2017-12-01
The use of organic proxies is increasing in paleoenvironmental reconstructions from natural archives. Major advances have been achieved through the development of new, highly informative molecular proxies, usually linked to specific compounds. While studies of targeted compounds offer a high degree of information, advances on bulk organic matter remain limited. This bulk, however, is the main contributor to the carbon cycle and has been shown to drive the transfer and record of many mineral and organic compounds. The development of targeted proxies needs complementary information on bulk organic matter to understand biases linked to controlling factors or analytical methods and to provide robust interpretations. Fluorescence methods have often been employed to characterize and quantify organic matter. However, these techniques were mainly developed for liquid samples, causing loss of material and resolution when working on natural archives (either stalagmites or sediments). High-resolution solid-phase fluorescence (SPF) was developed on speleothems. This method now allows analysis of organic matter quality and quantity, provided that procedures to constrain the optical density are adopted. In particular, a calibration method using liquid-phase fluorescence (LPF) was developed for speleothems, allowing quantification of organic carbon at high resolution. We report here an application of such a combined SPF/LPF procedure to lake sediments. To avoid sediment matrix effects on the fluorescence signal, a calibration using LPF measurements was performed. First results using this method provided a high-resolution record of the quality of different organic matter compounds (humic-like, protein-like and chlorophyll-like compounds) for the sediment core. High-resolution organic matter fluxes were then obtained by applying pragmatic chemometric models (nonlinear models, partial least squares models) to the high-resolution fluorescence data. The SPF method can be considered a promising tool for high-resolution records of organic matter quality and quantity. Potential applications of this method will be discussed (lake ecosystem dynamics, changes in trophic levels).
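The chemometric calibration step mentioned above can be sketched briefly. The Python fragment below (hypothetical data, generic scikit-learn partial least squares regression, not the models or data of the study) calibrates fluorescence spectra against organic carbon measured on discrete reference samples and then predicts carbon for every high-resolution scan.

    # Hypothetical sketch: calibrating solid-phase fluorescence against discrete
    # organic-carbon measurements with partial least squares (PLS) regression.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # X: fluorescence spectra for 60 calibration depths (e.g. 100 emission channels)
    X = rng.normal(size=(60, 100))
    # y: organic carbon measured on the same depths by a reference method
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=60)

    pls = PLSRegression(n_components=3)
    # Cross-validated R^2 guards against overfitting the calibration
    print("CV R^2:", cross_val_score(pls, X, y, cv=5).mean())

    pls.fit(X, y)
    # Predict organic carbon for every high-resolution fluorescence scan
    X_highres = rng.normal(size=(5000, 100))
    toc_highres = pls.predict(X_highres)
    print(toc_highres[:3].ravel())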
Estimating the rate of biological introductions: Lessepsian fishes in the Mediterranean.
Belmaker, Jonathan; Brokovich, Eran; China, Victor; Golani, Daniel; Kiflawi, Moshe
2009-04-01
Sampling issues preclude the direct use of the discovery rate of exotic species as a robust estimate of their rate of introduction. Recently, a method was advanced that allows maximum-likelihood estimation of both the observational probability and the introduction rate from the discovery record. Here, we propose an alternative approach that utilizes the discovery record of native species to control for sampling effort. Implemented in a Bayesian framework using Markov chain Monte Carlo simulations, the approach provides estimates of the rate of introduction of the exotic species, and of additional parameters such as the size of the species pool from which they are drawn. We illustrate the approach using Red Sea fishes recorded in the eastern Mediterranean, after crossing the Suez Canal, and show that the two approaches may lead to different conclusions. The analytical framework is highly flexible and could provide a basis for easy modification to other systems for which first-sighting data on native and introduced species are available.
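To give a concrete, hedged sense of how such an estimate can be obtained (a toy Metropolis sampler for a constant introduction rate, with detection effort proxied by the native-species discovery record; the data, prior, and model structure here are invented and much simpler than the model fitted in the study):

    # Toy Metropolis sampler: exotic first sightings y_t ~ Poisson(lam * effort_t),
    # where effort_t is proxied by the discovery record of native species.
    import numpy as np

    rng = np.random.default_rng(4)
    native = np.array([4, 6, 3, 5, 8, 7, 5, 6, 9, 10], dtype=float)   # invented counts
    exotic = np.array([0, 1, 0, 1, 2, 1, 2, 1, 3, 2], dtype=float)    # invented counts
    effort = native / native.mean()

    def log_post(lam):
        if lam <= 0:
            return -np.inf
        mu = lam * effort
        loglik = np.sum(exotic * np.log(mu) - mu)   # Poisson log-likelihood (constants dropped)
        logprior = -0.1 * lam                        # Exponential(0.1) prior on the rate
        return loglik + logprior

    lam, chain = 1.0, []
    for _ in range(20000):
        prop = lam * np.exp(rng.normal(scale=0.2))   # random walk on the log scale
        # log(prop/lam) is the Jacobian correction for the log-scale proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(lam) + np.log(prop / lam):
            lam = prop
        chain.append(lam)
    print("posterior mean introduction rate per year:", np.mean(chain[5000:]))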
Faigen, Zachary; Deyneka, Lana; Ising, Amy; Neill, Daniel; Conway, Mike; Fairchild, Geoffrey; Gunn, Julia; Swenson, David; Painter, Ian; Johnson, Lauren; Kiley, Chris; Streichert, Laura
2015-01-01
Introduction: We document a funded effort to bridge the gap between constrained scientific challenges of public health surveillance and methodologies from academia and industry. Component tasks are the collection of epidemiologists’ use case problems, multidisciplinary consultancies to refine them, and dissemination of problem requirements and shareable datasets. We describe an initial use case and consultancy as a concrete example and challenge to developers. Materials and Methods: Supported by the Defense Threat Reduction Agency Biosurveillance Ecosystem project, the International Society for Disease Surveillance formed an advisory group to select tractable use case problems and convene inter-disciplinary consultancies to translate analytic needs into well-defined problems and to promote development of applicable solution methods. The initial consultancy’s focus was a problem originated by the North Carolina Department of Health and its NC DETECT surveillance system: Derive a method for detection of patient record clusters worthy of follow-up based on free-text chief complaints and without syndromic classification. Results: Direct communication between public health problem owners and analytic developers was informative to both groups and constructive for the solution development process. The consultancy achieved refinement of the asyndromic detection challenge and of solution requirements. Participants summarized and evaluated solution approaches and discussed dissemination and collaboration strategies. Practice Implications: A solution meeting the specification of the use case described above could improve human monitoring efficiency with expedited warning of events requiring follow-up, including otherwise overlooked events with no syndromic indicators. This approach can remove obstacles to collaboration with efficient, minimal data-sharing and without costly overhead. PMID:26834939
New approach to detect seismic surface waves in 1Hz-sampled GPS time series
Houlié, N.; Occhipinti, G.; Blanchard, T.; Shapiro, N.; Lognonné, P.; Murakami, M.
2011-01-01
Recently, co-seismic source characterization based on GPS measurements has been carried out in the near and far field with remarkable results. However, the accuracy of ground displacement measurements inferred from GPS phase residuals still depends on the distribution of satellites in the sky. We test here a method, based on double difference (DD) computations of the Line of Sight (LOS), that allows detection of 3D co-seismic ground shaking. The DD method is quasi-analytical and free of most of the intrinsic errors affecting GPS measurements. The seismic waves presented in this study produced DD amplitudes 4 and 7 times stronger than the background noise. The method is benchmarked using the GEONET GPS stations that recorded the Hokkaido Earthquake (2003 September 25th, Mw = 8.3). PMID:22355563
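A minimal numeric sketch of the double-difference idea, assuming synthetic line-of-sight residuals: differencing across two stations removes satellite-common errors, and differencing again across two satellites removes station-common errors, leaving the local (here, simulated surface-wave) signal. Station and satellite biases, the noise model, and the waveform are all invented.

```python
# Illustrative double-difference (DD) computation on synthetic line-of-sight (LOS)
# residuals; biases, noise, and the surface-wave term are assumptions.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 120, 1.0)                       # 1 Hz epochs, 120 s

def los(station_bias, sat_bias, wave=0.0):
    # common-mode (station + satellite) errors plus measurement noise
    return station_bias + sat_bias + wave + 0.002 * rng.standard_normal(t.size)

wave = 0.01 * np.sin(2 * np.pi * t / 20.0)       # stand-in 20 s surface-wave signal at station A
losA1, losA2 = los(0.05, 0.02, wave), los(0.05, -0.01)
losB1, losB2 = los(-0.03, 0.02), los(-0.03, -0.01)

sd_sat1 = losA1 - losB1                          # differencing across stations removes satellite terms
sd_sat2 = losA2 - losB2
dd = sd_sat1 - sd_sat2                           # differencing across satellites removes station terms
print("DD rms (signal plus noise only):", dd.std())
```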
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2011 CFR
2011-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2011-04-01 2011-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2014 CFR
2014-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2014-04-01 2014-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2012 CFR
2012-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2012-04-01 2012-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2013 CFR
2013-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2013-04-01 2013-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
NASA Technical Reports Server (NTRS)
James, Mark; Wells, Doug; Allen, Phillip; Wallin, Kim
2017-01-01
Recently proposed modifications to ASTM E399 would provide a new size-insensitive approach to analyzing the force-displacement test record. The proposed size-insensitive linear-elastic fracture toughness, KIsi, targets a consistent 0.5mm crack extension for all specimen sizes by using an offset secant that is a function of the specimen ligament length. The KIsi evaluation also removes the Pmax/PQ criterion and increases the allowable specimen deformation. These latter two changes allow more plasticity at the crack tip, prompting the review undertaken in this work to ensure the validity of this new interpretation of the force-displacement curve. This paper provides a brief review of the proposed KIsi methodology and summarizes a finite element study into the effects of increased crack tip plasticity on the method given the allowance for additional specimen deformation. The study has two primary points of investigation: the effect of crack tip plasticity on compliance change in the force-displacement record and the continued validity of linear-elastic fracture mechanics to describe the crack front conditions. The analytical study illustrates that linear-elastic fracture mechanics assumptions remain valid at the increased deformation limit; however, the influence of plasticity on the compliance change in the test record is problematic. A proposed revision to the validity criteria for the KIsi test method is briefly discussed.
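For context only, the conventional ASTM E399 relation for the conditional toughness of a compact C(T) specimen is sketched below in LaTeX; the ligament-dependent offset secant and validity criteria of the proposed KIsi procedure are not reproduced here, and the geometry function shown is the standard one, quoted as an assumption rather than taken from this paper.

```latex
% Standard compact-specimen relation (context only, not the proposed K_Isi procedure):
K_Q \;=\; \frac{P_Q}{B\sqrt{W}}\, f\!\left(\frac{a}{W}\right),
\qquad
f(\alpha) \;=\; \frac{(2+\alpha)\left(0.886 + 4.64\alpha - 13.32\alpha^{2}
  + 14.72\alpha^{3} - 5.6\alpha^{4}\right)}{(1-\alpha)^{3/2}},
\quad \alpha = \frac{a}{W}.
```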
Sensor for detecting and differentiating chemical analytes
Yi, Dechang [Metuchen, NJ; Senesac, Lawrence R [Knoxville, TN; Thundat, Thomas G [Knoxville, TN
2011-07-05
A sensor for detecting and differentiating chemical analytes includes a microscale body having a first end and a second end and a surface between the ends for adsorbing a chemical analyte. The surface includes at least one conductive heating track for heating the chemical analyte and also a conductive response track, which is electrically isolated from the heating track, for producing a thermal response signal from the chemical analyte. The heating track is electrically connected with a voltage source and the response track is electrically connected with a signal recorder. The microscale body is restrained at the first end and the second end and is substantially isolated from its surroundings therebetween, thus having a bridge configuration.
Eggert, Corinne; Moselle, Kenneth; Protti, Denis; Sanders, Dale
2017-01-01
Closed Loop Analytics© is receiving growing interest in healthcare as a term referring to information technology, local data and clinical analytics working together to generate evidence for improvement. The Closed Loop Analytics model consists of three loops corresponding to the decision-making levels of an organization and the associated data within each loop - Patients, Protocols, and Populations. The authors propose that each of these levels should utilize the same ecosystem of electronic health record (EHR) and enterprise data warehouse (EDW) enabled data, in a closed-loop fashion, with that data being repackaged and delivered to suit the analytic and decision support needs of each level, in support of better outcomes.
Miotto, Riccardo; Glicksberg, Benjamin S.; Morgan, Joseph W.; Dudley, Joel T.
2017-01-01
Monitoring and modeling biomedical, health care and wellness data from individuals and converging data on a population scale have tremendous potential to improve understanding of the transition from the healthy state of human physiology to a disease setting. Wellness monitoring devices and companion software applications capable of generating alerts and sharing data with health care providers or social networks are now available. The accessibility and clinical utility of such data for disease or wellness research are currently limited. Designing methods for streaming data capture, real-time data aggregation, machine learning, predictive analytics and visualization solutions to integrate wellness or health monitoring data elements with the electronic medical records (EMRs) maintained by health care providers permits better utilization. Integration of population-scale biomedical, health care and wellness data would help to stratify patients for active health management and to understand clinically asymptomatic patients and underlying illness trajectories. In this article, we discuss various health-monitoring devices, their ability to capture the unique state of health represented in a patient and their application in individualized diagnostics, prognosis, clinical or wellness intervention. We also discuss examples of translational bioinformatics approaches to integrating patient-generated data with existing EMRs, personal health records, patient portals and clinical data repositories. Briefly, translational bioinformatics methods, tools and resources are at the center of these advances in implementing real-time biomedical and health care analytics in the clinical setting. Furthermore, these advances are poised to play a significant role in clinical decision-making and implementation of data-driven medicine and wellness care. PMID:26876889
Laboratories measuring target chemical, radiochemical, pathogens, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation
Reanalysis of a 15-year Archive of IMPROVE Samples
NASA Astrophysics Data System (ADS)
Hyslop, N. P.; White, W. H.; Trzepla, K.
2013-12-01
The IMPROVE (Interagency Monitoring of PROtected Visual Environments) network monitors aerosol concentrations at 170 remote sites throughout the United States. Twenty-four-hour filter samples of particulate matter are collected every third day and analyzed for chemical composition. About 30 of the sites have operated continuously since 1988, and the sustained data record (http://views.cira.colostate.edu/web/) offers a unique window on regional aerosol trends. All elemental analyses have been performed by Crocker Nuclear Laboratory at the University of California in Davis, and sample filters collected since 1995 are archived on campus. The suite of reported elements has remained constant, but the analytical methods employed for their determination have evolved. For example, the elements Na - Mn were determined by PIXE until November 2001, then by XRF analysis in a He-flushed atmosphere through 2004, and by XRF analysis in vacuum since January 2005. In addition to these fundamental changes, incompletely-documented operational factors such as detector performance and calibration details have introduced variations in the measurements. Because the past analytical methods were non-destructive, the archived filters can be re-analyzed with the current analytical systems and protocols. The 15-year sample archives from Great Smoky Mountains, Mount Rainier, and Point Reyes National Parks were selected for reanalysis. The agreement between the new analyses and original determinations varies with element and analytical era (Figure 1). Temporal trends for some elements are affected by these changes in measurement technique while others are not (Figure 2). Figure 1. Repeatability of analyses for sulfur and vanadium at Great Smoky Mountains National Park. Each point shows the ratio of mass loadings determined by the original analysis and recent reanalysis. Major method distinctions are indicated at the top. Figure 2. Trends, based on Theil-Sen regression, in lead concentrations based on the original and reanalysis data.
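A hedged sketch of the kind of Theil-Sen trend estimate referenced above, using SciPy's theilslopes on a synthetic concentration record; the data, units, and time span are invented.

```python
# Theil-Sen trend on a made-up lead concentration series (illustration only).
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(3)
years = np.arange(1995, 2011, dtype=float)
conc = 2.0 - 0.05 * (years - years[0]) + 0.2 * rng.standard_normal(years.size)  # stand-in ng/m3

slope, intercept, lo, hi = theilslopes(conc, years, 0.95)
print(f"trend: {slope:.3f} per year (95% CI {lo:.3f} to {hi:.3f})")
```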
Analysis of impulse signals with Hylaty ELF station
NASA Astrophysics Data System (ADS)
Kulak, A.; Mlynarczyk, J.; Ostrowski, M.; Kubisz, J.; Michalec, A.
2012-04-01
Lightning discharges generate electromagnetic field pulses that propagate in the Earth-ionosphere waveguide. The attenuation in the ELF range is so small that the pulses originating from strong atmospheric discharges can be observed even several thousand kilometers away from the individual discharge. The recorded waveform depends on the discharge process, the Earth-ionosphere waveguide properties on the source-receiver path, and the transfer function of the receiver. If the distance from the source is known, an inverse method can be used for reconstructing the current moment waveform and the charge moment of the discharge. In order to reconstruct the source parameters from the recorded signal, a reliable model of the radio wave propagation in the Earth-ionosphere waveguide as well as practical signal processing techniques are necessary. We present two methods, both based on analytical formulas. The first method allows for fast calculation of the charge moment of relatively short atmospheric discharges. It is based on peak amplitude measurement of the recorded magnetic component of the ELF EM field and it takes into account the receiver characteristics. The second method, called the "inverse channel method", allows reconstruction of the complete current moment waveform of strong atmospheric discharges that exhibit a continuing current phase, such as Gigantic Jets and Sprites. The method makes it possible to fully remove from the observed waveform the distortions related to the receiver's impulse response as well as the influence of the Earth-ionosphere propagation channel. Our ELF station is equipped with two magnetic antennas for Bx and By components measurement in the 0.03 to 55 Hz frequency range. ELF data recording has been carried out since 1993, with continuous data acquisition since 2005. The station features low noise level and precise timing. It is battery powered and located in a sparsely populated area, far from major electric power lines, which results in high quality signal recordings and allows for precise calculations of the charge moments of upward discharges and strong cloud-to-ground discharges originating from distant sources. The same data is used for Schumann resonance observation. We demonstrate the use of our methods based on recent recordings from the Hylaty ELF station. We include examples of GJ (Gigantic Jet) and TGF (Terrestrial Gamma-ray Flash) related discharges.
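The inverse channel method itself is not reproduced here, but the following simplified sketch illustrates the generic frequency-domain deconvolution step of removing a known receiver impulse response from a recorded waveform; the sampling rate, waveforms, and regularization constant are assumptions, and the real method additionally corrects for the Earth-ionosphere propagation channel.

```python
# Simplified Wiener-style deconvolution of a known receiver impulse response from a
# recorded waveform; synthetic signals, illustration only.
import numpy as np

fs = 200.0                                      # assumed sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)
source = np.exp(-t / 0.2) * np.sin(2 * np.pi * 8.0 * t)   # stand-in current-moment waveform
h = np.exp(-t / 0.05)                                      # stand-in receiver impulse response
recorded = np.convolve(source, h)[: t.size] / fs           # simulated recording

S = np.fft.rfft(recorded)
H = np.fft.rfft(h) / fs
eps = 1e-3 * np.abs(H).max()                    # regularization to avoid division blow-up
estimate = np.fft.irfft(S * np.conj(H) / (np.abs(H) ** 2 + eps ** 2), n=t.size)
print("waveform correlation:", np.corrcoef(source, estimate)[0, 1].round(3))
```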
Potyrailo, R A; Ruddy, V P; Hieftje, G M
1999-11-01
A new method is described for the simultaneous determination of absorbance and refractive index of a sample medium. The method is based on measurement of the analyte-modulated modal power distribution (MPD) in a multimode waveguide. In turn, the MPD is quantified by the far-field spatial pattern and intensity of light, i.e., the Fraunhofer diffraction pattern (registered on a CCD camera), that emerges from a multimode optical fiber. Operationally, light that is sent down the fiber interacts with the surrounding analyte-containing medium by means of the evanescent wave at the fiber boundary. The light flux in the propagating beam and the internal reflection angles within the fiber are both affected by optical absorption connected with the analyte and by the refractive index of the analyte-containing medium. In turn, these angles are reflected in the angular divergence of the beam as it leaves the fiber. As a result, the Fraunhofer diffraction pattern of that beam yields two parameters that can, together, be used to deduce refractive index and absorbance. This MPD based detection offers important advantages over traditional evanescent-wave detection strategies which rely on recording only the total transmitted optical power or its lost fraction. First, simultaneous determination of sample refractive index and absorbance is possible at a single probe wavelength. Second, the sensitivity of refractometric and absorption measurements can be controlled simply, either by adjusting the distance between the end face of the fiber and the CCD detector or by monitoring selected modal groups at the fiber output. As a demonstration of these capabilities, several weakly absorbing solutions were examined, with refractive indices in the range from 1.3330 to 1.4553 and with absorption coefficients in the range 0-16 cm-1. The new detection strategy is likely to be important in applications in which sample coloration varies and when it is necessary to compensate for variations in the refractive index of a sample.
Climate reconstruction from borehole temperatures influenced by groundwater flow
NASA Astrophysics Data System (ADS)
Kurylyk, B.; Irvine, D. J.; Tang, W.; Carey, S. K.; Ferguson, G. A. G.; Beltrami, H.; Bense, V.; McKenzie, J. M.; Taniguchi, M.
2017-12-01
Borehole climatology offers advantages over other climate reconstruction methods because further calibration steps are not required and heat is a ubiquitous subsurface property that can be measured from terrestrial boreholes. The basic theory underlying borehole climatology is that past surface air temperature signals are reflected in the ground surface temperature history and archived in subsurface temperature-depth profiles. High frequency surface temperature signals are attenuated in the shallow subsurface, whereas low frequency signals can be propagated to great depths. A limitation of analytical techniques to reconstruct climate signals from temperature profiles is that they generally require that heat flow be limited to conduction. Advection due to groundwater flow can thermally `contaminate' boreholes and result in temperature profiles being rejected for regional climate reconstructions. Although groundwater flow and climate change can result in contrasting or superimposed thermal disturbances, groundwater flow will not typically remove climate change signals in a subsurface thermal profile. Thus, climate reconstruction is still possible in the presence of groundwater flow if heat advection is accommodated in the conceptual and mathematical models. In this study, we derive a new analytical solution for reconstructing surface temperature history from borehole thermal profiles influenced by vertical groundwater flow. The boundary condition for the solution is composed of any number of sequential `ramps', i.e. periods with linear warming or cooling rates, during the instrumented and pre-observational periods. The boundary condition generation and analytical temperature modeling is conducted in a simple computer program. The method is applied to reconstruct climate in Winnipeg, Canada and Tokyo, Japan using temperature profiles recorded in hydrogeologically active environments. The results demonstrate that thermal disturbances due to groundwater flow and climate change must be considered in a holistic manner as opposed to isolating either perturbation as was done in prior analytical studies.
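For orientation, a standard one-dimensional conduction-advection (Stallman-type) equation of the kind such analyses build on is given below in LaTeX; the paper's specific ramp-type boundary condition and closed-form solution are not reproduced, so this should be read as background rather than as the new solution.

```latex
% One-dimensional conduction--advection equation for subsurface temperature T(z,t):
\frac{\partial T}{\partial t}
  \;=\; \kappa \,\frac{\partial^{2} T}{\partial z^{2}}
  \;-\; q_z \,\frac{\rho_w c_w}{\rho c}\,\frac{\partial T}{\partial z},
% where kappa is the bulk thermal diffusivity, q_z the vertical Darcy flux,
% rho_w c_w the volumetric heat capacity of water, and rho c that of the saturated medium.
```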
ERIC Educational Resources Information Center
Firat, Mehmet
2016-01-01
Two of the most important outcomes of learning analytics are predicting students' learning and providing effective feedback. Learning Management Systems (LMS), which are widely used to support online and face-to-face learning, provide extensive research opportunities with detailed records of background data regarding users' behaviors. The purpose…
A Descriptive-Analytic Study of the Practice Field Behavior of a Winning Female Coach.
ERIC Educational Resources Information Center
Dodds, Patt; Rife, Frank
A winning collegiate field hockey coach was observed across seventeen practice sessions through one complete competitive season. A category system for the event recording of verbal and nonverbal behaviors delivered to the team and to the sixteen individual players produced descriptive-analytic information about relative behavior frequencies for…
A real-time device for converting Doppler ultrasound audio signals into fluid flow velocity
Hogeman, Cynthia S.; Koch, Dennis W.; Krishnan, Anandi; Momen, Afsana; Leuenberger, Urs A.
2010-01-01
A Doppler signal converter has been developed to facilitate cardiovascular and exercise physiology research. This device directly converts audio signals from a clinical Doppler ultrasound imaging system into a real-time analog signal that accurately represents blood flow velocity and is easily recorded by any standard data acquisition system. This real-time flow velocity signal, when simultaneously recorded with other physiological signals of interest, permits the observation of transient flow response to experimental interventions in a manner not possible when using standard Doppler imaging devices. This converted flow velocity signal also permits a more robust and less subjective analysis of data in a fraction of the time required by previous analytic methods. This signal converter provides this capability inexpensively and requires no modification of either the imaging or data acquisition system. PMID:20173048
Laboratories measuring target pathogen analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select pathogens.
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...
Laboratories measuring target biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select biotoxins.
Laboratories measuring target chemical, radiochemical, pathogens, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery
Plaut, Alfred B J
2005-02-01
In this paper the author explores the theoretical and technical issues relating to taking notes of analytic sessions, using an introspective approach. The paper discusses the lack of a consistent approach to note taking amongst analysts and sets out to demonstrate that systematic note taking can be helpful to the analyst. The author describes his discovery that an initial phase where as much data was recorded as possible did not prove to be reliably helpful in clinical work and initially actively interfered with recall in subsequent sessions. The impact of the nature of the analytic session itself and the focus of the analyst's interest on recall is discussed. The author then describes how he modified his note taking technique to classify information from sessions into four categories which enabled the analyst to select which information to record in notes. The characteristics of memory and its constructive nature are discussed in relation to the problems that arise in making accurate notes of analytic sessions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-06-08
T.Rex is used to explore tabular data sets containing up to ten million records to help rapidly understand a previously unknown data set. Analysis can quickly identify patterns of interest and the records and fields that capture those patterns. T.Rex contains a growing set of deep analytical tools and supports robust export capabilities so that selected data can be incorporated into other specialized tools for further analysis. T.Rex is flexible in ingesting different types and formats of data, allowing the user to interactively experiment and perform trial-and-error guesses on the structure of the data; it also has a variety of linked visual analytic tools that enable exploration of the data to find relevant content, relationships among content, and trends within the content, and to capture knowledge about the content. Finally, T.Rex has a rich export capability to extract relevant subsets of a larger data source for further analysis in other analytic tools.
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...
NASA Astrophysics Data System (ADS)
Asadpour-Zeynali, Karim; Bastami, Mohammad
2010-02-01
In this work a new modification of the standard addition method, called the "net analyte signal standard addition method (NASSAM)", is presented for simultaneous spectrofluorimetric and spectrophotometric analysis. The proposed method combines the advantages of the standard addition method with those of the net analyte signal concept. The method can be applied to the determination of an analyte in the presence of known interferents. In contrast to the H-point standard addition method, the accuracy of the predictions is not dependent on the shape of the analyte and interferent spectra. The method was successfully applied to the simultaneous spectrofluorimetric and spectrophotometric determination of pyridoxine (PY) and melatonin (MT) in synthetic mixtures and in a pharmaceutical formulation.
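As a point of reference for the standard addition idea underlying NASSAM, the following Python sketch shows the ordinary single-analyte version: regress signal on added concentration and read the unknown from the x-intercept. The data are synthetic, and the net analyte signal projection that distinguishes NASSAM is not implemented here.

```python
# Classic single-analyte standard addition by linear extrapolation (synthetic data).
import numpy as np

added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # added standard, arbitrary units
signal = np.array([0.42, 0.63, 0.85, 1.04, 1.27])    # measured response after each addition

slope, intercept = np.polyfit(added, signal, 1)
c_unknown = intercept / slope                        # magnitude of the x-intercept
print(f"estimated analyte concentration: {c_unknown:.2f}")
```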
Husebø, Sissel Eikeland; Dieckmann, Peter; Rystedt, Hans; Søreide, Eldar; Friberg, Febe
2013-06-01
Simulation-based education is a learner-active method that may enhance teamwork skills such as leadership and communication. The importance of postsimulation debriefing to promote reflection is well accepted, but many questions concerning whether and how faculty promote reflection remain largely unanswered in the research literature. The aim of this study was therefore to explore the depth of reflection expressed in questions by facilitators and responses from nursing students during postsimulation debriefings. Eighty-one nursing students and 4 facilitators participated. The data were collected in February and March 2008, the analysis being conducted on 24 video-recorded debriefings from simulated resuscitation teamwork involving nursing students only. Using Gibbs' reflective cycle, we graded the facilitators' questions and nursing students' responses into stages of reflection and then correlated these. Facilitators asked the most evaluative and the fewest emotional questions, whereas nursing students gave the most evaluative and analytic responses and the fewest emotional responses. The greatest difference between facilitators and nursing students was in the analytic stage. Only 23 (20%) of 117 questions asked by the facilitators were analytic, whereas 45 (35%) of 130 students' responses were rated as analytic. Nevertheless, the facilitators' descriptive questions also elicited student responses in other stages such as evaluative and analytic responses. We found that postsimulation debriefings provide students with the opportunity to reflect on their simulation experience. Still, if the debriefing is going to pave the way for student reflection, it is necessary to work further on structuring the debriefing to facilitate deeper reflection. Furthermore, it is important that facilitators consider what kind of questions they ask to promote reflection. We think future research on debriefing should focus on developing an analytical framework for grading reflective questions. Such research will inform and support facilitators in devising strategies for the promotion of learning through reflection in postsimulation debriefings.
Guided Text Search Using Adaptive Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Symons, Christopher T; Senter, James K
This research demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insights in the search and analysis of textual information. More specifically, we have developed a system called Gryffin that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source documents related to critical national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinate views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the US Department of Homeland Security's Fusion Center, with whom we are collaborating in its development. The resulting system is capable of addressing the analyst's information overload that can be directly attributed to the deluge of information that must be addressed in the search and investigative analysis of textual information.
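A toy sketch of the interaction-driven re-ranking loop described above: a handful of analyst-labeled records train a text classifier whose scores reorder the remaining records. A plain supervised model stands in for Gryffin's semi-supervised learner, and the documents, labels, and feature choices are invented.

```python
# Toy re-ranking sketch: labels from analyst interactions train a classifier that
# reorders the unlabeled records (a supervised stand-in for the semi-supervised step).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["pipeline valve failure report", "routine maintenance log",
        "suspicious access to control room", "cafeteria menu update",
        "unauthorized network scan detected", "parking lot resurfacing notice"]
labels = {0: 1, 1: 0, 2: 1}            # indices the analyst marked relevant (1) / not (0)

X = TfidfVectorizer().fit_transform(docs)
clf = LogisticRegression().fit(X[list(labels)], list(labels.values()))

unlabeled = [i for i in range(len(docs)) if i not in labels]
scores = clf.predict_proba(X[unlabeled])[:, 1]
for i in np.argsort(-scores):                    # most relevant first
    print(round(scores[i], 2), docs[unlabeled[i]])
```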
ERIC Educational Resources Information Center
Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette
2017-01-01
Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…
Rewinding the waves: tracking underwater signals to their source.
Kadri, Usama; Crivelli, Davide; Parsons, Wade; Colbourne, Bruce; Ryan, Amanda
2017-10-24
Analysis of data, recorded on March 8th 2014 at the Comprehensive Nuclear-Test-Ban Treaty Organisation's hydroacoustic stations off Cape Leeuwin Western Australia, and at Diego Garcia, reveals unique pressure signatures that could be associated with objects impacting at the sea surface, such as falling meteorites, or the missing Malaysia Airlines flight MH370. To examine the recorded signatures, we carried out experiments with spheres impacting at the surface of a water tank, where we observed almost identical pressure signature structures. While the pressure structure is unique to impacting objects, the evolution of the radiated acoustic waves carries information on the source. Employing acoustic-gravity wave theory, we present an analytical inverse method to retrieve the impact time and location. The solution was validated using field observations of recent earthquakes, where we were able to calculate the origin time and location to a satisfactory degree of accuracy. Moreover, numerical validations confirm an error below 0.02% for events at relatively large distances of over 1000 km. The method can be developed to calculate other essential properties such as impact duration and geometry. Besides impacting objects and earthquakes, the method could help in identifying the location of underwater explosions and landslides.
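As a back-of-envelope illustration of locating an impulsive source from hydroacoustic arrivals, the sketch below does a least-squares fit of source position and origin time to arrival times at three stations under a single constant propagation speed; the paper's method instead exploits acoustic-gravity wave theory and dispersion, so the coordinates, speed, and arrivals here are purely synthetic.

```python
# Least-squares source localization from arrival times at three stations (synthetic).
import numpy as np
from scipy.optimize import least_squares

stations = np.array([[0.0, 0.0], [1200.0, 300.0], [400.0, 1500.0]])   # km
c = 1.48                                                               # km/s, nominal sound speed
true_src, true_t0 = np.array([900.0, 800.0]), 100.0
arrivals = true_t0 + np.linalg.norm(stations - true_src, axis=1) / c   # simulated arrival times

def residuals(p):
    x, y, t0 = p
    return t0 + np.linalg.norm(stations - [x, y], axis=1) / c - arrivals

fit = least_squares(residuals, x0=[500.0, 500.0, 0.0])
print("estimated source (km) and origin time (s):", fit.x.round(1))
```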
An intelligent crowdsourcing system for forensic analysis of surveillance video
NASA Astrophysics Data System (ADS)
Tahboub, Khalid; Gadgil, Neeraj; Ribera, Javier; Delgado, Blanca; Delp, Edward J.
2015-03-01
Video surveillance systems are of great value for public safety. With an exponential increase in the number of cameras, videos obtained from surveillance systems are often archived for forensic purposes. Many automatic methods have been proposed to do video analytics such as anomaly detection and human activity recognition. However, such methods face significant challenges due to object occlusions, shadows and scene illumination changes. In recent years, crowdsourcing has become an effective tool that utilizes human intelligence to perform tasks that are challenging for machines. In this paper, we present an intelligent crowdsourcing system for forensic analysis of surveillance video that includes video recorded as part of search and rescue missions and large-scale investigation tasks. We describe a method to enhance crowdsourcing by incorporating human detection, re-identification and tracking. At the core of our system, we use a hierarchical pyramid model to distinguish the crowd members based on their ability, experience and performance record. Our proposed system operates in an autonomous fashion and produces a final output of the crowdsourcing analysis consisting of a set of video segments detailing the events of interest as one storyline.
NASA Astrophysics Data System (ADS)
Li, Jiangui; Wang, Junhua; Zhigang, Zhao; Yan, Weili
2012-04-01
In this paper, an analytical analysis of the permanent magnet vernier (PMV) machine is presented. The key is to analytically solve the governing Laplacian/quasi-Poissonian field equations in the motor regions. The analytical method is verified by using the time-stepping finite element method, and the performance of the PMV machine computed analytically is quantitatively compared with the finite element results. The analytical results agree well with the finite element method results. Finally, the experimental results are given to further show the validity of the analysis.
NASA Astrophysics Data System (ADS)
Pignalberi, A.; Pezzopane, M.; Rizzi, R.
2018-03-01
An empirical method to model the lower part of the ionospheric topside region from the F2 layer peak height to about 500-600 km of altitude over the European region is proposed. The method is based on electron density values recorded from December 2013 to June 2016 by Swarm satellites and on foF2 and hmF2 values provided by IRI UP (International Reference Ionosphere UPdate), which is a method developed to update the IRI model relying on the assimilation of foF2 and M(3000)F2 data routinely recorded by a network of European ionosonde stations. Topside effective scale heights are calculated by fitting some definite analytical functions (α-Chapman, β-Chapman, Epstein, and exponential) through the values recorded by Swarm and the ones output by IRI UP, with the assumption that the effective scale height is constant in the altitude range considered. Calculated effective scale heights are then modeled as a function of foF2 and hmF2, in order to be operationally applicable to both ionosonde measurements and ionospheric models, like IRI. The method produces two-dimensional grids of the median effective scale height binned as a function of foF2 and hmF2, for each of the considered topside profiles. A statistical comparison with Constellation Observing System for Meteorology, Ionosphere, and Climate/FORMOsa SATellite-3 collected Radio Occultation profiles is carried out to assess the validity of the proposed method and to investigate which of the considered topside profiles is the best one. The α-Chapman topside function displays the best performance compared to the others and also when compared to the NeQuick topside option of IRI.
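For reference, the standard α-Chapman topside profile with a constant effective scale height H has the form below (LaTeX); the paper's scale-height grids, the β-Chapman/Epstein/exponential alternatives, and the fitting details are not reproduced here.

```latex
% Standard alpha-Chapman topside profile above the F2 peak, constant scale height H:
N_e(h) \;=\; N_m\mathrm{F2}\,\exp\!\left[\tfrac{1}{2}\left(1 - z - e^{-z}\right)\right],
\qquad z = \frac{h - h_m\mathrm{F2}}{H}.
```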
A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS.
Lou, Xianwen; de Waal, Bas F M; Milroy, Lech-Gustav; van Dongen, Joost L J
2015-05-01
In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte suppression effect (ASE). In our method, analytes are deposited on top of the surface of matrix preloaded on the MALDI plate. To prevent embedding of analyte into the matrix crystals, the sample solutions were prepared without matrix, and care was taken not to re-dissolve the preloaded matrix. The results with model mixtures of peptides, synthetic polymers and lipids show that detection of analyte ions, which were completely suppressed using the conventional dried-droplet method, could be effectively recovered by using our method. Our findings suggest that the incorporation of analytes in the matrix crystals has an important contributory effect on ASE. By reducing ASE, our method should be useful for the direct MALDI MS analysis of multicomponent mixtures. Copyright © 2015 John Wiley & Sons, Ltd.
Anumanchipalli, Gopala K.; Dichter, Benjamin; Chaisanguanthum, Kris S.; Johnson, Keith; Chang, Edward F.
2016-01-01
A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial—especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics. PMID:27019106
Bouchard, Kristofer E.; Conant, David F.; Anumanchipalli, Gopala K.; ...
2016-03-28
A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial, especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics.
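A small illustrative sketch of the non-negative matrix factorization step mentioned in these abstracts, applied to synthetic articulator data rather than the actual recordings; the array sizes, number of components, and data values are assumptions.

```python
# Non-negative matrix factorization of synthetic "articulator" data into a small basis
# of vocal-tract-shape-like components (illustration only).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
n_frames, n_features = 500, 30                 # e.g., tracked articulator coordinates per frame
true_bases = rng.random((4, n_features))
activations = rng.random((n_frames, 4))
X = activations @ true_bases + 0.01 * rng.random((n_frames, n_features))   # non-negative data

model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)                     # per-frame activations
H = model.components_                          # basis "vocal tract shapes"
print("reconstruction error:", round(model.reconstruction_err_, 3))
```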
Cenozoic climate changes: A review based on time series analysis of marine benthic δ18O records
NASA Astrophysics Data System (ADS)
Mudelsee, Manfred; Bickert, Torsten; Lear, Caroline H.; Lohmann, Gerrit
2014-09-01
The climate during the Cenozoic era changed in several steps from ice-free poles and warm conditions to ice-covered poles and cold conditions. Since the 1950s, a body of information on ice volume and temperature changes has been built up predominantly on the basis of measurements of the oxygen isotopic composition of shells of benthic foraminifera collected from marine sediment cores. The statistical methodology of time series analysis has also evolved, allowing more information to be extracted from these records. Here we provide a comprehensive view of Cenozoic climate evolution by means of a coherent and systematic application of time series analytical tools to each record from a compilation spanning the interval from 4 to 61 Myr ago. We quantitatively describe several prominent features of the oxygen isotope record, taking into account the various sources of uncertainty (including measurement, proxy noise, and dating errors). The estimated transition times and amplitudes allow us to assess causal climatological-tectonic influences on the following known features of the Cenozoic oxygen isotopic record: Paleocene-Eocene Thermal Maximum, Eocene-Oligocene Transition, Oligocene-Miocene Boundary, and the Middle Miocene Climate Optimum. We further describe and causally interpret the following features: Paleocene-Eocene warming trend, the two-step, long-term Eocene cooling, and the changes within the most recent interval (Miocene-Pliocene). We review the scope and methods of constructing Cenozoic stacks of benthic oxygen isotope records and present two new latitudinal stacks, which capture besides global ice volume also bottom water temperatures at low (less than 30°) and high latitudes. This review concludes with an identification of future directions for data collection, statistical method development, and climate modeling.
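Loosely in the spirit of the transition-time and amplitude estimates discussed above, the sketch below fits a simple "ramp" (linear change between two stable levels) to a noisy synthetic series with SciPy; it is not the authors' software, and the series, window, and noise level are invented.

```python
# Least-squares fit of a ramp (two stable levels joined by a linear transition) to a
# synthetic noisy series, yielding transition timing and amplitude (illustration only).
import numpy as np
from scipy.optimize import curve_fit

def ramp(t, t1, t2, x1, x2):
    return np.where(t <= t1, x1,
                    np.where(t >= t2, x2, x1 + (x2 - x1) * (t - t1) / (t2 - t1)))

rng = np.random.default_rng(5)
t = np.linspace(0.0, 10.0, 200)                  # Myr elapsed within a synthetic window
d18o = ramp(t, 4.0, 6.5, 1.0, 2.2) + 0.1 * rng.standard_normal(t.size)

popt, _ = curve_fit(ramp, t, d18o, p0=[3.0, 7.0, 0.8, 2.0])
print("t1, t2, level1, level2:", np.round(popt, 2))
```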
Iron meteorite fragment studied by atomic and nuclear analytical methods
NASA Astrophysics Data System (ADS)
Cesnek, Martin; Štefánik, Milan; Kmječ, Tomáš; Miglierini, Marcel
2016-10-01
The chemical and structural composition of a fragment of the Sikhote-Alin iron meteorite was investigated by X-ray fluorescence analysis (XRF), neutron activation analysis (NAA) and Mössbauer spectroscopy (MS). XRF and NAA revealed the presence of chemical elements characteristic of iron meteorites. XRF also showed a significant amount of Si and Al on the surface of the fragment. The MS spectra revealed the possible presence of an α-Fe(Ni, Co) phase with varying local Ni concentration. Furthermore, a paramagnetic singlet was detected in the Mössbauer spectra recorded at room temperature and at 4.2 K.
Recent developments in VSD imaging of small neuronal networks
Hill, Evan S.; Bruno, Angela M.
2014-01-01
Voltage-sensitive dye (VSD) imaging is a powerful technique that can provide, in single experiments, a large-scale view of network activity unobtainable with traditional sharp electrode recording methods. Here we review recent work using VSDs to study small networks and highlight several results from this approach. Topics covered include circuit mapping, network multifunctionality, the network basis of decision making, and the presence of variably participating neurons in networks. Analytical tools being developed and applied to large-scale VSD imaging data sets are discussed, and the future prospects for this exciting field are considered. PMID:25225295
Three-port beam splitter of a binary fused-silica grating.
Feng, Jijun; Zhou, Changhe; Wang, Bo; Zheng, Jiangjun; Jia, Wei; Cao, Hongchao; Lv, Peng
2008-12-10
A deep-etched polarization-independent binary fused-silica phase grating as a three-port beam splitter is designed and manufactured. The grating profile is optimized by use of the rigorous coupled-wave analysis around the 785 nm wavelength. The physical explanation of the grating is illustrated by the modal method. Simple analytical expressions of the diffraction efficiencies and modal guidelines for the three-port beam splitter grating design are given. Holographic recording technology and inductively coupled plasma etching are used to manufacture the fused-silica grating. Experimental results are in good agreement with the theoretical values.
Schuster, Paul F.; Krabbenhoft, David P.; Naftz, David L.; Cecil, L. DeWayne; Olson, Mark L.; DeWild, John F.; Susong, David D.; Green, Jaromy R.; Abbott, Michael L.
2002-01-01
Mercury (Hg) contamination of aquatic ecosystems and subsequent methylmercury bioaccumulation are significant environmental problems of global extent. At regional to global scales, the primary mechanism of Hg contamination is atmospheric Hg transport. Thus, a better understanding of the long-term history of atmospheric Hg cycling and quantification of the sources is critical for assessing the regional and global impact of anthropogenic Hg emissions. Ice cores collected from the Upper Fremont Glacier (UFG), Wyoming, contain a high-resolution record of total atmospheric Hg deposition (ca. 1720−1993). Total Hg in 97 ice-core samples was determined with trace-metal clean handling methods and low-level analytical procedures to reconstruct the first and most comprehensive atmospheric Hg deposition record of its kind yet available from North America. The record indicates major atmospheric releases of both natural and anthropogenic Hg from regional and global sources. Integrated over the past 270-year ice-core history, anthropogenic inputs contributed 52%, volcanic events 6%, and background sources 42%. More significantly, during the last 100 years, anthropogenic sources contributed 70% of the total Hg input. Unlike the 2−7-fold increase observed from preindustrial times (before 1840) to the mid-1980s in sediment-core records, the UFG record indicates a 20-fold increase for the same period. The sediment-core records, however, are in agreement with the last 10 years of this ice-core record, indicating declines in atmospheric Hg deposition.
A health analytics semantic ETL service for obesity surveillance.
Poulymenopoulou, M; Papakonstantinou, D; Malamateniou, F; Vassilacopoulos, G
2015-01-01
The increasingly large amount of data produced in healthcare (e.g. collected through health information systems such as electronic medical records - EMRs or collected through novel data sources such as personal health records - PHRs, social media, web resources) enables the creation of detailed records about people's health, sentiments and activities (e.g. physical activity, diet, sleep quality) that can be used in the public health area among others. However, despite the transformative potential of big data in public health surveillance there are several challenges in integrating big data. In this paper, the interoperability challenge is tackled and a semantic Extract Transform Load (ETL) service is proposed that seeks to semantically annotate big data to yield valuable data for analysis. This service is considered as part of a health analytics engine on the cloud that interacts with existing healthcare information exchange networks, like the Integrating the Healthcare Enterprise (IHE), PHRs, sensors, mobile applications, and other web resources to retrieve patient health, behavioral and daily activity data. The semantic ETL service aims at semantically integrating big data for use by analytic mechanisms. An illustrative implementation of the service on big data potentially relevant to human obesity enables the use of appropriate analytic techniques (e.g. machine learning, text mining) that are expected to assist in identifying patterns and contributing factors (e.g. genetic background, social, environmental) for this social phenomenon and, hence, to drive health policy changes and promote healthy behaviors where residents live, work, learn, shop and play.
The Savannah River Site's groundwater monitoring program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted by EPD/EMS in the first quarter of 1991. It includes the analytical data, field data, data review, quality control, and other documentation for this program; provides a record of the program's activities and rationale; and serves as an official document of the analytical results.
The Savannah River Site`s Groundwater Monitoring Program. First quarter 1992
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted during the first quarter of 1992. It includes the analytical data, field data, data review, quality control, and other documentation for this program; provides a record of the program`s activities; and serves as an official document of the analytical results.
Classroom Data Analysis with the Five Strands of Mathematical Proficiency
ERIC Educational Resources Information Center
Groth, Randall E.
2017-01-01
Qualitative classroom data from video recordings and students' written work can play important roles in improving mathematics instruction. In order to take full advantage of these data sources, it is helpful to have a strong analytic lens to orient one's reflections on the data. One promising analytic lens is the National Research Council's five…
ERIC Educational Resources Information Center
Giannakos, Michail N.; Chorianopoulos, Konstantinos; Chrisochoides, Nikos
2015-01-01
Online video lectures have been considered an instructional media for various pedagogic approaches, such as the flipped classroom and open online courses. In comparison to other instructional media, online video affords the opportunity for recording student clickstream patterns within a video lecture. Video analytics within lecture videos may…
Guinot, Guillaume; Adnet, Sylvain; Cappetta, Henri
2012-01-01
Modern selachians and their supposed sister group (hybodont sharks) have a long and successful evolutionary history. Yet, although selachian remains are considered relatively common in the fossil record in comparison with other marine vertebrates, little is known about the quality of their fossil record. Similarly, only a few works based on specific time intervals have attempted to identify major events that marked the evolutionary history of this group. Phylogenetic hypotheses concerning modern selachians' interrelationships are numerous but differ significantly and no consensus has been found. The aim of the present study is to take advantage of the range of recent phylogenetic hypotheses in order to assess the fit of the selachian fossil record to phylogenies, according to two different branching methods. Compilation of these data allowed the inference of an estimated range of diversity through time, and evolutionary events that marked this group over the past 300 Ma are identified. Results indicate that, with the exception of high taxonomic ranks (orders), the selachian fossil record is far from complete, particularly for genus-level and post-Triassic data. Timing and amplitude of the various identified events that marked the selachian evolutionary history are discussed. Some identified diversity events were mentioned in previous works using alternative methods (Early Jurassic, mid-Cretaceous, K/T boundary and late Paleogene diversity drops), thus reinforcing the efficiency of the methodology presented here in inferring evolutionary events. Other events (Permian/Triassic, Early and Late Cretaceous diversifications; Triassic/Jurassic extinction) are newly identified. Relationships between these events and paleoenvironmental characteristics and other groups' evolutionary history are proposed.
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method that uses Bayesian analysis to classify time series data from an agent-based simulation of the international emissions trading market, and compares it with a discrete Fourier transform analytical method. The purpose is to demonstrate analytical methods for mapping time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods express the time series data as distances in a mapping, which is easier to understand and draw inferences from than the raw time series; (2) the methods can analyze uncertain time series data generated by agent-based simulation, including both stationary and non-stationary processes; and (3) the Bayesian analytical method can resolve a 1% difference in the emission reduction targets of agents.
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
77 FR 41336 - Analytical Methods Used in Periodic Reporting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-13
... Methods Used in Periodic Reporting AGENCY: Postal Regulatory Commission. ACTION: Notice of filing. SUMMARY... proceeding to consider changes in analytical methods used in periodic reporting. This notice addresses... informal rulemaking proceeding to consider changes in the analytical methods approved for use in periodic...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
DOT National Transportation Integrated Search
1974-10-01
The author has brought the review of published analytical methods for determining alcohol in body materials up to date. The review deals with analytical methods for alcohol in blood and other body fluids and tissues; breath alcohol methods; factors ...
LeBouf, Ryan F; Virji, Mohammed Abbas; Ranpara, Anand; Stefaniak, Aleksandr B
2017-07-01
This method was designed for sampling select quaternary ammonium (quat) compounds in air or on surfaces followed by analysis using ultraperformance liquid chromatography tandem mass spectrometry. Target quats were benzethonium chloride, didecyldimethylammonium bromide, benzyldimethyldodecylammonium chloride, benzyldimethyltetradecylammonium chloride, and benzyldimethylhexadecylammonium chloride. For air sampling, polytetrafluoroethylene (PTFE) filters are recommended for 15-min to 24-hour sampling. For surface sampling, Pro-wipe® 880 (PW) media was chosen. Samples were extracted in 60:40 acetonitrile:0.1% formic acid for 1 hour on an orbital shaker. Method detection limits range from 0.3 to 2 ng/ml depending on media and analyte. Matrix effects of media are minimized through the use of multiple reaction monitoring versus selected ion recording. Upper confidence limits on accuracy meet the National Institute for Occupational Safety and Health 25% criterion for PTFE and PW media for all analytes. Using PTFE and PW analyzed with multiple reaction monitoring, the method quantifies levels among the different quats compounds with high precision (<10% relative standard deviation) and low bias (<11%). The method is sensitive enough with very low method detection limits to capture quats on air sampling filters with only a 15-min sample duration with a maximum assessed storage time of 103 days before sample extraction. This method will support future exposure assessment and quantitative epidemiologic studies to explore exposure-response relationships and establish levels of quats exposures associated with adverse health effects. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
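The abstract does not state how the method detection limits were derived; the sketch below shows the conventional replicate-spike calculation (a one-sided Student-t quantile at n-1 degrees of freedom times the replicate standard deviation), with made-up spike recoveries, as one plausible way such limits are obtained.

```python
import numpy as np
from scipy import stats

def method_detection_limit(replicate_results, confidence=0.99):
    """EPA-style MDL: one-sided Student-t at (n-1) df times the replicate SD."""
    x = np.asarray(replicate_results, dtype=float)
    t_crit = stats.t.ppf(confidence, df=x.size - 1)
    return t_crit * x.std(ddof=1)

# Hypothetical low-level spike recoveries (ng/ml) for one quat on PTFE filters.
spikes = [0.42, 0.55, 0.47, 0.61, 0.50, 0.44, 0.58]
print(f"MDL = {method_detection_limit(spikes):.2f} ng/ml")
```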
Lismont, Jasmien; Janssens, Anne-Sophie; Odnoletkova, Irina; Vanden Broucke, Seppe; Caron, Filip; Vanthienen, Jan
2016-10-01
The aim of this study is to guide healthcare instances in applying process analytics on healthcare processes. Process analytics techniques can offer new insights in patient pathways, workflow processes, adherence to medical guidelines and compliance with clinical pathways, but also bring along specific challenges which will be examined and addressed in this paper. The following methodology is proposed: log preparation, log inspection, abstraction and selection, clustering, process mining, and validation. It was applied on a case study in the type 2 diabetes mellitus domain. Several data pre-processing steps are applied and clarify the usefulness of process analytics in a healthcare setting. Healthcare utilization, such as diabetes education, is analyzed and compared with diabetes guidelines. Furthermore, we take a look at the organizational perspective and the central role of the GP. This research addresses four challenges: healthcare processes are often patient and hospital specific which leads to unique traces and unstructured processes; data is not recorded in the right format, with the right level of abstraction and time granularity; an overflow of medical activities may cloud the analysis; and analysts need to deal with data not recorded for this purpose. These challenges complicate the application of process analytics. It is explained how our methodology takes them into account. Process analytics offers new insights into the medical services patients follow, how medical resources relate to each other and whether patients and healthcare processes comply with guidelines and regulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
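A minimal sketch of the log preparation and abstraction steps, assuming a simplified pandas event log with hypothetical column names; a real analysis would continue from such traces with dedicated process mining tooling.

```python
import pandas as pd

# Hypothetical extract of a diabetes-care event log (one row per recorded activity).
log = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "activity":   ["GP visit", "HbA1c test", "diabetes education",
                   "GP visit", "HbA1c test"],
    "timestamp":  pd.to_datetime(["2015-01-05", "2015-01-20", "2015-03-01",
                                  "2015-02-11", "2015-02-25"]),
})

# Log preparation: order events within each patient and collapse them into traces.
traces = (log.sort_values(["patient_id", "timestamp"])
             .groupby("patient_id")["activity"]
             .agg(list))

# Trace frequencies hint at (un)structured behaviour before any process mining.
print(traces.astype(str).value_counts())
```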
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaHaye, Nicole L.; Phillips, Mark C.; Duffin, Andrew M.
2016-01-01
Both laser-induced breakdown spectroscopy (LIBS) and laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) are well-established analytical techniques with their own unique advantages and disadvantages. The combination of the two analytical methods is a very promising way to overcome the challenges faced by each method individually. We made a comprehensive comparison of local plasma conditions between nanosecond (ns) and femtosecond (fs) laser ablation (LA) sources in a combined LIBS and LA-ICP-MS system. The optical emission spectra and ICP-MS signal were recorded simultaneously for both ns- and fs-LA and figures of merit of the system were analyzed. Characterization of the plasma was conducted by evaluating temperature and density of the plume under various irradiation conditions using optical emission spectroscopy, and correlations to ns- and fs-LIBS and LA-ICP-MS signal were made. The present study is very useful for providing conditions for a multimodal system as well as giving insight into how laser ablation plume parameters are related to LA-ICP-MS and LIBS results for both ns- and fs-LA.
Anslan, Sten; Bahram, Mohammad; Hiiesalu, Indrek; Tedersoo, Leho
2017-11-01
High-throughput sequencing methods have become a routine analysis tool in environmental sciences as well as in the public and private sectors. These methods provide vast amounts of data, which need to be analysed in several steps. Although the bioinformatics may be applied using several public tools, many analytical pipelines allow too few options for the optimal analysis of more complicated or customized designs. Here, we introduce PipeCraft, a flexible and handy bioinformatics pipeline with a user-friendly graphical interface that links several public tools for analysing amplicon sequencing data. Users are able to customize the pipeline by selecting the most suitable tools and options to process raw sequences from Illumina, Pacific Biosciences, Ion Torrent and Roche 454 sequencing platforms. We described the design and options of PipeCraft and evaluated its performance by analysing the data sets from three different sequencing platforms. We demonstrated that PipeCraft is able to process large data sets within 24 hr. The graphical user interface and the automated links between various bioinformatics tools enable easy customization of the workflow. All analytical steps and options are recorded in log files and are easily traceable. © 2017 John Wiley & Sons Ltd.
Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics.
Stolper, Charles D; Perer, Adam; Gotz, David
2014-12-01
As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and provide interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
40 CFR 141.25 - Analytical methods for radioactivity.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods for radioactivity... § 141.25 Analytical methods for radioactivity. (a) Analysis for the following contaminants shall be conducted to determine compliance with § 141.66 (radioactivity) in accordance with the methods in the...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
Lou, Xianwen; van Dongen, Joost L J; Milroy, Lech-Gustav; Meijer, E W
2016-12-30
Ionization in matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is a very complicated process. It has been reported that quaternary ammonium salts show extremely strong matrix and analyte suppression effects which cannot satisfactorily be explained by charge transfer reactions. Further investigation of the reasons causing these effects can be useful to improve our understanding of the MALDI process. The dried-droplet and modified thin-layer methods were used as sample preparation methods. In the dried-droplet method, analytes were co-crystallized with matrix, whereas in the modified thin-layer method analytes were deposited on the surface of matrix crystals. Model compounds, tetrabutylammonium iodide ([N(Bu)4]I), cesium iodide (CsI), trihexylamine (THA) and polyethylene glycol 600 (PEG 600), were selected as the test analytes given their ability to generate exclusively pre-formed ions, protonated ions and metal ion adducts respectively in MALDI. The strong matrix suppression effect (MSE) observed using the dried-droplet method might disappear using the modified thin-layer method, which suggests that the incorporation of analytes in matrix crystals contributes to the MSE. By depositing analytes on the matrix surface instead of incorporating in the matrix crystals, the competition for evaporation/ionization from charged matrix/analyte clusters could be weakened resulting in reduced MSE. Further supporting evidence for this inference was found by studying the analyte suppression effect using the same two sample deposition methods. By comparing differences between the mass spectra obtained via the two sample preparation methods, we present evidence suggesting that the generation of gas-phase ions from charged matrix/analyte clusters may induce significant suppression of matrix and analyte ions. The results suggest that the generation of gas-phase ions from charged matrix/analyte clusters is an important ionization step in MALDI-MS. Copyright © 2016 John Wiley & Sons, Ltd.
Luo, Xu-Biao; Chen, Bo; Yao, Shou-Zhuo
2006-01-01
An isocratic high-performance liquid chromatographic method coupled with electrospray mass spectrometry was developed to determine protopine, allocryptopine, sanguinarine and chelerythrine in fruits of Macleaya cordata. The sample was extracted with hydrochloric acid aqueous solution using microwave-assisted extraction method. The extracts were separated on a C8 reversed-phase HPLC column with acetonitrile:acetate buffer as mobile phase, and full elution of all analytes was realized isocratically within 10 min. The abundance of pseudomolecule ions was recorded using selected ion recording at m/z 354.4, 370.1, 332.5, 348.5 and 338.5 for protopine, allocryptopine, sanguinarine, chelerythrine and the internal standard, jatrorrhizine, respectively. Internal standard curves were used for the quantification of protopine, allocryptopine, sanguinarine and chelerythrine, which showed a linear range of 0.745-74.5, 0.610-61.0, 0.525-105 and 0.375-75 microg/mL, respectively, with correlation coefficients of 0.9995, 0.9992, 0.9993 and 0.9989, and limits of detection of 3.73, 3.05, 1.60 and 1.11 ng/mL, respectively.
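A hedged sketch of how an internal-standard calibration of this kind is typically applied: linear regression of the analyte/internal-standard peak-area ratio against concentration, with a common 3.3·σ/slope detection-limit convention. The numbers below are invented, and the authors' own calibration and LOD conventions may differ.

```python
import numpy as np

# Hypothetical calibration data for protopine: concentration (ug/mL) versus
# analyte/internal-standard peak-area ratio.
conc  = np.array([0.745, 3.725, 7.45, 37.25, 74.5])
ratio = np.array([0.031, 0.158, 0.312, 1.560, 3.140])

slope, intercept = np.polyfit(conc, ratio, 1)
pred = slope * conc + intercept
r = np.corrcoef(conc, ratio)[0, 1]

# One common LOD convention: 3.3 x (SD of regression residuals) / slope.
resid_sd = np.sqrt(np.sum((ratio - pred) ** 2) / (conc.size - 2))
lod = 3.3 * resid_sd / slope

sample_ratio = 0.42                      # ratio measured in a fruit extract
sample_conc = (sample_ratio - intercept) / slope
print(f"r = {r:.4f}, LOD = {lod:.3f} ug/mL, sample = {sample_conc:.2f} ug/mL")
```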
Alladio, Eugenio; Biosa, Giulia; Seganti, Fabrizio; Di Corcia, Daniele; Salomone, Alberto; Vincenti, Marco; Baumgartner, Markus R
2018-05-11
The quantitative determination of ethyl glucuronide (EtG) in hair samples is consistently used throughout the world to assess chronic excessive alcohol consumption. For administrative and legal purposes, the analytical results are compared with cut-off values recognized by regulatory authorities and scientific societies. However, it has been recently recognized that the analytical results depend on the hair sample pretreatment procedures, including the crumbling and extraction conditions. A systematic evaluation of the EtG extraction conditions from pulverized scalp hair was conducted by design of experiments (DoE) considering the extraction time, temperature, pH, and solvent composition as potential influencing factors. It was concluded that an overnight extraction at 60°C with pure water at neutral pH represents the most effective conditions to achieve high extraction yields. The absence of differential degradation of the internal standard (isotopically-labeled EtG) under such conditions was confirmed and the overall analytical method was validated according to SWGTOX and ISO17025 criteria. Twenty real hair samples with different EtG content were analyzed with three commonly accepted procedures: (a) hair manually cut in snippets and extracted at room temperature; (b) pulverized hair extracted at room temperature; (c) hair treated with the optimized method. Average increments of EtG concentration around 69% (from a to c) and 29% (from b to c) were recorded. In light of these results, the authors urge the scientific community to undertake an inter-laboratory study with the aim of defining in more detail the optimal hair EtG detection method and verifying the corresponding cut-off level for legal enforcements. This article is protected by copyright. All rights reserved.
Le, Laetitia Minh Maï; Kégl, Balázs; Gramfort, Alexandre; Marini, Camille; Nguyen, David; Cherti, Mehdi; Tfaili, Sana; Tfayli, Ali; Baillet-Guffroy, Arlette; Prognon, Patrice; Chaminade, Pierre; Caudron, Eric
2018-07-01
The use of monoclonal antibodies (mAbs) constitutes one of the most important strategies to treat patients suffering from cancers such as hematological malignancies and solid tumors. These antibodies are prescribed by the physician and prepared by hospital pharmacists. An analytical control enables the quality of the preparations to be ensured. The aim of this study was to explore the development of a rapid analytical method for quality control. The method used four mAbs (Infliximab, Bevacizumab, Rituximab and Ramucirumab) at various concentrations and was based on recording Raman data and coupling them to a traditional chemometric and machine learning approach for data analysis. Compared to a conventional linear approach, prediction errors are reduced with a data-driven approach using statistical machine learning methods. In the latter, preprocessing and predictive models are jointly optimized. An additional original aspect of the work involved submitting the problem to a collaborative data challenge platform called Rapid Analytics and Model Prototyping (RAMP). This allowed using solutions from about 300 data scientists in collaborative work. Using machine learning, the prediction of the four mAbs samples was considerably improved. The best predictive model showed a combined error of 2.4% versus 14.6% using the linear approach. The concentration and classification errors were 5.8% and 0.7%, respectively; only three spectra were misclassified over the 429 spectra of the test set. This large improvement obtained with machine learning techniques was uniform for all molecules but maximal for Bevacizumab with an 88.3% reduction in combined errors (2.1% versus 17.9%). Copyright © 2018 Elsevier B.V. All rights reserved.
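The abstract does not detail the winning pipelines from the RAMP challenge; the following sketch only illustrates the general spectra-to-label workflow with scikit-learn, using random arrays as stand-ins for preprocessed Raman spectra.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-ins for the Raman data: X holds preprocessed spectra
# (rows = spectra, columns = wavenumber channels), y holds the mAb label.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 600))
y = rng.integers(0, 4, size=200)          # 4 classes: the four mAbs

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
model.fit(X_tr, y_tr)
print(f"classification error: {1 - model.score(X_te, y_te):.1%}")
```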
Finite-analytic numerical solution of heat transfer in two-dimensional cavity flow
NASA Technical Reports Server (NTRS)
Chen, C.-J.; Naseri-Neshat, H.; Ho, K.-S.
1981-01-01
Heat transfer in cavity flow is numerically analyzed by a new numerical method called the finite-analytic method. The basic idea of the finite-analytic method is the incorporation of local analytic solutions in the numerical solutions of linear or nonlinear partial differential equations. In the present investigation, the local analytic solutions for temperature, stream function, and vorticity distributions are derived. When the local analytic solution is evaluated at a given nodal point, it gives an algebraic relationship between a nodal value in a subregion and its neighboring nodal points. A system of algebraic equations is solved to provide the numerical solution of the problem. The finite-analytic method is used to solve heat transfer in the cavity flow at high Reynolds number (1000) for Prandtl numbers of 0.1, 1, and 10.
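To make the structure of the resulting algebraic system concrete, the sketch below runs Gauss-Seidel sweeps over a generic five-point nodal relation on a cavity-like grid. The coefficients here are fixed placeholders; in the finite-analytic method they would come from the local analytic solution and vary with the local flow, so this is only an illustration of the solution step, not of the method's coefficients.

```python
import numpy as np

def solve_nodal_relations(T, coeffs=(0.25, 0.25, 0.25, 0.25), n_sweeps=500):
    """Gauss-Seidel sweeps of a generic nodal relation
    T[i,j] = cE*T[i,j+1] + cW*T[i,j-1] + cN*T[i-1,j] + cS*T[i+1,j].
    In the finite-analytic method the coefficients are obtained from a local
    analytic solution; fixed placeholder values are used here."""
    cE, cW, cN, cS = coeffs
    for _ in range(n_sweeps):
        for i in range(1, T.shape[0] - 1):
            for j in range(1, T.shape[1] - 1):
                T[i, j] = cE*T[i, j+1] + cW*T[i, j-1] + cN*T[i-1, j] + cS*T[i+1, j]
    return T

# Hypothetical cavity: heated lid (T = 1) and cold remaining boundaries (T = 0).
T = np.zeros((21, 21))
T[0, :] = 1.0
print(solve_nodal_relations(T)[10, 10])
```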
Perceptions of International Students toward Graduate Record Examination (GRE)
ERIC Educational Resources Information Center
Mupinga, Emily E.; Mupinga, Davison M.
2005-01-01
The Graduate Record Examination (GRE) is an aptitude test, thought to reflect intelligence or the capacity to learn (Larsen & Buss, 2003). It is a standardized admission exam designed to predict performance in graduate school through verbal, quantitative, and analytical reasoning questions. The GRE Board encourages graduate schools,…
On the importance of image formation optics in the design of infrared spectroscopic imaging systems
Mayerich, David; van Dijk, Thomas; Walsh, Michael; Schulmerich, Matthew; Carney, P. Scott
2014-01-01
Infrared spectroscopic imaging provides micron-scale spatial resolution with molecular contrast. While recent work demonstrates that sample morphology affects the recorded spectrum, considerably less attention has been focused on the effects of the optics, including the condenser and objective. This analysis is extremely important, since it will be possible to understand effects on recorded data and provides insight for reducing optical effects through rigorous microscope design. Here, we present a theoretical description and experimental results that demonstrate the effects of commonly-employed cassegranian optics on recorded spectra. We first combine an explicit model of image formation and a method for quantifying and visualizing the deviations in recorded spectra as a function of microscope optics. We then verify these simulations with measurements obtained from spatially heterogeneous samples. The deviation of the computed spectrum from the ideal case is quantified via a map which we call a deviation map. The deviation map is obtained as a function of optical elements by systematic simulations. Examination of deviation maps demonstrates that the optimal optical configuration for minimal deviation is contrary to prevailing practice in which throughput is maximized for an instrument without a sample. This report should be helpful for understanding recorded spectra as a function of the optics, the analytical limits of recorded data determined by the optical design, and potential routes for optimization of imaging systems. PMID:24936526
On the importance of image formation optics in the design of infrared spectroscopic imaging systems.
Mayerich, David; van Dijk, Thomas; Walsh, Michael J; Schulmerich, Matthew V; Carney, P Scott; Bhargava, Rohit
2014-08-21
Infrared spectroscopic imaging provides micron-scale spatial resolution with molecular contrast. While recent work demonstrates that sample morphology affects the recorded spectrum, considerably less attention has been focused on the effects of the optics, including the condenser and objective. This analysis is extremely important, since it will be possible to understand effects on recorded data and provides insight for reducing optical effects through rigorous microscope design. Here, we present a theoretical description and experimental results that demonstrate the effects of commonly-employed cassegranian optics on recorded spectra. We first combine an explicit model of image formation and a method for quantifying and visualizing the deviations in recorded spectra as a function of microscope optics. We then verify these simulations with measurements obtained from spatially heterogeneous samples. The deviation of the computed spectrum from the ideal case is quantified via a map which we call a deviation map. The deviation map is obtained as a function of optical elements by systematic simulations. Examination of deviation maps demonstrates that the optimal optical configuration for minimal deviation is contrary to prevailing practice in which throughput is maximized for an instrument without a sample. This report should be helpful for understanding recorded spectra as a function of the optics, the analytical limits of recorded data determined by the optical design, and potential routes for optimization of imaging systems.
Selected Analytical Methods for Environmental Remediation and Recovery (SAM) - Home
The SAM Home page provides access to all information provided in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM), and includes a query function allowing users to search methods by analyte, sample type and instrumentation.
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-16
... Currently Approved Total Coliform Analytical Methods AGENCY: Environmental Protection Agency (EPA). ACTION... of currently approved Total Coliform Rule (TCR) analytical methods. At these meetings, stakeholders will be given an opportunity to discuss potential elements of a method re-evaluation study, such as...
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...
NASA Astrophysics Data System (ADS)
Nayak, Aditya B.; Price, James M.; Dai, Bin; Perkins, David; Chen, Ding Ding; Jones, Christopher M.
2015-06-01
Multivariate optical computing (MOC), an optical sensing technique for analog calculation, allows direct and robust measurement of chemical and physical properties of complex fluid samples in high-pressure/high-temperature (HP/HT) downhole environments. The core of this MOC technology is the integrated computational element (ICE), an optical element with a wavelength-dependent transmission spectrum designed to allow the detector to respond sensitively and specifically to the analytes of interest. A key differentiator of this technology is it uses all of the information present in the broadband optical spectrum to determine the proportion of the analyte present in a complex fluid mixture. The detection methodology is photometric in nature; therefore, this technology does not require a spectrometer to measure and record a spectrum or a computer to perform calculations on the recorded optical spectrum. The integrated computational element is a thin-film optical element with a specific optical response function designed for each analyte. The optical response function is achieved by fabricating alternating layers of high-index (a-Si) and low-index (SiO2) thin films onto a transparent substrate (BK7 glass) using traditional thin-film manufacturing processes (e.g., ion-assisted e-beam vacuum deposition). A proprietary software and process are used to control the thickness and material properties, including the optical constants of the materials during deposition to achieve the desired optical response function. The ion-assisted deposition is useful for controlling the densification of the film, stoichiometry, and material optical constants as well as to achieve high deposition growth rates and moisture-stable films. However, the ion-source can induce undesirable absorption in the film; and subsequently, modify the optical constants of the material during the ramp-up and stabilization period of the e-gun and ion-source, respectively. This paper characterizes the unwanted absorption in the a-Si thin-film using advanced thin-film metrology methods, including spectroscopic ellipsometry and Fourier transform infrared (FTIR) spectroscopy. The resulting analysis identifies a fundamental mechanism contributing to this absorption and a method for minimizing and accounting for the unwanted absorption in the thin-film such that the exact optical response function can be achieved.
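The photometric principle described above amounts to an inner product between the broadband sample spectrum and the ICE transmission function. The sketch below illustrates that calculation with invented spectra and an arbitrary gain/offset calibration; it is not a model of any actual ICE design.

```python
import numpy as np

def ice_response(spectrum, ice_transmission, gain=1.0, offset=0.0):
    """Photometric detector reading: the broadband spectrum weighted by the ICE
    transmission and integrated on the detector, then mapped to an analyte
    value through a simple linear calibration (gain/offset are assumptions)."""
    return gain * float(np.dot(spectrum, ice_transmission)) + offset

# Hypothetical 100-channel optical spectrum of a fluid and a designed ICE curve.
rng = np.random.default_rng(2)
wavelengths = np.linspace(1000, 2500, 100)            # nm, illustrative
fluid_spectrum = np.exp(-((wavelengths - 1700) / 200) ** 2) + 0.05 * rng.random(100)
ice_curve = 0.5 + 0.5 * np.sin(wavelengths / 150)     # stand-in filter function

print(ice_response(fluid_spectrum, ice_curve))
```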
Meteorites and the Evolution of Our Solar System
NASA Technical Reports Server (NTRS)
Nava, David F.
1999-01-01
The study of meteorites has long been of intense interest ever since these objects were discovered to be of extraterrestrial origin. Meteorite research contributes to unraveling the mysteries in understanding the formation and evolution processes of our solar system. Meteorites, of which there are a variety of widely diverse types of chemical and mineralogical compositions, are the most ancient of solar system objects that can be studied in the laboratory. They preserve a unique historical record of the astronomical and astrophysical events of our solar system. This record is being discerned by a host of ever evolving analytical laboratory methods. Recent discoveries of what are believed to be Martian meteorites, lunar meteorites, a meteorite containing indigenous water, and the recovery from the Cretaceous layer of a small meteorite fragment thought to be from the dinosaur-killing asteroid have fueled additional excitement for studying meteorites.
Capturing the spectrum of household food and beverage purchasing behavior: a review.
French, Simone A; Shimotsu, Scott T; Wall, Melanie; Gerlach, Anne Faricy
2008-12-01
The household setting may be the most important level at which to understand the food choices of individuals and how healthful food choices can be promoted. However, there are few available measures of the food purchase behaviors of households and little consensus on the best way to measure it. This review explores the currently available measures of household food purchasing behavior. Three main measures are described, evaluated, and compared: home food inventories, food and beverage purchase records and receipts, and Universal Product Code bar code scanning. The development of coding, aggregation, and analytical methods for these measures of household food purchasing behavior is described. Currently, annotated receipts and records are the most comprehensive, detailed measure of household food purchasing behavior, and are feasible for population-based samples. Universal Product Code scanning is not recommended due to its cost and complexity. Research directions to improve household food purchasing behavior measures are discussed.
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of the cardiac conduction velocity (CCV) estimation for the sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model with zero-mean white Gaussian noise values with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator, when the synchronization times between various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
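Under the simplifying assumptions of a planar wavefront and equal activation-time noise variances, the maximum-likelihood estimate reduces to an ordinary least-squares fit of a plane to the activation times. The sketch below shows that reduced case with hypothetical electrode positions; it does not include the paper's handling of unknown synchronization times between recording sites.

```python
import numpy as np

def estimate_planar_ccv(positions, activation_times):
    """Least-squares fit of a planar wavefront: t_i = t0 + p.x_i, where p is the
    slowness vector.  Conduction velocity is 1/|p|.  With equal AT noise
    variances this coincides with the maximum-likelihood estimate."""
    X = np.column_stack([np.ones(len(positions)), positions])   # [1, x, y]
    coef, *_ = np.linalg.lstsq(X, activation_times, rcond=None)
    slowness = coef[1:]
    return 1.0 / np.linalg.norm(slowness)

# Hypothetical electrode positions (mm) and activation times (ms).
pos = np.array([[0, 0], [2, 0], [0, 2], [2, 2], [1, 1]], dtype=float)
true_velocity = 0.8                                   # mm/ms
ats = pos @ np.array([1.0, 0.5]) / np.linalg.norm([1.0, 0.5]) / true_velocity
noisy_ats = ats + np.random.default_rng(3).normal(0, 0.05, len(ats))
print(estimate_planar_ccv(pos, noisy_ats))
```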
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
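For orientation, the sketch below implements the β-content tolerance-interval style of acceptance test used as a comparator in the paper, with Howe's approximate two-sided tolerance factor and invented validation runs; it is not the proposed generalized pivotal quantity procedure, and the acceptance limit is an assumption.

```python
import numpy as np
from scipy import stats

def beta_content_tolerance_factor(n, content=0.90, confidence=0.90):
    """Howe's approximation to the two-sided beta-content tolerance factor."""
    z = stats.norm.ppf((1 + content) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, df=n - 1)
    return z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)

def method_accepted(results, nominal, limit_pct=15.0, content=0.90, confidence=0.90):
    """Accept the method if the beta-content tolerance interval for % error
    lies entirely within +/- limit_pct (a 'fit for purpose' style rule)."""
    errors = 100 * (np.asarray(results) - nominal) / nominal
    k = beta_content_tolerance_factor(len(errors), content, confidence)
    lo = errors.mean() - k * errors.std(ddof=1)
    hi = errors.mean() + k * errors.std(ddof=1)
    return lo > -limit_pct and hi < limit_pct

# Hypothetical validation runs at a nominal concentration of 100 units.
runs = [98.1, 101.5, 99.2, 100.8, 97.6, 102.3, 100.1, 99.5]
print(method_accepted(runs, nominal=100.0))
```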
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...
The Savannah River Site's groundwater monitoring program. First quarter 1991
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted by EPD/EMS in the first quarter of 1991. It includes the analytical data, field data, data review, quality control, and other documentation for this program, provides a record of the program's activities and rationale, and serves as an official document of the analytical results.
Big Data Analytics in Medicine and Healthcare.
Ristevski, Blagoj; Chen, Ming
2018-05-10
This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics of value, volume, velocity, variety, veracity and variability are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data such as various omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data and electronic health records data. We underline the challenging issues of big data privacy and security. Regarding big data characteristics, some directions for using suitable and promising open-source distributed data processing software platforms are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohimer, J.P.
The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.
Nursing Knowledge: Big Data Science-Implications for Nurse Leaders.
Westra, Bonnie L; Clancy, Thomas R; Sensmeier, Joyce; Warren, Judith J; Weaver, Charlotte; Delaney, Connie W
2015-01-01
The integration of Big Data from electronic health records and other information systems within and across health care enterprises provides an opportunity to develop actionable predictive models that can increase the confidence of nursing leaders' decisions to improve patient outcomes and safety and control costs. As health care shifts to the community, mobile health applications add to the Big Data available. There is an evolving national action plan that includes nursing data in Big Data science, spearheaded by the University of Minnesota School of Nursing. For the past 3 years, diverse stakeholders from practice, industry, education, research, and professional organizations have collaborated through the "Nursing Knowledge: Big Data Science" conferences to create and act on recommendations for inclusion of nursing data, integrated with patient-generated, interprofessional, and contextual data. It is critical for nursing leaders to understand the value of Big Data science and the ways to standardize data and workflow processes to take advantage of newer cutting edge analytics to support analytic methods to control costs and improve patient quality and safety.
Konig, Alexandra; Satt, Aharon; Sorin, Alex; Hoory, Ran; Derreumaux, Alexandre; David, Renaud; Robert, Phillippe H
2018-01-01
Various types of dementia and Mild Cognitive Impairment (MCI) are manifested as irregularities in human speech and language, which have proven to be strong predictors for the disease presence and progression. Therefore, automatic speech analytics provided by a mobile application may be a useful tool in providing additional indicators for assessment and detection of early stage dementia and MCI. 165 participants (subjects with subjective cognitive impairment (SCI), MCI patients, Alzheimer's disease (AD) and mixed dementia (MD) patients) were recorded with a mobile application while performing several short vocal cognitive tasks during a regular consultation. These tasks included verbal fluency, picture description, counting down and a free speech task. The voice recordings were processed in two steps: in the first step, vocal markers were extracted using speech signal processing techniques; in the second, the vocal markers were tested to assess their 'power' to distinguish between SCI, MCI, AD and MD. The second step included training automatic classifiers for detecting MCI and AD, based on machine learning methods, and testing the detection accuracy. The fluency and free speech tasks obtained the highest accuracy rates of classifying AD vs. MD vs. MCI vs. SCI. Using the data, we demonstrated classification accuracy as follows: SCI vs. AD = 92% accuracy; SCI vs. MD = 92% accuracy; SCI vs. MCI = 86% accuracy and MCI vs. AD = 86%. Our results indicate the potential value of vocal analytics and the use of a mobile application for accurate automatic differentiation between SCI, MCI and AD. This tool can provide the clinician with meaningful information for assessment and monitoring of people with MCI and AD based on a non-invasive, simple and low-cost method. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan
2017-09-01
Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null-hypothesis H₀ (when their distribution is uniform) or from the alternative hypothesis H₁ (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H₀, but it also estimates the probability that each specific p value originates from H₀. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H₀ than from H₁. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
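As a simplified, non-Bayesian analogue of the mixture idea, the following sketch fits a two-component mixture to significant p values by expectation-maximization, with a uniform H₀ component and an assumed Beta-like H₁ density; the paper's actual model, priors, and effect-size-based H₁ are not reproduced.

```python
import numpy as np

ALPHA = 0.05

def fit_p_curve_mixture(p_values, a=0.3, n_iter=200):
    """EM for a two-component mixture on significant p values.
    H0 component: uniform on (0, ALPHA).
    H1 component: a * p**(a-1) / ALPHA**a on (0, ALPHA) -- an assumed stand-in
    for an effect-size-based alternative model.
    Returns the estimated H0 proportion and the posterior probability that
    each p value came from H0."""
    p = np.asarray(p_values, dtype=float)
    f0 = np.full_like(p, 1.0 / ALPHA)
    f1 = a * p ** (a - 1) / ALPHA ** a
    pi0 = 0.5
    for _ in range(n_iter):
        w0 = pi0 * f0
        post0 = w0 / (w0 + (1 - pi0) * f1)     # E-step: P(H0 | p_i)
        pi0 = post0.mean()                      # M-step
    return pi0, post0

# Hypothetical set of significant p values.
ps = [0.001, 0.004, 0.012, 0.021, 0.033, 0.041, 0.047, 0.002, 0.038, 0.045]
pi0, post0 = fit_p_curve_mixture(ps)
print(round(pi0, 2), np.round(post0, 2))
```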
Measurement of cardiac output in children by pressure-recording analytical method.
Urbano, Javier; López, Jorge; González, Rafael; Solana, María José; Fernández, Sarah N; Bellón, José M; López-Herce, Jesús
2015-02-01
We evaluated two pressure-recording analytical method (PRAM) software versions (v.1 and v.2) to measure cardiac index (CI) in hemodynamically stable critically ill children and investigate factors that influence PRAM values. The working hypothesis was that PRAM CI measurements would stay within normal limits in hemodynamically stable patients. Ninety-five CI PRAM measurements were analyzed in 47 patients aged 1-168 months. Mean CI was 4.1 ± 1.4 L/min/m(2) (range 2.0-7.0). CI was outside limits defined as normal (3-5 L/min/m(2)) in 53.7% of measurements (47.8% with software v.1 and 69.2% with software v.2, p = 0.062). Moreover, 14.7% of measurements were below 2.5 L/min/m(2), and 13.6% were above 6 L/min/m(2). CI was significantly lower in patients with a clearly visible dicrotic notch than in those without (3.7 vs. 4.6 L/min/m(2), p = 0.004) and in children with a radial arterial catheter (3.5 L/min/m(2)) than in those with a brachial (4.4 L/min/m(2), p = 0.021) or femoral catheter (4.7 L/min/m(2), p = 0.005). By contrast, CI was significantly higher in children under 12 months (4.2 vs. 3.6 L/min/m(2), p = 0.034) and weighing under 10 kg (4.2 vs. 3.6 L/min/m(2), p = 0.026). No significant differences were observed between cardiac surgery patients and the rest of children. A high percentage of CI measurements registered by PRAM were outside normal limits in hemodynamically stable, critically ill children. CI measured by PRAM may be influenced by the age, weight, location of catheter, and presence of a dicrotic notch.
Actigraphic Assessment of Motor Activity in Acutely Admitted Inpatients with Bipolar Disorder
Krane-Gartiser, Karoline; Henriksen, Tone Elise Gjotterud; Morken, Gunnar; Vaaler, Arne; Fasmer, Ole Bernt
2014-01-01
Introduction: Mania is associated with increased activity, whereas psychomotor retardation is often found in bipolar depression. Actigraphy is a promising tool for monitoring phase shifts and changes following treatment in bipolar disorder. The aim of this study was to compare recordings of motor activity in mania, bipolar depression and healthy controls, using linear and nonlinear analytical methods. Materials and Methods: Recordings from 18 acutely hospitalized inpatients with mania were compared to 12 recordings from bipolar depression inpatients and 28 healthy controls. 24-hour actigraphy recordings and 64-minute periods of continuous motor activity in the morning and evening were analyzed. Mean activity and several measures of variability and complexity were calculated. Results: Patients with depression had a lower mean activity level compared to controls, but higher variability shown by increased standard deviation (SD) and root mean square successive difference (RMSSD) over 24 hours and in the active morning period. The patients with mania had lower first lag autocorrelation compared to controls, and Fourier analysis showed higher variance in the high frequency part of the spectrum corresponding to the period from 2–8 minutes. Both patient groups had a higher RMSSD/SD ratio compared to controls. In patients with mania we found an increased complexity of time series in the active morning period, compared to patients with depression. The findings in the patients with mania are similar to previous findings in patients with schizophrenia and healthy individuals treated with a glutamatergic antagonist. Conclusion: We have found distinctly different activity patterns in hospitalized patients with bipolar disorder in episodes of mania and depression, assessed by actigraphy and analyzed with linear and nonlinear mathematical methods, as well as clear differences between the patients and healthy comparison subjects. PMID:24586883
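A minimal sketch of the linear variability measures named in the abstract (mean, SD, RMSSD, RMSSD/SD, lag-1 autocorrelation), computed on simulated activity counts; the nonlinear complexity measures are not shown.

```python
import numpy as np

def activity_measures(counts):
    """Linear variability measures for an actigraphy series: mean, SD,
    RMSSD (root mean square of successive differences), their ratio, and
    the lag-1 autocorrelation."""
    x = np.asarray(counts, dtype=float)
    diffs = np.diff(x)
    rmssd = np.sqrt(np.mean(diffs ** 2))
    sd = x.std(ddof=1)
    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    return {"mean": x.mean(), "sd": sd, "rmssd": rmssd,
            "rmssd_sd_ratio": rmssd / sd, "lag1_autocorr": lag1}

# Hypothetical minute-by-minute activity counts for a 64-minute morning period.
rng = np.random.default_rng(4)
counts = np.clip(rng.normal(200, 80, 64), 0, None)
print(activity_measures(counts))
```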
Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.
Sidek, Khairul Azami; Khalil, Ibrahim
2013-01-01
Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were applied for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely, Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM) were used for performance evaluation before and after applying interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved higher matching percentage values of up to 4% for CC, 3% for PRD and 94% for WDM. These results are compared with the existing method when using ECG recordings with lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improves the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
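A hedged sketch of the interpolation step and two of the matching measures (cross-correlation and PRD), using SciPy's PCHIP and cubic-spline interpolators on a synthetic beat; the sampling rates and signal are illustrative only, and the wavelet distance measure is not shown.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

def upsample(ecg, fs_in, fs_out, kind="pchip"):
    """Resample a low-frequency ECG segment onto a denser time grid."""
    t_in = np.arange(len(ecg)) / fs_in
    t_out = np.arange(0, t_in[-1], 1 / fs_out)
    interp = PchipInterpolator(t_in, ecg) if kind == "pchip" else CubicSpline(t_in, ecg)
    return interp(t_out)

def prd(reference, test):
    """Percent root-mean-square difference between two equal-length signals."""
    reference, test = np.asarray(reference), np.asarray(test)
    return 100 * np.sqrt(np.sum((reference - test) ** 2) / np.sum(reference ** 2))

def max_cross_correlation(a, b):
    """Peak of the normalized cross-correlation (used as a matching score)."""
    a = (a - np.mean(a)) / (np.std(a) * len(a))
    b = (b - np.mean(b)) / np.std(b)
    return float(np.max(np.correlate(a, b, mode="full")))

# Hypothetical enrolment/recognition beats sampled at 128 Hz, upsampled to 512 Hz.
t = np.arange(0, 1, 1 / 128)
beat = np.sin(2 * np.pi * 5 * t) * np.exp(-5 * (t - 0.5) ** 2)
enrol = upsample(beat, 128, 512)
recog = upsample(beat, 128, 512, kind="spline")
print(prd(enrol, recog), max_cross_correlation(enrol, recog))
```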
Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette
2018-05-10
Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that far precede the work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.
2013-01-01
Background: Inspectors from the US Occupational Safety and Health Administration (OSHA) have been collecting industrial hygiene samples since 1972 to verify compliance with Permissible Exposure Limits. Starting in 1979, these measurements were computerized into the Integrated Management Information System (IMIS). In 2010, a dataset of over 1 million personal sample results analysed at OSHA's central laboratory in Salt Lake City [Chemical Exposure Health Data (CEHD)], only partially overlapping the IMIS database, was placed into the public domain via the internet. We undertook this study to inform potential users about the relationship between this newly available OSHA data and IMIS and to offer insight about the opportunities and challenges associated with the use of OSHA measurement data for occupational exposure assessment. Methods: We conducted a literature review of previous uses of IMIS in occupational health research and performed a descriptive analysis of the data recently made available and compared them to the IMIS database for lead, the most frequently sampled agent. Results: The literature review yielded 29 studies reporting use of IMIS data, but none using the CEHD data. Most studies focused on a single contaminant, with silica and lead being most frequently analysed. Sixteen studies addressed potential bias in IMIS, mostly by examining the association between exposure levels and ancillary information. Although no biases of appreciable magnitude were consistently reported across studies and agents, these assessments may have been obscured by selective under-reporting of non-detectable measurements. The CEHD data comprised 1 450 836 records from 1984 to 2009, not counting analytical blanks and erroneous records. Seventy-eight agents with >1000 personal samples yielded 1 037 367 records. Unlike IMIS, which contains administrative information (company size, job description), ancillary information in the CEHD data is mostly analytical. When the IMIS and CEHD measurements of lead were merged, 23 033 (39.2%) records were in common to both IMIS and CEHD datasets, 10 681 (18.2%) records were only in IMIS, and 25 012 (42.6%) records were only in the CEHD database. While IMIS-only records represent data analysed in other laboratories, CEHD-only records suggest partial reporting of sampling results by OSHA inspectors into IMIS. For lead, the percentage of non-detects in the CEHD-only data was 71% compared to 42% and 46% in the both-IMIS-CEHD and IMIS-only datasets, respectively, suggesting differential under-reporting of non-detects in IMIS. Conclusions: IMIS and the CEHD datasets represent the biggest source of multi-industry exposure data in the USA and should be considered as a valuable source of information for occupational exposure assessment. The lack of empirical data on biases, adequate interpretation of non-detects in OSHA data, complicated by suspected differential under-reporting, remain the principal challenges to the valid estimation of average exposure conditions. We advocate additional comparisons between IMIS and CEHD data and discuss analytical strategies that may play a key role in meeting these challenges. PMID:22952385
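As a schematic of the record-linkage arithmetic described above, the sketch below merges two toy tables with pandas and reports overlap proportions and non-detect shares by overlap group; the column names and values are assumptions and do not reflect OSHA's actual schemas.

```python
import pandas as pd

# Hypothetical simplified extracts of the two databases, keyed on a sample
# identifier; zeros stand in for non-detect results.
imis = pd.DataFrame({"sample_id": [1, 2, 3, 4], "result_ug_m3": [12.0, 0.0, 5.5, 3.1]})
cehd = pd.DataFrame({"sample_id": [3, 4, 5, 6], "result_ug_m3": [5.5, 3.1, 0.0, 0.0]})

merged = imis.merge(cehd, on="sample_id", how="outer",
                    suffixes=("_imis", "_cehd"), indicator=True)

# Share of records found in both datasets, in IMIS only, and in CEHD only.
print(merged["_merge"].value_counts(normalize=True))

# Proportion of non-detects (coded as zero here) within each overlap group.
merged["nondetect"] = merged[["result_ug_m3_imis", "result_ug_m3_cehd"]].min(axis=1).eq(0)
print(merged.groupby("_merge")["nondetect"].mean())
```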
Characterization of electrophysiological propagation by multichannel sensors
Bradshaw, L. Alan; Kim, Juliana H.; Somarajan, Suseela; Richards, William O.; Cheng, Leo K.
2016-01-01
Objective: The propagation of electrophysiological activity measured by multichannel devices could have significant clinical implications. Gastric slow waves normally propagate along longitudinal paths that are evident in recordings of serosal potentials and transcutaneous magnetic fields. We employed a realistic model of gastric slow wave activity to simulate the transabdominal magnetogastrogram (MGG) recorded in a multichannel biomagnetometer and to determine characteristics of electrophysiological propagation from MGG measurements. Methods: Using MGG simulations of slow wave sources in a realistic abdomen (both superficial and deep sources) and in a horizontally-layered volume conductor, we compared two analytic methods (Second Order Blind Identification, SOBI and Surface Current Density, SCD) that allow quantitative characterization of slow wave propagation. We also evaluated the performance of the methods with simulated experimental noise. The methods were also validated in an experimental animal model. Results: Mean square errors in position estimates were within 2 cm of the correct position, and average propagation velocities within 2 mm/s of the actual velocities. SOBI propagation analysis outperformed the SCD method for dipoles in the superficial and horizontal layer models with and without additive noise. The SCD method gave better estimates for deep sources, but did not handle additive noise as well as SOBI. Conclusion: SOBI-MGG and SCD-MGG were used to quantify slow wave propagation in a realistic abdomen model of gastric electrical activity. Significance: These methods could be generalized to any propagating electrophysiological activity detected by multichannel sensor arrays. PMID:26595907
White, Alec F.; Head-Gordon, Martin; McCurdy, C. William
2017-01-30
The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the ²Πg shape resonance of N₂⁻, which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.
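As a generic illustration of analytic continuation with rational (Padé-type) approximants, and not a reproduction of the type II/III approximants or stabilizing potentials evaluated in the paper, the sketch below fits a rational function to samples of a synthetic real-valued curve and then evaluates it at a complex argument, which is the essential step by which a complex Siegert energy is extracted from real bound-state data.

```python
# Toy example: fit E(lam) ~ P_L(lam)/Q_M(lam) on a real grid, evaluate at a complex point.
import numpy as np

def fit_pade(lam, E, L=3, M=3):
    """Least-squares Pade fit with Q normalized so that q0 = 1."""
    A = np.hstack([np.vander(lam, L + 1, increasing=True),                       # p0..pL
                   -E[:, None] * np.vander(lam, M + 1, increasing=True)[:, 1:]]) # q1..qM
    coef, *_ = np.linalg.lstsq(A, E, rcond=None)
    return coef[:L + 1], np.concatenate(([1.0], coef[L + 1:]))

def eval_pade(z, p, q):
    return np.polyval(p[::-1], z) / np.polyval(q[::-1], z)

# Synthetic "bound-state energies" known only on a real interval.
f = lambda z: np.exp(z) / (2.0 - z)
lam = np.linspace(0.0, 1.0, 12)
p, q = fit_pade(lam, f(lam))

z_phys = 0.5 + 0.5j                      # continuation target off the sampled axis
print("continued value:", eval_pade(z_phys, p, q))
print("reference value:", f(z_phys))
```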
Method for reduction of selected ion intensities in confined ion beams
Eiden, Gregory C.; Barinaga, Charles J.; Koppenaal, David W.
1998-01-01
A method for producing an ion beam having an increased proportion of analyte ions compared to carrier gas ions is disclosed. Specifically, the method has the step of addition of a charge transfer gas to the carrier analyte combination that accepts charge from the carrier gas ions yet minimally accepts charge from the analyte ions thereby selectively neutralizing the carrier gas ions. Also disclosed is the method as employed in various analytical instruments including an inductively coupled plasma mass spectrometer.
Method for reduction of selected ion intensities in confined ion beams
Eiden, G.C.; Barinaga, C.J.; Koppenaal, D.W.
1998-06-16
A method for producing an ion beam having an increased proportion of analyte ions compared to carrier gas ions is disclosed. Specifically, the method has the step of addition of a charge transfer gas to the carrier analyte combination that accepts charge from the carrier gas ions yet minimally accepts charge from the analyte ions thereby selectively neutralizing the carrier gas ions. Also disclosed is the method as employed in various analytical instruments including an inductively coupled plasma mass spectrometer. 7 figs.
Larson, S.P.; Mann, W.B.; Steele, T.D.; Susag, R.H.
1976-01-01
Historical records were analyzed to determine effects of population, pollution-control strategy, and other factors on water quality of the Mississippi River. Isopleths of DO (dissolved oxygen) concentrations and lines of equal stream temperature indicated periodic data could be used to guide sampling of certain critical conditions in time and space. Long-term records revealed generally mixed changes in quality in the Mississippi River. Several mean-time series were used to show seasonal variation in water quality and effects of initiation of wastewater treatment in 1938. The Kendall's tau statistical test indicated a significant increase in DO in the upper reach of the river during the period of record. If only the post-1938 period is considered, DO conditions remained fairly constant below the metropolitan plant and biochemical oxygen demand increased throughout the main-stem reach. Significant trends in stream temperature were indicated for winter periods using the Kendall's tau procedure. The Mann-Whitney statistical test gave estimates of a 98-percent confidence interval of the magnitudes of change. (Woodard-USGS)
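For readers unfamiliar with the two tests named above, a minimal sketch using invented dissolved-oxygen data (not the Mississippi River record) is:

```python
# Hedged illustration of a Kendall tau trend test and a Mann-Whitney before/after
# comparison on made-up annual DO data.
import numpy as np
from scipy.stats import kendalltau, mannwhitneyu

rng = np.random.default_rng(1)
years = np.arange(1930, 1976)
do_mgL = 6.0 + 0.02 * (years - years[0]) + rng.normal(0, 0.5, years.size)  # upward trend

tau, p_trend = kendalltau(years, do_mgL)                 # monotonic-trend test
print(f"Kendall tau = {tau:.2f}, p = {p_trend:.3f}")

# Mann-Whitney comparison of the pre- and post-1938 periods (treatment start).
before, after = do_mgL[years < 1938], do_mgL[years >= 1938]
stat, p_shift = mannwhitneyu(before, after)
print(f"Mann-Whitney U = {stat:.1f}, p = {p_shift:.3f}")
```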
Method for Operating a Sensor to Differentiate Between Analytes in a Sample
Kunt, Tekin; Cavicchi, Richard E; Semancik, Stephen; McAvoy, Thomas J
1998-07-28
Disclosed is a method for operating a sensor to differentiate between first and second analytes in a sample. The method comprises the steps of determining an input profile for the sensor which will enhance the difference in the output profiles of the sensor as between the first analyte and the second analyte; determining a first analyte output profile as observed when the input profile is applied to the sensor; determining a second analyte output profile as observed when the temperature profile is applied to the sensor; introducing the sensor to the sample while applying the temperature profile to the sensor, thereby obtaining a sample output profile; and evaluating the sample output profile as against the first and second analyte output profiles to thereby determine which of the analytes is present in the sample.
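A minimal sketch of the final matching step, comparing a sample output profile against stored reference profiles and picking the best correlate, might look like the following; the profiles are synthetic placeholders, not data from the patented sensor.

```python
# Illustrative profile matching only; not the patent's actual decision logic.
import numpy as np

def best_match(sample_profile, reference_profiles):
    """Return the reference analyte whose output profile best correlates with the sample."""
    scores = {name: np.corrcoef(sample_profile, ref)[0, 1]
              for name, ref in reference_profiles.items()}
    return max(scores, key=scores.get), scores

t = np.linspace(0, 1, 50)
references = {"analyte A": np.exp(-5 * t), "analyte B": np.sin(np.pi * t)}
sample = np.exp(-5 * t) + 0.05 * np.random.randn(t.size)   # noisy A-like response

name, scores = best_match(sample, references)
print("sample classified as:", name, scores)
```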
Kim, Taehyun; Lee, Kiyoung; Yang, Wonho; Yu, Seung Do
2012-08-01
Although the global positioning system (GPS) has been suggested as an alternative way to determine time-location patterns, its use has been limited. The purpose of this study was to evaluate a new analytical method of classifying time-location data obtained by GPS. A field technician carried a GPS device while simulating various scripted activities and recorded all movements by the second in an activity diary. The GPS device recorded location data once every 15 s. The daily monitoring was repeated 18 times. The time-location data obtained by the GPS were compared with the activity diary to determine selection criteria for the classification of the GPS data. The GPS data were classified into four microenvironments (residential indoors, other indoors, transit, and walking outdoors); the selection criteria were the number of satellites used (used-NSAT), speed, and distance from the residence. The GPS data were classified as indoors when the used-NSAT was below 9. Data classified as indoors were further classified as residential indoors when the distance from the residence was less than 40 m; otherwise, they were classified as other indoors. Data classified as outdoors were further classified as being in transit when the speed exceeded 2.5 m/s; otherwise, they were classified as walking outdoors. The average simple percentage agreement between the time-location classifications and the activity diary was 84.3 ± 12.4%, and the kappa coefficient was 0.71. The average differences between the time diary and the GPS results were 1.6 ± 2.3 h for the time spent in residential indoors, 0.9 ± 1.7 h for the time spent in other indoors, 0.4 ± 0.4 h for the time spent in transit, and 0.8 ± 0.5 h for the time spent walking outdoors. This method can be used to determine time-activity patterns in exposure-science studies.
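Because the abstract states the decision thresholds explicitly, the classification logic can be transcribed almost directly; the function below is only a paraphrase of those published rules, with illustrative argument names.

```python
# Classification rules as reported: used-NSAT < 9 -> indoors; distance from the
# residence < 40 m -> residential indoors; outdoors with speed > 2.5 m/s -> transit,
# otherwise walking outdoors.  Argument names are illustrative.
def classify_microenvironment(used_nsat, distance_from_home_m, speed_m_s):
    if used_nsat < 9:                        # weak satellite reception -> indoors
        return "residential indoors" if distance_from_home_m < 40 else "other indoors"
    return "transit" if speed_m_s > 2.5 else "walking outdoors"

print(classify_microenvironment(used_nsat=6, distance_from_home_m=12, speed_m_s=0.0))
print(classify_microenvironment(used_nsat=11, distance_from_home_m=500, speed_m_s=8.3))
```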
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, N.B.; Walker, J.F.
1990-01-01
The methods are classified into two general categories, subjective and analytical, depending on whether individual judgement is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods, and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used for streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice adjustment factor) may be appropriate for use for stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge ratio and multiple regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
Du, Yuanqi; Xia, Ling; Xiao, Xiaohua; Li, Gongke; Chen, Xiaoguang
2018-06-15
Nowadays, the safety of cosmetics is a widespread concern. Amines are common cosmetic additives. Some of them, such as amino acids, are beneficial. Another amine, ε-aminocaproic acid (EACA), is prohibited as a cosmetic additive because of its adverse reactions. In this study, a simple, rapid, sensitive and eco-friendly one-step ultrasonic-assisted extraction and derivatization (UAE-D) method was developed for the determination of EACA and amino acids in cosmetics by coupling with high-performance liquid chromatography (HPLC). With this sample preparation method, extraction and derivatization of EACA and amino acids were completed in one step in an ultrasound field. During this procedure, 4-fluoro-7-nitrobenzofurazan (NBD-F) was applied as the derivatization reagent. The extraction conditions, including the amount of NBD-F, the extraction and derivatization temperature, the ultrasonic vibration time and the pH value of the aqueous phase, were evaluated. Meanwhile, the extraction mechanism was investigated. Under optimized conditions, the method detection limits were 0.086-0.15 μg/L and the method quantitation limits were 0.29-0.47 μg/L, with RSDs less than 3.7% (n = 3). The recoveries of EACA and amino acids obtained from cosmetic samples ranged from 76.9% to 122.3%. Amino acids were found in all selected samples and quantified in the range from 1.9 ± 0.9 to 677.2 ± 17.9 μg/kg, and EACA was found and quantified at 1284.3 ± 22.1 μg/kg in a toner sample. This UAE-D-HPLC method shortened and simplified the sample pretreatment and enhanced the sensitivity of the analytical method: only 10 min was needed for the total sample preparation process, and the method detection limits were two orders of magnitude lower than those in literature reports. Furthermore, we reduced the consumption of solvent and minimized the usage of organic solvents, moving the method towards green analytical chemistry. In brief, our UAE-D-HPLC method is a simple, rapid, sensitive and eco-friendly analytical method for the determination of EACA and amino acids in cosmetics. Copyright © 2018 Elsevier B.V. All rights reserved.
PESTICIDE ANALYTICAL METHODS TO SUPPORT DUPLICATE-DIET HUMAN EXPOSURE MEASUREMENTS
Historically, analytical methods for determination of pesticides in foods have been developed in support of regulatory programs and are specific to food items or food groups. Most of the available methods have been developed, tested and validated for relatively few analytes an...
ERIC Educational Resources Information Center
Ember, Lois R.
1977-01-01
The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)
Curling, Louise; Kellett, Stephen; Totterdell, Peter; Parry, Glenys; Hardy, Gillian; Berry, Katherine
2018-03-01
The evidence base for the treatment of morbid jealousy with integrative therapies is thin. This study explored the efficacy of cognitive analytic therapy (CAT). An adjudicated hermeneutic single-case efficacy design evaluated the cognitive analytic treatment of a patient meeting diagnostic criteria for obsessive morbid jealousy. A rich case record was developed using a matrix of nomothetic and ideographic quantitative and qualitative outcomes. This record was then debated by sceptic and affirmative research teams. Experienced psychotherapy researchers acted as judges, assessed the original case record, and heard the affirmative-versus-sceptic debate. Judges pronounced an opinion regarding the efficacy of the therapy. The efficacy of CAT was supported by all three judges. Each ruled that change had occurred due to the action of the therapy, beyond any level of reasonable doubt. This research demonstrates the potential usefulness of CAT in treating morbid jealousy and suggests that CAT is conceptually well suited. Suggestions for future clinical and research directions are provided. The relational approach of CAT makes it a suitable therapy for morbid jealousy. The narrative reformulation component of CAT appears to facilitate early change in chronic jealousy patterns. It is helpful for therapists during sessions to use CAT theory to diagrammatically spell out the patterns maintaining jealousy. © 2017 The British Psychological Society.
Buonfiglio, Marzia; Toscano, M; Puledda, F; Avanzini, G; Di Clemente, L; Di Sabato, F; Di Piero, V
2015-03-01
Habituation is considered one of the most basic mechanisms of learning. A habituation deficit to several sensory stimulations has been defined as a trait of the migraine brain and has also been observed in other disorders. On the other hand, an analytic information-processing style is characterized by the habit of continually evaluating stimuli, and it has been associated with migraine. We investigated a possible correlation between lack of habituation of visual evoked potentials and analytic cognitive style in healthy subjects. According to the Sternberg-Wagner self-assessment inventory, 15 healthy volunteers (HV) with high analytic scores and 15 HV with high global scores were recruited. Both groups underwent visual evoked potential recordings after psychological evaluation. We observed a significant lack of habituation in analytic individuals compared with the global group. In conclusion, reduced habituation of visual evoked potentials was observed in analytic subjects. Our results suggest that further research should be undertaken regarding the relationship between analytic cognitive style and lack of habituation in both physiological and pathophysiological conditions.
Infrared Ion Spectroscopy at Felix: Applications in Peptide Dissociation and Analytical Chemistry
NASA Astrophysics Data System (ADS)
Oomens, Jos
2016-06-01
Infrared free electron lasers such as those in Paris, Berlin and Nijmegen have been at the forefront of the development of infrared ion spectroscopy. In this contribution, I will give an overview of new developments in IR spectroscopy of stored ions at the FELIX Laboratory. In particular, I will focus on recent developments made possible by the coupling of a new commercial ion trap mass spectrometer to the FELIX beamline. The possibility to record IR spectra of mass-selected molecular ions and their reaction products has in recent years shed new light on our understanding of collision induced dissociation (CID) reactions of protonated peptides in mass spectrometry (MS). We now show that it is possible to record IR spectra for the products of electron transfer dissociation (ETD) reactions, [M + nH]^n+ + A^- → [M + nH]^(n-1)+ + A → {dissociation of analyte}. These reactions are now widely used in novel MS-based protein sequencing strategies, but involve complex radical chemistry. The spectroscopic results allow stringent verification of computationally predicted product structures and hence reaction mechanisms and H-atom migration. The sensitivity and high dynamic range of a commercial mass spectrometer also allows us to apply infrared ion spectroscopy to analytes in complex "real-life" mixtures. The ability to record IR spectra with the sensitivity of mass-spectrometric detection is unrivalled in analytical sciences and is particularly useful in the identification of small (biological) molecules, such as in metabolomics. We report preliminary results of a pilot study on the spectroscopic identification of small metabolites in urine and plasma samples.
40 CFR 86.107-98 - Sampling and analytical system.
Code of Federal Regulations, 2012 CFR
2012-07-01
... system (recorder and sensor) shall have an accuracy of ±3 °F (±1.7 °C). The recorder (data processor... ambient temperature sensors, connected to provide one average output, located 3 feet above the floor at... wall. For diurnal emission testing, an additional temperature sensor shall be located underneath the...
40 CFR 86.107-98 - Sampling and analytical system.
Code of Federal Regulations, 2013 CFR
2013-07-01
... system (recorder and sensor) shall have an accuracy of ±3 °F (±1.7 °C). The recorder (data processor... ambient temperature sensors, connected to provide one average output, located 3 feet above the floor at... wall. For diurnal emission testing, an additional temperature sensor shall be located underneath the...
40 CFR 86.107-98 - Sampling and analytical system.
Code of Federal Regulations, 2014 CFR
2014-07-01
... system (recorder and sensor) shall have an accuracy of ±3 °F (±1.7 °C). The recorder (data processor... ambient temperature sensors, connected to provide one average output, located 3 feet above the floor at... wall. For diurnal emission testing, an additional temperature sensor shall be located underneath the...
Small Particle Response to Fluid Motion Using Tethered Particles to Simulate Microgravity
NASA Technical Reports Server (NTRS)
Trolinger, James; L'Esperance, Drew; Rangel, Roger; Coimbra, Carlos; Witherow, William K.; Rogers, Jan; Lal, Ravindra
2003-01-01
This paper reports on ground based work conducted to support the Spaceflight Definition project SHIVA (Spaceflight Holography Investigation in a Virtual Apparatus). SHIVA will advance our understanding of the movement of a particle in a fluid. Gravity usually dominates the equations of motion, but in microgravity, as well as on earth, other terms can become important. Through an innovative application of fractional differential equations, two members of our team produced the first analytical solution of a fundamental equation of motion, which had only been solved numerically or by approximation before. The general solution predicts that the usually neglected history term becomes important in particle response to a sinusoidal fluid movement when the characteristic viscous time is of the same order as the fluid oscillation period, and peaks when the two times are equal. In this case three force terms, the Stokes drag, the added mass, and the history drag, must all be included in predicting particle movement. We have developed diagnostic recording methods using holography to save all of the particle field data, allowing the experiment to essentially be transferred from space back to earth in what we call the virtual apparatus for on-earth microgravity experimentation. We can quantify precisely the three-dimensional motion of sets of particles, allowing us to test and apply the new analytical solutions. We are examining the response of particles of up to 2 mm radius to fluid oscillation at frequencies up to 80 Hz with amplitudes up to 200 microns. Ground studies to support the flight development program have employed various schemes to simulate microgravity. One of the most reliable and meaningful methods uses spheres tethered to a fine hair suspended in the fluid. We have also investigated particles with nearly neutral buoyancy. Recordings are made at the peak amplitudes of vibration of the cell, providing a measure of the ratio of fluid to particle amplitude. The experiment requires precise location of the particle to within microns during recording, and techniques for achieving this are one of the project challenges. Focused microscopic images and diffraction patterns are used. To make the experiment more versatile, the spaceflight system will record holograms both on film and electronically. A cross-correlation procedure enables sub-pixel accuracy for electronic recordings, partially accommodating the lower spatial resolution of CCDs. The electronic holograms can be downlinked, providing real-time data. Results of the ground experiments, the flight experiment design, and data analysis procedures are reported.
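For orientation, one commonly cited textbook form of the particle equation of motion (a Basset-Boussinesq-Oseen type equation) is reproduced below solely to identify the three force terms named in the abstract; it is not necessarily the exact equation solved analytically for SHIVA.

```latex
m_p \frac{d\mathbf{v}}{dt} =
  \underbrace{6\pi\mu a\,(\mathbf{u}-\mathbf{v})}_{\text{Stokes drag}}
  + \underbrace{\tfrac{1}{2}\,m_f\!\left(\frac{D\mathbf{u}}{Dt}-\frac{d\mathbf{v}}{dt}\right)}_{\text{added mass}}
  + \underbrace{6\pi\mu a^{2}\!\int_{0}^{t}\frac{d(\mathbf{u}-\mathbf{v})/d\tau}{\sqrt{\pi\nu\,(t-\tau)}}\,d\tau}_{\text{history (Basset) drag}}
  + m_f\frac{D\mathbf{u}}{Dt} + (m_p - m_f)\,\mathbf{g}
```

Here u and v are the fluid and particle velocities, a the particle radius, μ and ν the dynamic and kinematic viscosities, and m_f the mass of displaced fluid; the history integral is the term that, per the abstract, becomes comparable to the Stokes drag when the viscous time a²/ν approaches the oscillation period.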
Major advances in testing of dairy products: milk component and dairy product attribute testing.
Barbano, D M; Lynch, J M
2006-04-01
Milk component analysis is relatively unusual in the field of quantitative analytical chemistry because an analytical test result determines the allocation of very large amounts of money between buyers and sellers of milk. Therefore, there is high incentive to develop and refine these methods to achieve a level of analytical performance rarely demanded of most methods or laboratory staff working in analytical chemistry. In the last 25 yr, well-defined statistical methods to characterize and validate analytical method performance combined with significant improvements in both the chemical and instrumental methods have allowed achievement of improved analytical performance for payment testing. A shift from marketing commodity dairy products to the development, manufacture, and marketing of value added dairy foods for specific market segments has created a need for instrumental and sensory approaches and quantitative data to support product development and marketing. Bringing together sensory data from quantitative descriptive analysis and analytical data from gas chromatography olfactometry for identification of odor-active compounds in complex natural dairy foods has enabled the sensory scientist and analytical chemist to work together to improve the consistency and quality of dairy food flavors.
Beda, Alessandro; Simpson, David M; Faes, Luca
2017-01-01
The growing interest in personalized medicine requires making inferences from descriptive indexes estimated from individual recordings of physiological signals, with statistical analyses focused on individual differences between/within subjects, rather than comparing supposedly homogeneous cohorts. To this end, methods to compute confidence limits of individual estimates of descriptive indexes are needed. This study introduces numerical methods to compute such confidence limits and perform statistical comparisons between indexes derived from autoregressive (AR) modeling of individual time series. Analytical approaches are generally not viable, because the indexes are usually nonlinear functions of the AR parameters. We exploit Monte Carlo (MC) and Bootstrap (BS) methods to reproduce the sampling distribution of the AR parameters and indexes computed from them. Here, these methods are implemented for spectral and information-theoretic indexes of heart-rate variability (HRV) estimated from AR models of heart-period time series. First, the MC and BS methods are tested in a wide range of synthetic HRV time series, showing good agreement with a gold-standard approach (i.e. multiple realizations of the "true" process driving the simulation). Then, real HRV time series measured from volunteers performing cognitive tasks are considered, documenting (i) the strong variability of confidence limits' width across recordings, (ii) the diversity of individual responses to the same task, and (iii) frequent disagreement between the cohort-average response and that of many individuals. We conclude that MC and BS methods are robust in estimating confidence limits of these AR-based indexes and thus recommended for short-term HRV analysis. Moreover, the strong inter-individual differences in the response to tasks shown by AR-based indexes evidence the need for individual-by-individual assessments of HRV features. Given their generality, MC and BS methods are promising for applications in biomedical signal processing and beyond, providing a powerful new tool for assessing the confidence limits of indexes estimated from individual recordings. PMID:28968394
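The following numpy-only sketch illustrates the general Monte Carlo idea (regenerate series from the fitted AR model, refit, and take percentiles of the recomputed index); it is not the authors' implementation, and it uses total AR spectral power as a stand-in for an HRV index.

```python
# Hedged sketch of Monte Carlo confidence limits for an AR-based index.
import numpy as np

rng = np.random.default_rng(0)

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns coefficients a[1..p] and residual std."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, (y - X @ a).std(ddof=p)

def simulate_ar(a, sigma, n, burn=200):
    p = len(a)
    x = np.zeros(n + burn)
    for t in range(p, n + burn):
        x[t] = x[t - p:t][::-1] @ a + sigma * rng.normal()
    return x[burn:]

def ar_index(a, sigma, nfreq=256):
    """Example index: total power of the AR spectrum (arbitrary units)."""
    f = np.linspace(0.0, 0.5, nfreq)
    H = 1 - np.exp(-2j * np.pi * np.outer(np.arange(1, len(a) + 1), f)).T @ a
    return np.mean(sigma ** 2 / np.abs(H) ** 2) * 0.5

# "Recorded" heart-period series (synthetic here), one per individual.
x = simulate_ar(np.array([0.6, -0.2]), 1.0, 300)
a_hat, s_hat = fit_ar(x - x.mean(), p=2)
point = ar_index(a_hat, s_hat)

# Monte Carlo: regenerate series from the fitted model, refit, recompute the index.
replicates = []
for _ in range(300):
    xs = simulate_ar(a_hat, s_hat, len(x))
    ab, sb = fit_ar(xs - xs.mean(), p=2)
    replicates.append(ar_index(ab, sb))
lo, hi = np.percentile(replicates, [2.5, 97.5])
print(f"index = {point:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```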
Improving Interoperability by Incorporating UnitsML Into Markup Languages
Celebi, Ismet; Dragoset, Robert A.; Olsen, Karen J.; Schaefer, Reinhold; Kramer, Gary W.
2010-01-01
Maintaining the integrity of analytical data over time is a challenge. Years ago, data were recorded on paper that was pasted directly into a laboratory notebook. The digital age has made maintaining the integrity of data harder. Nowadays, digitized analytical data are often separated from information about how the sample was collected and prepared for analysis and how the data were acquired. The data are stored on digital media, while the related information about the data may be written in a paper notebook or stored separately in other digital files. Sometimes the connection between this “scientific meta-data” and the analytical data is lost, rendering the spectrum or chromatogram useless. We have been working with ASTM Subcommittee E13.15 on Analytical Data to create the Analytical Information Markup Language or AnIML—a new way to interchange and store spectroscopy and chromatography data based on XML (Extensible Markup Language). XML is a language for describing what data are by enclosing them in computer-useable tags. Recording the units associated with the analytical data and metadata is an essential issue for any data representation scheme that must be addressed by all domain-specific markup languages. As scientific markup languages proliferate, it is very desirable to have a single scheme for handling units to facilitate moving information between different data domains. At NIST, we have been developing a general markup language just for units that we call UnitsML. This presentation will describe how UnitsML is used and how it is being incorporated into AnIML. PMID:27134778
Optical asymmetric cryptography based on amplitude reconstruction of elliptically polarized light
NASA Astrophysics Data System (ADS)
Cai, Jianjun; Shen, Xueju; Lei, Ming
2017-11-01
We propose a novel optical asymmetric image encryption method based on amplitude reconstruction of elliptically polarized light, which is free from the silhouette problem. The original image is first analytically separated into two phase-only masks, and then the two masks are encoded into the amplitudes of the orthogonal polarization components of an elliptically polarized light. Finally, the elliptically polarized light propagates through a linear polarizer, and the output intensity distribution is recorded by a CCD camera to obtain the ciphertext. The whole encryption procedure can be implemented with commonly used optical elements, and it combines a diffusion process and a confusion process. As a result, the proposed method achieves high robustness against iterative-algorithm-based attacks. Simulation results are presented to prove the validity of the proposed cryptosystem.
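The first step named above, splitting an image into two phase-only masks, has a standard closed form that a short sketch can verify numerically; the polarization encoding and optical recording stages are not modeled here, and the image is a random placeholder.

```python
# Standard two-phase-only-mask decomposition: |P1| = |P2| = 1 and |P1 + P2| equals
# the (normalized) image amplitude, since P1 + P2 = 2*cos(delta)*exp(i*phi).
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                 # stand-in for the original image, in [0, 1]

phi = 2 * np.pi * rng.random(img.shape)    # arbitrary common phase
delta = np.arccos(np.clip(img / 2.0, -1, 1))

mask1 = np.exp(1j * (phi + delta))         # phase-only mask 1
mask2 = np.exp(1j * (phi - delta))         # phase-only mask 2

recovered = np.abs(mask1 + mask2)          # equals 2*cos(delta) = img
print("max reconstruction error:", np.max(np.abs(recovered - img)))
```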
NASA Astrophysics Data System (ADS)
Johnson, Kristina Mary
In 1973 the computerized tomography (CT) scanner revolutionized medical imaging. This machine can isolate and display, in two-dimensional cross-sections, internal lesions and organs previously impossible to visualize. The possibility of three-dimensional imaging, however, is not yet exploited by present tomographic systems. Using multiple-exposure holography, three-dimensional displays can be synthesized from two-dimensional CT cross-sections. A multiple-exposure hologram is an incoherent superposition of many individual holograms. Intuitively, it is expected that holograms recorded with equal energy will reconstruct images with equal brightness. It is found, however, that holograms recorded first are brighter than holograms recorded later in the superposition. This phenomenon is called Holographic Reciprocity Law Failure (HRLF). Computer simulations of latent image formation in multiple-exposure holography are one of the methods used to investigate HRLF. These simulations indicate that it is the time between individual exposures in the multiple-exposure hologram that is responsible for HRLF. This physical parameter introduces an asymmetry into the latent image formation process that favors the signal of previously recorded holograms over holograms recorded later in the superposition. The origin of this asymmetry lies in the dynamics of latent image formation, and in particular in the decay of single-atom latent image specks, which have lifetimes that are short compared to typical times between exposures. An analytical model is developed for a double exposure hologram that predicts a decrease in the brightness of the second exposure as compared to the first exposure as the time between exposures increases. These results are consistent with the computer simulations. Experiments investigating the influence of this parameter on the diffraction efficiency of reconstructed images in a double exposure hologram are also found to be consistent with the computer simulations and analytical results. From this information, two techniques are presented that correct for HRLF, and succeed in reconstructing multiple holographic images of CT cross-sections with equal brightness. The multiple multiple-exposure hologram is a new hologram that increases the number of equally bright images that can be superimposed on one photographic plate.
77 FR 56176 - Analytical Methods Used in Periodic Reporting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-12
... POSTAL REGULATORY COMMISSION 39 CFR Part 3001 [Docket No. RM2012-7; Order No. 1459] Analytical Methods Used in Periodic Reporting AGENCY: Postal Regulatory Commission. ACTION: Notice of proposed... analytical methods approved for use in periodic reporting.\\1\\ \\1\\ Petition of the United States Postal...
Introduction to Validation of Analytical Methods: Potentiometric Determination of CO[subscript 2
ERIC Educational Resources Information Center
Hipólito-Nájera, A. Ricardo; Moya-Hernandez, M. Rosario; Gomez-Balderas, Rodolfo; Rojas-Hernandez, Alberto; Romero-Romo, Mario
2017-01-01
Validation of analytical methods is a fundamental subject for chemical analysts working in chemical industries. These methods are also relevant for pharmaceutical enterprises, biotechnology firms, analytical service laboratories, government departments, and regulatory agencies. Therefore, for undergraduate students enrolled in majors in the field…
Zhang, Xiaokai; Zhang, Song; Yang, Qian; Cao, Wei; Xie, Yanhua; Qiu, Pengcheng; Wang, Siwang
2015-01-01
Background: Yuanhu Zhitong prescription (YZP) is a famous traditional Chinese medicine formula, which is officially recorded in the Chinese Pharmacopoeia for the treatment of stomach pain, hypochondriac pain, headache and dysmenorrhea caused by qi-stagnancy and blood stasis. This is the first report of the simultaneous determination of 12 active components in YZP. Objective: A new, simple, accurate and reliable method for the separation and determination of 12 active components (protopine, α-allocryptopine, coptisine, xanthotol, palmatine, dehydrocorydaline, glaucine, tetrahydropalmatine, tetrahydroberberine, imperatorin, corydaline, isoimperatorin) in YZP was developed and validated using HPLC-PAD. Materials and Methods: The analyses were performed on a Phenomenex Luna-C18 (2) column (250×4.6 mm, 5.0 μm) with a gradient elution program using a mixture of acetonitrile and 0.1% phosphoric acid water solution (adjusted with triethylamine to pH 5.6) as the mobile phase. Analyses were carried out at 30°C with a flow rate of 1.0 mL/min. Results: The validated method was applied to analyze four major dosage forms of YZP coming from different manufacturers, with good linearity (r2, 0.9981~0.9999), precision (RSD, 0.24~2.89%), repeatability (RSD, 0.15~3.34%), stability (RSD, 0.14~3.35%) and recovery (91.13~110.81%) for the 12 components. Conclusion: The proposed method enables the separation and determination of 12 active components in a single run for the quality control of YZP. PMID:25709212
External Standards or Standard Addition? Selecting and Validating a Method of Standardization
NASA Astrophysics Data System (ADS)
Harvey, David T.
2002-05-01
A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and obtain practical experience of the difference between performing an external standardization and a standard addition.
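A worked numerical illustration of the experiment's central point, assuming an invented 30% matrix suppression, shows why external standards can be biased while standard addition is not:

```python
# Invented numbers only: compare external calibration against standard addition
# when the sample matrix suppresses the instrument sensitivity.
import numpy as np

true_conc = 4.0          # analyte concentration in the sample (arbitrary units)
sensitivity = 10.0       # response per unit concentration in pure solvent
matrix_factor = 0.7      # 30% suppression of sensitivity in the sample matrix

# External standardization: calibration in pure solvent, sample measured in matrix.
std_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
std_signal = sensitivity * std_conc
slope, intercept = np.polyfit(std_conc, std_signal, 1)
sample_signal = matrix_factor * sensitivity * true_conc
print("external standards:", (sample_signal - intercept) / slope)   # biased low

# Standard addition: spikes are added to the sample, so they see the same matrix.
spikes = np.array([0.0, 2.0, 4.0, 6.0])
signals = matrix_factor * sensitivity * (true_conc + spikes)
m, b = np.polyfit(spikes, signals, 1)
print("standard addition:", b / m)                                   # ~ true_conc
```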
NASA Astrophysics Data System (ADS)
Hahn, K. E.; Turner, E. C.; Kontak, D. J.; Fayek, M.
2018-02-01
Ancient carbonate rocks commonly contain numerous post-depositional phases (carbonate minerals; quartz) recording successive diagenetic events that can be deciphered and tied to known or inferred geological events using a multi-pronged in situ analytical protocol. The framework voids of large, deep-water microbial carbonate seep-mounds in Arctic Canada (Mesoproterozoic Ikpiarjuk Formation) contain multiple generations of synsedimentary and late cement. An in situ analytical study of the post-seafloor cements used optical and cathodoluminescence petrography, SEM-EDS analysis, fluid inclusion (FI) microthermometry and evaporate mound analysis, LA-ICP-MS analysis, and SIMS δ18O to decipher the mounds' long-term diagenetic history. The six void-filling late cements include, in paragenetic order: inclusion-rich euhedral dolomite (ED), finely crystalline clear dolomite (FCD), hematite-bearing dolomite (HD), coarsely crystalline clear dolomite (CCD), quartz (Q), replacive calcite (RC) and late calcite (LC). Based on the combined analytical results, the following fluid-flow history is defined: (1) ED precipitation by autocementation during shallow burial (fluid 1; Mesoproterozoic); (2) progressive mixing of Ca-rich hydrothermal fluid with the connate fluid, resulting in precipitation of FCD followed by HD (fluid 2; also Mesoproterozoic); (3) precipitation of hydrothermal dolomite (CCD) from high-Ca and K-rich fluids (fluid 3; possibly Mesoproterozoic, but timing unclear); (4) hydrothermal Q precipitation (fluid 4; timing unclear), and (5) RC and LC precipitation from a meteoric-derived water (fluid 5) in or since the Mesozoic. Fluids associated with FCD, HD, and CCD may have been mobilised during deposition of the upper Bylot Supergroup; this time interval was the most tectonically active episode in the region's Mesoproterozoic to Recent history. The entire history of intermittent fluid migration and cement precipitation recorded in seemingly unimportant void-filling mineral phases spans over 1 billion years, and was decipherable only because of the in situ protocol used. The multiple-method in situ analytical protocol employed in this study substantially augments the knowledge of an area's geological history, parts of which cannot be discerned by means other than meticulous study of diagenetic phases, and should become routine in similar studies.
Faculty Forum: The GRE Analytical Writing Test-- Description and Utilization
ERIC Educational Resources Information Center
Briihl, Deborah S.; Wasieleski, David T.
2007-01-01
The authors surveyed graduate programs to see how they use the Graduate Record Examination Analytic Writing (GRE-AW) Test. Only 35% of the graduate programs that responded use the GRE-AW test in their admission policy; of the programs not using it, most do not plan to do so. The programs using the GRE-AW rated it as medium or low in importance in…
Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T
2017-01-01
Measurement of inulin clearance is considered to be the gold standard for determining kidney function in children, but this method is time consuming and expensive. The glomerular filtration rate (GFR) is on the other hand easier to calculate by using various creatinine- and/or cystatin C (Cys C)-based formulas. However, for the determination of serum creatinine (Scr) and Cys C, different and non-interchangeable analytical methods exist. Given the fact that different analytical methods for the determination of creatinine and Cys C were used in order to validate existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated on the basis of different GFR formulas and either used Scr and Cys C values as determined by the analytical method originally employed for validation or values obtained by an alternative analytical method to evaluate any possible effects on the performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR using equations in which this analytical method had originally been used for validation. Additionally, these same values were then used in other GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations that employed Cys C values derived from a possibly incompatible analytical method did not result in a significant difference concerning the classification of patients as having normal or reduced GFR compared to the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient performance when compared to CrCl. Although clinicians should be aware of applying a GFR formula that is compatible with the locally used analytical method for determining Cys C and creatinine, other factors might be more crucial for the calculation of correct GFR values.
Patient-initiated electronic health record amendment requests
Hanauer, David A; Preib, Rebecca; Zheng, Kai; Choi, Sung W
2014-01-01
Background and objective Providing patients access to their medical records offers many potential benefits including identification and correction of errors. The process by which patients ask for changes to be made to their records is called an ‘amendment request’. Little is known about the nature of such amendment requests and whether they result in modifications to the chart. Methods We conducted a qualitative content analysis of all patient-initiated amendment requests that our institution received over a 7-year period. Recurring themes were identified along three analytic dimensions: (1) clinical/documentation area, (2) patient motivation for making the request, and (3) outcome of the request. Results The dataset consisted of 818 distinct requests submitted by 181 patients. The majority of these requests (n=636, 77.8%) were made to rectify incorrect information and 49.7% of all requests were ultimately approved. In 6.6% of the requests, patients wanted valid information removed from their record, 27.8% of which were approved. Among all of the patients requesting a copy of their chart, only a very small percentage (approximately 0.2%) submitted an amendment request. Conclusions The low number of amendment requests may be due to inadequate awareness by patients about how to make changes to their records. To make this approach effective, it will be important to inform patients of their right to view and amend records and about the process for doing so. Increasing patient access to medical records could encourage patient participation in improving the accuracy of medical records; however, caution should be used. PMID:24863430
Yun Chen; Hui Yang
2014-01-01
The rapid advancements of biomedical instrumentation and healthcare technology have resulted in data-rich environments in hospitals. However, the meaningful information extracted from rich datasets is limited. There is a dire need to go beyond current medical practices, and develop data-driven methods and tools that will enable and help (i) the handling of big data, (ii) the extraction of data-driven knowledge, and (iii) the exploitation of acquired knowledge for optimizing clinical decisions. The present study focuses on the prediction of mortality rates in Intensive Care Units (ICU) using patient-specific healthcare recordings. It is worth mentioning that postsurgical monitoring in the ICU leads to massive datasets with unique properties, e.g., variable heterogeneity, patient heterogeneity, and time asynchronization. To cope with the challenges in ICU datasets, we developed the postsurgical decision support system with a series of analytical tools, including data categorization, data pre-processing, feature extraction, feature selection, and predictive modeling. Experimental results show that the proposed data-driven methodology outperforms traditional approaches and yields better results based on the evaluation of real-world ICU data from 4000 subjects in the database. This research shows great potential for the use of data-driven analytics to improve the quality of healthcare services.
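A generic scikit-learn pipeline of the kind described (imputation, scaling, feature selection, predictive modeling) is sketched below with random placeholder data; it is not the authors' decision-support system.

```python
# Illustrative pipeline only; features and labels are random placeholders.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 50))          # placeholder features from ICU recordings
y = rng.integers(0, 2, size=4000)        # placeholder mortality labels

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # handle missing vitals/labs
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
print("cross-validated AUC:",
      cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```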
Riley, William T
2017-01-01
The National Institutes of Health Office of Behavioral and Social Sciences Research (OBSSR) recently released its strategic plan for 2017-2021. This plan focuses on three equally important strategic priorities: 1) improve the synergy of basic and applied behavioral and social sciences research, 2) enhance and promote the research infrastructure, methods, and measures needed to support a more cumulative and integrated approach to behavioral and social sciences research, and 3) facilitate the adoption of behavioral and social sciences research findings in health research and in practice. This commentary focuses on scientific priority two and future directions in measurement science, technology, data infrastructure, behavioral ontologies, and big data methods and analytics that have the potential to transform the behavioral and social sciences into more cumulative, data rich sciences that more efficiently build on prior research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Joint statistics of strongly correlated neurons via dimensionality reduction
NASA Astrophysics Data System (ADS)
Deniz, Taşkın; Rotter, Stefan
2017-06-01
The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
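A toy simulation in this spirit, two leaky integrate-and-fire neurons driven by partially shared Gaussian input with the spike-train cross-correlogram estimated empirically, is sketched below; parameter values are arbitrary, and none of the paper's analytical or non-perturbative results are reproduced.

```python
# Two LIF neurons with shared input; empirical spike-train cross-correlogram.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-4, 20.0                        # time step (s), duration (s)
n = int(T / dt)
tau, v_th, v_reset = 0.02, 1.0, 0.0       # membrane time constant, threshold, reset
mu, sigma, c = 0.8, 1.2, 0.5              # input mean, noise strength, shared fraction

shared = rng.normal(size=n)
spikes = []
for _ in range(2):
    private = rng.normal(size=n)
    noise = np.sqrt(c) * shared + np.sqrt(1 - c) * private
    v, st = 0.0, []
    for i in range(n):
        v += dt / tau * (mu - v) + sigma * np.sqrt(dt / tau) * noise[i]
        if v >= v_th:                     # spike and reset
            st.append(i * dt)
            v = v_reset
    spikes.append(np.array(st))

# Cross-correlogram: histogram of pairwise spike-time differences within +/- 50 ms.
max_lag, bin_w = 0.05, 0.002
diffs = (spikes[1][None, :] - spikes[0][:, None]).ravel()
diffs = diffs[np.abs(diffs) <= max_lag]
counts, edges = np.histogram(diffs, bins=np.arange(-max_lag, max_lag + bin_w, bin_w))
print("near-zero-lag count:", counts[len(counts) // 2], "of", diffs.size)
```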
Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L
2012-10-01
Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
NASA Astrophysics Data System (ADS)
Qian, Xi-Yuan; Liu, Ya-Min; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene
2015-06-01
When common factors strongly influence two power-law cross-correlated time series recorded in complex natural or social systems, using detrended cross-correlation analysis (DCCA) without considering these common factors will bias the results. We use detrended partial cross-correlation analysis (DPXA) to uncover the intrinsic power-law cross correlations between two simultaneously recorded time series in the presence of nonstationarity after removing the effects of other time series acting as common forces. The DPXA method is a generalization of the detrended cross-correlation analysis that takes into account partial correlation analysis. We demonstrate the method by using bivariate fractional Brownian motions contaminated with a fractional Brownian motion. We find that the DPXA is able to recover the analytical cross Hurst indices, and thus the multiscale DPXA coefficients are a viable alternative to the conventional cross-correlation coefficient. We demonstrate the advantage of the DPXA coefficients over the DCCA coefficients by analyzing contaminated bivariate fractional Brownian motions. We calculate the DPXA coefficients and use them to extract the intrinsic cross correlation between crude oil and gold futures by taking into consideration the impact of the U.S. dollar index. We develop the multifractal DPXA (MF-DPXA) method in order to generalize the DPXA method and investigate multifractal time series. We analyze multifractal binomial measures masked with strong white noises and find that the MF-DPXA method quantifies the hidden multifractal nature while the multifractal DCCA method fails.
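A minimal sketch of the plain DCCA coefficient that DPXA generalizes is given below on surrogate data; in the partial (DPXA) variant, the external series would first be regressed out of both signals before the detrended covariances are formed. This is an illustration, not the authors' code.

```python
# Detrended cross-correlation coefficient at a single window size (non-overlapping boxes).
import numpy as np

def dcca_coefficient(x, y, scale):
    """rho_DCCA for two series at one box size, using linear detrending per box."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    n = (len(x) // scale) * scale
    t = np.arange(scale)
    fxx = fyy = fxy = 0.0
    for start in range(0, n, scale):
        xs, ys = X[start:start + scale], Y[start:start + scale]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # detrended residuals
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        fxx += (rx * rx).mean()
        fyy += (ry * ry).mean()
        fxy += (rx * ry).mean()
    return fxy / np.sqrt(fxx * fyy)

rng = np.random.default_rng(0)
common = rng.normal(size=20000)                         # shared driver
x = 0.7 * common + rng.normal(size=20000)
y = 0.7 * common + rng.normal(size=20000)
print("rho_DCCA at scale 100:", dcca_coefficient(x, y, 100))
```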
Stokes, Ian A. F.; Laible, Jeffrey P.; Gardner-Morse, Mack G.; Costi, John J.; Iatridis, James C.
2011-01-01
Intervertebral disks support compressive forces because of their elastic stiffness as well as the fluid pressures resulting from poroelasticity and the osmotic (swelling) effects. Analytical methods can quantify the relative contributions, but only if correct material properties are used. To identify appropriate tissue properties, an experimental study and finite element analytical simulation of poroelastic and osmotic behavior of intervertebral disks were combined to refine published values of disk and endplate properties to optimize model fit to experimental data. Experimentally, nine human intervertebral disks with adjacent hemi-vertebrae were immersed sequentially in saline baths having concentrations of 0.015, 0.15, and 1.5 M and the loss of compressive force at constant height (force relaxation) was recorded over several hours after equilibration to a 300-N compressive force. Amplitude and time constant terms in exponential force–time curve-fits for experimental and finite element analytical simulations were compared. These experiments and finite element analyses provided data dependent on poroelastic and osmotic properties of the disk tissues. The sensitivities of the model to alterations in tissue material properties were used to obtain refined values of five key material parameters. The relaxation of the force in the three bath concentrations was exponential in form, expressed as mean compressive force loss of 48.7, 55.0, and 140 N, respectively, with time constants of 1.73, 2.78, and 3.40 h. This behavior was analytically well represented by a model having poroelastic and osmotic tissue properties with published tissue properties adjusted by multiplying factors between 0.55 and 2.6. Force relaxation and time constants from the analytical simulations were most sensitive to values of fixed charge density and endplate porosity. PMID:20711754
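The exponential force-time fit mentioned in the abstract can be reproduced on synthetic data with scipy; the amplitude and time-constant values below are merely chosen to resemble the reported magnitudes, and the preload of 300 N is taken from the abstract.

```python
# Fit F(t) = F0 - A*(1 - exp(-t/tau)) to synthetic force-relaxation data.
import numpy as np
from scipy.optimize import curve_fit

F0 = 300.0                                                  # preload (N), from the abstract

def relaxation(t, A, tau):
    return F0 - A * (1.0 - np.exp(-t / tau))

t_h = np.linspace(0, 12, 60)                                # hours
data = relaxation(t_h, A=55.0, tau=2.8) + np.random.normal(0, 2.0, t_h.size)

(A_fit, tau_fit), _ = curve_fit(relaxation, t_h, data, p0=[50.0, 2.0])
print(f"amplitude = {A_fit:.1f} N, time constant = {tau_fit:.2f} h")
```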
Conventional approaches to water quality characterization can provide data on individual chemical components of each water sample. This analyte-by-analyte approach currently serves many useful research and compliance monitoring needs. However these approaches, which require a ...
Faigen, Zachary; Deyneka, Lana; Ising, Amy; Neill, Daniel; Conway, Mike; Fairchild, Geoffrey; Gunn, Julia; Swenson, David; Painter, Ian; Johnson, Lauren; Kiley, Chris; Streichert, Laura; Burkom, Howard
2015-01-01
We document a funded effort to bridge the gap between constrained scientific challenges of public health surveillance and methodologies from academia and industry. Component tasks are the collection of epidemiologists' use case problems, multidisciplinary consultancies to refine them, and dissemination of problem requirements and shareable datasets. We describe an initial use case and consultancy as a concrete example and challenge to developers. Supported by the Defense Threat Reduction Agency Biosurveillance Ecosystem project, the International Society for Disease Surveillance formed an advisory group to select tractable use case problems and convene inter-disciplinary consultancies to translate analytic needs into well-defined problems and to promote development of applicable solution methods. The initial consultancy's focus was a problem originated by the North Carolina Department of Health and its NC DETECT surveillance system: Derive a method for detection of patient record clusters worthy of follow-up based on free-text chief complaints and without syndromic classification. Direct communication between public health problem owners and analytic developers was informative to both groups and constructive for the solution development process. The consultancy achieved refinement of the asyndromic detection challenge and of solution requirements. Participants summarized and evaluated solution approaches and discussed dissemination and collaboration strategies. A solution meeting the specification of the use case described above could improve human monitoring efficiency with expedited warning of events requiring follow-up, including otherwise overlooked events with no syndromic indicators. This approach can remove obstacles to collaboration with efficient, minimal data-sharing and without costly overhead.
NASA Astrophysics Data System (ADS)
Yong, A.; Hough, S. E.; Cox, B. R.; Rathje, E. M.; Bachhuber, J.; Hulslander, D.; Christiansen, L.; Abrams, M.
2010-12-01
The aftermath of the M7.0 Haiti earthquake of 12 January 2010 witnessed an impressive scientific response from the international community. In addition to conventional post-earthquake investigations, there was also an unprecedented reliance on remote-sensing technologies for scientific investigation and damage assessment. These technologies include sensors from both aerial and space-borne observational platforms. As part of the Haiti earthquake response and recovery effort, we develop a seismic zonation map of Port-au-Prince based on high-resolution satellite imagery as well as data from traditional seismographic monitoring stations and geotechnical site characterizations. Our imagery consists of a global digital elevation model (gDEM) of Hispaniola derived from data recorded by NASA-JPL's Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the multi-platform satellite Terra. To develop our model we also consider recorded waveforms from portable seismographic stations (Hough et al., in review) and 36 geotechnical shear-wave velocity surveys (Cox et al., in review). Following a similar approach developed by Yong et al. (2008; Bull. Seism Soc. Am.), we use both pixel- and object- based imaging analytic methods to systematically identify and extract local terrain features that are expected to amplify seismic ground motion. Using histogram-stretching techniques applied to the rDEM values, followed by multi-resolution, segmentations of the imagery into terrain types, we systematically classify the terrains of Hispaniola. By associating available Vs30 (average shear-wave velocity in the upper 30 meter depth) calculated from the MASW (Multi-channel Analysis of Surface Wave) survey method, we develop a first-order site characterization map. Our results indicate that the terrain-based Vs30 estimates are significantly associated with amplitudes recorded at station sites. We also find that the damage distribution inferred from UNOSAT (UNITAR Operational Satellite Applications Program) data matches our estimates. However, the strongest amplifications are observed at two stations on a foothill ridge, where Vs30 values indicate that amplification should be relatively lower. Hough et al. (2010, this session) conclude that the observations can be explained by topographic amplification along a steep, narrow ridge. On the basis of these preliminary results, we conclude that the terrain-based framework, which characterizes topographic amplification as well as sediment-induced amplification, is needed to develop a microzonation map for Port-au-Prince.
Yang, Yi; Tang, Xiangyang
2014-10-01
Under the existing theoretical framework of x-ray phase contrast imaging methods implemented with Talbot interferometry, the dark-field contrast refers to the reduction in interference fringe visibility due to small-angle x-ray scattering of the subpixel microstructures of an object to be imaged. This study investigates how an object's subpixel microstructures can also affect the phase of the intensity oscillations. Instead of assuming that the object's subpixel microstructures distribute in space randomly, the authors' theoretical derivation starts by assuming that an object's attenuation projection and phase shift vary at a characteristic size that is not smaller than the period of analyzer grating G₂ and a characteristic length dc. Based on the paraxial Fresnel-Kirchhoff theory, the analytic formulae to characterize the zeroth- and first-order Fourier coefficients of the x-ray irradiance recorded at each detector cell are derived. Then the concept of complex dark-field contrast is introduced to quantify the influence of the object's microstructures on both the interference fringe visibility and the phase of intensity oscillations. A method based on the phase-attenuation duality that holds for soft tissues and high x-ray energies is proposed to retrieve the imaginary part of the complex dark-field contrast for imaging. Through computer simulation study with a specially designed numerical phantom, they evaluate and validate the derived analytic formulae and the proposed retrieval method. Both theoretical analysis and computer simulation study show that the effect of an object's subpixel microstructures on x-ray phase contrast imaging method implemented with Talbot interferometry can be fully characterized by a complex dark-field contrast. The imaginary part of complex dark-field contrast quantifies the influence of the object's subpixel microstructures on the phase of intensity oscillations. Furthermore, at relatively high energies, for soft tissues it can be retrieved for imaging with a method based on the phase-attenuation duality. The analytic formulae derived in this work to characterize the complex dark-field contrast in x-ray phase contrast imaging method implemented with Talbot interferometry are of significance, which may initiate more activities in the research and development of x-ray differential phase contrast imaging for extensive biomedical applications.
An evolutionary view of chromatography data systems used in bioanalysis.
McDowall, R D
2010-02-01
This is a personal view of how chromatographic peak measurement and analyte quantification for bioanalysis have evolved from the manual methods of 1970 to the electronic working possible in 2010. In four decades there have been major changes from a simple chart recorder output (that was interpreted and quantified manually) through simple automation of peak measurement, calculation of standard curves and quality control values and instrument control to the networked chromatography data systems of today that are capable of interfacing with Laboratory Information Management Systems and other IT applications. The incorporation of electronic signatures to meet regulatory requirements offers a great opportunity for business improvement and electronic working.
Look-normal: the colonized child of developmental science.
Varga, Donna
2011-05-01
This article provides an analysis of the techniques, methods, materials, and discourses of child study observation to illuminate its role in the sociohistorical colonization of childhood. Through analysis of key texts it explains how early 20th-century child study provided for the transcendence of historical, racial, and social contexts for understanding human development. The colonizing project of child study promoted the advancement of Eurocentric culture through a generic "White" development. What a child is and can be, and the meaning of childhood has been disembodied through observation, record keeping, and analytical processes in which time and space are abstracted from behavior, and development symbolized as a universal ideal.
Method of multiplexed analysis using ion mobility spectrometer
Belov, Mikhail E [Richland, WA; Smith, Richard D [Richland, WA
2009-06-02
A method for analyzing analytes from a sample introduced into a Spectrometer by generating a pseudo random sequence of modulation bins, organizing each modulation bin as a series of submodulation bins, thereby forming an extended pseudo random sequence of submodulation bins, releasing the analytes in a series of analyte packets into a Spectrometer, thereby generating an unknown original ion signal vector, detecting the analytes at a detector, and characterizing the sample using the plurality of analyte signal subvectors. The method is advantageously applied to an Ion Mobility Spectrometer, and an Ion Mobility Spectrometer interfaced with a Time of Flight Mass Spectrometer.
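The patent abstract above does not spell out the demultiplexing algebra. The following Python sketch illustrates the general idea behind pseudo-random (Hadamard-type) multiplexing only: the gate sequence acts as a circular convolution on the unknown ion signal, which can be recovered by inverting the corresponding circulant system. The sequence construction, bin count, and arrival profile are illustrative assumptions, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                    # number of modulation bins (assumed)
gate = rng.integers(0, 2, size=n)         # pseudo-random open/closed gate sequence
true_signal = np.zeros(n)
true_signal[[10, 25, 40]] = [5.0, 3.0, 1.5]   # hypothetical analyte arrival profile

# Measurement model: each detector bin sums contributions from all gate openings,
# i.e. a circular convolution of the gate sequence with the ion signal.
S = np.array([np.roll(gate, k) for k in range(n)]).T   # circulant gating matrix
measured = S @ true_signal + rng.normal(0, 0.05, n)    # add detector noise

# Demultiplexing: solve the linear system (least squares for robustness to noise).
recovered, *_ = np.linalg.lstsq(S, measured, rcond=None)
print(np.round(recovered[[10, 25, 40]], 2))
```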
The importance of quality control in validating concentrations ...
A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds, and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods were used to determine these CECs, including six analytical methods to measure 174 pharmaceuticals, personal care products, and pesticides. A three-component quality assurance/quality control (QA/QC) program was designed for the subset of 174 CECs which allowed us to assess and compare performances of the methods used. The three components included: 1) a common field QA/QC protocol and sample design, 2) individual investigator-developed method-specific QA/QC protocols, and 3) a suite of 46 method comparison analytes that were determined in two or more analytical methods. Overall method performance for the 174 organic chemical CECs was assessed by comparing spiked recoveries in reagent, source, and treated water over a two-year period. In addition to the 247 CECs reported in the larger drinking water study, another 48 pharmaceutical compounds measured did not consistently meet predetermined quality standards. Methodologies that did not seem suitable for these analytes are overviewed. The need to exclude analytes based on method performance demonstrates the importance of additional QA/QC protocols. This paper compares the method performance of six analytical methods used to measure 174 emer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Senesac, Larry R; Datskos, Panos G; Sepaniak, Michael J
2006-01-01
In the present work, we have performed analyte species and concentration identification using an array of ten differentially functionalized microcantilevers coupled with a back-propagation artificial neural network pattern recognition algorithm. The array consists of ten nanostructured silicon microcantilevers functionalized by polymeric and gas chromatography phases and macrocyclic receptors as spatially dense, differentially responding sensing layers for identification and quantitation of individual analyte(s) and their binary mixtures. The array response (i.e. cantilever bending) to analyte vapor was measured by an optical readout scheme and the responses were recorded for a selection of individual analytes as well as several binary mixtures. An artificial neural network (ANN) was designed and trained to recognize not only the individual analytes and binary mixtures, but also to determine the concentration of individual components in a mixture. To the best of our knowledge, ANNs have not been applied to microcantilever array responses previously to determine concentrations of individual analytes. The trained ANN correctly identified the eleven test analyte(s) as individual components, most with probabilities greater than 97%, whereas it did not misidentify an unknown (untrained) analyte. Demonstrated unique aspects of this work include an ability to measure binary mixtures and provide both qualitative (identification) and quantitative (concentration) information with array-ANN-based sensor methodologies.
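As a hedged illustration of the pattern-recognition step only (not the authors' original back-propagation implementation), the Python sketch below trains a small multilayer perceptron to map a 10-element cantilever response vector to analyte concentrations; synthetic data stand in for the measured deflections, and all names and values are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_cantilevers, n_analytes = 500, 10, 2

# Synthetic stand-in for measured deflections: each analyte has its own
# response pattern across the ten differentially functionalized cantilevers.
patterns = rng.uniform(0.2, 1.0, size=(n_analytes, n_cantilevers))
conc = rng.uniform(0, 100, size=(n_samples, n_analytes))          # ppm, hypothetical
response = conc @ patterns + rng.normal(0, 0.5, (n_samples, n_cantilevers))

X_train, X_test, y_train, y_test = train_test_split(response, conc, random_state=1)
ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=1)
ann.fit(X_train, y_train)
print("R^2 on held-out mixtures:", round(ann.score(X_test, y_test), 3))
```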
Exhaled breath condensate – from an analytical point of view
Dodig, Slavica; Čepelak, Ivana
2013-01-01
Over the past three decades, the goal of many researchers has been the analysis of exhaled breath condensate (EBC) as a noninvasively obtained sample. Total quality in the laboratory diagnostic process of EBC analysis was investigated across the pre-analytical (formation, collection, storage of EBC), analytical (sensitivity of applied methods, standardization) and post-analytical (interpretation of results) phases. EBC analysis is still used as a research tool. Limitations related to the pre-analytical, analytical, and post-analytical phases of EBC analysis are numerous: concentrations of EBC constituents are low, single-analyte methods lack sensitivity, multi-analyte methods have not been fully explored, and reference values are not established. When all pre-analytical, analytical and post-analytical requirements are met, EBC biomarkers as well as biomarker patterns can be selected and EBC analysis can hopefully be used in clinical practice, in both the diagnosis and the longitudinal follow-up of patients, resulting in better outcome of disease. PMID:24266297
NASA Astrophysics Data System (ADS)
Jough, Fooad Karimi Ghaleh; Şensoy, Serhan
2016-12-01
Different performance levels may be obtained for sidesway collapse evaluation of steel moment frames depending on the evaluation procedure used to handle uncertainties. In this article, the process of representing modelling uncertainties, record-to-record (RTR) variations and cognitive uncertainties for moment resisting steel frames of various heights is discussed in detail. RTR uncertainty is handled through incremental dynamic analysis (IDA), modelling uncertainties are considered through backbone curves and component hysteresis loops, and cognitive uncertainty is represented by three levels of material quality. IDA is used to evaluate RTR uncertainty based on strong ground motion records selected by the k-means algorithm, which is favoured over Monte Carlo selection due to its time-saving appeal. Analytical equations of the Response Surface Method are obtained from the IDA results by the Cuckoo algorithm, which predicts the mean and standard deviation of the collapse fragility curve. The Takagi-Sugeno-Kang model is used to represent material quality based on the response surface coefficients. Finally, collapse fragility curves with the various sources of uncertainties mentioned are derived through a large number of material quality values and meta variables inferred by the Takagi-Sugeno-Kang fuzzy model based on response surface method coefficients. It is concluded that a better risk management strategy in countries where material quality control is weak is to account for cognitive uncertainties in fragility curves and the mean annual frequency.
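The abstract does not reproduce the fragility fitting itself; a common way to express a collapse fragility curve from IDA results is a lognormal CDF whose median and dispersion are estimated from the collapse intensities. The short Python sketch below shows only that step, with illustrative values; the response-surface and fuzzy-model machinery of the paper is not reproduced.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical collapse intensities (e.g., Sa(T1) in g) from IDA of one frame
sa_collapse = np.array([0.62, 0.75, 0.81, 0.93, 1.05, 1.12, 1.30, 1.48])

ln_sa = np.log(sa_collapse)
median = np.exp(ln_sa.mean())        # lognormal median collapse capacity
beta = ln_sa.std(ddof=1)             # record-to-record dispersion

# Fragility: probability of collapse given an intensity level x
x = np.linspace(0.2, 2.0, 50)
p_collapse = norm.cdf((np.log(x) - np.log(median)) / beta)
print(f"median = {median:.2f} g, beta = {beta:.2f}")
```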
The hominin fossil record: taxa, grades and clades
Wood, Bernard; Lonergan, Nicholas
2008-01-01
This paper begins by reviewing the fossil evidence for human evolution. It presents summaries of each of the taxa recognized in a relatively speciose hominin taxonomy. These taxa are grouped in grades, namely possible and probable hominins, archaic hominins, megadont archaic hominins, transitional hominins, pre-modern Homo and anatomically modern Homo. The second part of this contribution considers some of the controversies that surround hominin taxonomy and systematics. The first is the vexed question of how you tell an early hominin from an early panin, or from taxa belonging to an extinct clade closely related to the Pan-Homo clade. Secondly, we consider how many species should be recognized within the hominin fossil record, and review the philosophies and methods used to identify taxa within the hominin fossil record. Thirdly, we examine how relationships within the hominin clade are investigated, including descriptions of the methods used to break down an integrated structure into tractable analytical units, and then how cladograms are generated and compared. We then review the internal structure of the hominin clade, including the problem of how many subclades should be recognized within the hominin clade, and we examine the reliability of hominin cladistic hypotheses. The last part of the paper reviews the concepts of a genus, including the criteria that should be used for recognizing genera within the hominin clade. PMID:18380861
Efficient Execution Methods of Pivoting for Bulk Extraction of Entity-Attribute-Value-Modeled Data
Luo, Gang; Frey, Lewis J.
2017-01-01
Entity-attribute-value (EAV) tables are widely used to store data in electronic medical records and clinical study data management systems. Before they can be used by various analytical (e.g., data mining and machine learning) programs, EAV-modeled data usually must be transformed into conventional relational table format through pivot operations. This time-consuming and resource-intensive process is often performed repeatedly on a regular basis, e.g., to provide a daily refresh of the content in a clinical data warehouse. Thus, it would be beneficial to make pivot operations as efficient as possible. In this paper, we present three techniques for improving the efficiency of pivot operations: 1) filtering out EAV tuples related to unneeded clinical parameters early on; 2) supporting pivoting across multiple EAV tables; and 3) conducting multi-query optimization. We demonstrate the effectiveness of our techniques through implementation. We show that our optimized execution method of pivoting using these techniques significantly outperforms the current basic execution method of pivoting. Our techniques can be used to build a data extraction tool to simplify the specification of and improve the efficiency of extracting data from the EAV tables in electronic medical records and clinical study data management systems. PMID:25608318
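A minimal illustration of the basic pivot operation described above (not the authors' optimized execution methods), using Python/pandas; the table and column names are hypothetical.

```python
import pandas as pd

# Hypothetical EAV-modeled clinical data
eav = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "attribute":  ["age", "weight_kg", "glucose", "age", "glucose"],
    "value":      [54, 81.5, 5.6, 61, 7.2],
})

# Pivot to a conventional relational (wide) table: one row per entity,
# one column per attribute. Missing attribute/entity pairs become NaN.
wide = eav.pivot_table(index="patient_id", columns="attribute",
                       values="value", aggfunc="first")
print(wide)
```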
An analytical computation of magnetic field generated from a cylinder ferromagnet
NASA Astrophysics Data System (ADS)
Taniguchi, Tomohiro
2018-04-01
An analytical formulation to compute the magnetic field generated from a uniformly magnetized cylinder ferromagnet is developed. Exact solutions of the magnetic field generated from the magnetization pointing in an arbitrary direction are derived, which are applicable both inside and outside the ferromagnet. The validity of the present formulas is confirmed by comparing them with demagnetization coefficients estimated in earlier works. The results will be useful for designing practical applications, such as high-density magnetic recording and microwave generators, where nanostructured ferromagnets are coupled to each other through the dipole interactions and show cooperative phenomena such as synchronization. As an example, the magnetic field generated from a spin torque oscillator for magnetic recording based on microwave assisted magnetization reversal is studied.
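The general arbitrary-direction formulas of the paper are not reproduced here. As a point of reference, the widely known special case of the field on the symmetry axis of a cylinder of radius R and length L, uniformly magnetized along its axis with magnetization M (z measured from the cylinder centre), can be written as:

\[
B_z(z) = \frac{\mu_0 M}{2}\left[\frac{z + L/2}{\sqrt{(z + L/2)^2 + R^2}} - \frac{z - L/2}{\sqrt{(z - L/2)^2 + R^2}}\right].
\]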
Tanaka, Tomiji; Watanabe, Kenjiro
2008-02-20
For holographic data storage, it is necessary to adjust the wavelength and direction of the reading beam if the reading and recording temperatures do not match. An analytical solution for this adjustment is derived using first-order approximations in a two-dimensional model. The optimum wavelength is a linear function of the temperature difference between recording and reading, and is independent of the direction of the reference beam. However, the optimum direction of incidence is not only a linear function of the temperature difference, but also depends on the direction of the reference beam. The retrieved image, which is produced by a diffracted beam, shrinks or expands slightly according to the temperature difference.
Life cycle management of analytical methods.
Parr, Maria Kristina; Schmidt, Alexander H
2018-01-05
In modern process management, the life cycle concept is gaining more and more importance. It focuses on the total costs of the process from investment to operation and finally retirement. In recent years, interest in this concept has also grown for analytical procedures. The life cycle of an analytical method consists of design, development, validation (including instrumental qualification, continuous method performance verification and method transfer) and finally retirement of the method. Regulatory bodies also appear to have increased their awareness of life cycle management for analytical methods. Thus, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), as well as the United States Pharmacopeial Forum, are discussing the introduction of new guidelines that include life cycle management of analytical methods. The US Pharmacopeia (USP) Validation and Verification expert panel has already proposed a new General Chapter 〈1220〉 "The Analytical Procedure Lifecycle" for integration into the USP. Furthermore, a growing interest in life cycle management is also seen in the non-regulated environment. Quality-by-design based method development results in increased method robustness. This reduces the effort needed for method performance verification and post-approval changes, and minimizes the risk of method-related out-of-specification results, which strongly contributes to reduced costs of the method during its life cycle. Copyright © 2017 Elsevier B.V. All rights reserved.
Galy, Bertrand; Lan, André
2018-03-01
Among the many occupational risks construction workers encounter every day, falling from a height is the most dangerous. The objective of this article is to propose a simple analytical design method for horizontal lifelines (HLLs) that considers anchorage flexibility. The article presents a short review of the standards and regulations/acts/codes concerning HLLs in Canada, the USA and Europe. A static analytical approach is proposed that considers anchorage flexibility. The analytical results are compared with a series of 42 dynamic fall tests and a SAP2000 numerical model. The experimental results show that the analytical method is slightly conservative and overestimates the line tension in most cases, by a maximum of 17%. The static SAP2000 results show a maximum 2.1% difference with the analytical method. The analytical method is accurate enough to safely design HLLs, and quick design abaci are provided to allow the engineer to make quick on-site verifications if needed.
Channel modeling, signal processing and coding for perpendicular magnetic recording
NASA Astrophysics Data System (ADS)
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.
Microstates in resting-state EEG: current status and future directions.
Khanna, Arjun; Pascual-Leone, Alvaro; Michel, Christoph M; Farzan, Faranak
2015-02-01
Electroencephalography (EEG) is a powerful method of studying the electrophysiology of the brain with high temporal resolution. Several analytical approaches to extract information from the EEG signal have been proposed. One method, termed microstate analysis, considers the multichannel EEG recording as a series of quasi-stable "microstates" that are each characterized by a unique topography of electric potentials over the entire channel array. Because this technique simultaneously considers signals recorded from all areas of the cortex, it is capable of assessing the function of large-scale brain networks whose disruption is associated with several neuropsychiatric disorders. In this review, we first introduce the method of EEG microstate analysis. We then review studies that have discovered significant changes in the resting-state microstate series in a variety of neuropsychiatric disorders and behavioral states. We discuss the potential utility of this method in detecting neurophysiological impairments in disease and monitoring neurophysiological changes in response to an intervention. Finally, we discuss how the resting-state microstate series may reflect rapid switching among neural networks while the brain is at rest, which could represent activity of resting-state networks described by other neuroimaging modalities. We conclude by commenting on the current and future status of microstate analysis, and suggest that EEG microstates represent a promising neurophysiological tool for understanding and assessing brain network dynamics on a millisecond timescale in health and disease. Copyright © 2014 Elsevier Ltd. All rights reserved.
Microstates in Resting-State EEG: Current Status and Future Directions
Khanna, Arjun; Pascual-Leone, Alvaro; Michel, Christoph M.; Farzan, Faranak
2015-01-01
Electroencephalography (EEG) is a powerful method of studying the electrophysiology of the brain with high temporal resolution. Several analytical approaches to extract information from the EEG signal have been proposed. One method, termed microstate analysis, considers the multichannel EEG recording as a series of quasi-stable “microstates” that are each characterized by a unique topography of electric potentials over the entire channel array. Because this technique simultaneously considers signals recorded from all areas of the cortex, it is capable of assessing the function of large-scale brain networks whose disruption is associated with several neuropsychiatric disorders. In this review, we first introduce the method of EEG microstate analysis. We then review studies that have discovered significant changes in the resting-state microstate series in a variety of neuropsychiatric disorders and behavioral states. We discuss the potential utility of this method in detecting neurophysiological impairments in disease and monitoring neurophysiological changes in response to an intervention. Finally, we discuss how the resting-state microstate series may reflect rapid switching among neural networks while the brain is at rest, which could represent activity of resting-state networks described by other neuroimaging modalities. We conclude by commenting on the current and future status of microstate analysis, and suggest that EEG microstates represent a promising neurophysiological tool for understanding and assessing brain network dynamics on a millisecond timescale in health and disease. PMID:25526823
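As a hedged, simplified sketch of the clustering step in microstate analysis (standard practice uses a polarity-invariant modified k-means; ordinary k-means with an absolute-correlation back-fit is used here only as an approximation, and synthetic data stand in for a real recording):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_channels, n_samples = 32, 5000
eeg = rng.normal(size=(n_samples, n_channels))       # stand-in for an EEG recording

# Global field power (GFP): spatial standard deviation at each time point
gfp = eeg.std(axis=1)
peaks = np.where((gfp[1:-1] > gfp[:-2]) & (gfp[1:-1] > gfp[2:]))[0] + 1

# Cluster normalized topographies at GFP peaks into 4 prototype maps
maps = eeg[peaks] / np.linalg.norm(eeg[peaks], axis=1, keepdims=True)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(maps)
centers = km.cluster_centers_ / np.linalg.norm(km.cluster_centers_, axis=1, keepdims=True)

# Back-fit: label every sample by its most correlated prototype map
corr = (eeg / np.linalg.norm(eeg, axis=1, keepdims=True)) @ centers.T
labels = np.abs(corr).argmax(axis=1)                 # abs() approximates polarity invariance
print(np.bincount(labels))
```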
40 CFR 86.107-98 - Sampling and analytical system.
Code of Federal Regulations, 2010 CFR
2010-07-01
... automatic sealing opening of the boot during fueling. There shall be no loss in the gas tightness of the... system (recorder and sensor) shall have an accuracy of ±3 °F (±1.7 °C). The recorder (data processor... ambient temperature sensors, connected to provide one average output, located 3 feet above the floor at...
40 CFR 86.107-98 - Sampling and analytical system.
Code of Federal Regulations, 2011 CFR
2011-07-01
... automatic sealing opening of the boot during fueling. There shall be no loss in the gas tightness of the... system (recorder and sensor) shall have an accuracy of ±3 °F (±1.7 °C). The recorder (data processor... ambient temperature sensors, connected to provide one average output, located 3 feet above the floor at...
IMMUNOCHEMICAL APPLICATIONS IN ENVIRONMENTAL SCIENCE
Immunochemical methods are based on selective antibodies combining with a particular target analyte or analyte group. The specific binding between antibody and analyte can be used to detect environmental contaminants in a variety of sample matrixes. Immunoassay methods provide ...
Huang, Yande; Su, Bao-Ning; Ye, Qingmei; Palaniswamy, Venkatapuram A; Bolgar, Mark S; Raglione, Thomas V
2014-01-01
The classical internal standard quantitative NMR (qNMR) method determines the purity of an analyte by measuring a solution containing both the analyte and a standard. Therefore, the standard must meet the requirements of chemical compatibility and lack of resonance interference with the analyte, as well as a known purity. The identification of such a standard can be time consuming and must be repeated for each analyte. In contrast, the external standard qNMR method utilizes a standard with a known purity to calibrate the NMR instrument. The external standard and the analyte are measured separately, thereby eliminating the matter of chemical compatibility and resonance interference between the standard and the analyte. However, the instrumental factors, including the quality of NMR tubes, must be kept the same; any deviations will compromise the accuracy of the results. An innovative qNMR method reported herein utilizes an internal reference substance along with an external standard to assume the role of the standard used in the traditional internal standard qNMR method. In this new method, the internal reference substance must only be chemically compatible and free of resonance interference with the analyte or external standard, whereas the external standard must only be of a known purity. The exact purity or concentration of the internal reference substance is not required as long as the same quantity is added to the external standard and the analyte. The new method significantly reduces the burden of searching for an appropriate standard for each analyte. Therefore, the efficiency of the qNMR purity assay increases while the precision of the internal standard method is retained. Copyright © 2013 Elsevier B.V. All rights reserved.
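For context, and as the standard textbook relation rather than the authors' new combined method, the classical internal standard qNMR purity calculation can be written as

\[
P_a = \frac{I_a}{I_{\mathrm{std}}} \cdot \frac{N_{\mathrm{std}}}{N_a} \cdot \frac{M_a}{M_{\mathrm{std}}} \cdot \frac{m_{\mathrm{std}}}{m_a} \cdot P_{\mathrm{std}},
\]

where I is the integrated signal area, N the number of nuclei giving rise to that signal, M the molar mass, m the weighed mass, and P the purity; subscripts a and std denote analyte and standard. In the reported method, the role of the standard is split between an internal reference substance (added in equal quantity to both tubes) and an external standard of known purity.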
NASA Astrophysics Data System (ADS)
Brendryen, J.; Hannisdal, B.; Haaga, K. A.; Haflidason, H.; Castro, D. D.; Grasmo, K. J.; Sejrup, H. P.; Edwards, R. L.; Cheng, H.; Kelly, M. J.; Lu, Y.
2016-12-01
Abrupt millennial scale climatic events known as Dansgaard-Oeschger events are a defining feature of the Quaternary climate system dynamics in the North Atlantic and beyond. We present a high-resolution multi-proxy record of ocean-ice sheet interactions in the Norwegian Sea spanning the interval between 50 and 150 ka BP. A comparison with low latitude records indicates a very close connection between the high northern latitude ocean-ice sheet interactions and large scale changes in low latitude atmospheric circulation and hydrology even on sub-millennial scales. The records are placed on a common precise radiometric chronology based on correlations to U/Th dated speleothem records from China and the Alps. This enables a comparison of the records to orbital and other climatically important parameters such as U/Th dated sea-level data from corals and speleothems. We explore the drive-response relationships in these coupled systems with the information transfer (IT) and the convergent cross mapping (CCM) analytical techniques. These methods employ conceptually different approaches to detect the relative strength and directionality of potentially chaotic and nonlinearly coupled systems. IT is a non-parametric measure of information transfer between data records based on transfer entropy, while CCM relies on delay reconstructions using Takens' theorem. This approach enables us to address how the climate system processes interact and how this interaction is affected by external forcing from for example greenhouse gases and orbital variability.
Evaluating Trends in Historical PM2.5 Element Concentrations by Reanalyzing a 15-Year Sample Archive
NASA Astrophysics Data System (ADS)
Hyslop, N. P.; White, W. H.; Trzepla, K.
2014-12-01
The IMPROVE (Interagency Monitoring of PROtected Visual Environments) network monitors aerosol concentrations at 170 remote sites throughout the United States. Twenty-four-hour filter samples of particulate matter are collected every third day and analyzed for chemical composition. About 30 of the sites have operated continuously since 1988, and the sustained data record (http://views.cira.colostate.edu/web/) offers a unique window on regional aerosol trends. All elemental analyses have been performed by Crocker Nuclear Laboratory at the University of California in Davis, and sample filters collected since 1995 are archived on campus. The suite of reported elements has remained constant, but the analytical methods employed for their determination have evolved. For example, the elements Na - Mn were determined by PIXE until November 2001, then by XRF analysis in a He-flushed atmosphere through 2004, and by XRF analysis in vacuum since January 2005. In addition to these fundamental changes, incompletely-documented operational factors such as detector performance and calibration details have introduced variations in the measurements. Because the past analytical methods were non-destructive, the archived filters can be re-analyzed with the current analytical systems and protocols. The 15-year sample archives from Great Smoky Mountains (GRSM), Mount Rainier (MORA), and Point Reyes National Parks (PORE) were selected for reanalysis. The agreement between the new analyses and original determinations varies with element and analytical era. The graph below compares the trend estimates for all the elements measured by IMPROVE based on the original and repeat analyses; the elements identified in color are measured above the detection limit more than 90% of the time. The trend estimates are sensitive to the treatment of non-detect data. The original and reanalysis trends are indistinguishable (have overlapping confidence intervals) for most of the well-detected elements.
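The abstract does not state which trend estimator was used. As one hedged illustration of how a robust trend and confidence interval can be extracted from an annual concentration series, a Theil-Sen estimate in Python might look like the following; the series is synthetic and the units are hypothetical.

```python
import numpy as np
from scipy.stats import theilslopes

years = np.arange(1995, 2010)
rng = np.random.default_rng(3)
conc = 2.0 * np.exp(-0.03 * (years - 1995)) + rng.normal(0, 0.05, years.size)

# Robust slope estimate with 95% confidence bounds
slope, intercept, lo, hi = theilslopes(conc, years, alpha=0.95)
print(f"trend = {slope:.4f} units/yr (95% CI {lo:.4f} to {hi:.4f})")
```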
Apparent hyperthyroidism caused by biotin-like interference from IgM anti-streptavidin antibodies.
Lam, Leo; Bagg, Warwick; Smith, Geoff; Chiu, Weldon; Middleditch, Martin James; Lim, Julie Ching-Hsia; Kyle, Campbell Vance
2018-05-29
Exclusion of analytical interference is important when there is discrepancy between clinical and laboratory findings. However, interferences on immunoassays are often mistaken as isolated laboratory artefacts. We characterized and report the mechanism of a rare cause of interference in two patients that caused erroneous thyroid function tests, and also affects many other biotin dependent immunoassays. Patient 1 was a 77 y female with worsening fatigue while taking carbimazole over several years. Her thyroid function tests however, were not suggestive of hypothyroidism. Patient 2 was a 25 y female also prescribed carbimazole for apparent primary hyperthyroidism. Despite an elevated FT4, the lowest TSH on record was 0.17mIU/L. In both cases, thyroid function tests performed by an alternative method were markedly different. Further characterization of both patients' serum demonstrated analytical interference on many immunoassays using the biotin-streptavidin interaction. Sandwich assays (e.g. TSH, FSH, TNT, beta-HCG) were falsely low, while competitive assays (e.g. FT4, FT3, TBII) were falsely high. Pre-incubation of serum with streptavidin microparticles removed the analytical interference, initially suggesting the cause of interference was biotin, however, neither patient had been taking biotin. Instead, a ~100kDa IgM immunoglobulin with high affinity to streptavidin was isolated from each patient's serum. The findings confirm IgM anti-streptavidin antibodies as the cause of analytical interference. We describe two patients with apparent hyperthyroidism as a result of analytical interference caused by IgM anti-streptavidin antibodies. Analytical interference identified on one immunoassay should raise the possibility of other affected results. Characterization of interference may help to identify other potentially affected immunoassays. In the case of anti-streptavidin antibodies, the pattern of interference mimics that due to biotin ingestion; however, the degree of interference varies between individual assays and between patients.
NASA Astrophysics Data System (ADS)
Jenk, Theo Manuel; Rubino, Mauro; Etheridge, David; Ciobanu, Viorela Gabriela; Blunier, Thomas
2016-08-01
Palaeoatmospheric records of carbon dioxide and its stable carbon isotope composition (δ13C) obtained from polar ice cores provide important constraints on the natural variability of the carbon cycle. However, the measurements are both analytically challenging and time-consuming; thus only data exist from a limited number of sampling sites and time periods. Additional analytical resources with high analytical precision and throughput are thus desirable to extend the existing datasets. Moreover, consistent measurements derived by independent laboratories and a variety of analytical systems help to further increase confidence in the global CO2 palaeo-reconstructions. Here, we describe our new set-up for simultaneous measurements of atmospheric CO2 mixing ratios and atmospheric δ13C and δ18O-CO2 in air extracted from ice core samples. The centrepiece of the system is a newly designed needle cracker for the mechanical release of air entrapped in ice core samples of 8-13 g operated at -45 °C. The small sample size allows for high resolution and replicate sampling schemes. In our method, CO2 is cryogenically and chromatographically separated from the bulk air and its isotopic composition subsequently determined by continuous flow isotope ratio mass spectrometry (IRMS). In combination with thermal conductivity measurement of the bulk air, the CO2 mixing ratio is calculated. The analytical precision determined from standard air sample measurements over ice is ±1.9 ppm for CO2 and ±0.09 ‰ for δ13C. In a laboratory intercomparison study with CSIRO (Aspendale, Australia), good agreement between CO2 and δ13C results is found for Law Dome ice core samples. Replicate analysis of these samples resulted in a pooled standard deviation of 2.0 ppm for CO2 and 0.11 ‰ for δ13C. These numbers are good, though they are rather conservative estimates of the overall analytical precision achieved for single ice sample measurements. Facilitated by the small sample requirement, replicate measurements are feasible, allowing the method precision to be improved potentially. Further, new analytical approaches are introduced for the accurate correction of the procedural blank and for a consistent detection of measurement outliers, which is based on δ18O-CO2 and the exchange of oxygen between CO2 and the surrounding ice (H2O).
Boduszek, Daniel; Dhingra, Katie
2016-10-01
There is considerable debate about the underlying factor structure of the Beck Hopelessness Scale (BHS) in the literature. An established view is that it reflects a unitary or bidimensional construct in nonclinical samples. There are, however, reasons to reconsider this conceptualization. Based on previous factor analytic findings from both clinical and nonclinical studies, the aim of the present study was to compare 16 competing models of the BHS in a large university student sample (N = 1,733). Sixteen distinct factor models were specified and tested using conventional confirmatory factor analytic techniques, along with confirmatory bifactor modeling. A 3-factor solution with 2 method effects (i.e., a multitrait-multimethod model) provided the best fit to the data. The reliability of this conceptualization was supported by McDonald's coefficient omega and the differential relationships exhibited between the 3 hopelessness factors ("feelings about the future," "loss of motivation," and "future expectations") and measures of goal disengagement, brooding rumination, suicide ideation, and suicide attempt history. The results provide statistical support for a 3-trait and 2-method factor model, and hence the 3 dimensions of hopelessness theorized by Beck. The theoretical and methodological implications of these findings are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Modeling and evaluating user behavior in exploratory visual analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reda, Khairi; Johnson, Andrew E.; Papka, Michael E.
Empirical evaluation methods for visualizations have traditionally focused on assessing the outcome of the visual analytic process as opposed to characterizing how that process unfolds. There are only a handful of methods that can be used to systematically study how people use visualizations, making it difficult for researchers to capture and characterize the subtlety of cognitive and interaction behaviors users exhibit during visual analysis. To validate and improve visualization design, however, it is important for researchers to be able to assess and understand how users interact with visualization systems under realistic scenarios. This paper presents a methodology for modeling and evaluating the behavior of users in exploratory visual analysis. We model visual exploration using a Markov chain process comprising transitions between mental, interaction, and computational states. These states and the transitions between them can be deduced from a variety of sources, including verbal transcripts, videos and audio recordings, and log files. This model enables the evaluator to characterize the cognitive and computational processes that are essential to insight acquisition in exploratory visual analysis, and reconstruct the dynamics of interaction between the user and the visualization system. We illustrate this model with two exemplar user studies, and demonstrate the qualitative and quantitative analytical tools it affords.
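A minimal Python sketch of the core quantitative object in such a model: a first-order Markov transition matrix estimated from a coded sequence of mental/interaction/computational states. The state names and event sequence are hypothetical stand-ins for what would be coded from transcripts or log files.

```python
import numpy as np

states = ["think", "interact", "compute"]
idx = {s: i for i, s in enumerate(states)}

# Hypothetical coded sequence from transcripts, recordings, or log files
sequence = ["think", "interact", "compute", "think", "interact",
            "interact", "compute", "think", "think", "interact"]

counts = np.zeros((3, 3))
for a, b in zip(sequence[:-1], sequence[1:]):
    counts[idx[a], idx[b]] += 1

# Row-normalize to transition probabilities (rows with no outgoing events stay zero)
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(np.round(P, 2))
```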
NASA Astrophysics Data System (ADS)
Samadi-Maybodi, Abdolraouf; Darzi, S. K. Hassani Nejad
2008-10-01
Resolution of binary mixtures of vitamin B12, methylcobalamin and B12 coenzyme with minimum sample pre-treatment and without analyte separation has been successfully achieved by methods of partial least squares algorithm with one dependent variable (PLS1), orthogonal signal correction/partial least squares (OSC/PLS), principal component regression (PCR) and hybrid linear analysis (HLA). Data of analysis were obtained from UV-vis spectra. The UV-vis spectra of the vitamin B12, methylcobalamin and B12 coenzyme were recorded under the same spectral conditions. The method of central composite design was used in the ranges of 10-80 mg/L for vitamin B12 and methylcobalamin and 20-130 mg/L for B12 coenzyme. The model refinement procedure and validation were performed by cross-validation. The minimum root mean square error of prediction (RMSEP) was 2.26 mg/L for vitamin B12 with PLS1, 1.33 mg/L for methylcobalamin with OSC/PLS and 3.24 mg/L for B12 coenzyme with HLA techniques. Figures of merit such as selectivity, sensitivity, analytical sensitivity and LOD were determined for the three compounds. The procedure was successfully applied to simultaneous determination of the three compounds in synthetic mixtures and in a pharmaceutical formulation.
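The sketch below (Python, scikit-learn, synthetic spectra) illustrates a generic PLS1 calibration and the RMSEP calculation by cross-validation; it does not reproduce the authors' specific OSC/PLS, PCR, or HLA implementations, and the spectra, concentration ranges, and noise level are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
n_mixtures, n_wavelengths = 30, 200

# Synthetic stand-ins for pure-component UV-vis spectra and mixture concentrations
pure = np.abs(rng.normal(size=(3, n_wavelengths)))
C = rng.uniform(10, 80, size=(n_mixtures, 3))                      # mg/L, hypothetical
A = C @ pure + rng.normal(0, 0.01, (n_mixtures, n_wavelengths))    # Beer-Lambert-like mixing

y = C[:, 0]                                          # PLS1: calibrate one analyte at a time
pls = PLSRegression(n_components=3)
y_pred = cross_val_predict(pls, A, y, cv=10).ravel()
rmsep = np.sqrt(np.mean((y - y_pred) ** 2))
print(f"RMSEP = {rmsep:.2f} mg/L")
```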
Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian
2014-01-01
Assessment of total uncertainty of analytical methods for the measurements of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty will contribute to the total uncertainty. Particularly, in segmental hair analysis pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range from 0.025-25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from the genuine duplicate measurements of two bundles of hair collected from each subject after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7 folds larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
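A hedged numerical sketch of the variance decomposition described above: the total CV is estimated from genuine duplicate hair bundles, and the pre-analytical component is obtained by subtracting the independently estimated analytical variance. All values below are illustrative, not the study's data.

```python
import numpy as np

# Hypothetical duplicate concentrations (ng/mg) from two hair bundles per subject
a = np.array([0.80, 1.50, 0.35, 2.10, 0.60])
b = np.array([1.10, 0.90, 0.50, 1.60, 0.45])

# Total CV from duplicates: relative paired differences, SD = sqrt(mean(d^2)/2)
rel_diff = (a - b) / ((a + b) / 2)
cv_total = np.sqrt(np.mean(rel_diff ** 2) / 2)

cv_analytical = 0.10                                  # assumed from method validation
cv_preanalytical = np.sqrt(max(cv_total**2 - cv_analytical**2, 0.0))
print(f"CV_total = {cv_total:.2f}, CV_pre-analytical = {cv_preanalytical:.2f}")
```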
Updates to Selected Analytical Methods for Environmental Remediation and Recovery (SAM)
View information on the latest updates to methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM), including the newest recommended methods and publications.
Real-time analysis of healthcare using big data analytics
NASA Astrophysics Data System (ADS)
Basco, J. Antony; Senthilkumar, N. C.
2017-11-01
Big Data Analytics (BDA) provides a tremendous advantage where there is a need for revolutionary performance in handling large amounts of data covering four characteristics: Volume, Velocity, Variety and Veracity. BDA has the ability to handle such dynamic data, providing functioning effectiveness and exceptionally beneficial output in several day-to-day applications for various organizations. Healthcare is one of the sectors which generate data constantly, covering all four characteristics with outstanding growth. There are several challenges in processing patient records which deal with a variety of structured and unstructured formats. Introducing BDA into Healthcare (HBDA) will deal with sensitive patient-driven information, mostly in unstructured formats comprising prescriptions, reports, data from imaging systems, etc.; these challenges will be overcome by big data with enhanced efficiency in fetching and storing of data. In this project, datasets similar to Electronic Medical Records (EMR) produced from numerous medical devices and mobile applications will be ingested into MongoDB using the Hadoop framework with an improvised processing technique to improve the outcome of processing patient records.
Method of identifying analyte-binding peptides
Kauvar, Lawrence M.
1990-01-01
A method for affinity chromatography or adsorption of a designated analyte utilizes a paralog as the affinity partner. The immobilized paralog can be used in purification or analysis of the analyte; the paralog can also be used as a substitute for antibody in an immunoassay. The paralog is identified by screening candidate peptide sequences of 4-20 amino acids for specific affinity to the analyte.
Approximated analytical solution to an Ebola optimal control problem
NASA Astrophysics Data System (ADS)
Hincapié-Palacio, Doracelly; Ospina, Juan; Torres, Delfim F. M.
2016-11-01
An analytical expression for the optimal control of an Ebola problem is obtained. The analytical solution is found as a first-order approximation to the Pontryagin Maximum Principle via the Euler-Lagrange equation. An implementation of the method is given using the computer algebra system Maple. Our analytical solutions confirm the results recently reported in the literature using numerical methods.
NASA Astrophysics Data System (ADS)
Cianciara, Aleksander
2016-09-01
The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical model of microseismic emission characteristics, namely the energy of phenomena and the inter-event time. It is understood that the emission under consideration is induced by natural rock mass fracturing. Because the recorded emission contains noise, it is subjected to appropriate filtering. The study has been conducted using statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. As the model describing the cumulative distribution function is given in analytical form, its verification may be performed using the Kolmogorov-Smirnov goodness-of-fit test. Interpretations by means of probabilistic methods require specifying the correct model describing the statistical distribution of the data, because in these methods the measurement data are not used directly but through their statistical distributions, e.g., in the method based on hazard analysis or in that which uses maximum value statistics.
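A minimal illustration (Python, synthetic inter-event times) of the verification scheme described: fit a Weibull model and apply the Kolmogorov-Smirnov goodness-of-fit test. Note that using parameters estimated from the same data makes the standard KS p-value approximate; a Lilliefors-type correction or a parametric bootstrap would be stricter.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Synthetic inter-event times (hours), standing in for a filtered catalogue
inter_event_times = stats.weibull_min.rvs(c=0.9, scale=12.0, size=400, random_state=rng)

# Fit a two-parameter Weibull (location fixed at zero)
shape, loc, scale = stats.weibull_min.fit(inter_event_times, floc=0)

# Kolmogorov-Smirnov test of the fitted model against the empirical CDF
D, p = stats.kstest(inter_event_times, "weibull_min", args=(shape, loc, scale))
print(f"shape={shape:.2f}, scale={scale:.1f}, D={D:.3f}, p={p:.2f}")
```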
USDA-ARS?s Scientific Manuscript database
Rill detachment is an important process in rill erosion. The rill detachment rate is the fundamental basis for determination of the parameters of a rill erosion model. In this paper, an analytical method was proposed to estimate the rill detachment rate. The method is based on the exact analytical s...
Sol-Gel Matrices For Direct Colorimetric Detection Of Analytes
Charych, Deborah H.; Sasaki, Darryl; Yamanaka, Stacey
2002-11-26
The present invention relates to methods and compositions for the direct detection of analytes using color changes that occur in immobilized biopolymeric material in response to selective binding of analytes to their surface. In particular, the present invention provides methods and compositions related to the encapsulation of biopolymeric material into metal oxide glass using the sol-gel method.
Sol-gel matrices for direct colorimetric detection of analytes
Charych, Deborah H.; Sasaki, Darryl; Yamanaka, Stacey
2000-01-01
The present invention relates to methods and compositions for the direct detection of analytes using color changes that occur in immobilized biopolymeric material in response to selective binding of analytes to their surface. In particular, the present invention provides methods and compositions related to the encapsulation of biopolymeric material into metal oxide glass using the sol-gel method.
Bhatt, Nejal M; Chavada, Vijay D; Sanyal, Mallika; Shrivastav, Pranav S
2016-11-18
A simple, accurate and precise high-performance thin-layer chromatographic method has been developed and validated for the analysis of proton pump inhibitors (PPIs) and their co-formulated drugs, available as binary combinations. Planar chromatographic separation was achieved using a single mobile phase comprising toluene:iso-propanol:acetone:ammonia 5.0:2.3:2.5:0.2 (v/v/v/v) for the analysis of 14 analytes on an aluminium-backed layer of silica gel 60 FG 254. Densitometric determination of the separated spots was done at 290 nm. The method was validated according to ICH guidelines for linearity, precision and accuracy, sensitivity, specificity and robustness. The method showed good linear response for the selected drugs as indicated by the high values of correlation coefficients (≥0.9993). The limit of detection and limit of quantitation were in the range of 6.9-159.2 ng/band and 20.8-478.1 ng/band, respectively, for all the analytes. The optimized conditions afforded adequate resolution of each PPI from their co-formulated drugs and provided unambiguous identification of the co-formulated drugs from their homologous retardation factors (hRf). The only limitation of the method was the inability to separate two PPIs, rabeprazole and lansoprazole, from each other. Nevertheless, it is proposed that peak spectra recording and comparison with a standard drug spot can be a viable option for assignment of TLC spots. The method performance was assessed by analyzing different laboratory simulated mixtures and some marketed formulations of the selected drugs. The developed method was successfully used to investigate potential counterfeits of PPIs through a series of simulated formulations with good accuracy and precision. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Tian, J.; Xie, X.; Jin, H.; Wang, P.; Jian, Z.
2009-12-01
Energy dispersive X-ray fluorescence (XRF) scanning technology provides the most accurate and most economic analytical methods for the determination of major and minor elements of the deep-sea sediment ranging from sodium (11) to uranium (92). Scanning on the smooth core surface by XRF core scanner is reliable and non-destructive to the sediment, requiring little or no time to prepare the core. This method overcomes the drawback of the traditional analytical method by ICP-AES or ICP-MS, which requires a long time for sample preparation. This makes it viable to reconstruct long and high-resolution elemental time series from sediment cores. We have performed relative elemental concentration analyses on the deep-sea sediment cores from ODP site 1143 (southern SCS) down to 190.77 mcd (meters composite depth) by XRF core scanner. The depth resolution of the scanning is 1 cm, equivalent to a time resolution of ~250 years. The age model is based on tuning the benthic foraminiferal δ18O at Site 1143 to obliquity and precession (Tian et al., 2002), which indicates that the 190.77-m-long sediment sequence spans the past 5 Myr. We compared the records between 99.5 and 136.46 mcd with the elemental records from the same site obtained by a Philips PW 2400 X-ray spectrometer (Wehausen et al., online publication). Comparison reveals, regardless of the absolute changes of the elements, that the elemental records (Si, Ti, Al, Fe, Mn, Ca, K, P, Ba, Rb, Sr) obtained by the two methods are nearly the same. Results show that the relative concentration variations of the productivity-related elements such as Ba and Ca display distinctive glacial-interglacial cycles for the past 5 Myr. These recorded productivity cycles show a one-to-one relationship with the glacial-interglacial cycles of the global ice volume change recorded in the benthic foraminiferal δ18O. The glacial-interglacial cycles in productivity and global ice volume changes are consistent with each other not only in amplitude but also in secular variations. The benthic δ18O implies the final formation of the Northern Hemisphere glaciation between ~2.5 Ma and ~3.3 Ma, as indicated by gradually increased values of δ18O. During this period, both Ba and Ca show gradually increased values of relative concentration, indicating increased productivity which was probably caused by an intensified East Asian summer monsoon. The close relationship of the productivity-related elemental variations with benthic foraminiferal δ18O reveals that the Plio-Pleistocene variations of the East Asian monsoon have been greatly dominated by global ice volume change. Although the elements related to the terrigenous detrital matter composition of site 1143, such as Ti, Fe, As, Co and Ni, display distinct glacial-interglacial cycles for the past 5 Myr, they display different patterns in secular variation from that of the benthic foraminiferal δ18O. The mismatch indicates that besides the Northern Hemisphere glaciation, multiple other processes, including changes in provenance and weathering intensity caused by monsoonal climate variability and sea level fluctuations, could have affected the terrigenous detrital matter composition of site 1143.
TimeBench: a data model and software library for visual analytics of time-oriented data.
Rind, Alexander; Lammarsch, Tim; Aigner, Wolfgang; Alsallakh, Bilal; Miksch, Silvia
2013-12-01
Time-oriented data play an essential role in many Visual Analytics scenarios such as extracting medical insights from collections of electronic health records or identifying emerging problems and vulnerabilities in network traffic. However, many software libraries for Visual Analytics treat time as a flat numerical data type and insufficiently tackle the complexity of the time domain such as calendar granularities and intervals. Therefore, developers of advanced Visual Analytics designs need to implement temporal foundations in their application code over and over again. We present TimeBench, a software library that provides foundational data structures and algorithms for time-oriented data in Visual Analytics. Its expressiveness and developer accessibility have been evaluated through application examples demonstrating a variety of challenges with time-oriented data and long-term developer studies conducted in the scope of research and student projects.
New method to monitor RF safety in MRI-guided interventions based on RF induced image artefacts.
van den Bosch, Michiel R; Moerland, Marinus A; Lagendijk, Jan J W; Bartels, Lambertus W; van den Berg, Cornelis A T
2010-02-01
Serious tissue heating may occur at the tips of elongated metallic structures used in MRI-guided interventions, such as vascular guidewires, catheters, biopsy needles, and brachytherapy needles. This heating is due to resonating electromagnetic radiofrequency (RF) waves along the structure. Since it is hard to predict the exact length at which resonance occurs under in vivo conditions, there is a need for methods to monitor this resonance behavior. In this study, the authors propose a method based on the RF induced image artefacts and demonstrate its applicability in two phantom experiments. The authors developed an analytical model that describes the RF induced image artefacts as a function of the induced current in an elongated metallic structure placed parallel to the static magnetic field. It describes the total RF field as a sum of the RF fields produced by the transmit coil of the MR scanner and by the elongated metallic structure. Several spoiled gradient echo images with different nominal flip angle settings were acquired to map the B1+ field, which is a quantitative measure for the RF distortion around the structure. From this map, the current was extracted by fitting the analytical model. To investigate the sensitivity of our method we performed two phantom experiments with different setup parameters: One that mimics a brachytherapy needle insertion and one that resembles a guidewire intervention. In the first experiment, a short needle was placed centrally in the MR bore to ensure that the induced currents would be small. In the second experiment, a longer wire was placed in an off-center position to mimic a worst case scenario for the patient. In both experiments, a Luxtron (Santa Clara, CA) fiberoptic temperature sensor was positioned at the structure tip to record the temperature. In the first experiment, no significant temperature increases were measured, while the RF image artefacts and the induced currents in the needle increased with the applied insertion depth. The maximum induced current in the needle was 44 mA. Furthermore, a standing wave pattern became clearly visible for larger insertion depths. In the second experiment, significant temperature increases up to 2.4 degrees C in 1 min were recorded during the image acquisitions. The maximum current value was 1.4 A. In both experiments, a proper estimation of the current in the metallic structure could be made using our analytical model. The authors have developed a method to quantitatively determine the induced current in an elongated metallic structure from its RF distortion. This creates a powerful and sensitive method to investigate the resonant behavior of RF waves along elongated metallic structures used for MRI-guided interventions, for example, to monitor the RF safety or to inspect the influence of coating on the resonance length. Principally, it can be applied under in vivo conditions and for noncylindrical metallic structures such as hip implants by taking their geometry into account.
Contains basic information on the role and origins of the Selected Analytical Methods, including the formation of the Homeland Security Laboratory Capacity Work Group and the Environmental Evaluation Analytical Process Roadmap for Homeland Security Events.
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
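A toy contrast between the two categories can make the distinction concrete. The sketch below (Python, purely illustrative) applies ART/Kaczmarz iterations, a simple member of the iterative family, to a 2×2 image with four ray sums; a realistic analytic counterpart such as filtered backprojection needs full projection geometry and is not reproduced here.

```python
# Sketch: iterative reconstruction on a toy problem. ART (Kaczmarz) updates are
# applied to a tiny 2x2 image with four ray sums; the image, rays, and sweep
# count are illustrative only.
import numpy as np

true_img = np.array([1.0, 2.0, 3.0, 4.0])          # 2x2 image, row-major
A = np.array([[1, 1, 0, 0],                        # row sums
              [0, 0, 1, 1],
              [1, 0, 1, 0],                        # column sums
              [0, 1, 0, 1]], dtype=float)
p = A @ true_img                                   # measured projections

x = np.zeros(4)
for _ in range(200):                               # ART / Kaczmarz sweeps
    for ai, pi in zip(A, p):
        x += (pi - ai @ x) / (ai @ ai) * ai        # project onto each ray equation

print("reconstruction:", np.round(x, 3))
```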
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
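A minimal sketch of the kind of calculation Method 2 describes is given below, with scipy.stats.norm standing in for Excel's NORMINV. The ±1.96 SD reference limits, the per-limit reading of the 4.4% figure, and the grid search are assumptions made for illustration, not the paper's exact procedure.

```python
# Sketch: for a given analytical bias, find the largest imprecision keeping the
# fraction of reference individuals outside a reference limit at the abstract's
# 4.4%. scipy.stats.norm stands in for Excel's NORMINV; the +/-1.96 SD limit and
# the per-limit reading of 4.4% are assumptions for illustration.
import numpy as np
from scipy.stats import norm

LIMIT = 1.96          # nominal reference limit in reference-population SD units
TARGET = 0.044        # allowed fraction outside the limit (from the abstract)

def fraction_outside_upper(bias, imprecision):
    """Bias and imprecision are normalized to the reference-population SD."""
    total_sd = np.sqrt(1.0 + imprecision**2)     # analytical SD widens the distribution
    return norm.sf((LIMIT - bias) / total_sd)    # tail beyond the (unchanged) limit

for bias in (0.0, 0.05, 0.10, 0.15, 0.20, 0.25):
    imps = np.linspace(0.0, 1.0, 5001)
    fracs = fraction_outside_upper(bias, imps)
    ok = imps[fracs <= TARGET]
    max_imp = ok.max() if ok.size else 0.0
    print(f"bias={bias:.2f}  maximum allowable imprecision ~ {max_imp:.3f}")
```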
Hu, Jie-Bi; Chen, Yu-Chie; Urban, Pawel L
2012-06-05
A microscale analytical platform integrating microbial cell culture, isotopic labeling, along with visual and mass spectrometric imaging with single-cell resolution has been developed and applied in the monitoring of cellular metabolism in fungal mycelium. The method implements open chips with a two-dimensional surface pattern composed of hydrophobic and hydrophilic zones. Two hydrophilic islands are used as medium reservoirs, while the hydrophobic area constitutes the support for the growing aerial hyphae, which do not have direct contact with the medium. The first island, containing (12)C(6)-glucose medium, was initially inoculated with the mycelium (Neurospora crassa), and following the initial incubation period, the hyphae progressed toward the second medium island, containing an isotopically labeled substrate ((13)C(6)-glucose). The (13)C atoms were gradually incorporated into cellular metabolites, which was revealed by MALDI-MS. The fate of the chitin-biosynthesis precursor, uridine diphosphate N-acetylglucosamine (UDP-GlcNAc), was monitored by recording mass spectra with characteristic isotopic patterns, which indicated the presence of various (12)C/(13)C isotopologues. The method enabled mapping the (13)C-labeled UDP-GlcNAc in fungal mycelium and recording its redistribution in hyphae, directly on the chip.
Diagnostic communication in the memory clinic: a conversation analytic perspective
Peel, Elizabeth
2015-01-01
Objectives: Whether and how patients should be told their dementia diagnosis has been an area of much debate. While there is now recognition that early diagnosis is important for dementia care, little research has looked at how dementia-related diagnostic information is actually verbally communicated. The limited previous research suggests that the absence of explicit terminology (e.g., use of the term Alzheimer's) is problematic. This paper interrogates this assumption through a conversation analysis of British naturalistic memory clinic interaction. Method: This paper is based on video-recordings of communication within a UK memory clinic. Appointments with 29 patients and accompanying persons were recorded, and the corpus was repeatedly listened to in conjunction with the transcripts in order to identify the segments of talk where there was an action hearable as diagnostic delivery, that is, where the clinician is evaluating the patient's condition. Results: Using a conversation analytic approach, this analysis suggests that diagnostic communication, which is sensitive and responsive to the patient and their carers, is not predicated on the presence or absence of particular lexical choices. There is inherent complexity regarding dementia diagnosis, especially in the ‘early stages’, which is produced through and reflected in diagnostic talk in clinical encounters. Conclusion: In the context of continuity of dementia care, diagnostic information is communicated in a way that conforms to intersubjective norms of minimizing catastrophic reactions in medical communication, and is sensitive to problems associated with ‘insight’ in terms of delivery and receipt or non-receipt of diagnosis. PMID:25647148
Reidl-Leuthner, Christoph; Viernstein, Alexander; Wieland, Karin; Tomischko, Wolfgang; Sass, Ludwig; Kinger, Gerald; Ofner, Johannes; Lendl, Bernhard
2014-09-16
Two pulsed thermoelectrically cooled mid-infrared distributed feedback quantum cascade lasers (QCLs) were used for the quasi-simultaneous in-line determination of NO and NO2 at the caloric power plant Dürnrohr (Austria). The QCL beams were combined using a bifurcated hollow fiber, sent through the flue tube (inside diameter: 5.5 m), reflected by a retro-reflector and recorded using a fast thermoelectrically cooled mercury-cadmium-telluride detector. The thermal chirp during 300 ns pulses was about 1.2 cm(-1) and allowed scanning of rotational vibrational doublets of the analytes. On the basis of the thermal chirp and the temporal resolution of data acquisition, a spectral resolution of approximately 0.02 cm(-1) was achieved. The recorded rotational vibrational absorption lines were centered at 1900 cm(-1) for NO and 1630 cm(-1) for NO2. Despite water content in the range of 152-235 g/m(3) and an average particle load of 15.8 mg/m(3) in the flue gas, in-line measurements were possible achieving limits of detection of 73 ppb for NO and 91 ppb for NO2 while optimizing for a single analyte. Quasi-simultaneous measurements resulted in limits of detection of 219 ppb for NO and 164 ppb for NO2, respectively. Influences of temperature and pressure on the data evaluation are discussed, and results are compared to an established reference method based on the extractive measurements presented.
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic model modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
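The core modeling idea, an analytic term plus a small neural network for the unmodeled residual, can be sketched as follows. The analytic model, synthetic data, and plain squared-error training below are illustrative stand-ins; the patented method uses a scaled equation-error criterion, a stochastic qualification step, and SPRT-based validation that are not reproduced here.

```python
# Sketch of a neuro-analytic model: a known analytic term plus a small neural
# network that captures unmodeled behavior. The analytic model, data, and the
# plain squared-error loss are illustrative stand-ins for the patented scheme.
import numpy as np

rng = np.random.default_rng(0)

def analytic_model(x):
    return 2.0 * x                      # assumed known first-principles part

# Synthetic process data with an unknown nonlinear component plus noise.
x = np.linspace(0, 1, 200)
y = analytic_model(x) + 0.5 * np.sin(3 * x) + 0.02 * rng.standard_normal(x.size)

# One-hidden-layer network models the residual y - analytic_model(x).
W1 = rng.standard_normal((1, 8)) * 0.5; b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5; b2 = np.zeros(1)
lr = 0.1
X = x[:, None]
resid = (y - analytic_model(x))[:, None]

for _ in range(20000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - resid                  # equation error (unscaled here)
    # backpropagation of the mean squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    gh = err @ W2.T * (1 - h**2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

neuro_analytic = analytic_model(x) + (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
print("RMS error:", np.sqrt(np.mean((neuro_analytic - y) ** 2)))
```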
The Use and Abuse of Limits of Detection in Environmental Analytical Chemistry
Brown, Richard J. C.
2008-01-01
The limit of detection (LoD) serves as an important method performance measure that is useful for the comparison of measurement techniques and the assessment of likely signal to noise performance, especially in environmental analytical chemistry. However, the LoD is only truly related to the precision characteristics of the analytical instrument employed for the analysis and the content of analyte in the blank sample. This article discusses how other criteria, such as sampling volume, can serve to distort the quoted LoD artificially and make comparison between various analytical methods inequitable. In order to compare LoDs between methods properly, it is necessary to state clearly all of the input parameters relating to the measurements that have been used in the calculation of the LoD. Additionally, the article discusses that the use of LoDs in contexts other than the comparison of the attributes of analytical methods, in particular when reporting analytical results, may be confusing, less informative than quoting the actual result with an accompanying statement of uncertainty, and may act to bias descriptive statistics. PMID:18690384
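A small sketch of the distortion the article warns about: the same instrument-level blank precision yields very different quoted LoDs once a sampling-volume (preconcentration) factor is folded in. The 3 × SD(blank)/slope convention and all numbers are illustrative assumptions.

```python
# Sketch: how a preconcentration (sampling-volume) factor can shrink a quoted
# LoD without any change in instrument precision. The 3*SD(blank)/slope
# convention and the numbers are illustrative assumptions.
import numpy as np

blank_signals = np.array([0.11, 0.09, 0.12, 0.10, 0.08, 0.11, 0.13])  # a.u.
slope = 0.45          # instrument response per ug/L in the measured extract

instrument_lod = 3 * blank_signals.std(ddof=1) / slope   # ug/L in the extract

for preconcentration in (1, 10, 100):   # e.g. 1 L sample reduced to 10 mL -> 100x
    quoted_lod = instrument_lod / preconcentration
    print(f"preconcentration x{preconcentration:>3}: quoted LoD = {quoted_lod:.4f} ug/L")
```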
Martens, Brian K; DiGennaro, Florence D; Reed, Derek D; Szczech, Frances M; Rosenthal, Blair D
2008-01-01
Descriptive assessment methods have been used in applied settings to identify consequences for problem behavior, thereby aiding in the design of effective treatment programs. Consensus has not been reached, however, regarding the types of data or analytic strategies that are most useful for describing behavior–consequence relations. One promising approach involves the analysis of conditional probabilities from sequential recordings of behavior and events that follow its occurrence. In this paper we review several strategies for identifying contingent relations from conditional probabilities, and propose an alternative strategy known as a contingency space analysis (CSA). Step-by-step procedures for conducting and interpreting a CSA using sample data are presented, followed by discussion of the potential use of a CSA for conducting descriptive assessments, informing intervention design, and evaluating changes in reinforcement contingencies following treatment. PMID:18468280
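A minimal sketch of the two conditional probabilities that locate a point in a contingency space is shown below; the interval-recording scheme and the toy data are illustrative, not the paper's procedure.

```python
# Sketch: conditional probabilities from sequential (interval) recordings, the
# two coordinates used to plot a point in a contingency space analysis.
# The 0/1 interval scoring and the data are illustrative.
behavior =    [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1]   # problem behavior per interval
consequence = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1]   # consequence scored for each interval

n_beh = sum(behavior)
n_nobeh = len(behavior) - n_beh

p_cons_given_beh = sum(c for b, c in zip(behavior, consequence) if b) / n_beh
p_cons_given_nobeh = sum(c for b, c in zip(behavior, consequence) if not b) / n_nobeh

print(f"P(consequence | behavior)    = {p_cons_given_beh:.2f}")
print(f"P(consequence | no behavior) = {p_cons_given_nobeh:.2f}")
# Points above the diagonal (first > second) suggest a positive contingency.
```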
Robinson, Eleanor M; Trumble, Stephen J; Subedi, Bikram; Sanders, Rebel; Usenko, Sascha
2013-12-06
Lipid-rich matrices are often sinks for lipophilic contaminants, such as pesticides, polychlorinated biphenyls (PCBs), and polybrominated diphenyl ethers (PBDEs). Typically, methods for contaminant extraction and cleanup for lipid-rich matrices require multiple cleanup steps; however, a selective pressurized liquid extraction (SPLE) technique requiring no additional cleanup has been developed for the simultaneous extraction and cleanup of whale earwax (cerumen; a lipid-rich matrix). Whale earwax accumulates in select whale species over their lifetime to form wax earplugs. Typically used as an aging technique in cetaceans, the layers or laminae that comprise the earplug are thought to be associated with annual or semiannual migration and feeding patterns. Whale earplugs (earwax) represent a unique matrix capable of recording and archiving whales' lifetime contaminant profiles. This study reports the first analytical method developed for identifying and quantifying lipophilic persistent organic pollutants (POPs) in a whale earplug, including organochlorine pesticides, polychlorinated biphenyls (PCBs), and polybrominated diphenyl ethers (PBDEs). The analytical method was developed using SPLE to extract contaminants from ∼0.25 to 0.5 g aliquots of each lamina of a sectioned earplug. The SPLE was optimized for cleanup adsorbents (basic alumina, silica gel, and Florisil(®)), adsorbent to sample ratio, and adsorbent order. In the optimized SPLE method, the earwax homogenate was placed within the extraction cell on top of basic alumina (5 g), silica gel (15 g), and Florisil(®) (10 g), and the target analytes were extracted from the homogenate using 1:1 (v/v) dichloromethane:hexane. POPs were analyzed using gas chromatography-mass spectrometry with electron capture negative ionization and electron impact ionization. The average percent recoveries for the POPs were 91% (±6% relative standard deviation), while limits of detection and quantification ranged from 0.00057 to 0.96 ng g(-1) and 0.0017 to 2.9 ng g(-1), respectively. Pesticides, PCBs, and PBDEs were measured in a single blue whale (Balaenoptera musculus) cerumen lamina at concentrations ranging from 0.11 to 150 ng g(-1). Copyright © 2013 Elsevier B.V. All rights reserved.
Methods for determination of radioactive substances in water and fluvial sediments
Thatcher, Leland Lincoln; Janzer, Victor J.; Edwards, Kenneth W.
1977-01-01
Analytical methods for the determination of some of the more important components of fission or neutron activation product radioactivity and of natural radioactivity found in water are reported. The report for each analytical method includes conditions for application of the method, a summary of the method, interferences, required apparatus and reagents, analytical procedures, calculations, reporting of results, and estimation of precision. The fission product isotopes considered are cesium-137, strontium-90, and ruthenium-106. The natural radioelements and isotopes considered are uranium, lead-210, radium-226, radium-228, tritium, and carbon-14. A gross radioactivity survey method and a uranium isotope ratio method are given. When two analytical methods are in routine use for an individual isotope, both methods are reported with identification of the specific areas of application of each. Techniques for the collection and preservation of water samples to be analyzed for radioactivity are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragan, Eric D; Goodall, John R
2014-01-01
Provenance tools can help capture and represent the history of analytic processes. In addition to supporting analytic performance, provenance tools can be used to support memory of the process and communication of the steps to others. Objective evaluation methods are needed to evaluate how well provenance tools support analysts' memory and communication of analytic processes. In this paper, we present several methods for the evaluation of process memory, and we discuss the advantages and limitations of each. We discuss methods for determining a baseline process for comparison, and we describe various methods that can be used to elicit process recall, step ordering, and time estimations. Additionally, we discuss methods for conducting quantitative and qualitative analyses of process memory. By organizing possible memory evaluation methods and providing a meta-analysis of the potential benefits and drawbacks of different approaches, this paper can inform study design and encourage objective evaluation of process memory and communication.
Method of identifying analyte-binding peptides
Kauvar, L.M.
1990-10-16
A method for affinity chromatography or adsorption of a designated analyte utilizes a paralog as the affinity partner. The immobilized paralog can be used in purification or analysis of the analyte; the paralog can also be used as a substitute for antibody in an immunoassay. The paralog is identified by screening candidate peptide sequences of 4-20 amino acids for specific affinity to the analyte. 5 figs.
Progress and development of analytical methods for gibberellins.
Pan, Chaozhi; Tan, Swee Ngin; Yong, Jean Wan Hong; Ge, Liya
2017-01-01
Gibberellins, as a group of phytohormones, exhibit a wide variety of bio-functions within plant growth and development, which have been used to increase crop yields. Many analytical procedures, therefore, have been developed for the determination of the types and levels of endogenous and exogenous gibberellins. As plant tissues contain gibberellins in trace amounts (usually at the level of nanogram per gram fresh weight or even lower), the sample pre-treatment steps (extraction, pre-concentration, and purification) for gibberellins are reviewed in detail. The primary focus of this comprehensive review is on the various analytical methods designed to meet the requirements for gibberellin analyses in complex matrices, with particular emphasis on high-throughput analytical methods, such as gas chromatography, liquid chromatography, and capillary electrophoresis, mostly combined with mass spectrometry. The advantages and drawbacks of each described analytical method are discussed. The overall aim of this review is to provide a comprehensive and critical view on the different analytical methods nowadays employed to analyze gibberellins in complex sample matrices and their foreseeable trends. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Luminescent detection of hydrazine and hydrazine derivatives
Swager, Timothy M [Newton, MA]; Thomas, III, Samuel W.
2012-04-17
The present invention generally relates to methods for modulating the optical properties of a luminescent polymer via interaction with a species (e.g., an analyte). In some cases, the present invention provides methods for determination of an analyte by monitoring a change in an optical signal of a luminescent polymer upon exposure to an analyte. Methods of the present invention may be useful for the vapor phase detection of analytes such as explosives and toxins. The present invention also provides methods for increasing the luminescence intensity of a polymer, such as a polymer that has been photobleached, by exposing the luminescent polymer to a species such as a reducing agent.
Method of multi-dimensional moment analysis for the characterization of signal peaks
Pfeifer, Kent B; Yelton, William G; Kerr, Dayle R; Bouchier, Francis A
2012-10-23
A method of multi-dimensional moment analysis for the characterization of signal peaks can be used to optimize the operation of an analytical system. With a two-dimensional Peclet analysis, the quality and signal fidelity of peaks in a two-dimensional experimental space can be analyzed and scored. This method is particularly useful in determining optimum operational parameters for an analytical system which requires the automated analysis of large numbers of analyte data peaks. For example, the method can be used to optimize analytical systems including an ion mobility spectrometer that uses a temperature stepped desorption technique for the detection of explosive mixtures.
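A sketch of the moment calculations such a method builds on: statistical moments of a detector peak and a Peclet-like score (squared centroid over variance). The synthetic peak and this particular scoring rule are illustrative; the patented two-dimensional Peclet analysis is not reproduced.

```python
# Sketch: statistical moments of a detector peak and a Peclet-like quality score
# (squared first moment over variance). The synthetic peak and the scoring rule
# are illustrative stand-ins for the patented 2-D Peclet analysis.
import numpy as np

t = np.linspace(0, 20, 2001)                      # e.g. drift time (ms)
signal = np.exp(-0.5 * ((t - 8.0) / 0.4) ** 2)    # synthetic analyte peak
signal += 0.01 * np.random.default_rng(1).standard_normal(t.size)
signal = np.clip(signal, 0, None)

area = np.trapz(signal, t)                        # zeroth moment
mean = np.trapz(t * signal, t) / area             # first moment (centroid)
var = np.trapz((t - mean) ** 2 * signal, t) / area  # second central moment

peclet_like = mean**2 / var                       # large value = sharp, well-located peak
print(f"centroid={mean:.2f}, variance={var:.3f}, Peclet-like score={peclet_like:.1f}")
```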
NASA Astrophysics Data System (ADS)
Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.
2018-06-01
Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from < 1 to 15 %. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
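A compressed sketch of the inversion idea is given below. A Beta distribution stands in for the analytical soil-saturation pdf of the stochastic soil water balance framework, and a random-walk Metropolis sampler with flat priors stands in for the paper's Bayesian inference framework; all of these substitutions are assumptions for illustration.

```python
# Sketch: Bayesian inversion of an analytical soil-saturation pdf against
# observations. A Beta distribution is a stand-in for the analytical pdf of the
# stochastic soil water balance framework; sampler and priors are illustrative.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
obs = rng.beta(2.5, 6.0, size=300)        # pretend daily soil-saturation record

def log_posterior(params):
    a, b = params
    if a <= 0 or b <= 0:
        return -np.inf                    # flat prior on positive shape parameters
    return beta.logpdf(obs, a, b).sum()

samples, current = [], np.array([1.0, 1.0])
curr_lp = log_posterior(current)
for _ in range(8000):                     # random-walk Metropolis
    proposal = current + 0.1 * rng.standard_normal(2)
    prop_lp = log_posterior(proposal)
    if np.log(rng.uniform()) < prop_lp - curr_lp:
        current, curr_lp = proposal, prop_lp
    samples.append(current.copy())

post = np.array(samples[2000:])           # discard burn-in
print("posterior mean shape parameters:", post.mean(axis=0))
print("posterior CV (%):", 100 * post.std(axis=0) / post.mean(axis=0))
```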
Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng
2014-01-01
Objective Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. Conclusion This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed-up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. PMID:24370496
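The scheduling core, building a dependency graph of pipeline tasks and running ready tasks in parallel, can be sketched as below. Task names are illustrative and a local thread pool replaces the Map-Reduce execution used by PARAMO.

```python
# Sketch: dependency-graph scheduling of predictive-modeling tasks, executing
# tasks whose prerequisites are satisfied in parallel. Task names are
# illustrative; PARAMO itself runs such graphs on Map-Reduce, not a thread pool.
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

tasks = {                                  # task -> set of prerequisite tasks
    "cohort": set(),
    "features": {"cohort"},
    "cv_split": {"features"},
    "feature_selection": {"cv_split"},
    "classification": {"feature_selection"},
}

def run(name):
    print("running", name)
    return name

done, futures = set(), {}
with ThreadPoolExecutor(max_workers=4) as pool:
    while len(done) < len(tasks):
        # submit every task whose prerequisites are complete and is not yet running
        for name, deps in tasks.items():
            if name not in done and name not in futures and deps <= done:
                futures[name] = pool.submit(run, name)
        finished, _ = wait(futures.values(), return_when=FIRST_COMPLETED)
        for f in finished:
            done.add(f.result())
            futures = {n: fu for n, fu in futures.items() if fu is not f}
```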
Kramberger, Petra; Urbas, Lidija; Štrancar, Aleš
2015-01-01
Downstream processing of nanoplexes (viruses, virus-like particles, bacteriophages) is characterized by complexity of the starting material, number of purification methods to choose from, regulations that are setting the frame for the final product and analytical methods for upstream and downstream monitoring. This review gives an overview on the nanoplex downstream challenges and chromatography based analytical methods for efficient monitoring of the nanoplex production. PMID:25751122
Privacy-preserving matching of similar patients.
Vatsalan, Dinusha; Christen, Peter
2016-02-01
The identification of similar entities represented by records in different databases has drawn considerable attention in many application areas, including in the health domain. One important type of entity matching application that is vital for quality healthcare analytics is the identification of similar patients, known as similar patient matching. A key component of identifying similar records is the calculation of similarity of the values in attributes (fields) between these records. Due to increasing privacy and confidentiality concerns, using the actual attribute values of patient records to identify similar records across different organizations is becoming non-trivial because the attributes in such records often contain highly sensitive information such as personal and medical details of patients. Therefore, the matching needs to be based on masked (encoded) values while being effective and efficient to allow matching of large databases. Bloom filter encoding has widely been used as an efficient masking technique for privacy-preserving matching of string and categorical values. However, no work on Bloom filter-based masking of numerical data, such as integer (e.g. age), floating point (e.g. body mass index), and modulus (numbers wrap around upon reaching a certain value, e.g. date and time), which are commonly required in the health domain, has been presented in the literature. We propose a framework with novel methods for masking numerical data using Bloom filters, thereby facilitating the calculation of similarities between records. We conduct an empirical study on publicly available real-world datasets which shows that our framework provides efficient masking and achieves similar matching accuracy compared to the matching of actual unencoded patient records. Copyright © 2015 Elsevier Inc. All rights reserved.
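One plausible way to mask a numeric attribute with a Bloom filter is sketched below: hash every value within a tolerance interval into the filter, then compare masks with the Dice coefficient so that numerically close values score high. Interval width, filter size, and hash construction are illustrative choices, not necessarily the authors' exact scheme.

```python
# Sketch: Bloom-filter masking of a numeric value (e.g. age) by hashing all
# integers within a tolerance interval, then comparing two masks with the Dice
# coefficient. Filter size, hash count, and tolerance are illustrative.
import hashlib

M, K, TOL = 256, 4, 2                     # filter bits, hash functions, +/- tolerance

def bloom_mask(value):
    bits = set()
    for v in range(value - TOL, value + TOL + 1):   # neighbourhood of the value
        for k in range(K):
            h = hashlib.sha256(f"{k}:{v}".encode()).hexdigest()
            bits.add(int(h, 16) % M)
    return bits

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b))

print(dice(bloom_mask(42), bloom_mask(43)))   # close values -> high similarity
print(dice(bloom_mask(42), bloom_mask(70)))   # distant values -> near zero
```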
Fahie, Monifa A; Chen, Min
2015-08-13
The flexible loops decorating the entrance of the OmpG nanopore move dynamically during ionic current recording. The gating caused by these flexible loops changes when a target protein is bound. The gating is characterized by parameters including frequency, duration, and open-pore current, and these features combine to reveal the identity of a specific analyte protein. Here, we show that an OmpG nanopore equipped with a biotin ligand can distinguish glycosylated and deglycosylated isoforms of avidin by their differences in surface charge. Our studies demonstrate that the direct interaction between the nanopore and the analyte surface, induced by the electrostatic attraction between the two molecules, is essential for protein isoform detection. Our technique is remarkably sensitive to the analyte surface, which may provide a useful tool for glycoprotein profiling.
Quality of Big Data in health care.
Sukumar, Sreenivas R; Natarajan, Ramachandran; Ferrell, Regina K
2015-01-01
The current trend in Big Data analytics and in particular health information technology is toward building sophisticated models, methods and tools for business, operational and clinical intelligence. However, the critical issue of data quality required for these models is not getting the attention it deserves. The purpose of this paper is to highlight the issues of data quality in the context of Big Data health care analytics. The insights presented in this paper are the results of analytics work that was done in different organizations on a variety of health data sets. The data sets include Medicare and Medicaid claims, provider enrollment data sets from both public and private sources, and electronic health records from regional health centers accessed through partnerships with health care claims processing entities under health privacy protected guidelines. Assessment of data quality in health care has to consider: first, the entire lifecycle of health data; second, problems arising from errors and inaccuracies in the data itself; third, the source(s) and the pedigree of the data; and fourth, how the underlying purpose of data collection impacts the analytic processing and the knowledge expected to be derived. Automation in the form of data handling, storage, entry and processing technologies is to be viewed as a double-edged sword. At one level, automation can be a good solution, while at another level it can create a different set of data quality issues. Implementation of health care analytics with Big Data is enabled by a road map that addresses the organizational and technological aspects of data quality assurance. The value derived from the use of analytics should be the primary determinant of data quality. Based on this premise, health care enterprises embracing Big Data should have a road map for a systematic approach to data quality. Health care data quality problems can be so specific that organizations might have to build their own custom software or data quality rule engines. Today, data quality issues are diagnosed and addressed in a piece-meal fashion. The authors recommend a data lifecycle approach and provide a road map that is more appropriate to the dimensions of Big Data and fits different stages in the analytical workflow.
Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech
2015-01-01
Nowadays, studies related to the distribution of metallic elements in biological samples are among the most important issues. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging the metallic elements in various kinds of biological samples. However, in such literature, there is a lack of articles dedicated to reviewing calibration strategies, and their problems, nomenclature, definitions, ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples including (1) nomenclature; (2) definitions, and (3) selected and sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, the aim of this work is to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods that is different from those hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping of metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Application of capability indices and control charts in the analytical method control strategy.
Oliva, Alexis; Llabres Martinez, Matías
2017-08-01
In this study, we assessed the usefulness of control charts in combination with the process capability indices Cpm and Cpk in the control strategy of an analytical method. The traditional X-chart and moving range chart were used to monitor the analytical method over a 2-year period. The results confirmed that the analytical method is in control and stable. Different criteria were used to establish the specification limits (i.e., analyst requirements) for fixed method performance (i.e., method requirements). If the specification limits and control limits are equal in breadth, the method can be considered "capable" (Cpm = 1), but it does not satisfy the minimum method capability requirements proposed by Pearn and Shu (2003). Similar results were obtained using the Cpk index. The method capability was also assessed as a function of method performance for fixed analyst requirements. The results indicate that the method does not meet the requirements of the analytical target approach. Real example data from a SEC method with light-scattering detection were used as a model, whereas previously published data were used to illustrate the applicability of the proposed approach. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
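A minimal sketch of the charting and capability calculations referred to above: individuals (X) and moving-range control limits together with Cpk and Cpm. The data, specification limits, and target are illustrative; the constants 2.66, 3.267 and d2 = 1.128 are the usual Shewhart values for subgroups of size two.

```python
# Sketch: individuals (X) and moving-range control limits plus Cpk and Cpm for
# an analytical method's control results. Specification limits, target, and the
# data are illustrative.
import numpy as np

x = np.array([99.8, 100.2, 99.9, 100.4, 100.1, 99.7, 100.0, 100.3, 99.9, 100.1])  # % recovery
LSL, USL, target = 98.0, 102.0, 100.0

mr = np.abs(np.diff(x))                       # moving ranges
mr_bar, mean = mr.mean(), x.mean()
sigma = mr_bar / 1.128                        # within-process SD estimate (d2 for n=2)

print("X-chart limits:", mean - 2.66 * mr_bar, mean + 2.66 * mr_bar)
print("MR-chart UCL:  ", 3.267 * mr_bar)

cpk = min(USL - mean, mean - LSL) / (3 * sigma)
cpm = (USL - LSL) / (6 * np.sqrt(sigma**2 + (mean - target) ** 2))
print(f"Cpk = {cpk:.2f}, Cpm = {cpm:.2f}")
```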
Directivity analysis of meander-line-coil EMATs with a wholly analytical method.
Xie, Yuedong; Liu, Zenghua; Yin, Liyuan; Wu, Jiande; Deng, Peng; Yin, Wuliang
2017-01-01
This paper presents a simulation and experimental study of the radiation pattern of a meander-line-coil EMAT. A wholly analytical method, which involves the coupling of two models, an analytical EM model and an analytical UT model, has been developed to build EMAT models and analyse the Rayleigh waves' beam directivity. For a specific sensor configuration, Lorentz forces are calculated using the analytical EM method, which is adapted from the classic Deeds and Dodd solution. The calculated Lorentz force densities are imported into an analytical ultrasonic model as driving point sources, which produce the Rayleigh waves within a layered medium. The effect of the length of the meander-line coil on the Rayleigh waves' beam directivity is analysed quantitatively and verified experimentally. Copyright © 2016 Elsevier B.V. All rights reserved.
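The qualitative effect of coil length on directivity can be illustrated with a far simpler model than the paper's: the active aperture idealized as a line of in-phase point sources. Wavelength, lengths, and source counts below are assumptions; the paper derives the actual source distribution from the Deeds-Dodd electromagnetic solution.

```python
# Sketch: in-plane directivity of a Rayleigh-wave beam radiated by an aperture
# idealized as a line of in-phase point sources whose extent equals the coil
# length. Wavelength, element count, and lengths are illustrative only.
import numpy as np

wavelength = 3.0e-3                       # Rayleigh wavelength (m), assumed
k = 2 * np.pi / wavelength

def directivity(coil_length, n_src=201):
    y = np.linspace(-coil_length / 2, coil_length / 2, n_src)   # sources along the wires
    phi = np.radians(np.linspace(-90, 90, 721))                 # angle from the beam axis
    field = np.abs(np.exp(1j * k * np.outer(np.sin(phi), y)).sum(axis=1))
    return phi, field / field.max()

for L in (5e-3, 10e-3, 20e-3):            # longer coil -> narrower main lobe
    phi, d = directivity(L)
    half_width = np.degrees(np.ptp(phi[d >= 0.5])) / 2
    print(f"coil length {L*1e3:.0f} mm: -6 dB half-width ~ {half_width:.1f} deg")
```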
Farzanehfar, Vahid; Faizi, Mehrdad; Naderi, Nima; Kobarfard, Farzad
2017-01-01
Dibutyl phthalate (DBP) is a phthalic acid ester and is widely used in polymeric products to make them more flexible. DBP is found in almost every plastic material and is believed to be persistent in the environment. Various analytical methods have been used to measure DBP in different matrices. Considering the ubiquitous nature of DBP, the most important challenges in DBP analyses are the contamination of even analytical grade organic solvents with this compound and the lack of a true blank matrix to construct the calibration line. The standard addition method or the use of artificial matrices reduces the precision and accuracy of the results. In this study, a surrogate analyte approach that is based on using a deuterium labeled analyte (DBP-d4) to construct the calibration line was applied to determine DBP in hexane samples. PMID:28496469
AmO2 Analysis for Analytical Method Testing and Assessment: Analysis Support for AmO2 Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, Kevin John; Bland, Galey Jean; Fulwyler, James Brent
Americium oxide samples will be measured for various analytes to support AmO2 production. The key analytes that are currently requested by the Am production customer at LANL include total Am content, Am isotopics, Pu assay, Pu isotopics, and trace element content including 237Np content. Multiple analytical methods will be utilized depending on the sensitivity, accuracy and precision needs of the Am matrix. Traceability to the National Institute of Standards and Technology (NIST) will be achieved, where applicable, by running NIST traceable quality control materials, given that there are no suitable AmO2 reference materials currently available for the requested analytes. The primary objective is to demonstrate the suitability of actinide analytical chemistry methods to support AmO2 production operations.
Analytical methods for gelatin differentiation from bovine and porcine origins and food products.
Nhari, Raja Mohd Hafidz Raja; Ismail, Amin; Che Man, Yaakob B
2012-01-01
Usage of gelatin in food products has been widely debated for several years, with concerns about the source of the gelatin used, religion, and health. As a result, various analytical methods have been introduced and developed to differentiate whether gelatin is made from porcine or bovine sources. The analytical methods comprise a diverse range of equipment and techniques including spectroscopy, chemical precipitation, chromatography, and immunochemical methods. Each technique can differentiate gelatins to a certain extent, with advantages and limitations. This review focuses on an overview of the analytical methods available for differentiation of bovine and porcine gelatin and of gelatin in food products, so that new method development can be established. © 2011 Institute of Food Technologists®
Modeling landslide recurrence in Seattle, Washington, USA
Salciarini, Diana; Godt, Jonathan W.; Savage, William Z.; Baum, Rex L.; Conversini, Pietro
2008-01-01
To manage the hazard associated with shallow landslides, decision makers need an understanding of where and when landslides may occur. A variety of approaches have been used to estimate the hazard from shallow, rainfall-triggered landslides, such as empirical rainfall threshold methods or probabilistic methods based on historical records. The wide availability of Geographic Information Systems (GIS) and digital topographic data has led to the development of analytic methods for landslide hazard estimation that couple steady-state hydrological models with slope stability calculations. Because these methods typically neglect the transient effects of infiltration on slope stability, results cannot be linked with historical or forecasted rainfall sequences. Estimates of the frequency of conditions likely to cause landslides are critical for quantitative risk and hazard assessments. We present results to demonstrate how a transient infiltration model coupled with an infinite slope stability calculation may be used to assess shallow landslide frequency in the City of Seattle, Washington, USA. A module called CRF (Critical RainFall) for estimating deterministic rainfall thresholds has been integrated in the TRIGRS (Transient Rainfall Infiltration and Grid-based Slope-Stability) model that combines a transient, one-dimensional analytic solution for pore-pressure response to rainfall infiltration with an infinite slope stability calculation. Input data for the extended model include topographic slope, colluvial thickness, initial water-table depth, material properties, and rainfall durations. This approach is combined with a statistical treatment of rainfall using a GEV (General Extreme Value) probabilistic distribution to produce maps showing the shallow landslide recurrence induced, on a spatially distributed basis, as a function of rainfall duration and hillslope characteristics.
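The statistical half of this approach can be sketched as a GEV fit to annual rainfall maxima followed by a recurrence estimate for a critical rainfall; the synthetic annual-maximum series and the critical value (which in the paper would come from the TRIGRS/CRF threshold) are illustrative.

```python
# Sketch: recurrence estimate for a critical rainfall from a GEV fit to annual
# maxima, the statistical step combined with the TRIGRS/CRF threshold in the
# paper. The synthetic series and the critical rainfall value are illustrative.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
annual_max_24h = 40 + 15 * rng.gumbel(size=60)   # synthetic 24-h annual maxima (mm)

shape, loc, scale = genextreme.fit(annual_max_24h)

critical_rain_24h = 85.0                          # mm, e.g. from the infiltration model
p_exceed = genextreme.sf(critical_rain_24h, shape, loc=loc, scale=scale)
print(f"annual exceedance probability = {p_exceed:.3f}")
print(f"return period ~ {1.0 / p_exceed:.0f} years")
```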
Unwin, Ian; Jansen-van der Vliet, Martine; Westenbrink, Susanne; Presser, Karl; Infanger, Esther; Porubska, Janka; Roe, Mark; Finglas, Paul
2016-02-15
The EuroFIR Document and Data Repositories are being developed as accessible collections of source documents, including grey literature, and the food composition data reported in them. These Repositories will contain source information available to food composition database compilers when selecting their nutritional data. The Document Repository was implemented as searchable bibliographic records in the Europe PubMed Central database, which links to the documents online. The Data Repository will contain original data from source documents in the Document Repository. Testing confirmed the FoodCASE food database management system as a suitable tool for the input, documentation and quality assessment of Data Repository information. Data management requirements for the input and documentation of reported analytical results were established, including record identification and method documentation specifications. Document access and data preparation using the Repositories will provide information resources for compilers, eliminating duplicated work and supporting unambiguous referencing of data contributing to their compiled data. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fluorescence analysis of ubiquinone and its application in quality control of medical supplies
NASA Astrophysics Data System (ADS)
Timofeeva, Elvira O.; Gorbunova, Elena V.; Chertov, Aleksandr N.
2017-02-01
Antioxidant issues such as redox potential imbalance in the human body are an important question for modern clinical diagnostics. Implementing fluorescence analysis in the optical diagnostics of ubiquinone, an antioxidant widely distributed in the human body, is one step toward developing a device for clinical diagnostics of redox potential. Fluorescence was recorded with a spectrometer using a narrow-band UV irradiation source (maxima at 287 and 330 nm) as the excitation radiation. Ubiquinone concentrations from 0.25 to 2.5 mmol/l in the explored samples were used for the investigation. The recorded data were processed using correlation analysis and a differential analytical technique. The fourth derivative of the fluorescence spectrum provided the basis for a multicomponent analysis of the solutions. As a clinical diagnostic technique, fluorescence analysis with a processing method that includes differential spectrophotometry is a step forward toward redox potential calculation and quality control in pharmacy for better health care.
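The derivative processing step can be sketched with a Savitzky-Golay filter; the synthetic two-band spectrum, window length, and polynomial order are illustrative choices, not the study's settings.

```python
# Sketch: fourth-derivative processing of a fluorescence spectrum with a
# Savitzky-Golay filter, the differential step used to resolve overlapping bands.
# The synthetic spectrum and filter settings are illustrative.
import numpy as np
from scipy.signal import savgol_filter

wl = np.arange(350, 601, 1.0)                        # emission wavelength (nm)
spectrum = (np.exp(-0.5 * ((wl - 440) / 18) ** 2)    # two overlapping emission bands
            + 0.6 * np.exp(-0.5 * ((wl - 470) / 15) ** 2))
spectrum += 0.002 * np.random.default_rng(3).standard_normal(wl.size)

d4 = savgol_filter(spectrum, window_length=31, polyorder=5, deriv=4, delta=1.0)

# Band maxima appear as sharpened positive peaks in the 4th derivative.
print("strongest 4th-derivative peak near", wl[np.argmax(d4)], "nm")
```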
Mapping hard magnetic recording disks by TOF-SIMS
NASA Astrophysics Data System (ADS)
Spool, A.; Forrest, J.
2008-12-01
Mapping of hard magnetic recording disks by TOF-SIMS was performed both to produce significant analytical results for understanding the disk surface and the head-disk interface in hard disk drives, and as an example of a macroscopic non-rectangular mapping problem for the technique. In this study, maps were obtained by taking discrete samples of the disk surface at set intervals in R and Θ. Because the processes that may affect the disk surface, both in manufacturing and in the disk drive, are typically circumferential in nature, changes in the surface are likely to be blurred in the Θ direction. An algorithm was developed to determine the optimum relative sampling ratio in R and Θ. The results confirm what the analysts' experience suggested: that changes occur more rapidly on disks in the radial direction, and that more sampling in the radial direction is desired. The subsequent use of the statistical methods principal component analysis (PCA) and maximum auto-correlation factors (MAF), and of the inverse distance weighting (IDW) algorithm, is explored.
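A minimal sketch of IDW applied to sparse (R, Θ) samples converted to Cartesian coordinates is shown below; the sampling densities, power parameter, and synthetic signal are illustrative.

```python
# Sketch: inverse distance weighting (IDW) of sparse disk-surface measurements
# sampled on an (R, Theta) grid, interpolated at an arbitrary point. Sampling
# counts, the power parameter, and the synthetic "intensity" are illustrative.
import numpy as np

rng = np.random.default_rng(7)
r = np.linspace(15, 45, 16)                       # mm, denser sampling radially
theta = np.linspace(0, 2 * np.pi, 9)[:-1]         # coarser sampling circumferentially
R, T = np.meshgrid(r, theta)
xs, ys = (R * np.cos(T)).ravel(), (R * np.sin(T)).ravel()
values = np.sin(R / 5).ravel() + 0.05 * rng.standard_normal(R.size)  # fake ion signal

def idw(xq, yq, power=2.0):
    d = np.hypot(xs - xq, ys - yq)
    if d.min() < 1e-9:                            # query point coincides with a sample
        return values[d.argmin()]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

print("interpolated signal at (x=20 mm, y=10 mm):", round(idw(20.0, 10.0), 3))
```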
The Importance of Method Selection in Determining Product Integrity for Nutrition Research.
Mudge, Elizabeth M; Betz, Joseph M; Brown, Paula N
2016-03-01
The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. © 2016 American Society for Nutrition.
Ragan, Eric D; Endert, Alex; Sanyal, Jibonananda; Chen, Jian
2016-01-01
While the primary goal of visual analytics research is to improve the quality of insights and findings, a substantial amount of research in provenance has focused on the history of changes and advances throughout the analysis process. The term, provenance, has been used in a variety of ways to describe different types of records and histories related to visualization. The existing body of provenance research has grown to a point where the consolidation of design knowledge requires cross-referencing a variety of projects and studies spanning multiple domain areas. We present an organizational framework of the different types of provenance information and purposes for why they are desired in the field of visual analytics. Our organization is intended to serve as a framework to help researchers specify types of provenance and coordinate design knowledge across projects. We also discuss the relationships between these factors and the methods used to capture provenance information. In addition, our organization can be used to guide the selection of evaluation methodology and the comparison of study outcomes in provenance research.
A practical guide to big data research in psychology.
Chen, Eric Evan; Wojcik, Sean P
2016-12-01
The massive volume of data that now covers a wide variety of human behaviors offers researchers in psychology an unprecedented opportunity to conduct innovative theory- and data-driven field research. This article is a practical guide to conducting big data research, covering data management, acquisition, processing, and analytics (including key supervised and unsupervised learning data mining methods). It is accompanied by walkthrough tutorials on data acquisition, text analysis with latent Dirichlet allocation topic modeling, and classification with support vector machines. Big data practitioners in academia, industry, and the community have built a comprehensive base of tools and knowledge that makes big data research accessible to researchers in a broad range of fields. However, big data research does require knowledge of software programming and a different analytical mindset. For those willing to acquire the requisite skills, innovative analyses of unexpected or previously untapped data sources can offer fresh ways to develop, test, and extend theories. When conducted with care and respect, big data research can become an essential complement to traditional research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
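In the spirit of the article's support-vector-machine walkthrough, a minimal text-classification pipeline (TF-IDF features plus a linear SVM) looks like the sketch below; the tiny corpus and labels are toy stand-ins for a real dataset.

```python
# Sketch: a minimal text-classification pipeline (TF-IDF features + linear SVM),
# in the spirit of the article's SVM walkthrough. Corpus and labels are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["great day at the park", "terrible traffic again", "lovely sunny weather",
         "awful delays and noise", "what a wonderful concert", "horrible waiting times"]
labels = [1, 0, 1, 0, 1, 0]                     # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["wonderful quiet morning", "awful noisy traffic"]))
```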
Matched spectral filter based on reflection holograms for analyte identification.
Cao, Liangcai; Gu, Claire
2009-12-20
A matched spectral filter set that provides automatic preliminary analyte identification is proposed and analyzed. Each matched spectral filter in the set containing the multiple spectral peaks corresponding to the Raman spectrum of a substance is capable of collecting the specified spectrum into the detector simultaneously. The filter set is implemented by multiplexed volume holographic reflection gratings. The fabrication of a matched spectral filter in an Fe:LiNbO(3) crystal is demonstrated to match the Raman spectrum of the sample Rhodamine 6G (R6G). An interference alignment method is proposed and used in the fabrication to ensure that the multiplexed gratings are in the same direction at a high angular accuracy of 0.0025 degrees. Diffused recording beams are used to control the bandwidth of the spectral peaks. The reflection spectrum of the filter is characterized using a modified Raman spectrometer. The result of the filter's reflection spectrum matches that of the sample R6G. A library of such matched spectral filters will facilitate a fast detection with a higher sensitivity and provide a capability for preliminary molecule identification.
Manickum, Thavrin; John, Wilson
2015-07-01
The availability of national test centers to offer a routine service for analysis and quantitation of some selected steroid hormones [natural estrogens (17-β-estradiol, E2; estrone, E1; estriol, E3), synthetic estrogen (17-α-ethinylestradiol, EE2), androgen (testosterone), and progestogen (progesterone)] in wastewater matrix was investigated; corresponding internationally used chemical- and immuno-analytical test methods were reviewed. The enzyme-linked immunosorbent assay (ELISA) (immuno-analytical technique) was also assessed for its suitability as a routine test method to quantitate the levels of these hormones at a sewage/wastewater treatment plant (WTP) (Darvill, Pietermaritzburg, South Africa), over a 2-year period. The method performance and other relevant characteristics of the immuno-analytical ELISA method were compared to the conventional chemical-analytical methodology, like gas/liquid chromatography-mass spectrometry (GC/LC-MS), and GC-LC/tandem mass spectrometry (MSMS), for quantitation of the steroid hormones in wastewater and environmental waters. The national immuno-analytical ELISA technique was found to be sensitive (LOQ 5 ng/L, LOD 0.2-5 ng/L), accurate (mean recovery 96%), precise (RSD 7-10%), and cost-effective for screening and quantitation of these steroid hormones in wastewater and environmental water matrix. A survey of the most current international literature indicates a fairly equal use of the LC-MS/MS, GC-MS/MS (chemical-analytical), and ELISA (immuno-analytical) test methods for screening and quantitation of the target steroid hormones in both water and wastewater matrix. Internationally, the observed sensitivity, based on LOQ (ng/L), for the steroid estrogens E1, E2, EE2, is, in decreasing order: LC-MSMS (0.08-9.54) > GC-MS (1) > ELISA (5) (chemical-analytical > immuno-analytical). At the national level, the routine, unoptimized chemical-analytical LC-MSMS method was found to lack the required sensitivity for meeting environmental requirements for steroid hormone quantitation. Further optimization of the sensitivity of the chemical-analytical LC-tandem mass spectrometry methods, especially for wastewater screening, in South Africa is required. Risk assessment studies showed that it was not practical to propose standards or allowable limits for the steroid estrogens E1, E2, EE2, and E3; the use of predicted-no-effect concentration values of the steroid estrogens appears to be appropriate for use in their risk assessment in relation to aquatic organisms. For raw water sources, drinking water, raw and treated wastewater, the use of bioassays, with trigger values, is a useful screening tool option to decide whether further examination of specific endocrine activity may be warranted, or whether concentrations of such activity are of low priority, with respect to health concerns in the human population. The achievement of improved quantitation limits for immuno-analytical methods, like ELISA, used for compound quantitation, and standardization of the method for measuring E2 equivalents (EEQs) used for biological activity (endocrine: e.g., estrogenic) are some areas for future EDC research.
Malhat, Farag; Kasiotis, Konstantinos M; Shalaby, Shehata
2018-02-05
Cyantraniliprole is an anthranilic diamide insecticide, belonging to the ryanoid class, with a broad range of applications against several pests. In the presented work, a reliable analytical technique employing high-performance liquid chromatography coupled with a photodiode array detector (HPLC-DAD) for analyzing cyantraniliprole residues in tomato was developed. The method was then applied to field-incurred tomato samples collected after applications under open field conditions. The latter aimed to ensure the safe application of cyantraniliprole to tomato and to contribute the derived residue data to risk assessment under field conditions. Sample preparation involved a single-step extraction with acetonitrile and sodium chloride for partitioning. The extract was purified using florisil as a cleanup reagent. The developed method was further evaluated by comparing the analytical results with those obtained using the QuEChERS technique. The novel method outperformed QuEChERS regarding matrix interferences in the analysis, while it met all guideline criteria. It showed excellent linearity over the assayed concentration range and yielded satisfactory recoveries in the range of 88.9 to 96.5%. The degradation half-life of cyantraniliprole was determined to be 2.6 days. Based on the Codex MRL, the pre-harvest interval (PHI) for cyantraniliprole on tomato was 3 days after treatment at the recommended dose. To our knowledge, the present work provides the first record of PHI determination for cyantraniliprole in tomato under open field conditions in Egypt and the broader Mediterranean region.
2017-06-16
Quantifying Acoustic Impacts on Marine Mammals and Sea Turtles: Methods and Analytical Approach for Phase III Training and Testing. Blackstock, Sarah A.; et al. December 2017. The report addresses the Navy's Phase III Study Areas, as described in each Environmental Impact Statement/Overseas Environmental Impact Statement, and describes the methods used.
Differential homogeneous immunosensor device
Malmros, Mark K.; Gulbinski, III, Julian
1990-04-10
There is provided a novel method of testing for the presence of an analyte in a fluid suspected of containing the same. In this method, in the presence of the analyte, a substance capable of modifying certain characteristics of the substrate is bound to the substrate, and the change in those characteristics is measured. While the method may be modified for carrying out quantitative differential analyses, it eliminates the need for washing the analyte from the substrate, which is characteristic of prior art methods.
Niu, Xun; Terekhov, Alexander V.; Latash, Mark L.; Zatsiorsky, Vladimir M.
2013-01-01
The goal of the research is to reconstruct the unknown cost (objective) function(s) presumably used by the neural controller for sharing the total force among individual fingers in multi-finger prehension. The cost function was determined from experimental data by applying the recently developed Analytical Inverse Optimization (ANIO) method (Terekhov et al 2010). The core of the ANIO method is the Theorem of Uniqueness that specifies conditions for unique (with some restrictions) estimation of the objective functions. In the experiment, subjects (n=8) grasped an instrumented handle and maintained it at rest in the air with various external torques, loads, and target grasping forces applied to the object. The experimental data recorded from 80 trials showed a tendency to lie on a 2-dimensional hyperplane in the 4-dimensional finger-force space. Because the constraints in each trial were different, such a propensity is a manifestation of a neural mechanism (not the task mechanics). In agreement with the Lagrange principle for the inverse optimization, the plane of experimental observations was close to the plane resulting from the direct optimization. The latter plane was determined using the ANIO method. The unknown cost function was reconstructed successfully for each performer, as well as for the group data. The cost functions were found to be quadratic with non-zero linear terms. The cost functions obtained with the ANIO method yielded more accurate results than other optimization methods. The ANIO method has an evident potential for addressing the problem of optimization in motor control. PMID:22104742
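The planarity observation above (finger-force data concentrating on a 2-D plane in 4-D force space) can be checked with a few lines of linear algebra. The Python sketch below is not the authors' ANIO implementation; it only shows the singular-value planarity check that such an analysis relies on, applied to synthetic data.

import numpy as np

def planarity(forces):
    # SVD of centered finger-force data (trials x 4). The fraction of variance
    # captured by the two leading singular directions tells how closely the
    # observations lie on a 2-D plane in 4-D force space.
    centered = forces - forces.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    return var[:2].sum() / var.sum()

# Synthetic example: 80 "trials" generated on a hypothetical plane plus small noise.
rng = np.random.default_rng(1)
basis = rng.normal(size=(2, 4))
coords = rng.normal(size=(80, 2))
forces = coords @ basis + 0.01 * rng.normal(size=(80, 4)) + np.array([4.0, 3.0, 2.0, 1.0])
print(f"variance captured by best-fit plane: {planarity(forces):.3f}")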
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, D.W.; Heinrich, R.R.; Graczyk, D.G.
The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for fiscal year 1988 (October 1987 through September 1988). The Analytical Chemistry Laboratory is a full-cost recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. In addition, the ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, D.W.; Heinrich, R.R.; Graczyk, D.G.
The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year 1989 (October 1988 through September 1989). The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. In addition, the ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques.
NASA Astrophysics Data System (ADS)
Semenistaya, E. N.; Virus, E. D.; Rodchenkov, G. M.
2009-04-01
The possibility of selective determination of testosterone and epitestosterone glucuronides in urine by high-performance liquid chromatography/high-resolution mass spectrometry using solid-phase microextraction on a MEPS cartridge was studied. The effect of the biological matrix on the spectra of conjugated steroids can be taken into account by using the spectra of conjugates recorded for urine samples after hydrolysis as reference spectra. The conditions of fragmentation in the ion source were optimized for separate analytes. This method was used for analyzing real samples with different testosterone/epitestosterone ratios. Variations in conjugate contents and qualitative changes in the steroid profile of endogenous compounds were observed.
Overview of Piezoelectric Biosensors, Immunosensors and DNA Sensors and Their Applications.
Pohanka, Miroslav
2018-03-19
Piezoelectric biosensors are a group of analytical devices working on the principle of recording an affinity interaction. A piezoelectric platform or piezoelectric crystal is a sensor element whose oscillation changes in response to mass bound on the crystal surface. In this review, biosensors having their surface modified with an antibody or antigen, with a molecularly imprinted polymer, with genetic information such as single-stranded DNA, and biosensors with bound receptors of organic or biochemical origin, are presented and discussed. These recognition elements are frequently combined with the use of nanoparticles, and such applications are also introduced. An overview of the current literature is given and the methods presented are commented upon.
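For the mass-to-frequency transduction mentioned above, the Sauerbrey relation is the usual first-order model for quartz crystal microbalance sensors. The review abstract does not state this relation, so the Python lines below are only an illustration of the standard formula, with typical quartz constants assumed.

import math

RHO_Q = 2.648        # density of quartz, g/cm^3 (typical value)
MU_Q = 2.947e11      # shear modulus of quartz, g/(cm*s^2) (typical value)

def sauerbrey_shift(f0_hz, delta_mass_g, area_cm2):
    # Frequency shift (Hz) for a thin, rigid, evenly distributed mass bound to the crystal.
    return -2.0 * f0_hz ** 2 * delta_mass_g / (area_cm2 * math.sqrt(RHO_Q * MU_Q))

# 1 microgram bound on a 1 cm^2, 5 MHz crystal gives roughly a -57 Hz shift.
print(f"{sauerbrey_shift(5e6, 1e-6, 1.0):.1f} Hz")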
Time Analysis of Building Dynamic Response Under Seismic Action. Part 1: Theoretical Propositions
NASA Astrophysics Data System (ADS)
Ufimtcev, E. M.
2017-11-01
The first part of the article presents the main provisions of the analytical approach - the time analysis method (TAM) - developed for calculating the elastic dynamic response of rod structures treated as discrete dissipative systems (DDS) and based on the investigation of the characteristic matrix quadratic equation. The assumptions adopted in constructing the mathematical model of structural oscillations, as well as the features of calculating and recording seismic forces based on earthquake accelerogram data, are given. A system of resolving equations is given to determine the nodal (kinematic and force) response parameters as well as the stress-strain state (SSS) parameters of the system's rods.
Nanopore with Transverse Nanoelectrodes for Electrical Characterization and Sequencing of DNA
Gierhart, Brian C.; Howitt, David G.; Chen, Shiahn J.; Zhu, Zhineng; Kotecki, David E.; Smith, Rosemary L.; Collins, Scott D.
2009-01-01
A DNA sequencing device which integrates transverse conducting electrodes for the measurement of electrode currents during DNA translocation through a nanopore has been nanofabricated and characterized. A focused electron beam (FEB) milling technique, capable of creating features on the order of 1 nm in diameter, was used to create the nanopore. The device was characterized electrically using gold nanoparticles as an artificial analyte with both DC and AC measurement methods. Single nanoparticle/electrode interaction events were recorded. A low-noise, high-speed transimpedance current amplifier for the detection of nano to picoampere currents at microsecond time scales was designed, fabricated and tested for future integration with the nanopore device. PMID:19584949
NASA Technical Reports Server (NTRS)
Yalowitz, Jeffrey S.; Schroer, Michael A.; Dickson, John E., Jr.
1992-01-01
This final report describes work performed by SRS Technologies for the NASA Marshall Space Flight Center under Contract NAS8-39077, entitled 'Integrated Receiver-Decoder Dropout Study'. The purpose of the study was to determine causes of signal fading effects on ultra-high-frequency (UHF) range safety transmissions to the Space Shuttle during flyout. Of particular interest were deep fades observed at the External Tank (ET) Integrated Receiver-Decoder (IRD) during the flyout interval between solid rocket booster separation and ET separation. Analytical and simulation methods were employed in this study to assess observations captured in flight telemetry data records. Conclusions based on the study are presented in this report, and recommendations are given for future experimental validation of the results.
Ray, Chris; Saracco, James; Jenkins, Kurt J.; Huff, Mark; Happe, Patricia J.; Ransom, Jason I.
2017-01-01
During 2015-2016, we completed development of a new analytical framework for landbird population monitoring data from the National Park Service (NPS) North Coast and Cascades Inventory and Monitoring Network (NCCN). This new tool for analysis combines several recent advances in modeling population status and trends using point-count data and is designed to supersede the approach previously slated for analysis of trends in the NCCN and other networks, including the Sierra Nevada Network (SIEN). Advances supported by the new model-based approach include 1) the use of combined data on distance and time of detection to estimate detection probability without assuming perfect detection at zero distance, 2) seamless accommodation of variation in sampling effort and missing data, and 3) straightforward estimation of the effects of downscaled climate and other local habitat characteristics on spatial and temporal trends in landbird populations. No changes in the current field protocol are necessary to facilitate the new analyses. We applied several versions of the new model to data from each of 39 species recorded in the three mountain parks of the NCCN, estimating trends and climate relationships for each species during 2005-2014. Our methods and results are also reported in a manuscript in revision for the journal Ecosphere (hereafter, Ray et al.). Here, we summarize the methods and results outlined in depth by Ray et al., discuss benefits of the new analytical framework, and provide recommendations for its application to synthetic analyses of long-term data from the NCCN and SIEN. All code necessary for implementing the new analyses is provided within the Appendices to this report, in the form of fully annotated scripts written in the open-access programming languages R and JAGS.
Record statistics of financial time series and geometric random walks
NASA Astrophysics Data System (ADS)
Sabir, Behlool; Santhanam, M. S.
2014-09-01
The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of selected stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of geometric random walk series are in good agreement with those obtained from empirical stock data.
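Record ages of a geometric random walk, as studied above, are straightforward to reproduce in simulation. The Python sketch below generates a geometric random walk with assumed Gaussian log-returns and extracts the ages of its upper records; the statistical analysis of the resulting distribution is left out.

import numpy as np

def record_ages(series):
    # Times between successive upper records; the final age is censored by the series end.
    record_times = [0]
    current_max = series[0]
    for t, x in enumerate(series[1:], start=1):
        if x > current_max:
            current_max = x
            record_times.append(t)
    return np.diff(record_times + [len(series)])

# Geometric random walk: S_t = S_0 * exp(cumulative sum of Gaussian log-returns).
rng = np.random.default_rng(42)
log_returns = rng.normal(loc=0.0, scale=0.01, size=10_000)
prices = 100.0 * np.exp(np.cumsum(log_returns))
ages = record_ages(prices)
print(len(ages), "records; longest record age:", ages.max())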
Visual analytics for aviation safety: A collaborative approach to sensemaking
NASA Astrophysics Data System (ADS)
Wade, Andrew
Visual analytics, the "science of analytical reasoning facilitated by interactive visual interfaces", is more than just visualization. Understanding the human reasoning process is essential for designing effective visualization tools and providing correct analyses. This thesis describes the evolution, application and evaluation of a new method for studying analytical reasoning that we have labeled paired analysis. Paired analysis combines subject matter experts (SMEs) and tool experts (TEs) in an analytic dyad, here used to investigate aircraft maintenance and safety data. The method was developed and evaluated using interviews, pilot studies and analytic sessions during an internship at the Boeing Company. By enabling a collaborative approach to sensemaking that can be captured by researchers, paired analysis yielded rich data on human analytical reasoning that can be used to support analytic tool development and analyst training. Keywords: visual analytics, paired analysis, sensemaking, Boeing, collaborative analysis.
Pesticide manufacturers must develop and submit analytical methods for their pesticide products to support registration of their products under FIFRA. Learn about these methods as well as SOPs for testing of antimicrobial products against three organisms.
Huang, Xiaoke; Zhao, Ye; Yang, Jing; Zhang, Chong; Ma, Chao; Ye, Xinyue
2016-01-01
We propose TrajGraph, a new visual analytics method, for studying urban mobility patterns by integrating graph modeling and visual analysis with taxi trajectory data. A special graph is created to store and manifest real traffic information recorded by taxi trajectories over city streets. It conveys urban transportation dynamics which can be discovered by applying graph analysis algorithms. To support interactive, multiscale visual analytics, a graph partitioning algorithm is applied to create region-level graphs which have smaller size than the original street-level graph. Graph centralities, including PageRank and betweenness, are computed to characterize the time-varying importance of different urban regions. The centralities are visualized by three coordinated views including a node-link graph view, a map view and a temporal information view. Users can interactively examine the importance of streets to discover and assess city traffic patterns. We have implemented a fully working prototype of this approach and evaluated it using massive taxi trajectories of Shenzhen, China. TrajGraph's capability in revealing the importance of city streets was evaluated by comparing the calculated centralities with the subjective evaluations from a group of drivers in Shenzhen. Feedback from a domain expert was collected. The effectiveness of the visual interface was evaluated through a formal user study. We also present several examples and a case study to demonstrate the usefulness of TrajGraph in urban transportation analysis.
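PageRank and betweenness centrality, the two measures named above, are available in standard graph libraries. The sketch below is not the TrajGraph system; it only shows the centrality computations on a toy weighted street graph in Python with networkx, where the hypothetical edge weights stand for taxi traffic volume (whether a weight should be treated as a distance or a strength is a modeling choice).

import networkx as nx

# Toy street graph: nodes are intersections or regions, weights are taxi trip counts.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 120), ("B", "C", 90), ("C", "D", 40),
    ("B", "D", 60), ("D", "E", 30), ("A", "E", 10),
])

# Time-varying importance would come from recomputing these per time window.
pagerank = nx.pagerank(G, weight="weight")
betweenness = nx.betweenness_centrality(G, weight="weight")  # weights read as distances here
for node in G:
    print(node, round(pagerank[node], 3), round(betweenness[node], 3))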
SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal
Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.
2017-01-01
Objective This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recording of brain activity. Even though it is widely accepted that statistical characteristics of iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to subject-specific nature of data-analytic modeling. Methods Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy. The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance The proposed system uses subject-specific modeling and unbalanced training data. This system also utilizes three different time scales during training and testing stages. PMID:27362758
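The abstract above does not spell out its feature set or pre- and post-processing, so the Python sketch below is only a generic stand-in: an RBF-kernel SVM with class weighting applied to synthetic, unbalanced interictal/preictal segments, with a made-up log-power feature per channel.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def segment_features(segments):
    # Log mean power per channel for segments shaped (n_segments, n_channels, n_samples);
    # a placeholder for the spectral features typically used with iEEG.
    return np.log(np.mean(segments ** 2, axis=2) + 1e-12)

# Synthetic, unbalanced data: many interictal (0) and few preictal (1) segments.
rng = np.random.default_rng(0)
interictal = rng.normal(0.0, 1.0, size=(500, 16, 400))
preictal = rng.normal(0.0, 1.4, size=(40, 16, 400))   # higher variance as a toy difference
X = segment_features(np.concatenate([interictal, preictal]))
y = np.concatenate([np.zeros(500), np.ones(40)])

# class_weight="balanced" compensates for the unbalanced training data.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))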
Semi-analytic valuation of stock loans with finite maturity
NASA Astrophysics Data System (ADS)
Lu, Xiaoping; Putri, Endah R. M.
2015-10-01
In this paper we study stock loans of finite maturity with different dividend distributions semi-analytically using the analytical approximation method in Zhu (2006). Stock loan partial differential equations (PDEs) are established under the Black-Scholes framework. The Laplace transform method is used to solve the PDEs. The optimal exit price and stock loan value are obtained in Laplace space. Values in the original time space are recovered by numerical Laplace inversion. To demonstrate the efficiency and accuracy of our semi-analytic method, several examples are presented, and the results are compared with those calculated using existing methods. We also present a calculation of the fair service fee charged by the lender for different loan parameters.
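The abstract does not say which numerical Laplace inversion is used; the Gaver-Stehfest algorithm is one common choice and is sketched below in Python as an assumption-level illustration, checked against a transform pair with a known inverse rather than against a stock loan problem.

import math

def stehfest_coefficients(n):
    # Gaver-Stehfest weights V_k for even n.
    half = n // 2
    weights = []
    for k in range(1, n + 1):
        total = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            total += (j ** half * math.factorial(2 * j)
                      / (math.factorial(half - j) * math.factorial(j)
                         * math.factorial(j - 1) * math.factorial(k - j)
                         * math.factorial(2 * j - k)))
        weights.append((-1) ** (k + half) * total)
    return weights

def invert_laplace(F, t, n=12):
    # Approximate f(t) from its Laplace transform F(s) by the Stehfest formula.
    a = math.log(2.0) / t
    V = stehfest_coefficients(n)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, n + 1))

# Check: F(s) = 1/(s + 1) has the inverse f(t) = exp(-t).
print(invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0), math.exp(-1.0))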
Methods of use for sensor based fluid detection devices
NASA Technical Reports Server (NTRS)
Lewis, Nathan S. (Inventor)
2001-01-01
Methods of use and devices for detecting analyte in fluid. A system for detecting an analyte in a fluid is described comprising a substrate having a sensor comprising a first organic material and a second organic material, where the sensor has a response to permeation by an analyte. A detector is operatively associated with the sensor. Further, a fluid delivery appliance is operatively associated with the sensor. The sensor device includes information storage and processing equipment operably connected with the device. This equipment compares a response from the detector with a stored ideal response to detect the presence of analyte. An integrated system for detecting an analyte in a fluid is also described in which the sensing device, detector, information storage and processing device, and fluid delivery device are incorporated in a substrate. Methods of use for the above system are also described, in which the first organic material and second organic material are sensed and the analyte is detected with a detector operatively associated with the sensor. The method provides for a device that delivers fluid to the sensor and measures the response of the sensor with the detector. Further, the response is compared to a stored ideal response for the analyte to determine the presence of the analyte. In different embodiments, the fluid measured may be a gaseous fluid, a liquid, or a fluid extracted from a solid. Methods of fluid delivery for each embodiment are accordingly provided.
Document is intended to provide general guidelines for use by EPA and EPA-contracted laboratories when disposing of samples and associated analytical waste following use of the analytical methods listed in SAM.
ERIC Educational Resources Information Center
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April
2014-01-01
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
The NIH analytical methods and reference materials program for dietary supplements.
Betz, Joseph M; Fisher, Kenneth D; Saldanha, Leila G; Coates, Paul M
2007-09-01
Quality of botanical products is a great uncertainty that consumers, clinicians, regulators, and researchers face. Definitions of quality abound, and include specifications for sanitation, adventitious agents (pesticides, metals, weeds), and content of natural chemicals. Because dietary supplements (DS) are often complex mixtures, they pose analytical challenges and method validation may be difficult. In response to product quality concerns and the need for validated and publicly available methods for DS analysis, the US Congress directed the Office of Dietary Supplements (ODS) at the National Institutes of Health (NIH) to accelerate an ongoing methods validation process, and the Dietary Supplements Methods and Reference Materials Program was created. The program was constructed from stakeholder input and incorporates several federal procurement and granting mechanisms in a coordinated and interlocking framework. The framework facilitates validation of analytical methods, analytical standards, and reference materials.
Manso, J; García-Barrera, T; Gómez-Ariza, J L; González, A G
2014-02-01
The present paper describes a method based on the extraction of analytes by multiple hollow fibre liquid-phase microextraction and detection by ion-trap mass spectrometry and electron capture detectors after gas chromatographic separation. The limits of detection are in the range of 0.13-0.67 μg kg(-1), five orders of magnitude lower than those reached with the European Commission Official method of analysis, with three orders of magnitude of linear range (from the quantification limits to 400 μg kg(-1) for all the analytes) and recoveries in fortified olive oils in the range of 78-104 %. The main advantages of the analytical method are the absence of sample carryover (due to the disposable nature of the membranes), high enrichment factors in the range of 79-488, high throughput and low cost. The repeatability of the analytical method ranged from 8 to 15 % for all the analytes, showing a good performance.
Han, Thomas Yong-Jin; Valdez, Carlos A; Olson, Tammy Y; Kim, Sung Ho; Satcher, Jr., Joe H
2015-04-21
In one embodiment, a system includes a plurality of metal nanoparticles functionalized with a plurality of organic molecules tethered thereto, wherein the plurality of organic molecules preferentially interact with one or more analytes when placed in proximity therewith. According to another embodiment, a method for detecting analytes includes contacting a fluid having one or more analytes of interest therein with a plurality of metal nanoparticles, each metal nanoparticle having a plurality of organic molecules tethered thereto, and detecting Raman scattering from an analyte of interest from the fluid, the analyte interacting with one or more of the plurality of organic molecules. In another embodiment, a method includes chemically modifying a plurality of cyclodextrin molecules at a primary hydroxyl moiety to create a chemical handle, and tethering the plurality of cyclodextrin molecules to a metal nanoparticle using the chemical handle. Other systems and methods for detecting analytes are also described.
Numerical and Experimental Studies on Impact Loaded Concrete Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saarenheimo, Arja; Hakola, Ilkka; Karna, Tuomo
2006-07-01
An experimental set-up has been constructed for medium scale impact tests. The main objective of this effort is to provide data for the calibration and verification of numerical models of a loading scenario where an aircraft impacts against a nuclear power plant. One goal is to develop and put into use numerical methods for predicting the response of reinforced concrete structures to impacts of deformable projectiles that may contain combustible liquid ('fuel'). Loading and structural behaviour, such as the collapse mechanism and damage grade, are predicted by simple analytical methods and by the non-linear FE method. In the so-called Riera method the behavior of the missile material is assumed to be rigid plastic or rigid visco-plastic. Using elastic plastic and elastic visco-plastic material models, calculations are carried out with the ABAQUS/Explicit finite element code, assuming an axisymmetric deformation mode for the missile. With both methods, typically, the impact force time history, the velocity of the missile rear end and the missile shortening during the impact were recorded for comparisons. (authors)
SAM Companion Documents and Sample Collection Procedures provide information intended to complement the analytical methods listed in Selected Analytical Methods for Environmental Remediation and Recovery (SAM).
Differential homogeneous immunosensor device
Malmros, M.K.; Gulbinski, J. III.
1990-04-10
There is provided a novel method of testing for the presence of an analyte in a fluid suspected of containing the same. In this method, in the presence of the analyte, a substance capable of modifying certain characteristics of the substrate is bound to the substrate, and the change in those characteristics is measured. While the method may be modified for carrying out quantitative differential analyses, it eliminates the need for washing the analyte from the substrate, which is characteristic of prior art methods. 12 figs.
Electrocardiogram Scanner-System Requirements
DOT National Transportation Integrated Search
1973-03-01
An experimental and analytical study has been conducted to establish the feasibility for scanning and digitizing electrocardiogram records. The technical requirements and relative costs for two systems are discussed herein. One is designed to automat...
Applying an analytical method to study neutron behavior for dosimetry
NASA Astrophysics Data System (ADS)
Shirazi, S. A. Mousavi
2016-12-01
In this investigation, a new dosimetry process is studied by applying an analytical method. This novel process is associated with human liver tissue. Human liver tissue has a composition including water, glycogen, and other constituents. In this study, the organic compound materials of the liver are decomposed into their constituent elements based upon the mass percentage and density of every element. The absorbed doses are computed by the analytical method for all constituent elements of liver tissue. This analytical method is introduced using mathematical equations based on neutron behavior and neutron collision rules. The results show that the absorbed doses converge for neutron energies below 15 MeV. This method can be applied to study the interaction of neutrons in other tissues and to estimate the absorbed dose for a wide range of neutron energies.
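The element-wise bookkeeping described above can be written as a one-line sum: absorbed dose = fluence x sum over elements of (mass fraction x per-element dose coefficient). The Python lines below only illustrate that bookkeeping; the mass fractions are rough soft-tissue placeholders and the per-element coefficients and fluence are purely invented numbers, not tabulated data.

# dose = fluence * sum_i( mass_fraction_i * kerma_coefficient_i ); all values below are placeholders.
mass_fractions = {"H": 0.10, "C": 0.15, "N": 0.03, "O": 0.71, "other": 0.01}
kerma_coeff_gy_cm2 = {"H": 5.0e-11, "C": 5.0e-12, "N": 6.0e-12, "O": 6.0e-12, "other": 5.0e-12}
fluence_per_cm2 = 1.0e9    # assumed neutron fluence

dose_gy = fluence_per_cm2 * sum(
    mass_fractions[el] * kerma_coeff_gy_cm2[el] for el in mass_fractions
)
print(f"absorbed dose (illustrative only): {dose_gy:.2e} Gy")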
Deployment of Analytics into the Healthcare Safety Net: Lessons Learned
Hartzband, David; Jacobs, Feygele
2016-01-01
Background As payment reforms shift healthcare reimbursement toward value-based payment programs, providers need the capability to work with data of greater complexity, scope and scale. This will in many instances necessitate a change in understanding of the value of data, and the types of data needed for analysis to support operations and clinical practice. It will also require the deployment of different infrastructure and analytic tools. Community health centers, which serve more than 25 million people and together form the nation's largest single source of primary care for medically underserved communities and populations, are expanding and will need to optimize their capacity to leverage data as new payer and organizational models emerge. Methods To better understand existing capacity and help organizations plan for the strategic and expanded uses of data, a project was initiated that deployed contemporary, Hadoop-based, analytic technology into several multi-site community health centers (CHCs) and a primary care association (PCA) with an affiliated data warehouse supporting health centers across the state. An initial data quality exercise was carried out after deployment, in which a number of analytic queries were executed using both the existing electronic health record (EHR) applications and, in parallel, the analytic stack. Each organization carried out the EHR analysis using the definitions typically applied for routine reporting. The analysis deploying the analytic stack was carried out using the common definitions established for the Uniform Data System (UDS) by the Health Resources and Services Administration. In addition, interviews with health center leadership and staff were completed to understand the context for the findings. Results The analysis uncovered many challenges and inconsistencies with respect to the definition of core terms (patient, encounter, etc.), data formatting, and missing, incorrect and unavailable data. At a population level, apparent underreporting of a number of diagnoses, specifically obesity and heart disease, was also evident in the results of the data quality exercise, for both the EHR-derived and stack analytic results. Conclusion Data awareness, that is, an appreciation of the importance of data integrity, data hygiene and the potential uses of data, needs to be prioritized and developed by health centers and other healthcare organizations if analytics are to be used in an effective manner to support strategic objectives. While this analysis was conducted exclusively with community health center organizations, its conclusions and recommendations may be more broadly applicable. PMID:28210424
Young, Brian; King, Jonathan L; Budowle, Bruce; Armogida, Luigi
2017-01-01
Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described.
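One simple way to turn background-noise statistics into an analytical threshold, sketched below in Python, is mean plus k standard deviations of the noise read counts; this is an assumption for illustration and not necessarily the formulation developed in the paper above.

import statistics

def analytical_threshold(noise_counts, k=3.0):
    # Threshold = mean + k * SD of background-noise read counts (one common formulation).
    return statistics.mean(noise_counts) + k * statistics.stdev(noise_counts)

# Hypothetical per-locus noise reads observed in negative or reference samples.
noise = [3, 5, 2, 7, 4, 6, 3, 5, 4, 8]
print(analytical_threshold(noise))   # calls above this count would be treated as signal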
West, Jamie; Atherton, Jennifer; Costelloe, Seán J; Pourmahram, Ghazaleh; Stretton, Adam; Cornes, Michael
2017-01-01
Preanalytical errors have previously been shown to contribute a significant proportion of errors in laboratory processes and contribute to a number of patient safety risks. Accreditation against ISO 15189:2012 requires that laboratory Quality Management Systems consider the impact of preanalytical processes in areas such as the identification and control of non-conformances, continual improvement, internal audit and quality indicators. Previous studies have shown that there is a wide variation in the definition, repertoire and collection methods for preanalytical quality indicators. The International Federation of Clinical Chemistry Working Group on Laboratory Errors and Patient Safety has defined a number of quality indicators for the preanalytical stage, and the adoption of harmonized definitions will support interlaboratory comparisons and continual improvement. There are a variety of data collection methods, including audit, manual recording processes, incident reporting mechanisms and laboratory information systems. Quality management processes such as benchmarking, statistical process control, Pareto analysis and failure mode and effect analysis can be used to review data and should be incorporated into clinical governance mechanisms. In this paper, The Association for Clinical Biochemistry and Laboratory Medicine PreAnalytical Specialist Interest Group review the various data collection methods available. Our recommendation is the use of the laboratory information management systems as a recording mechanism for preanalytical errors as this provides the easiest and most standardized mechanism of data capture.
Analysis of Antarctic glacigenic sediment provenance through geochemical and petrologic applications
NASA Astrophysics Data System (ADS)
Licht, Kathy J.; Hemming, Sidney R.
2017-05-01
The number of provenance studies of glacigenic sediments in Antarctica has increased dramatically over the past decade, providing an enhanced understanding of ice sheet history and dynamics, along with the broader geologic history. Such data have been used to assess glacial erosion patterns at the catchment scale, flow path reconstructions over a wide range of scales, and ice sheet fluctuations indicated by iceberg rafted debris in circumantarctic glacial marine sediments. It is notable that even though most of the bedrock of the continent is ice covered and inaccessible, provenance data can provide such valuable information about Antarctic ice and can even be used to infer buried rock types along with their geo- and thermochronologic history. Glacigenic sediments provide a broader array of provenance analysis opportunities than any other sediment type because of their wide range of grain sizes, and in this paper we review methods and examples from all size fractions that have been applied to the Antarctic glacigenic sedimentary record. Interpretations of these records must take careful consideration of the choice of analytical methods, uneven patterns of erosion, and spatial variability in sediment transport and rock types, which all may lead to a preferential identification of different elements of sources in the provenance analyses. Because of this, we advocate a multi-proxy approach and highlight studies that demonstrate the value of selecting complementary provenance methods.
Rui, Zeng; Rong-Zheng, Yue; Hong-Yu, Qiu; Jing, Zeng; Xue-Hong, Wan; Chuan, Zuo
2015-01-01
Background Problem-based learning (PBL) is a pedagogical approach based on problems. Specifically, it is a student-centered, problem-oriented teaching method that is conducted through group discussions. The aim of our study is to explore the effects of PBL in diagnostic teaching for Chinese medical students. Methods A prospective, randomized controlled trial was conducted. Eighty junior clinical medical students were randomly divided into two groups. Forty students were allocated to a PBL group and another 40 students were allocated to a control group using the traditional teaching method. Their scores in the practice skills examination, ability to write and analyze medical records, and results on the stage test and behavior observation scale were compared. A questionnaire was administered in the PBL group after class. Results There were no significant differences in scores for writing medical records, content of interviewing, physical examination skills, and the stage test between the two groups. However, compared with the control group, the PBL group had significantly higher scores on case analysis, interviewing skills, and the behavioral observation scale. Conclusion The questionnaire survey revealed that PBL could improve interest in learning, cultivate an ability to study independently, improve communication and analytical skills, and foster good team cooperation. However, there were some shortcomings in the systematic imparting of knowledge. PBL has an obvious advantage in teaching diagnostic practice. PMID:25848334
In vivo recording of aerodynamic force with an aerodynamic force platform: from drones to birds.
Lentink, David; Haselsteiner, Andreas F; Ingersoll, Rivers
2015-03-06
Flapping wings enable flying animals and biomimetic robots to generate elevated aerodynamic forces. Measurements that demonstrate this capability are based on experiments with tethered robots and animals, and indirect force calculations based on measured kinematics or airflow during free flight. Remarkably, there exists no method to measure these forces directly during free flight. Such in vivo recordings in freely behaving animals are essential to better understand the precise aerodynamic function of their flapping wings, in particular during the downstroke versus upstroke. Here, we demonstrate a new aerodynamic force platform (AFP) for non-intrusive aerodynamic force measurement in freely flying animals and robots. The platform encloses the animal or object that generates fluid force with a physical control surface, which mechanically integrates the net aerodynamic force that is transferred to the earth. Using a straightforward analytical solution of the Navier-Stokes equation, we verified that the method is accurate. We subsequently validated the method with a quadcopter that is suspended in the AFP and generates unsteady thrust profiles. These independent measurements confirm that the AFP is indeed accurate. We demonstrate the effectiveness of the AFP by studying aerodynamic weight support of a freely flying bird in vivo. These measurements confirm earlier findings based on kinematics and flow measurements, which suggest that the avian downstroke, not the upstroke, is primarily responsible for body weight support during take-off and landing.
Analytic Methods in Investigative Geometry.
ERIC Educational Resources Information Center
Dobbs, David E.
2001-01-01
Suggests an alternative proof by analytic methods, which is more accessible than rigorous proof based on Euclid's Elements, in which students need only apply standard methods of trigonometry to the data without introducing new points or lines. (KHR)
Actigraphic assessment of motor activity in acutely admitted inpatients with bipolar disorder.
Krane-Gartiser, Karoline; Henriksen, Tone Elise Gjotterud; Morken, Gunnar; Vaaler, Arne; Fasmer, Ole Bernt
2014-01-01
Mania is associated with increased activity, whereas psychomotor retardation is often found in bipolar depression. Actigraphy is a promising tool for monitoring phase shifts and changes following treatment in bipolar disorder. The aim of this study was to compare recordings of motor activity in mania, bipolar depression and healthy controls, using linear and nonlinear analytical methods. Recordings from 18 acutely hospitalized inpatients with mania were compared to 12 recordings from bipolar depression inpatients and 28 healthy controls. 24-hour actigraphy recordings and 64-minute periods of continuous motor activity in the morning and evening were analyzed. Mean activity and several measures of variability and complexity were calculated. Patients with depression had a lower mean activity level compared to controls, but higher variability shown by increased standard deviation (SD) and root mean square successive difference (RMSSD) over 24 hours and in the active morning period. The patients with mania had lower first lag autocorrelation compared to controls, and Fourier analysis showed higher variance in the high frequency part of the spectrum corresponding to the period from 2-8 minutes. Both patient groups had a higher RMSSD/SD ratio compared to controls. In patients with mania we found an increased complexity of time series in the active morning period, compared to patients with depression. The findings in the patients with mania are similar to previous findings in patients with schizophrenia and healthy individuals treated with a glutamatergic antagonist. We have found distinctly different activity patterns in hospitalized patients with bipolar disorder in episodes of mania and depression, assessed by actigraphy and analyzed with linear and nonlinear mathematical methods, as well as clear differences between the patients and healthy comparison subjects.
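The linear variability measures named above (SD, RMSSD, their ratio, and the lag-1 autocorrelation) are simple to compute from a sequence of activity counts. The Python sketch below uses hypothetical one-minute counts; the nonlinear complexity measures and Fourier analysis from the study are not reproduced.

import numpy as np

def activity_metrics(counts):
    # Mean, SD, RMSSD (root mean square of successive differences), RMSSD/SD, lag-1 autocorrelation.
    counts = np.asarray(counts, dtype=float)
    sd = counts.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(counts) ** 2))
    lag1 = np.corrcoef(counts[:-1], counts[1:])[0, 1]
    return {"mean": counts.mean(), "SD": sd, "RMSSD": rmssd, "RMSSD/SD": rmssd / sd, "lag-1 r": lag1}

# Hypothetical one-minute activity counts over a 64-minute morning period.
rng = np.random.default_rng(7)
counts = np.abs(rng.normal(200, 80, size=64))
print(activity_metrics(counts))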
Sandstrom, Mark W.; Stroppel, Max E.; Foreman, William T.; Schroeder, Michael P.
2001-01-01
A method for the isolation and analysis of 21 parent pesticides and 20 pesticide degradates in natural-water samples is described. Water samples are filtered to remove suspended particulate matter and then are pumped through disposable solid-phase-extraction columns that contain octadecyl-bonded porous silica to extract the analytes. The columns are dried by using nitrogen gas, and adsorbed analytes are eluted with ethyl acetate. Extracted analytes are determined by capillary-column gas chromatography/mass spectrometry with selected-ion monitoring of three characteristic ions. The upper concentration limit is 2 micrograms per liter (µg/L) for most analytes. Single-operator method detection limits in reagent-water samples range from 0.001 to 0.057 µg/L. Validation data also are presented for 14 parent pesticides and 20 degradates that were determined to have greater bias or variability, or shorter holding times than the other compounds. The estimated maximum holding time for analytes in pesticide-grade water before extraction was 4 days. The estimated maximum holding time for analytes after extraction on the dry solid-phase-extraction columns was 7 days. An optional on-site extraction procedure allows for samples to be collected and processed at remote sites where it is difficult to ship samples to the laboratory within the recommended pre-extraction holding time. The method complements existing U.S. Geological Survey Method O-1126-95 (NWQL Schedules 2001 and 2010) by using identical sample preparation and comparable instrument analytical conditions so that sample extracts can be analyzed by either method to expand the range of analytes determined from one water sample.
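Method detection limits of the kind reported above are commonly estimated from replicate low-level spikes as the sample standard deviation times a one-sided Student's t value; the Python sketch below uses that classic single-concentration procedure with invented recoveries, which may differ from the report's exact protocol.

import numpy as np
from scipy import stats

def method_detection_limit(replicate_spikes, confidence=0.99):
    # MDL = s * t(n-1, confidence) from n replicate low-level spiked samples.
    reps = np.asarray(replicate_spikes, dtype=float)
    t_crit = stats.t.ppf(confidence, df=reps.size - 1)
    return reps.std(ddof=1) * t_crit

# Hypothetical recoveries (micrograms per liter) for one analyte spiked at 0.05 into reagent water.
print(round(method_detection_limit([0.047, 0.052, 0.049, 0.055, 0.044, 0.051, 0.048]), 3))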
Angeli, T R; Du, P; Paskaranandavadivel, N; Sathar, S; Hall, A; Asirvatham, S J; Farrugia, G; Windsor, J A; Cheng, L K; O'Grady, G
2017-05-01
Gastric motility is coordinated by bioelectrical slow waves, and gastric dysrhythmias are reported in motility disorders. High-resolution (HR) mapping has advanced the accurate assessment of gastric dysrhythmias, offering promise as a diagnostic technique. However, HR mapping has been restricted to invasive surgical serosal access. This study investigates the feasibility of HR mapping from the gastric mucosal surface. Experiments were conducted in vivo in 14 weaner pigs. Reference serosal recordings were performed with flexible-printed-circuit (FPC) arrays (128-192 electrodes). Mucosal recordings were performed by two methods: (i) FPC array aligned directly opposite the serosal array, and (ii) cardiac mapping catheter modified for gastric mucosal recordings. Slow-wave propagation and morphology characteristics were quantified and compared between simultaneous serosal and mucosal recordings. Slow-wave activity was consistently recorded from the mucosal surface from both electrode arrays. Mucosally recorded slow-wave propagation was consistent with reference serosal activation pattern, frequency (P≥.3), and velocity (P≥.4). However, mucosally recorded slow-wave morphology exhibited reduced amplitude (65-72% reduced, P<.001) and wider downstroke width (18-31% wider, P≤.02), compared to serosal data. Dysrhythmias were successfully mapped and classified from the mucosal surface, accorded with serosal data, and were consistent with known dysrhythmic mechanisms in the porcine model. High-resolution gastric electrical mapping was achieved from the mucosal surface, and demonstrated consistent propagation characteristics with serosal data. However, mucosal signal morphology was attenuated, demonstrating necessity for optimized electrode designs and analytical algorithms. This study demonstrates feasibility of endoscopic HR mapping, providing a foundation for advancement of minimally invasive spatiotemporal gastric mapping as a clinical and scientific tool. © 2016 John Wiley & Sons Ltd.
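Propagation velocity of a mapped slow wave can be estimated from an activation-time map on a regular electrode grid as the inverse of the activation-time gradient magnitude. The Python sketch below shows that generic estimate on a synthetic planar wave; it is not the analytical pipeline used in the study above.

import numpy as np

def wave_velocity(activation_times, spacing_mm):
    # Speed (mm/s) at each electrode: 1 / |gradient of activation time| on a regular grid.
    g_row, g_col = np.gradient(activation_times, spacing_mm)   # s/mm along each grid axis
    slowness = np.sqrt(g_row ** 2 + g_col ** 2)
    return 1.0 / slowness

# Toy example: a planar wave sweeping an 8 x 8 array (4 mm spacing) at 8 mm/s.
spacing = 4.0
column_positions = np.arange(8) * spacing
times = np.tile(column_positions / 8.0, (8, 1))   # activation delayed along one axis only
print(np.round(wave_velocity(times, spacing).mean(), 2), "mm/s")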
Montgomery, L D; Montgomery, R W; Guisado, R
1995-05-01
This investigation demonstrates the feasibility of mental workload assessment by rheoencephalographic (REG) and multichannel electroencephalographic (EEG) monitoring. During the performance of this research, unique testing, analytical and display procedures were developed for REG and EEG monitoring that extend the current state of the art and provide valuable tools for the study of cerebral circulatory and neural activity during cognition. REG records are analyzed to provide indices of the right and left hemisphere hemodynamic changes that take place during each test sequence. The EEG data are modeled using regression techniques and mathematically transformed to provide energy-density distributions of the scalp electrostatic field. These procedures permit concurrent REG/EEG cognitive testing not possible with current techniques. The introduction of a system for recording and analysis of cognitive REG/EEG test sequences facilitates the study of learning and memory disorders, dementia and other encephalopathies.
Inferring time-varying recharge from inverse analysis of long-term water levels
NASA Astrophysics Data System (ADS)
Dickinson, Jesse E.; Hanson, R. T.; Ferré, T. P. A.; Leake, S. A.
2004-07-01
Water levels in aquifers typically vary in response to time-varying rates of recharge, suggesting the possibility of inferring time-varying recharge rates on the basis of long-term water level records. Presumably, in the southwestern United States (Arizona, Nevada, New Mexico, southern California, and southern Utah), rates of mountain front recharge to alluvial aquifers depend on variations in precipitation rates due to known climate cycles such as the El Niño-Southern Oscillation index and the Pacific Decadal Oscillation. This investigation examined the inverse application of a one-dimensional analytical model for periodic flow described by Lloyd R. Townley in 1995 to estimate periodic recharge variations on the basis of variations in long-term water level records using southwest aquifers as the case study. Time-varying water level records at various locations along the flow line were obtained by simulation of forward models of synthetic basins with applied sinusoidal recharge of either a single period or composite of multiple periods of length similar to known climate cycles. Periodic water level components, reconstructed using singular spectrum analysis (SSA), were used to calibrate the analytical model to estimate each recharge component. The results demonstrated that periodic recharge estimates were most accurate in basins with nearly uniform transmissivity and the accuracy of the recharge estimates depends on monitoring well location. A case study of the San Pedro Basin, Arizona, is presented as an example of calibrating the analytical model to real data.
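A minimal stand-in for the periodic-component step described above is an ordinary least-squares fit of the amplitude and phase of one climate-cycle harmonic to a water-level record. The Python sketch below uses a synthetic record and an assumed 22-year period; it replaces neither the SSA reconstruction nor Townley's analytical model relating the water-level response back to recharge.

import numpy as np

def fit_harmonic(t_years, water_level, period_years):
    # Least-squares amplitude, phase, and mean of one periodic component of the record.
    w = 2.0 * np.pi / period_years
    A = np.column_stack([np.cos(w * t_years), np.sin(w * t_years), np.ones_like(t_years)])
    (a, b, c), *_ = np.linalg.lstsq(A, water_level, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), c

# Synthetic monthly water levels with a hypothetical 22-year cycle plus noise.
rng = np.random.default_rng(3)
t = np.arange(0, 50, 1 / 12)
levels = 100.0 + 1.5 * np.cos(2 * np.pi * t / 22.0 - 0.8) + 0.2 * rng.normal(size=t.size)
amplitude, phase, mean_level = fit_harmonic(t, levels, period_years=22.0)
print(round(amplitude, 2), round(phase, 2), round(mean_level, 1))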
Video analysis for insight and coding: Examples from tutorials in introductory physics
NASA Astrophysics Data System (ADS)
Scherr, Rachel E.
2009-12-01
The increasing ease of video recording offers new opportunities to create richly detailed records of classroom activities. These recordings, in turn, call for research methodologies that balance generalizability with interpretive validity. This paper shares methodology for two practices of video analysis: (1) gaining insight into specific brief classroom episodes and (2) developing and applying a systematic observational protocol for a relatively large corpus of video data. These two aspects of analytic practice are illustrated in the context of a particular research interest but are intended to serve as general suggestions.
NASA Technical Reports Server (NTRS)
Rogers, J. W.
1975-01-01
The results of an experimental investigation on recording information on thermoplastic are given. A description was given of a typical fabrication configuration, the recording sequence, and the samples which were examined. There are basically three configurations which can be used for the recording of information on thermoplastic. The most popular technique uses corona which furnishes free charge. The necessary energy for deformation is derived from a charge layer atop the thermoplastic. The other two techniques simply use a dc potential in place of the corona for deformation energy.
Method and apparatus for detecting an analyte
Allendorf, Mark D [Pleasanton, CA; Hesketh, Peter J [Atlanta, GA
2011-11-29
We describe the use of coordination polymers (CP) as coatings on microcantilevers for the detection of chemical analytes. CP exhibit changes in unit cell parameters upon adsorption of analytes, which will induce a stress in a static microcantilever upon which a CP layer is deposited. We also describe fabrication methods for depositing CP layers on surfaces.
This SOP describes the method used for preparing surrogate recovery standard and internal standard solutions for the analysis of polar target analytes. It also describes the method for preparing calibration standard solutions for polar analytes used for gas chromatography/mass sp...
2014-05-01
Scientific and technological work is carried out by Technical Teams, created under one or more of these eight bodies, for specific research activities... level records, with a secondary focus on strategic-level records. Its work covered NATO records as well as NATO Troop Contributing Nation (TCN... search, retrieval, and visualization functionalities can be fast and user-friendly. It also concludes that the obstacles to the creation of a
NASA Astrophysics Data System (ADS)
Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young
2017-05-01
This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.
Temperature effects on tunable cw Alexandrite lasers under diode end-pumping.
Kerridge-Johns, William R; Damzen, Michael J
2018-03-19
Diode-pumped Alexandrite is a promising route to high power, efficient and inexpensive lasers with a broad (701 nm to 858 nm) gain bandwidth; however, there are challenges with its complex laser dynamics. We present an analytical model applied to experimental red diode end-pumped Alexandrite lasers, which enabled a record 54% slope efficiency with an output power of 1.2 W. A record lowest lasing wavelength (714 nm) and a record tuning range (104 nm) were obtained by optimising the crystal temperature between 8 °C and 105 °C in the vibronic mode. The properties of Alexandrite and the analytical model were examined to understand and give general rules for optimising Alexandrite lasers, along with their fundamental efficiency limits. It was found that the lowest-threshold laser wavelength was not necessarily the most efficient, and that higher and lower temperatures were optimal for longer and shorter laser wavelengths, respectively. The pump excited-to-ground-state absorption ratio was measured to decrease from 0.8 to 0.7 by changing the crystal temperature from 10 °C to 90 °C.
Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO
NASA Technical Reports Server (NTRS)
Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.
2016-01-01
A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite-difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
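As a point of reference, the sketch below shows how an OpenMDAO ExplicitComponent declares analytic partial derivatives and how they can be checked against finite differences; the toy temperature relation and variable names are illustrative only and are unrelated to the actual CEA-based tool.

```python
# Hedged sketch of analytic partials in OpenMDAO; the surrogate model is made up.
import numpy as np
import openmdao.api as om

class ToyCombustionTemp(om.ExplicitComponent):
    def setup(self):
        self.add_input('phi', val=1.0)        # air-fuel equivalence ratio (assumed name)
        self.add_input('p', val=101325.0)     # pressure, Pa
        self.add_output('T', val=2000.0)      # combustion temperature, K
        self.declare_partials('T', ['phi', 'p'])

    def compute(self, inputs, outputs):
        phi, p = inputs['phi'], inputs['p']
        # Toy surrogate: peak near phi = 1 with a weak logarithmic pressure effect.
        outputs['T'] = 2400.0 - 800.0 * (phi - 1.0) ** 2 + 20.0 * np.log(p / 101325.0)

    def compute_partials(self, inputs, partials):
        phi, p = inputs['phi'], inputs['p']
        partials['T', 'phi'] = -1600.0 * (phi - 1.0)
        partials['T', 'p'] = 20.0 / p

prob = om.Problem()
prob.model.add_subsystem('temp', ToyCombustionTemp(), promotes=['*'])
prob.setup()
prob.run_model()
# Analytic partials can be compared against finite differences:
prob.check_partials(compact_print=True)
```

Supplying exact partials in this way is what allows gradient-based optimizers to avoid the cost and noise of finite differencing.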
NASA Astrophysics Data System (ADS)
Anderson, J.; Johnson, J. B.; Steele, A. L.; Anzieta, J. C.; Ortiz, H. D.; Hall, M. L.; Ruiz, M. C.
2014-12-01
Acoustic recordings reveal a variety of volcanic activities during an exceptionally loud vulcanian eruption at Tungurahua. A period of several months of mild surface activity came to an abrupt end with the emission of a powerful blast wave heard at least 180 km away. Sensors 2080 m from the vent recorded a stepped rise to its maximum overpressure of 1220 Pa (corresponding to a sound pressure level of 156 dB) and its unusually long dominant period of 5.6 s. We discuss source processes that produced the blast wave, considering that wave propagation could be nonlinear near the vent because of high overpressures. More than an hour of acoustic activity was recorded after the blast wave, including sound from falling ballistics, reflections of the blast wave from nearby mountains, pyroclastic density currents, and acoustic tremor at the vent. Glitches in the acoustic records related to plume lightning were also serendipitously observed, although thunder could not be unambiguously identified. We discuss acoustic signatures of falling ballistics and pyroclastic density currents and how array-style deployments and analytic methods can be used to reveal them. Placement of sensors high on the volcano's slopes facilitated resolving these distinct processes. This study demonstrates that near-vent, array-style acoustic installations can be used to monitor various types of volcanic activity.
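As a simplified illustration of the array-style analysis mentioned above, the sketch below estimates the relative arrival-time delay between two synthetic infrasound traces by cross-correlation, the basic building block of array signal-association and back-azimuth methods; the sampling rate and waveforms are assumed.

```python
# Minimal sketch: time delay between two array elements via cross-correlation.
import numpy as np

def delay_seconds(x, y, fs):
    """Delay of y relative to x in seconds (positive means y arrives later)."""
    x = x - x.mean()
    y = y - y.mean()
    cc = np.correlate(x, y, mode='full')
    return (len(y) - 1 - np.argmax(cc)) / fs

fs = 100.0                                    # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
pulse = np.exp(-((t - 5.0) ** 2) / 0.01)      # synthetic transient
shift = int(0.12 * fs)                        # 0.12 s propagation delay
x = pulse + 0.01 * np.random.randn(t.size)
y = np.roll(pulse, shift) + 0.01 * np.random.randn(t.size)
print(delay_seconds(x, y, fs))                # approximately 0.12 s
```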
Just the two of us: misalignment of theory and methods in examining dyadic phenomena.
Krasikova, Dina V; LeBreton, James M
2012-07-01
Many organizational phenomena such as leader-member exchange, mentoring, coaching, interpersonal conflict and cooperation, negotiation, performance appraisal, and the employment interview involve inherently dyadic relationships and interactions. Even when theories explicitly acknowledge the dyadic nature of such phenomena, it is not uncommon to observe a disconnection or misalignment between the level of theory and method. Our purpose in the current paper is to discuss how organizational scholars might better align these components of their research endeavors. We discuss how recent developments involving the actor-partner interdependence model (APIM) and reciprocal one-with-many (OWM) models are applicable to studying dyadic phenomena in organizations. The emphasis is on preanalytic considerations associated with collecting and organizing reciprocal dyadic data, types of research questions that APIM and reciprocal OWM models can help answer, and specific analytic techniques involved in testing dyadic hypotheses. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
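As one hedged illustration of handling dyadic nonindependence (not the paper's analysis), the sketch below fits an actor-partner style model for indistinguishable dyad members in statsmodels: each row is one member's outcome predicted by that member's own (actor) and the partner's score, with a random intercept per dyad; the file and column names are assumed.

```python
# Hedged sketch of an actor-partner style mixed model with dyad-level clustering.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dyads_long.csv")   # assumed columns: dyad_id, outcome, actor_x, partner_x
model = smf.mixedlm("outcome ~ actor_x + partner_x", data=df, groups=df["dyad_id"])
result = model.fit()
print(result.summary())              # actor and partner effects, dyad variance component
```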
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baer, Donald R.; Shutthanandan, Vaithiyalingam
Nano-sized objects are increasingly important as biomaterials and their surfaces play critical roles in determining their beneficial or deleterious behaviors in biological systems. Important characteristics of nanomaterials that impact their application in many areas are described with a strong focus on the importance of particle surfaces and surface characterization. Understanding aspects of the inherent nature of nano-objects and the important role that surfaces play in these applications is a universal need for any research or product development using such materials in biological applications. The role of surface analysis methods in collecting critical information about the nature of particle surfaces and physicochemical properties of nano-objects is described along with the importance of including sample history and analysis results in a record of provenance information regarding specific batches of nano-objects.
Measurement invariance study of the training satisfaction questionnaire (TSQ).
Sanduvete-Chaves, Susana; Holgado-Tello, F Pablo; Chacón-Moscoso, Salvador; Barbero-García, M Isabel
2013-01-01
This article presents an empirical measurement invariance study in the substantive area of satisfaction evaluation in training programs. Specifically, it (I) provides an empirical solution to the lack of explicit measurement models of satisfaction scales, offering a way of analyzing and operationalizing the substantive theoretical dimensions; (II) outlines and discusses the analytical consequences of considering the effects of categorizing supposedly continuous variables, which are not usually taken into account; (III) presents empirical results from a measurement invariance study based on 5,272 participants' responses to a training satisfaction questionnaire in three different organizations and in two different training methods, taking into account the factor structure of the measured construct and the ordinal nature of the recorded data; and (IV) describes the substantive implications in the area of training satisfaction evaluation, such as the usefulness of the training satisfaction questionnaire to measure satisfaction in different organizations and different training methods. It also discusses further research based on these findings.
Recent developments in computer vision-based analytical chemistry: A tutorial review.
Capitán-Vallvey, Luis Fermín; López-Ruiz, Nuria; Martínez-Olmos, Antonio; Erenas, Miguel M; Palma, Alberto J
2015-10-29
Chemical analysis based on colour changes recorded with imaging devices is gaining increasing interest. This is due to its several significant advantages, such as simplicity of use, and the fact that it is easily combinable with portable and widely distributed imaging devices, resulting in friendly analytical procedures in many areas that demand out-of-lab applications for in situ and real-time monitoring. This tutorial review covers computer vision-based analytical chemistry (CVAC) procedures and systems from 2005 to 2015, a period of time when 87.5% of the papers on this topic were published. The background regarding colour spaces and recent analytical system architectures of interest in analytical chemistry is presented in the form of a tutorial. Moreover, issues regarding images, such as the influence of illuminants, and the most relevant techniques for processing and analysing digital images are addressed. Some of the most relevant applications are then detailed, highlighting their main characteristics. Finally, our opinion about future perspectives is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
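For illustration, the following is a minimal sketch, under assumed data, of the core of a colour-based determination: average one colour channel over a region of interest in each image and fit a linear calibration of that value against analyte concentration.

```python
# Minimal sketch of a colour-channel calibration; images and standards are simulated.
import numpy as np

def roi_mean_channel(image, channel, roi):
    """Mean value of one colour channel inside a rectangular region of interest."""
    r0, r1, c0, c1 = roi
    return image[r0:r1, c0:c1, channel].mean()

# `images` stands in for RGB photographs (H x W x 3) of the sensing spots.
concs = np.array([0.0, 0.5, 1.0, 2.0, 4.0])              # mg/L, assumed standards
images = [np.full((50, 50, 3), 200.0 - 20.0 * c) for c in concs]
green = np.array([roi_mean_channel(im, 1, (10, 40, 10, 40)) for im in images])

slope, intercept = np.polyfit(concs, green, 1)            # linear calibration curve
unknown_green = 150.0
estimated_conc = (unknown_green - intercept) / slope
print(f"estimated concentration: {estimated_conc:.2f} mg/L")
```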
Big data and visual analytics in anaesthesia and health care.
Simpao, A F; Ahumada, L M; Rehman, M A
2015-09-01
Advances in computer technology, patient monitoring systems, and electronic health record systems have enabled rapid accumulation of patient data in electronic form (i.e. big data). Organizations such as the Anesthesia Quality Institute and Multicenter Perioperative Outcomes Group have spearheaded large-scale efforts to collect anaesthesia big data for outcomes research and quality improvement. Analytics--the systematic use of data combined with quantitative and qualitative analysis to make decisions--can be applied to big data for quality and performance improvements, such as predictive risk assessment, clinical decision support, and resource management. Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces, and it can facilitate performance of cognitive activities involving big data. Ongoing integration of big data and analytics within anaesthesia and health care will increase demand for anaesthesia professionals who are well versed in both the medical and the information sciences. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Bereiter, Bernhard; Maechler, Lars; Schmitt, Jochen; Walther, Remo; Tuzson, Béla; Scheidegger, Philipp; Emmenegger, Lukas; Fischer, Hubertus
2017-04-01
Ice cores are unique archives of ancient air providing the only direct record of past greenhouse gases - key in reconstructing the roles of greenhouse gases in past climate changes. The European Partnership in Ice Core Sciences (EuroPICS) plans to drill an ice core extending over 1.5 Ma, nearly doubling the time span of the existing greenhouse record and covering the time period of the Mid Pleistocene Transition. The ice covering the time interval from 1-1.5 Ma is expected to be close to the bedrock and, due to glacial flow, extremely thinned. A 10,000 yr glacial/interglacial transition can be compressed into 1 m of ice. The targeted 100 yr resolution therefore constrains the sample size to 15-30 g, containing only 1-2 ml STP of air. Within the deepSlice project we aim to unlock such atmospheric archives in extremely thinned ice by developing a novel coupled semi-continuous sublimation extraction/laser spectroscopy system. Vacuum sublimation, with an infrared source, has been chosen as the extraction method as it allows 100% gas extraction of all gas species from ice without changing the isotopic composition of CO2. In order to reduce ice waste and accelerate sample throughput, we are building a sublimation extraction system that is able to continuously sublimate an ice-core section and subsequently collect discrete full air samples. For the gas analytics, we are developing a custom-made mid-infrared laser spectrometer allowing simultaneous measurement of the CO2, CH4 and N2O concentrations as well as the isotopic composition of CO2 on air samples of only 1-2 ml STP. The two systems will be coupled via cryo-trapping of the sample air in dip tubes, followed by expansion of the sample air into the laser spectrometer. Due to the nondestructive laser technique, the air sample can be recollected and reused for further analytics.
Patient Health Record Systems Scope and Functionalities: Literature Review and Future Directions
2017-01-01
Background A new generation of user-centric information systems is emerging in health care as patient health record (PHR) systems. These systems create a platform supporting the new vision of health services that empowers patients and enables patient-provider communication, with the goal of improving health outcomes and reducing costs. This evolution has generated new sets of data and capabilities, providing opportunities and challenges at the user, system, and industry levels. Objective The objective of our study was to assess PHR data types and functionalities through a review of the literature to inform the health care informatics community, and to provide recommendations for PHR design, research, and practice. Methods We conducted a review of the literature to assess PHR data types and functionalities. We searched PubMed, Embase, and MEDLINE databases from 1966 to 2015 for studies of PHRs, resulting in 1822 articles, from which we selected a total of 106 articles for a detailed review of PHR data content. Results We present several key findings related to the scope and functionalities in PHR systems. We also present a functional taxonomy and chronological analysis of PHR data types and functionalities, to improve understanding and provide insights for future directions. Functional taxonomy analysis of the extracted data revealed the presence of new PHR data sources such as tracking devices and data types such as time-series data. Chronological data analysis showed an evolution of PHR system functionalities over time, from simple data access to data modification and, more recently, automated assessment, prediction, and recommendation. Conclusions Efforts are needed to improve (1) PHR data quality through patient-centered user interface design and standardized patient-generated data guidelines, (2) data integrity through consolidation of various types and sources, (3) PHR functionality through application of new data analytics methods, and (4) metrics to evaluate clinical outcomes associated with automated PHR system use, and costs associated with PHR data storage and analytics. PMID:29141839
Bruno, C; Patin, F; Bocca, C; Nadal-Desbarats, L; Bonnier, F; Reynier, P; Emond, P; Vourc'h, P; Joseph-Delafont, K; Corcia, P; Andres, C R; Blasco, H
2018-01-30
Metabolomics is an emerging science based on diverse high throughput methods that are rapidly evolving to improve metabolic coverage of biological fluids and tissues. Technical progress has led researchers to combine several analytical methods without reporting the impact on metabolic coverage of such a strategy. The objective of our study was to develop and validate several analytical techniques (mass spectrometry coupled to gas or liquid chromatography and nuclear magnetic resonance) for the metabolomic analysis of small muscle samples and evaluate the impact of combining methods on the exhaustiveness of metabolite coverage. We evaluated the muscle metabolome from the same pool of mouse muscle samples after 2 metabolite extraction protocols. Four analytical methods were used: targeted flow injection analysis coupled with mass spectrometry (FIA-MS/MS), gas chromatography coupled with mass spectrometry (GC-MS), liquid chromatography coupled with high-resolution mass spectrometry (LC-HRMS), and nuclear magnetic resonance (NMR) analysis. We evaluated the global variability of each compound, i.e., analytical variability (from quality controls) and extraction variability (from muscle extracts). We determined the best extraction method and we reported the common and distinct metabolites identified, based on the number and identity of the compounds detected with low analytical variability (coefficient of variation <30%) for each method. Finally, we assessed the coverage of muscle metabolic pathways obtained. Methanol/chloroform/water and water/methanol were the best extraction solvents for muscle metabolome analysis by NMR and MS, respectively. We identified 38 metabolites by nuclear magnetic resonance, 37 by FIA-MS/MS, 18 by GC-MS, and 80 by LC-HRMS. The combination led us to identify a total of 132 metabolites with low variability partitioned into 58 metabolic pathways, such as amino acid, nitrogen, purine, and pyrimidine metabolism, and the citric acid cycle. This combination also showed that the contribution of GC-MS was low when used in combination with other mass spectrometry methods and nuclear magnetic resonance to explore muscle samples. This study reports the validation of several analytical methods, based on nuclear magnetic resonance and several mass spectrometry methods, to explore the muscle metabolome from a small amount of tissue, comparable to that obtained during a clinical trial. The combination of several techniques may be relevant for the exploration of muscle metabolism, with acceptable analytical variability and overlap between methods. However, the difficult and time-consuming data pre-processing, processing, and statistical analysis steps do not justify systematically combining analytical methods. Copyright © 2017 Elsevier B.V. All rights reserved.
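For illustration, the sketch below shows, with assumed column names, the kind of variability filter described: compute each compound's coefficient of variation from replicate quality-control injections per platform, keep those below 30%, and count the overlap between platforms.

```python
# Minimal sketch of a QC-based coefficient-of-variation filter and platform overlap count.
import pandas as pd

# Long-format table: one row per (platform, metabolite, replicate) intensity.
qc = pd.read_csv("qc_intensities.csv")   # assumed columns: platform, metabolite, intensity

stats = qc.groupby(["platform", "metabolite"])["intensity"].agg(["mean", "std"])
stats["cv_percent"] = 100 * stats["std"] / stats["mean"]
kept = stats[stats["cv_percent"] < 30].reset_index()

per_platform = kept.groupby("platform")["metabolite"].apply(set)
union = set.union(*per_platform)               # combined metabolite coverage
shared = set.intersection(*per_platform)       # metabolites detected by every platform
print(len(union), "metabolites kept in total,", len(shared), "common to all platforms")
```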
Kangas, Michael J; Burks, Raychelle M; Atwater, Jordyn; Lukowicz, Rachel M; Garver, Billy; Holmes, Andrea E
2018-02-01
With the increasing availability of digital imaging devices, colorimetric sensor arrays are rapidly becoming a simple, yet effective tool for the identification and quantification of various analytes. Colorimetric arrays utilize colorimetric data from many colorimetric sensors, with the multidimensional nature of the resulting data necessitating the use of chemometric analysis. Herein, an 8-sensor colorimetric array was used to analyze selected acidic and basic samples (0.5-10 M) to determine which chemometric methods are best suited for the classification and quantification of analytes within clusters. PCA, HCA, and LDA were used to visualize the data set. All three methods showed well-separated clusters for each of the acid or base analytes and moderate separation between analyte concentrations, indicating that the sensor array can be used to identify and quantify samples. Furthermore, PCA could be used to determine which sensors showed the most effective analyte identification. LDA, KNN, and HQI were used for identification of analyte and concentration. HQI and KNN could be used to correctly identify the analytes in all cases, while LDA identified 95 of 96 analytes correctly. Additional studies demonstrated that controlling for solvent and image effects was unnecessary for all chemometric methods utilized in this study.
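As an illustrative sketch of this chemometric workflow (on simulated data, not the study's measurements), the code below applies PCA for visualisation and LDA and k-nearest neighbours for classifying array responses by analyte identity.

```python
# Minimal sketch of PCA visualisation plus LDA/KNN classification of sensor-array data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_sensors = 12, 8
# Three hypothetical analytes, each with a characteristic mean response pattern.
means = rng.uniform(0, 1, size=(3, n_sensors))
X = np.vstack([m + 0.05 * rng.standard_normal((n_per_class, n_sensors)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)

scores = PCA(n_components=2).fit_transform(X)   # 2-D map for inspecting cluster separation
print("LDA accuracy:", cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
print("KNN accuracy:", cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=5).mean())
```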
Considerations for monitoring raptor population trends based on counts of migrants
Titus, K.; Fuller, M.R.; Ruos, J.L.; Meyburg, B-U.; Chancellor, R.D.
1989-01-01
Various problems were identified with standardized hawk count data as annually collected at six sites. Some of the hawk lookouts increased their hours of observation from 1979-1985, thereby confounding the total counts. Inconsistent data recording and missing data hamper the coding of data and their use with modern analytical techniques. Coefficients of variation among years in counts averaged about 40%. The advantages and disadvantages of various analytical techniques are discussed, including regression, non-parametric rank correlation trend analysis, and moving averages.
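For illustration, the sketch below (with made-up counts) shows one effort-corrected trend test of the kind discussed: standardise annual counts by observation hours, then apply a nonparametric rank correlation of the index against year.

```python
# Minimal sketch of an effort-corrected, rank-based trend test for migration counts.
import numpy as np
from scipy.stats import spearmanr

years = np.arange(1979, 1986)
counts = np.array([410, 455, 380, 520, 610, 590, 640])   # hypothetical annual totals
hours = np.array([300, 310, 295, 360, 420, 430, 450])    # annual observation effort

index = counts / hours                    # birds counted per hour of observation
rho, p_value = spearmanr(years, index)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```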
Steuer, Andrea E; Forss, Anna-Maria; Dally, Annika M; Kraemer, Thomas
2014-11-01
In the context of driving under the influence of drugs (DUID), not only common drugs of abuse may have an influence, but also medications with similar mechanisms of action. Simultaneous quantification of a variety of drugs and medications relevant in this context allows faster and more effective analyses. Therefore, multi-analyte approaches have gained more and more popularity in recent years. Usually, calibration curves for such procedures contain a mixture of all analytes, which might lead to mutual interferences. In this study we investigated whether the use of such mixtures leads to reliable results for authentic samples containing only one or two analytes. Five hundred microliters of whole blood were extracted by routine solid-phase extraction (SPE, HCX). Analysis was performed on an ABSciex 3200 QTrap instrument with ESI+ in scheduled MRM mode. The method was fully validated according to international guidelines, including selectivity, recovery, matrix effects, accuracy and precision, stabilities, and limit of quantification. The selected SPE provided recoveries >60% for all analytes except 6-monoacetylmorphine (MAM), with coefficients of variation (CV) below 15% or 20% for quality controls (QC) LOW and HIGH, respectively. Ion suppression >30% was found for benzoylecgonine, hydrocodone, hydromorphone, MDA, oxycodone, and oxymorphone at QC LOW; however, CVs were always below 10% (n=6 different whole blood samples). Accuracy and precision criteria were fulfilled for all analytes except MAM. Systematic investigation of the accuracy determined for QC MED in a multi-analyte mixture, compared to samples containing only single analytes, revealed no relevant differences for any analyte, indicating that a multi-analyte calibration is suitable for the presented method. Comparison of approximately 60 samples to a former GC-MS method showed good correlation. The newly validated method was successfully applied to more than 1600 routine samples and 3 proficiency tests. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
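For illustration, the sketch below (on simulated numbers) shows the accuracy and precision check underlying such a validation: replicate QC results are compared with the nominal concentration, and bias and coefficient of variation must stay within the limits set by the applicable guideline.

```python
# Minimal sketch of bias and precision calculation from replicate QC results.
import numpy as np

nominal = 50.0                                               # ng/mL, assumed QC level
calculated = np.array([48.2, 51.0, 49.5, 52.3, 47.8, 50.6])  # hypothetical replicate results

bias_percent = 100 * (calculated.mean() - nominal) / nominal
cv_percent = 100 * calculated.std(ddof=1) / calculated.mean()
print(f"bias = {bias_percent:+.1f}%, CV = {cv_percent:.1f}%")
```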
Level of Digitization in Dutch Hospitals and the Lengths of Stay of Patients with Colorectal Cancer.
van Poelgeest, Rube; van Groningen, Julia T; Daniels, John H; Roes, Kit C; Wiggers, Theo; Wouters, Michel W; Schrijvers, Guus
2017-05-01
A substantial amount of research has been published on the association between the use of electronic medical records (EMRs) and quality outcomes in U.S. hospitals, while limited research has focused on the Western European experience. The purpose of this study is to explore the association between the use of EMR technologies in Dutch hospitals and length of stay after colorectal cancer surgery. Two data sets were leveraged for this study: the HIMSS Analytics Electronic Medical Record Adoption Model (EMRAM) and the Dutch surgical colorectal audit (DSCA). The HIMSS Analytics EMRAM score was used to define a Dutch hospital's EMR capabilities, while the DSCA was used to profile colorectal surgery quality outcomes (specifically total length of stay (LOS) in the hospital and the LOS in the ICU). A total of 73 hospitals with a valid EMRAM score and associated DSCA patients (n = 30,358) during the study period (2012-2014) were included in the comparative set. A multivariate regression method was used to test differences adjusted for case mix, year of surgery, surgical technique and complications, as well as stratifying for academic affiliated hospitals and general hospitals. A significant negative association was observed between the total LOS (relative median LOS 0.974, 95% CI 0.959-0.989) of patients treated in advanced EMR hospitals (high EMRAM score cohort) versus patients treated at less advanced EMR care settings, once the data were adjusted for case mix, year of surgery and type of surgery (laparoscopy or laparotomy). Adjusting for complications in a subgroup of general hospitals (n = 39) yielded essentially the same results (relative median LOS 0.934, 95% CI 0.915-0.954). No consistent significant associations were found with respect to LOS in the ICU. The findings of this study suggest advanced EMR capabilities support a healthcare provider's efforts to achieve desired quality outcomes and efficiency in Western European hospitals.
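For illustration only, and not the study's exact model, the sketch below shows one simple way to express an adjusted multiplicative length-of-stay effect: regress log(LOS) on an EMR-maturity indicator plus case-mix covariates and exponentiate the coefficient to obtain a relative LOS; the file and column names are assumed.

```python
# Hedged sketch of an adjusted relative length-of-stay estimate via a log-linear model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("colorectal_cases.csv")   # assumed columns: los_days, high_emram, age, laparoscopy, year
model = smf.ols("np.log(los_days) ~ high_emram + age + laparoscopy + C(year)", data=df)
result = model.fit()
relative_los = np.exp(result.params["high_emram"])
print(f"adjusted relative LOS for high-EMRAM hospitals: {relative_los:.3f}")
```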
NASA Astrophysics Data System (ADS)
Luo, Ning; Illman, Walter A.
2016-09-01
Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
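For illustration, the sketch below implements the Theis solution that such a fingerprinting analysis rests on: drawdown from a step change in pumping rate is Q/(4πT)·W(u) with u = r²S/(4Tt), and a variable-rate history is handled by superposing rate increments in time; the parameter values are illustrative, not site values.

```python
# Minimal sketch of Theis drawdown with superposition of a piecewise-constant rate history.
import numpy as np
from scipy.special import exp1   # the Theis well function W(u)

def drawdown(t_days, r, T, S, rate_times, rates):
    """Drawdown at radius r (m) for a piecewise-constant pumping history."""
    s = np.zeros_like(t_days, dtype=float)
    previous_q = 0.0
    for t0, q in zip(rate_times, rates):
        dq = q - previous_q                          # superpose each rate increment
        dt = np.clip(t_days - t0, 1e-12, None)
        u = r**2 * S / (4.0 * T * dt)
        s += np.where(t_days > t0, dq / (4.0 * np.pi * T) * exp1(u), 0.0)
        previous_q = q
    return s

t = np.linspace(0.01, 60.0, 200)                     # days
s = drawdown(t, r=150.0, T=500.0, S=2e-4,            # m, m^2/day, dimensionless (assumed)
             rate_times=[0.0, 20.0, 40.0], rates=[800.0, 1200.0, 0.0])  # m^3/day
```

Calibration then amounts to adjusting T and S until the superposed drawdown reproduces the observed water-level variations.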
A new tool for the evaluation of the analytical procedure: Green Analytical Procedure Index.
Płotka-Wasylka, J
2018-05-01
A new means for assessing analytical protocols relating to green analytical chemistry attributes has been developed. The new tool, called GAPI (Green Analytical Procedure Index), evaluates the green character of an entire analytical methodology, from sample collection to final determination, and was created using such tools as the National Environmental Methods Index (NEMI) or Analytical Eco-Scale to provide not only general but also qualitative information. In GAPI, a specific symbol with five pentagrams can be used to evaluate and quantify the environmental impact involved in each step of an analytical methodology, coloured from green through yellow to red to depict low, medium, and high impact, respectively. The proposed tool was used to evaluate analytical procedures applied in the determination of biogenic amines in wine samples, and polycyclic aromatic hydrocarbon determination by EPA methods. The GAPI tool not only provides an immediately perceptible perspective to the user/reader but also offers exhaustive information on evaluated procedures. Copyright © 2018 Elsevier B.V. All rights reserved.
Modern analytical chemistry in the contemporary world
NASA Astrophysics Data System (ADS)
Šíma, Jan
2016-12-01
Students not familiar with chemistry tend to misinterpret analytical chemistry as some kind of sorcery in which analytical chemists, working as modern wizards, handle magical black boxes able to provide fascinating results. However, this approach is evidently improper and misleading. Therefore, the position of modern analytical chemistry among the sciences and in the contemporary world is discussed. Its interdisciplinary character and the necessity of collaboration between analytical chemists and other experts in order to effectively solve the actual problems of human society and the environment are emphasized. The importance of analytical method validation in order to obtain accurate and precise results is highlighted. Invalid results are not only useless; they can often be even fatal (e.g., in clinical laboratories). The curriculum of analytical chemistry at schools and universities is discussed; it should be much broader than traditional equilibrium chemistry coupled with a simple description of individual analytical methods. Accordingly, the schooling of analytical chemistry should closely connect theory and practice.
Method Development in Forensic Toxicology.
Peters, Frank T; Wissenbach, Dirk K; Busardo, Francesco Paolo; Marchei, Emilia; Pichini, Simona
2017-01-01
In the field of forensic toxicology, the quality of analytical methods is of great importance to ensure the reliability of results and to avoid unjustified legal consequences. A key to high quality analytical methods is a thorough method development. The presented article will provide an overview on the process of developing methods for forensic applications. This includes the definition of the method's purpose (e.g. qualitative vs quantitative) and the analytes to be included, choosing an appropriate sample matrix, setting up separation and detection systems as well as establishing a versatile sample preparation. Method development is concluded by an optimization process after which the new method is subject to method validation. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Lores, Marta; Llompart, Maria; Alvarez-Rivera, Gerardo; Guerra, Eugenia; Vila, Marlene; Celeiro, Maria; Lamas, J Pablo; Garcia-Jares, Carmen
2016-04-07
Cosmetic products placed on the market, and their ingredients, must be safe under reasonable conditions of use, in accordance with the current legislation. Therefore, regulated and allowed chemical substances must meet the regulatory criteria to be used as ingredients in cosmetics and personal care products, and adequate analytical methodology is needed to evaluate the degree of compliance. This article reviews the most recent methods (2005-2015) used for the extraction and the analytical determination of the ingredients included in the positive lists of the European Regulation of Cosmetic Products (EC 1223/2009), comprising colorants, preservatives and UV filters. It summarizes the analytical properties of the most relevant analytical methods along with their possibilities for fulfilling the current regulatory issues. The cosmetic legislation is frequently being updated; consequently, the analytical methodology must be constantly revised and improved to meet safety requirements. The article highlights the most important advances in analytical methodology for cosmetics control, both in relation to sample pretreatment and extraction and the different instrumental approaches developed to solve this challenge. Cosmetics are complex samples, and most of them require a sample pretreatment before analysis. In recent years, research on this aspect has tended toward the use of green extraction and microextraction techniques. Analytical methods were generally based on liquid chromatography with UV detection, and gas and liquid chromatographic techniques hyphenated with single or tandem mass spectrometry, but some interesting proposals based on electrophoresis have also been reported, together with some electroanalytical approaches. Regarding the number of ingredients considered for analytical control, single-analyte methods have been proposed, although the most useful ones in real-life cosmetic analysis are multianalyte approaches. Copyright © 2016 Elsevier B.V. All rights reserved.
77 FR 73694 - Privacy Act of 1974: Update Existing System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-11
... survey response, and in the production of summary descriptive statistics and analytical studies in... participation in an agency's Upward Mobility Program or other personnel program designed to broaden an employee...
Petrenko, Christie L. M.; Friend, Angela; Garrido, Edward F.; Taussig, Heather N.; Culhane, Sara E.
2012-01-01
Objectives Attempts to understand the effects of maltreatment subtypes on childhood functioning are complicated by the fact that children often experience multiple subtypes. This study assessed the effects of maltreatment subtypes on the cognitive, academic, and mental health functioning of preadolescent youth in out-of-home care using both “variable-centered” and “person-centered” statistical analytic approaches to modeling multiple subtypes of maltreatment. Methods Participants included 334 preadolescent youth (ages 9 to 11) placed in out-of-home care due to maltreatment. The occurrence and severity of maltreatment subtypes (physical abuse, sexual abuse, physical neglect, and supervisory neglect) were coded from child welfare records. The relationships between maltreatment subtypes and children’s cognitive, academic, and mental health functioning were evaluated with the following approaches: “Variable-centered” analytic methods: Regression approach: Multiple regression was used to estimate the effects of each maltreatment subtype (separate analyses for occurrence and severity), controlling for the other subtypes. Hierarchical approach: Contrast coding was used in regression analyses to estimate the effects of discrete maltreatment categories that were assigned based on a subtype occurrence hierarchy (sexual abuse > physical abuse > physical neglect > supervisory neglect). “Person-centered” analytic method: Latent class analysis was used to group children with similar maltreatment severity profiles into discrete classes. The classes were then compared to determine if they differed in terms of their ability to predict functioning. Results The approaches identified similar relationships between maltreatment subtypes and children’s functioning. The most consistent findings indicated that maltreated children who experienced physical or sexual abuse were at highest risk for caregiver-reported externalizing behavior problems, and those who experienced physical abuse and/or physical neglect were more likely to have higher levels of caregiver-reported internalizing problems. Children experiencing predominantly low severity supervisory neglect had relatively better functioning than other maltreated youth. Conclusions Many of the maltreatment subtype differences identified within the maltreated sample in the current study are consistent with those from previous research comparing maltreated youth to non-maltreated comparison groups. Results do not support combining supervisory and physical neglect. The “variable-centered” and “person-centered” analytic approaches produced complementary results. Advantages and disadvantages of each approach are discussed. PMID:22947490
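As a hedged illustration of the "variable-centered" regression approach described (not the study's exact specification), the sketch below enters all maltreatment subtype indicators into one model, so each coefficient reflects that subtype's association with functioning while controlling for the others; the file and column names are assumed.

```python
# Hedged sketch of a variable-centered regression with simultaneous subtype indicators.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("maltreatment.csv")  # assumed columns: externalizing, phys_abuse,
                                      # sex_abuse, phys_neglect, superv_neglect (0/1)
model = smf.ols(
    "externalizing ~ phys_abuse + sex_abuse + phys_neglect + superv_neglect", data=df)
print(model.fit().summary())
```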
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutmacher, R.; Crawford, R.
This comprehensive guide to the analytical capabilities of Lawrence Livermore Laboratory's General Chemistry Division describes each analytical method in terms of its principle, field of application, and qualitative and quantitative uses. Also described are the state and quantity of sample required for analysis, processing time, available instrumentation, and responsible personnel.
NASA Astrophysics Data System (ADS)
Woodka, Marc D.; Brunschwig, Bruce S.; Lewis, Nathan S.
2008-03-01
Linear sensor arrays made from small molecule/carbon black composite chemiresistors placed in a low headspace volume chamber, with vapor delivered at low flow rates, allowed for the extraction of chemical information that significantly increased the ability of the sensor arrays to identify vapor mixture components and to quantify their concentrations. Each sensor sorbed vapors from the gas stream to various degrees. Similar to gas chromatography, species having high vapor pressures were separated from species having low vapor pressures. Instead of producing typical sensor responses representative of thermodynamic equilibrium between each sensor and an unchanging vapor phase, sensor responses varied depending on the position of the sensor in the chamber and the time from the beginning of the analyte exposure. This spatiotemporal (ST) array response provided information that was a function of time as well as of the position of the sensor in the chamber. The responses to pure analytes and to multi-component analyte mixtures comprised of hexane, decane, ethyl acetate, chlorobenzene, ethanol, and/or butanol, were recorded along each of the sensor arrays. Use of a non-negative least squares (NNLS) method for analysis of the ST data enabled the correct identification and quantification of the composition of 2-, 3-, 4- and 5-component mixtures from arrays using only 4 chemically different sorbent films and sensor training on pure vapors only. In contrast, when traditional time- and position-independent sensor response information was used, significant errors in mixture identification were observed. The ability to correctly identify and quantify constituent components of vapor mixtures through the use of such ST information significantly expands the capabilities of such broadly cross-reactive arrays of sensors.
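For illustration, the sketch below (on synthetic responses, not the reported data) shows the non-negative least squares step: each pure analyte's spatiotemporal response is stacked as a column of a matrix A, and A·x = b is solved with x ≥ 0 to estimate the mixture composition from a measured mixture response b.

```python
# Minimal sketch of NNLS unmixing of a spatiotemporal sensor-array response.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_features = 4 * 30                               # e.g. 4 sensor positions x 30 time points, flattened
pure = rng.uniform(0, 1, size=(n_features, 5))    # library of 5 pure-analyte response vectors

true_x = np.array([0.6, 0.0, 0.3, 0.0, 0.1])      # hypothetical mixture fractions
b = pure @ true_x + 0.01 * rng.standard_normal(n_features)

x_hat, residual = nnls(pure, b)
print(np.round(x_hat, 2))                         # near-zero entries indicate absent analytes
```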
A general analytical platform and strategy in search for illegal drugs.
Johansson, Monika; Fransson, Dick; Rundlöf, Torgny; Huynh, Ngoc-Hang; Arvidsson, Torbjörn
2014-11-01
An effective screening procedure to identify and quantify active pharmaceutical substances in suspected illegal medicinal products is described. The analytical platform, consisting of accurate mass determination with liquid chromatography time-of-flight mass spectrometry (LC-QTOF-MS) in combination with nuclear magnetic resonance (NMR) spectroscopy, provides an excellent analytical tool to screen for unknowns in medicinal products, food supplements and herbal formulations. This analytical approach has been successfully applied to analyze thousands of samples. The general screening method usually starts with a methanol extraction of tablets/capsules followed by liquid chromatographic separation on a Halo Phenyl-Hexyl column (2.7 μm; 100 mm × 2.1 mm) using an acetonitrile/0.1% formic acid gradient as eluent. The accurate mass of peaks of interest was recorded and a search made against an in-house database containing approximately 4200 substances, mostly pharmaceutical compounds. The search could be general or tailored against different classes of compounds. Hits were confirmed by analyzing a reference substance and/or by NMR. Quantification was normally performed with quantitative NMR (qNMR) spectroscopy. Applications for weight-loss substances like sibutramine and orlistat, sexual potency enhancement (PDE-5 inhibitors), and analgesic drugs are presented in this study. We have also identified prostaglandin analogues in eyelash growth serum, exemplified by isopropyl cloprostenate and bimatoprost. For creams and ointments, matrix solid-phase dispersion (MSPD) was found to give clean extracts with high recovery prior to LC-MS analyses. The structural elucidation of cetilistat, a new weight-loss substance recently found in illegal medicines purchased over the Internet, is also presented. Copyright © 2014 Elsevier B.V. All rights reserved.
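For illustration, the sketch below shows the accurate-mass lookup step with a toy database: find compounds whose exact mass matches an observed [M+H]+ peak within a parts-per-million tolerance; the three entries and their approximate monoisotopic masses are given only as examples.

```python
# Minimal sketch of ppm-tolerance matching of an observed m/z against a compound database.
import pandas as pd

PROTON = 1.007276  # Da, mass of the added proton for [M+H]+

db = pd.DataFrame({
    "name": ["sibutramine", "sildenafil", "orlistat"],
    "monoisotopic_mass": [279.1754, 474.2049, 495.3924],   # approximate values, for illustration
})

def match(observed_mz, tolerance_ppm=5.0):
    neutral = observed_mz - PROTON
    ppm = 1e6 * (db["monoisotopic_mass"] - neutral).abs() / neutral
    return db[ppm <= tolerance_ppm]

print(match(280.1827))   # should return the sibutramine entry
```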
Moskovets, Eugene; Misharin, Alexander; Laiko, Viktor; Doroshenko, Vladimir
2016-07-15
A comparative MS study was conducted on the analytical performance of two matrix-assisted laser desorption/ionization (MALDI) sources that operated at either low pressure (∼1 Torr) or at atmospheric pressure. In both cases, the MALDI sources were attached to a linear ion trap mass spectrometer equipped with a two-stage ion funnel. The obtained results indicate that the limits of detection, in the analysis of identical peptide samples, were much lower with the source that was operated slightly below 1 Torr. In the low-pressure (LP) MALDI source, ion signals were observed at a laser fluence that was considerably lower than the one determining the appearance of ion signals in the atmospheric pressure (AP) MALDI source. When near-threshold laser fluences were used to record MALDI MS spectra at 1-Torr and 750-Torr pressures, the level of chemical noise at 1 Torr was much lower than that at AP. The dependence of the analyte ion signals on the accelerating field that dragged the ions from the MALDI plate to the MS analyzer is presented for the LP and AP MALDI sources. The study indicates that the laser fluence, background gas pressure, and field accelerating the ions away from the MALDI plate were the main parameters determining the ion yield, signal-to-noise (S/N) ratios, the fragmentation of the analyte ions, and adduct formation in the LP and AP MALDI MS methods. The presented results can be helpful for a deeper insight into the mechanisms responsible for ion formation in MALDI. Copyright © 2016 Elsevier Inc. All rights reserved.