ERIC Educational Resources Information Center
O'Connell, Susan; Croskey, Suzanne G.
2008-01-01
The National Council of Teachers of Mathematics' (NCTM's) Process Standards support teaching that helps students develop independent, effective mathematical thinking. The books in the Heinemann Math Process Standards Series give every middle grades math teacher the opportunity to explore each standard in depth. The series offers friendly,…
ERIC Educational Resources Information Center
Schackow, Joy Bronston; O'Connell, Susan
2008-01-01
The National Council of Teachers of Mathematics' (NCTM's) Process Standards support teaching that helps students develop independent, effective mathematical thinking. The books in the Heinemann Math Process Standards Series give every middle grades math teacher the opportunity to explore each standard in depth. The series offers friendly,…
Math Process Standards Series, Grades 3-5
ERIC Educational Resources Information Center
O'Connell, Susan, Ed.
2008-01-01
NCTM's Process Standards support teaching that helps upper elementary level children develop independent, effective mathematical thinking. The books in the Heinemann Math Process Standards Series give every intermediate-grades teacher the opportunity to explore each standard in depth. With language and examples that don't require prior math…
Math Process Standards Series, Grades PreK-2 [with CD-ROMs]
ERIC Educational Resources Information Center
O'Connell, Susan, Ed.
2007-01-01
The National Council of Teachers of Mathematics (NCTM)'s Process Standards support teaching that helps children develop independent, effective mathematical thinking. The books in the Heinemann Math Process Standards Series give every primary teacher the opportunity to explore each standard in depth. With language and examples that don't require…
CFD Process Pre- and Post-processing Automation in Support of Space Propulsion
NASA Technical Reports Server (NTRS)
Dorney, Suzanne M.
2003-01-01
The use of Computational Fluid Dynamics or CFD has become standard practice in the design and analysis of the major components used for space propulsion. In an attempt to standardize and improve the CFD process a series of automated tools have been developed. Through the use of these automated tools the application of CFD to the design cycle has been improved and streamlined. This paper presents a series of applications in which deficiencies were identified in the CFD process and corrected through the development of automated tools.
Borrowing as a Process in the Standardization of Language.
ERIC Educational Resources Information Center
Byron, Janet
This paper suggests that new approaches are needed in the study of language standardization. One such approach is the consideration of standardization in terms of processes, i.e., in terms of series of related events, rather than as a group of unrelated discrete happenings. Borrowing is one recurring feature in language standardization, and in…
2014-06-01
27000 series, COBIT, the British Standards Institution’s BS 25999, and ISO 24762 includes quantitative process measurements that can be used to...the NIST special publications 800 series, the International Organization for Standards (ISO) and International Electrotechnical Commission (IEC
The Timeseries Toolbox - A Web Application to Enable Accessible, Reproducible Time Series Analysis
NASA Astrophysics Data System (ADS)
Veatch, W.; Friedman, D.; Baker, B.; Mueller, C.
2017-12-01
The vast majority of data analyzed by climate researchers are repeated observations of physical processes, or time series data. These data lend themselves to a common set of statistical techniques and models designed to determine trends and variability (e.g., seasonality) of these repeated observations. Often, these same techniques and models can be applied to a wide variety of different time series data. The Timeseries Toolbox is a web application designed to standardize and streamline these common approaches to time series analysis and modeling, with particular attention to hydrologic time series used in climate preparedness and resilience planning and design by the U.S. Army Corps of Engineers. The application performs much of the pre-processing of time series data necessary for more complex techniques (e.g., interpolation, aggregation). With this tool, users can upload any dataset that conforms to a standard template and immediately begin applying these techniques to analyze their time series data.
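A minimal sketch of the kind of pre-processing the abstract mentions (gap interpolation and aggregation of a hydrologic time series) using pandas; the synthetic data, gap limit, and monthly resampling are illustrative assumptions, not the Timeseries Toolbox's actual interface.

```python
import numpy as np
import pandas as pd

# Synthetic daily stage observations with a short gap, indexed by date.
idx = pd.date_range("2015-01-01", periods=365, freq="D")
values = np.sin(2 * np.pi * np.arange(365) / 365) + np.random.normal(0, 0.1, 365)
series = pd.Series(values, index=idx)
series.iloc[40:45] = np.nan  # simulate missing observations

# Interpolate short gaps (up to 7 days), then aggregate to monthly means.
filled = series.interpolate(method="time", limit=7)
monthly = filled.resample("MS").mean()
print(monthly.head())
```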
ERIC Educational Resources Information Center
Mancevice, Nicole; Lozano, Maritza; Jones, Barbara; Tobiason, Glory; Heritage, Margaret; Chang, Sandy; Herman, Joan
2015-01-01
This resource is part of a series produced by the Center for Standards and Assessment Implementation (CSAI) to assist educators as they use College and Career Ready Standards (CCRS) to plan instruction for diverse learners. Although the processes described in this resource use the Common Core State Standards (CCSS; National Governors Association…
15 CFR 200.105 - Standard reference data.
Code of Federal Regulations, 2010 CFR
2010-01-01
... for application in energy, environment and health, industrial process design, materials durability... Institute of Physics, in the National Standard Reference Data System reports as the NSRDS-NIST series, and...
CERT Resilience Management Model (CERT-RMM) V1.1: NIST Special Publication Crosswalk Version 1
2011-11-01
Organization for Standards (ISO) and International Electrotechnical Commission (IEC) 27000 series, COBIT, the British Standards Institution’s BS 25999...and ISO 24762; includes quantitative process measurements that can be used to ensure operational resilience processes are performing as intended
About Hemispheric Differences in the Processing of Temporal Intervals
ERIC Educational Resources Information Center
Grondin, S.; Girard, C.
2005-01-01
The purpose of the present study was to identify differences between cerebral hemispheres for processing temporal intervals ranging from 0.9 to 1.4 s. The intervals to be judged were marked by series of brief visual signals located in the left or the right visual field. Series of three (two standards and one comparison) or five intervals (four…
ERIC Educational Resources Information Center
Schmitz, Bernhard; Wiese, Bettina S.
2006-01-01
The present study combines a standardized diary approach with time-series analysis methods to investigate the process of self-regulated learning. Based on a process-focused adaptation of Zimmerman's (2000) learning model, an intervention (consisting of four weekly training sessions) to increase self-regulated learning was developed. The diaries…
Interactive Digital Signal Processor
NASA Technical Reports Server (NTRS)
Mish, W. H.
1985-01-01
The Interactive Digital Signal Processor (IDSP) consists of a set of time series analysis "operators" based on various algorithms commonly used for digital signal analysis. Processing of a digital signal time series to extract information is usually achieved by applying a number of fairly standard operations. IDSP is an excellent teaching tool for demonstrating the application of time series operators to artificially generated signals.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-21
...-step calculation process to convert the time- series of costs and benefits into annualized values... costs and savings, for the time-series of costs and benefits using discount rates of three and seven... that the time-series of costs and benefits from which the annualized values were determined would be a...
NASA Astrophysics Data System (ADS)
Basri, Shuib; O'Connor, Rory V.
This paper is concerned with understanding the issues that affect the adoption of software process standards by Very Small Entities (VSEs), their needs from process standards, and their willingness to engage with the new ISO/IEC 29110 standard in particular. In order to achieve this goal, a series of industry data collection studies was undertaken with a collection of VSEs. A twin-track approach of qualitative data collection (interviews and focus groups) and quantitative data collection (questionnaire) was used. Data analysis was completed separately and the final results were merged using the coding mechanisms of grounded theory. This paper serves as a roadmap both for researchers wishing to understand the issues of process standards adoption by very small companies and for the software process standards community.
Using Microsoft Excel to Assess Standards: A "Techtorial". Article #2 in a 6-Part Series
ERIC Educational Resources Information Center
Mears, Derrick
2009-01-01
Standards-based assessment is a term currently being used quite often in educational reform discussions. The philosophy behind this initiative is to utilize "standards" or "benchmarks" to focus instruction and assessments of student learning. The National Standards for Physical Education (NASPE, 2004) provide a framework to guide this process for…
Molenaar, Peter C M
2008-01-01
It is argued that general mathematical-statistical theorems imply that standard statistical analysis techniques of inter-individual variation are invalid for investigating developmental processes. Developmental processes have to be analyzed at the level of individual subjects, using time series data characterizing the patterns of intra-individual variation. It is shown that standard statistical techniques based on the analysis of inter-individual variation appear to be insensitive to the presence of arbitrarily large degrees of inter-individual heterogeneity in the population. An important class of nonlinear epigenetic models of neural growth is described which can explain the occurrence of such heterogeneity in brain structures and behavior. Links with models of developmental instability are discussed. A simulation study based on a chaotic growth model illustrates the invalidity of standard analysis of inter-individual variation, whereas time series analysis of intra-individual variation is able to recover the true state of affairs. (c) 2007 Wiley Periodicals, Inc.
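To make the contrast concrete, here is a small simulation in the spirit of the abstract (not Molenaar's own model): subjects follow AR(1) processes with heterogeneous coefficients, a single cross-section across subjects reveals nothing about the dynamics, while per-subject time series analysis recovers each coefficient. The AR(1) form and sample sizes are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_time = 50, 500
phis = rng.uniform(0.1, 0.9, n_subjects)          # true per-subject dynamics

data = np.zeros((n_subjects, n_time))
for i, phi in enumerate(phis):
    for t in range(1, n_time):
        data[i, t] = phi * data[i, t - 1] + rng.normal()

# Inter-individual view: one cross-section, heterogeneity hides the dynamics.
print("cross-sectional SD at t = 250:", data[:, 250].std())

# Intra-individual view: lag-1 autocorrelation per subject recovers each phi.
estimates = np.array([np.corrcoef(x[:-1], x[1:])[0, 1] for x in data])
print("mean absolute error of per-subject estimates:", np.abs(estimates - phis).mean())
```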
Process for dezincing galvanized steel
Morgan, W.A.; Dudek, F.J.; Daniels, E.J.
1998-07-14
A process is described for removing zinc from galvanized steel. The galvanized steel is immersed in an electrolyte containing at least about 15% by weight of sodium or potassium hydroxide and having a temperature of at least about 75 C and the zinc is galvanically corroded from the surface of the galvanized steel. The material serving as the cathode is principally a material having a standard electrode potential which is intermediate of the standard electrode potentials of zinc and cadmium in the electrochemical series. The corrosion rate may be accelerated by (1) increasing the number density of corrosion sites in the galvanized steel by mechanically abrading or deforming the galvanized steel, (2) heating the galvanized steel to form an alloy of zinc on the surface of the galvanized steel, (3) mixing the galvanized steel with a material having a standard electrode potential which is intermediate of the standard electrode potentials of zinc and cadmium in the electrochemical series, or (4) moving the galvanized steel relative to itself and to the electrolyte while immersed in the electrolyte. 1 fig.
Process for dezincing galvanized steel
Morgan, William A.; Dudek, Frederick J.; Daniels, Edward J.
1998-01-01
A process for removing zinc from galvanized steel. The galvanized steel is immersed in an electrolyte containing at least about 15% by weight of sodium or potassium hydroxide and having a temperature of at least about 75 °C and the zinc is galvanically corroded from the surface of the galvanized steel. The material serving as the cathode is principally a material having a standard electrode potential which is intermediate of the standard electrode potentials of zinc and cadmium in the electrochemical series. The corrosion rate may be accelerated by (i) increasing the number density of corrosion sites in the galvanized steel by mechanically abrading or deforming the galvanized steel, (ii) heating the galvanized steel to form an alloy of zinc on the surface of the galvanized steel, (iii) mixing the galvanized steel with a material having a standard electrode potential which is intermediate of the standard electrode potentials of zinc and cadmium in the electrochemical series, or (iv) moving the galvanized steel relative to itself and to the electrolyte while immersed in the electrolyte.
NASA Astrophysics Data System (ADS)
Eberle, J.; Schmullius, C.
2017-12-01
Increasing archives of global satellite data present a new challenge: handling multi-source satellite data in a user-friendly way. Any user is confronted with different data formats and data access services. In addition, the handling of time-series data is complex, as automated processing and execution of data processing steps are needed to supply the user with the desired product for a specific area of interest. In order to simplify access to data archives of various satellite missions and to facilitate the subsequent processing, a regional data and processing middleware has been developed. The aim of this system is to provide standardized and web-based interfaces to multi-source time-series data for individual regions on Earth. For further use and analysis, uniform data formats and data access services are provided. Interfaces to data archives of the MODIS sensor (NASA) as well as the Landsat (USGS) and Sentinel (ESA) satellites have been integrated into the middleware. Various scientific algorithms, such as the calculation of trends and breakpoints of time-series data, can be carried out on the preprocessed data on the basis of uniform data management. Jupyter Notebooks are linked to the data, and further processing can be conducted directly on the server using Python and the statistical language R. In addition to accessing EO data, the middleware is also used as an intermediary between the user and external databases (e.g., Flickr, YouTube). Standardized web services as specified by OGC are provided for all tools of the middleware. Currently, the use of cloud services is being researched to bring algorithms to the data. As a thematic example, an operational monitoring of vegetation phenology is being implemented on the basis of various optical satellite data and validation data from the German Weather Service. Other examples demonstrate the monitoring of wetlands, focusing on automated discovery and access of Landsat and Sentinel data for local areas.
ISO 9000 Quality Systems: Application to Higher Education.
ERIC Educational Resources Information Center
Clery, Roger G.
This paper describes and explains the 20 elements of the International Organization for Standardization's ISO 9000 series, a model for quality assurance in the business processes of design/development, production, installation, and servicing. The standards were designed in 1987 to provide a common denominator for business quality, particularly to…
An introduction to chaotic and random time series analysis
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1989-01-01
The origin of chaotic behavior and the relation of chaos to randomness are explained. Two mathematical results are described: (1) a representation theorem guarantees the existence of a specific time-domain model for chaos and addresses the relation between chaotic, random, and strictly deterministic processes; (2) a theorem assures that information on the behavior of a physical system in its complete state space can be extracted from time-series data on a single observable. Focus is placed on an important connection between the dynamical state space and an observable time series. These two results lead to a practical deconvolution technique combining standard random process modeling methods with new embedded techniques.
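The second theorem mentioned above underlies delay embedding; the sketch below shows the generic reconstruction step on a scalar series from the logistic map. The map, embedding dimension, and lag are illustrative choices, not those of the paper.

```python
import numpy as np

# A chaotic scalar observable: the logistic map at r = 3.9.
x = np.empty(5000)
x[0] = 0.4
for t in range(1, len(x)):
    x[t] = 3.9 * x[t - 1] * (1.0 - x[t - 1])

def delay_embed(series, dim, lag):
    """Stack lagged copies of a scalar series to form state-space vectors."""
    n = len(series) - (dim - 1) * lag
    return np.column_stack([series[i * lag: i * lag + n] for i in range(dim)])

embedded = delay_embed(x, dim=3, lag=1)
print(embedded.shape)  # (4998, 3) points tracing the reconstructed attractor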
ERIC Educational Resources Information Center
Weaver, Kim M.
2005-01-01
In this unit, elementary students design and build a lunar plant growth chamber using the Engineering Design Process. The purpose of the unit is to help students understand and apply the design process as it relates to plant growth on the moon. This guide includes six lessons, which meet a number of national standards and benchmarks in…
Modified Confidence Intervals for the Mean of an Autoregressive Process.
1985-08-01
The first ... of standard confidence intervals. There are several standard methods of setting confidence intervals in simulations, including the regenerative method, batch means, and time series methods. We will focus on improved confidence intervals for the mean of an autoregressive process, and as such our
NASA Technical Reports Server (NTRS)
Jester, Peggy L.; Hancock, David W., III
1999-01-01
This document provides the Data Management Plan for the GLAS Standard Data Software (SDS) supporting the GLAS instrument of the EOS ICESat Spacecraft. The SDS encompasses the ICESat Science Investigator-led Processing System (I-SIPS) Software and the Instrument Support Facility (ISF) Software. This Plan addresses the identification, authority, and description of the interface nodes associated with the GLAS Standard Data Products and the GLAS Ancillary Data.
Modeling Geodetic Processes with Levy α-Stable Distribution and FARIMA
NASA Astrophysics Data System (ADS)
Montillet, Jean-Philippe; Yu, Kegen
2015-04-01
In recent years the scientific community has been using the auto-regressive moving average (ARMA) model to model the noise in global positioning system (GPS) time series (daily solutions). This work starts with an investigation of the limits of the ARMA model, which is widely used in signal processing when the measurement noise is white. Since a typical GPS time series consists of geophysical signals (e.g., seasonal signals) and stochastic processes (e.g., coloured and white noise), the ARMA model may be inappropriate. Therefore, the application of the fractional auto-regressive integrated moving average (FARIMA) model is investigated. The simulation results using simulated time series as well as real GPS time series from a few selected stations around Australia show that the FARIMA model fits the time series better than other models when the coloured noise is larger than the white noise. The second part of this work focuses on fitting the GPS time series with the family of Levy α-stable distributions. Using this distribution, a hypothesis test is developed to effectively eliminate coarse outliers from GPS time series, achieving better performance than using the rule of thumb of n standard deviations (with n chosen empirically).
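For illustration only, the snippet below contrasts the n-standard-deviation rule of thumb with a robust, median-based rule on a heavy-tailed series; it is a generic sketch of the outlier-screening problem, not the Levy α-stable hypothesis test developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
residuals = rng.standard_t(df=2, size=2000)      # heavy-tailed stand-in for GPS residuals

n = 3.0
sigma_rule = np.abs(residuals - residuals.mean()) > n * residuals.std()

mad = np.median(np.abs(residuals - np.median(residuals)))
mad_rule = np.abs(residuals - np.median(residuals)) > n * 1.4826 * mad

print("flagged by n-sigma rule:", sigma_rule.sum())
print("flagged by robust MAD rule:", mad_rule.sum())
```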
On Digital Simulation of Multicorrelated Random Processes and Its Applications. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Sinha, A. K.
1973-01-01
Two methods are described to simulate, on a digital computer, a set of correlated, stationary, and Gaussian time series with zero mean from the given matrix of power spectral densities and cross spectral densities. The first method is based upon trigonometric series with random amplitudes and deterministic phase angles. The random amplitudes are generated by using a standard random number generator subroutine. An example is given which corresponds to three components of wind velocities at two different spatial locations for a total of six correlated time series. In the second method, the whole process is carried out using the Fast Fourier Transform approach. This method gives more accurate results and works about twenty times faster for a set of six correlated time series.
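A compressed sketch of the second (FFT-based) idea: Fourier coefficients are drawn as complex Gaussians shaped by a Cholesky factor of the target cross-spectral matrix and then inverse-transformed. The spectral shapes, coherence value, and normalization below are illustrative assumptions, not the thesis's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
freqs = np.fft.rfftfreq(n)

# Target 2x2 cross-spectral density: two low-pass spectra with coherence 0.6.
s11 = 1.0 / (1.0 + (freqs / 0.05) ** 2)
s22 = 1.0 / (1.0 + (freqs / 0.10) ** 2)
s12 = 0.6 * np.sqrt(s11 * s22)

x_f = np.zeros((2, len(freqs)), dtype=complex)
for k in range(len(freqs)):
    S = np.array([[s11[k], s12[k]], [s12[k], s22[k]]])
    H = np.linalg.cholesky(S + 1e-12 * np.eye(2))   # jitter keeps S positive definite
    z = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
    x_f[:, k] = H @ z

x = np.fft.irfft(x_f, n=n, axis=1)                  # two correlated real-valued series
print(np.corrcoef(x[0], x[1])[0, 1])
```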
Effectiveness of the Department of Defense Information Assurance Accreditation Process
2013-03-01
meeting the requirements of ISO 27001, Information Security Management System. ISO 27002 provides “security techniques” or best practices that can be...efforts to the next level and implement a recognized standard such as the International Organization for Standards (ISO) 27000 Series of standards...implemented by an organization as part of their certification effort. Most likely, the main motivation a company would have for achieving an ISO
Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.
Malkin, Zinovy
2016-04-01
The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. Over the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
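For readers unfamiliar with AVAR, a minimal unweighted, one-dimensional version for an evenly sampled series is sketched below; the weighted and multidimensional modifications reviewed in the paper are not implemented here.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance at averaging factor m for an evenly sampled series."""
    n = len(y) // m
    means = y[: n * m].reshape(n, m).mean(axis=1)   # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(3)
series = np.cumsum(rng.normal(size=10000)) * 1e-3 + rng.normal(size=10000)
for m in (1, 10, 100, 1000):
    print(m, allan_variance(series, m))
```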
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
Systematic review of the methodological and reporting quality of case series in surgery.
Agha, R A; Fowler, A J; Lee, S-Y; Gundogan, B; Whitehurst, K; Sagoo, H K; Jeong, K J L; Altman, D G; Orgill, D P
2016-09-01
Case series are an important and common study type. No guideline exists for reporting case series and there is evidence of key data being missed from such reports. The first step in the process of developing a methodologically sound reporting guideline is a systematic review of literature relevant to the reporting deficiencies of case series. A systematic review of methodological and reporting quality in surgical case series was performed. The electronic search strategy was developed by an information specialist and included MEDLINE, Embase, Cochrane Methods Register, Science Citation Index and Conference Proceedings Citation index, from the start of indexing to 5 November 2014. Independent screening, eligibility assessments and data extraction were performed. Included articles were then analysed for five areas of deficiency: failure to use standardized definitions, missing or selective data (including the omission of whole cases or important variables), transparency or incomplete reporting, whether alternative study designs were considered, and other issues. Database searching identified 2205 records. Through the process of screening and eligibility assessments, 92 articles met inclusion criteria. Frequencies of methodological and reporting issues identified were: failure to use standardized definitions (57 per cent), missing or selective data (66 per cent), transparency or incomplete reporting (70 per cent), whether alternative study designs were considered (11 per cent) and other issues (52 per cent). The methodological and reporting quality of surgical case series needs improvement. The data indicate that evidence-based guidelines for the conduct and reporting of case series may be useful. © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.
PRISM software—Processing and review interface for strong-motion data
Jones, Jeanne M.; Kalkan, Erol; Stephens, Christopher D.; Ng, Peter
2017-11-28
Rapidly available and accurate ground-motion acceleration time series (seismic recordings) and derived data products are essential to quickly providing scientific and engineering analysis and advice after an earthquake. To meet this need, the U.S. Geological Survey National Strong Motion Project has developed a software package called PRISM (Processing and Review Interface for Strong-Motion data). PRISM automatically processes strong-motion acceleration records, producing compatible acceleration, velocity, and displacement time series; acceleration, velocity, and displacement response spectra; Fourier amplitude spectra; and standard earthquake-intensity measures. PRISM is intended to be used by strong-motion seismic networks, as well as by earthquake engineers and seismologists.
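A rough sketch of the core derivation step described above (velocity and displacement from an acceleration record by numerical integration), assuming a synthetic record; PRISM's actual processing also applies filtering, baseline correction, and quality checks that are omitted here.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

dt = 0.01                                        # 100 samples per second
t = np.arange(0, 40, dt)
accel_g = 0.05 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.1 * t)   # synthetic record, in g

accel = accel_g * 9.81                           # convert to m/s^2
velocity = cumulative_trapezoid(accel, t, initial=0.0)
displacement = cumulative_trapezoid(velocity, t, initial=0.0)
print(velocity[-1], displacement[-1])
```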
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-08-01
Documents relevant to the development and implementation of the California energy insulation standards for new residential buildings were evaluated and a survey was conducted to determine problems encountered in the implementation, enforcement, and design aspects of the standards. The impact of the standards on enforcement agencies, designers, builders and developers, manufacturers and suppliers, consumers, and the building process in general is summarized. The impact on construction costs and energy savings varies considerably because of the wide variation in prior insulation practices and climatic conditions in California. The report concludes with a series of recommendations covering all levels of government and the building process. (MCW)
ERIC Educational Resources Information Center
Maerlender, Arthur
2010-01-01
Auditory processing disorders (APDs) are of interest to educators and clinicians, as they impact school functioning. Little work has been completed to demonstrate how children with APDs perform on clinical tests. In a series of studies, standard clinical (psychometric) tests from the Wechsler Intelligence Scale for Children, Fourth Edition…
Facilitation of learning: part 1.
Warburton, Tyler; Houghton, Trish; Barry, Debbie
2016-04-06
This article, the fourth in a series of 11, discusses the context for the facilitation of learning. It outlines the main principles and theories for understanding the process of learning, including examples which link these concepts to practice. The practical aspects of using these theories in a practice setting will be discussed in the fifth article of this series. Together, these two articles will provide mentors and practice teachers with knowledge of the learning process, which will enable them to meet the second domain of the Nursing and Midwifery Council's Standards to Support Learning and Assessment in Practice on facilitation of learning.
Terry, Ellen L; France, Christopher R; Bartley, Emily J; Delventura, Jennifer L; Kerr, Kara L; Vincent, Ashley L; Rhudy, Jamie L
2011-09-01
Temporal summation of pain (TS-pain) is the progressive increase in pain ratings during a series of noxious stimulations. TS-pain has been used to make inferences about sensitization of spinal nociceptive processes; however, pain report can be biased thereby leading to problems with this inference. Temporal summation of the nociceptive flexion reflex (TS-NFR, a physiological measure of spinal nociception) can potentially overcome report bias, but there have been few attempts (generally with small Ns) to standardize TS-NFR procedures. In this study, 50 healthy participants received 25 series of noxious electric stimulations to evoke TS-NFR and TS-pain. Goals were to: 1) determine the stimulation frequency that best elicits TS-NFR and reduces electromyogram (EMG) contamination from muscle tension, 2) determine the minimum number of stimulations per series before NFR summation asymptotes, 3) compare NFR definition intervals (90-150ms vs. 70-150ms post-stimulation), and 4) compare TS-pain and TS-NFR when different stimulation frequencies are used. Results indicated TS-NFR should be elicited by a series of three stimuli delivered at 2.0Hz and TS-NFR should be defined from a 70-150ms post-stimulation scoring interval. Unfortunately, EMG contamination from muscle tension was greatest during 2.0Hz series. Discrepancies were noted between TS-NFR and TS-pain which raise concerns about using pain ratings to infer changes in spinal nociceptive processes. And finally, some individuals did not have reliable NFRs when the stimulation intensity was set at NFR threshold during TS-NFR testing; therefore, a higher intensity is needed. Implications of findings are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
2018-02-01
All papers published in this volume of Journal of Physics: Conference Series have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
NASA Astrophysics Data System (ADS)
2018-04-01
All papers published in this volume of Journal of Physics: Conference Series have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
NASA Astrophysics Data System (ADS)
2018-05-01
All papers published in this volume of Journal of Physics: Conference Series have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
NASA Astrophysics Data System (ADS)
2018-03-01
All papers published in this volume of Journal of Physics: Conference Series have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
Time series behaviour of the number of Air Asia passengers: A distributional approach
NASA Astrophysics Data System (ADS)
Asrah, Norhaidah Mohd; Djauhari, Maman Abdurachman
2013-09-01
The common practice in time series analysis is to fit a model and then conduct further analysis on the residuals. However, if we know the distributional behaviour of the time series, the analyses in model identification, parameter estimation, and model checking are more straightforward. In this paper, we show that the number of Air Asia passengers can be represented as a geometric Brownian motion process. Therefore, instead of using the standard approach to model fitting, we use an appropriate transformation to come up with a stationary, normally distributed, and even independent time series. An example of forecasting the number of Air Asia passengers is given to illustrate the advantages of the method.
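A minimal check of the geometric Brownian motion assumption, using synthetic monthly counts in place of the Air Asia data: under GBM the log-returns should be approximately normal and serially independent. The sample size and parameters are assumptions made for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 120                                           # e.g., monthly observations
log_returns_true = rng.normal(0.01, 0.05, n)      # GBM increments
passengers = 1e6 * np.exp(np.cumsum(log_returns_true))

log_ret = np.diff(np.log(passengers))
print("normality p-value:", stats.shapiro(log_ret).pvalue)
print("lag-1 autocorrelation:", np.corrcoef(log_ret[:-1], log_ret[1:])[0, 1])
```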
Minimum entropy density method for the time series analysis
NASA Astrophysics Data System (ADS)
Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae
2009-01-01
The entropy density is an intuitive and powerful concept to study the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, which is defined as the scale in which the uncertainty is minimized, hence the pattern is revealed most. The MEDM is applied to the financial time series of Standard and Poor’s 500 index from February 1983 to April 2006. Then the temporal behavior of structure scale is obtained and analyzed in relation to the information delivery time and efficient market hypothesis.
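As a rough illustration of the underlying quantity, the sketch below computes block entropy per symbol for an up/down symbolized return series at several block lengths; the paper's minimum entropy density method uses a specific definition of entropy density and structure scale that this toy computation only approximates.

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(5)
returns = rng.normal(size=20000)
symbols = (returns > 0).astype(int)                # 1 = up move, 0 = down move

def block_entropy(s, L):
    """Shannon entropy (bits) of length-L blocks of a symbol sequence."""
    blocks = [tuple(s[i:i + L]) for i in range(len(s) - L + 1)]
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

for L in (1, 2, 4, 8):
    print(L, block_entropy(symbols, L) / L)        # entropy per symbol at scale L
```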
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
A primer on standards setting as it applies to surgical education and credentialing.
Cendan, Juan; Wier, Daryl; Behrns, Kevin
2013-07-01
Surgical technological advances in the past three decades have led to dramatic reductions in the morbidity associated with abdominal procedures and permanently altered the surgical practice landscape. Significant changes continue apace including surgical robotics, natural orifice-based surgery, and single-incision approaches. These disruptive technologies have on occasion been injurious to patients, and high-stakes assessment before adoption of new technologies would be reasonable. We reviewed the drivers for well-established psychometric techniques available for the standards-setting process. We present a series of examples that are relevant in the surgical domain including standards setting for knowledge and skills assessments. Defensible standards for knowledge and procedural skills will likely become part of surgical clinical practice. Understanding the methodology for determining standards should position the surgical community to assist in the process and lead within their clinical settings as standards are considered that may affect patient safety and physician credentialing.
NASA Astrophysics Data System (ADS)
2018-03-01
All papers published in this volume of IOP Conference Series: Earth and Environmental Science have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
NASA Astrophysics Data System (ADS)
2018-05-01
All papers published in this volume of IOP Conference Series: Materials Science and Engineering have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
NASA Astrophysics Data System (ADS)
2017-11-01
All papers published in this volume of IOP Conference Series: Materials Science and Engineering have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
NASA Astrophysics Data System (ADS)
2017-10-01
All papers published in this volume of IOP Conference Series: Materials Science and Engineering have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
NASA Astrophysics Data System (ADS)
2017-09-01
All papers published in this volume of IOP Conference Series: Materials Science and Engineering have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
NASA Astrophysics Data System (ADS)
2018-02-01
All papers published in this volume of IOP Conference Series: Materials Science and Engineering have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
NASA Astrophysics Data System (ADS)
2017-12-01
All papers published in this volume of IOP Conference Series: Materials Science and Engineering have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
NASA Astrophysics Data System (ADS)
2018-03-01
All papers published in this volume of IOP Conference Series: Materials Science and Engineering have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
Food Production and Processing Considerations of Allergenic Food Ingredients: A Review
Alvarez, Pedro A.; Boye, Joyce I.
2012-01-01
Although most consumers show no adverse symptoms to food allergens, health consequences for sensitized individuals can be very serious. As a result, the Codex General Standard for the Labelling of Prepackaged Foods has specified a series of allergenic ingredients/substances requiring mandatory declaration when present in processed prepackaged food products. Countries adhering to international standards are required to observe this minimum of eight substances, but additional priority allergens are included in the list in some countries. Enforcement agencies have traditionally focused their effort on surveillance of prepackaged goods, but there is a growing need to apply a bottom-up approach to allergen risk management in food manufacturing starting from primary food processing operations in order to minimize the possibility of allergen contamination in finished products. The present paper aims to review food production considerations that impact allergen risk management, and it is directed mainly to food manufacturers and policy makers. Furthermore, a series of food ingredients and the allergenic fractions identified from them, as well as the current methodology used for detection of these allergenic foods, is provided. PMID:22187573
Mutual information estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view of sparse datasets. The irregular sampling of many of these time series, however, makes it necessary to either perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but this would not work should the functional relationship in a bivariate setting be non-linear. We therefore propose an algorithm to estimate lagged auto and cross mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian distributed weights in the estimation of the (joint) probabilities. To test our method we have simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We have then performed a sensitivity analysis for the estimation of actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series and contrast our results to the performance of a signal reconstruction scheme. Finally we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross) mutual information results are compared for the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction. We find that the kernel-based estimator has a significantly lower root mean square error and less systematic sampling bias than the interpolation-based method. It is possible that these encouraging results could be further improved by using non-histogram mutual information estimators, like k-Nearest Neighbor or Kernel-Density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.
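To illustrate the weighting principle only (the paper's estimator targets mutual information with adapted histogram binning), the sketch below computes a Gaussian-kernel-weighted lagged correlation between two irregularly sampled series; the kernel width, lag, and test signals are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 500, 400))             # irregular sampling times
x = np.sin(2 * np.pi * t / 50) + 0.3 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * (t - 5) / 50) + 0.3 * rng.normal(size=t.size)

def kernel_corr(tx, x, ty, y, lag, h):
    """Weighted Pearson correlation, weighting pairs by closeness of their time offset to lag."""
    dt = ty[None, :] - tx[:, None] - lag           # all pairwise time offsets
    w = np.exp(-0.5 * (dt / h) ** 2)
    wx, wy = np.broadcast_arrays(x[:, None], y[None, :])
    wsum = w.sum()
    mx, my = (w * wx).sum() / wsum, (w * wy).sum() / wsum
    cov = (w * (wx - mx) * (wy - my)).sum() / wsum
    sx = np.sqrt((w * (wx - mx) ** 2).sum() / wsum)
    sy = np.sqrt((w * (wy - my) ** 2).sum() / wsum)
    return cov / (sx * sy)

print(kernel_corr(t, x, t, y, lag=5.0, h=1.0))
```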
Using Process and Inquiry to Teach Content: Projectile Motion and Graphing
ERIC Educational Resources Information Center
Rhea, Marilyn; Lucido, Patricia; Gregerson-Malm, Cheryl
2005-01-01
This series of lessons uses the process of student inquiry to teach the concepts of force and motion identified in the National Science Education Standards for grades 5-8. The lesson plan also uses technology as a teaching tool through the use of interactive Web sites. The lessons are built on the 5-E format and feature embedded assessments.
NASA atomic hydrogen standards program: An update
NASA Technical Reports Server (NTRS)
Reinhardt, V. S.; Kaufmann, D. C.; Adams, W. A.; Deluca, J. J.; Soucy, J. L.
1976-01-01
Comparisons are made between the NP series and the NX series of hydrogen masers. A field-operable hydrogen maser (NR series) is also described. Atomic hydrogen primary frequency standards are in the development stage; work is under way on a hydrogen beam frequency standard and on a concertina hydrogen maser.
40 CFR 439.31 - Special definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... STANDARDS (CONTINUED) PHARMACEUTICAL MANUFACTURING POINT SOURCE CATEGORY Chemical Synthesis Products § 439.31 Special definitions. For the purpose of this subpart: (a) Chemical synthesis means using one or a series of chemical reactions in the manufacturing process of a specified product. (b) Product means any...
40 CFR 439.31 - Special definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STANDARDS (CONTINUED) PHARMACEUTICAL MANUFACTURING POINT SOURCE CATEGORY Chemical Synthesis Products § 439.31 Special definitions. For the purpose of this subpart: (a) Chemical synthesis means using one or a series of chemical reactions in the manufacturing process of a specified product. (b) Product means any...
40 CFR 439.31 - Special definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STANDARDS (CONTINUED) PHARMACEUTICAL MANUFACTURING POINT SOURCE CATEGORY Chemical Synthesis Products § 439.31 Special definitions. For the purpose of this subpart: (a) Chemical synthesis means using one or a series of chemical reactions in the manufacturing process of a specified product. (b) Product means any...
The European Standard Series and its additions: are they of any use in 2013?
Castelain, Michel; Assier, Haudrey; Baeck, Marie; Bara, Corina; Barbaud, Annick; Castelain, Florence; Felix, Brigitte; Ferrie Le Bouedec, Marie Christine; Frick, Christian; Girardin, Pascal; Jacobs, Marie Claude; Jelen, Gilbert; Lartigaud, Isabelle; Raison-Peyron, Nadia; Tennstedt, Dominique; Tetard, Florence; Vigan, Martine; Waton, Julie
2014-01-01
This study has two purposes: (i) to determine whether the European standard series is still the key reference when it comes to contact dermatitis, i.e., whether its components are still the most frequently involved allergens in contact dermatitis nowadays; and (ii) to assess the results of the European standard series among French and Belgian dermatologists/allergists as, so far, most of them have failed to provide statistical data within the European community of allergists/dermatologists. 18 participants from 2 dermatology and allergy centres in Belgium and 11 centres in France collected their results from 3,073 patients tested in 2011. They assessed the relevance of some tests as well as that of the standard series and additional series to establish an etiological diagnosis of contact dermatitis. These results, together with the history of the European standard series, have shown that some allergens are obsolete and that others should be included in a new standard series, for which we make a few suggestions.
Higuchi, Toshihiro
2015-10-01
Radiation protection standards for the general population have constituted one of the most controversial subjects in the history of atomic energy uses. This paper reexamines the process in which the first such standards evolved in the early postwar period. While the existing literature has emphasized a "collusion" between the standard-setters and users, the paper seeks to examine the horizontal relationship among the standard-setters. It first examines a series of expert consultations between the United States and the United Kingdom. Representing a different configuration of power and interest, the two failed to agree on the assessment of genetic damage and cancer induction whose occurrence might have no threshold and therefore be dependent on the population size. This stalemate prevented the International Commission on Radiological Protection (ICRP), established in 1950, from formulating separate guidelines for the general public. Situations radically changed when the Bikini incident in 1954 led to the creation of more scientific panels. One such panel under the U.S. Academy of Sciences enabled the geneticists to bridge their internal divide, unanimously naming 100 mSv as the genetically permissible dose for the general population. Not to be outdone, ICRP publicized its own guidelines for the same purpose. The case examined in this paper shows that the standard-setting process is best understood as a series of "epistemic negotiations" among and within the standard-setters, whose agendas were determined from the outset but whose outcomes were not.
NASA Technical Reports Server (NTRS)
Schoenfeld, A. D.; Yu, Y.
1973-01-01
Versatile standardized pulse modulation nondissipatively regulated control signal processing circuits were applied to three most commonly used dc to dc power converter configurations: (1) the series switching buck-regulator, (2) the pulse modulated parallel inverter, and (3) the buck-boost converter. The unique control concept and the commonality of control functions for all switching regulators have resulted in improved static and dynamic performance and control circuit standardization. New power-circuit technology was also applied to enhance reliability and to achieve optimum weight and efficiency.
40 CFR 439.31 - Special definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS PHARMACEUTICAL MANUFACTURING POINT SOURCE CATEGORY Chemical Synthesis Products § 439.31 Special definitions. For the purpose of this subpart: (a) Chemical synthesis means using one or a series of chemical reactions in the manufacturing process of a specified product. (b) Product means any pharmaceutical product...
40 CFR 439.31 - Special definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS PHARMACEUTICAL MANUFACTURING POINT SOURCE CATEGORY Chemical Synthesis Products § 439.31 Special definitions. For the purpose of this subpart: (a) Chemical synthesis means using one or a series of chemical reactions in the manufacturing process of a specified product. (b) Product means any pharmaceutical product...
77 FR 50112 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-20
... the UPD helps to protect the integrity of ACF's award selection process. All ACF discretionary grant... instructions; the Standard Form 424 series, which requests basic information, budget information, and... Planning, Research and Evaluation, 370 L'Enfant Promenade SW., Washington, DC 20447, Attn: ACF Reports...
Complex-valued time-series correlation increases sensitivity in FMRI analysis.
Kociuba, Mary C; Rowe, Daniel B
2016-07-01
To develop a linear matrix representation of correlation between complex-valued (CV) time-series in the temporal Fourier frequency domain, and demonstrate its increased sensitivity over correlation between magnitude-only (MO) time-series in functional MRI (fMRI) analysis. The standard in fMRI is to discard the phase before the statistical analysis of the data, despite evidence of task related change in the phase time-series. With a real-valued isomorphism representation of Fourier reconstruction, correlation is computed in the temporal frequency domain with CV time-series data, rather than with the standard of MO data. A MATLAB simulation compares the Fisher-z transform of MO and CV correlations for varying degrees of task related magnitude and phase amplitude change in the time-series. The increased sensitivity of the complex-valued Fourier representation of correlation is also demonstrated with experimental human data. Since the correlation description in the temporal frequency domain is represented as a summation of second order temporal frequencies, the correlation is easily divided into experimentally relevant frequency bands for each voxel's temporal frequency spectrum. The MO and CV correlations for the experimental human data are analyzed for four voxels of interest (VOIs) to show the framework with high and low contrast-to-noise ratios in the motor cortex and the supplementary motor cortex. The simulation demonstrates the increased strength of CV correlations over MO correlations for low magnitude contrast-to-noise time-series. In the experimental human data, the MO correlation maps are noisier than the CV maps, and it is more difficult to distinguish the motor cortex in the MO correlation maps after spatial processing. Including both magnitude and phase in the spatial correlation computations more accurately defines the correlated left and right motor cortices. Sensitivity in correlation analysis is important to preserve the signal of interest in fMRI data sets with high noise variance, and avoid excessive processing induced correlation. Copyright © 2016 Elsevier Inc. All rights reserved.
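A toy time-domain illustration of the central point, not the paper's temporal-frequency framework: when a simulated task modulates only the phase of a complex-valued signal, the magnitude-only correlation is near zero while a complex-valued correlation remains strong. The signal model and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
task = np.tile([0.0, 1.0], n // 2)                     # boxcar reference

phase = 0.2 * task + 0.02 * rng.normal(size=n)         # phase follows the task
signal = np.exp(1j * phase) + 0.02 * (rng.normal(size=n) + 1j * rng.normal(size=n))

def complex_corr(x, ref):
    """Magnitude of the complex correlation between series x and a real reference."""
    xc, rc = x - x.mean(), ref - ref.mean()
    num = np.sum(xc * np.conj(rc))
    den = np.sqrt(np.sum(np.abs(xc) ** 2) * np.sum(rc ** 2))
    return np.abs(num / den)

mo = np.corrcoef(np.abs(signal), task)[0, 1]            # magnitude-only correlation
cv = complex_corr(signal, task)
print("magnitude-only r:", mo, " complex-valued |r|:", cv)
```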
Time Series Forecasting of the Number of Malaysia Airlines and AirAsia Passengers
NASA Astrophysics Data System (ADS)
Asrah, N. M.; Nor, M. E.; Rahim, S. N. A.; Leng, W. K.
2018-04-01
The standard practice in forecasting involves fitting a model and then further analysing the residuals. If we know the distributional behaviour of the time series data, it can help us to directly address model identification, parameter estimation, and model checking. In this paper, we compare the distributional behaviour of the data on the numbers of Malaysia Airlines (MAS) and AirAsia passengers. From previous research, the AirAsia passenger numbers are governed by geometric Brownian motion (GBM): the data were normally distributed, stationary, and independent. GBM was therefore used to forecast the number of AirAsia passengers. The same methods were applied to the MAS data and the results were then compared. Unfortunately, the MAS data are not governed by GBM, so the standard approach to time series forecasting was applied to the MAS data. From this comparison, we can conclude that the numbers of AirAsia passengers are consistently at peak-season levels, unlike those of MAS passengers.
77 FR 21778 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-11
... the integrity of ACF's award selection process. All ACF discretionary grant programs are required to... Standard Form 424 series, which requests basic information, budget information, and assurances; the Project... Administration for Children and Families, Office of Planning, Research and Evaluation, 370 L'Enfant Promenade SW...
He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian
2015-02-01
Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images have important significance for image reading and diagnosis. As a part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on the series images. This paper analyses the technical features of three-dimensional post-processing operations on volume data, and then designs and implements a web service system for three-dimensional post-processing operations on medical images based on the WADO protocol. In order to improve the scalability of the proposed system, the business tasks and calculation operations were separated into two modules. The results proved that the proposed system could support a three-dimensional post-processing service of medical images for multiple clients at the same moment, which meets the demand for accessing three-dimensional post-processing operations on volume data on the web.
Statistical tests for power-law cross-correlated processes
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene
2011-12-01
For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlations analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically calculated the Cauchy inequality -1≤ρDCCA(T,n)≤1. Here we derive -1≤ρDCCA(T,n)≤1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine—and for nonoverlapping windows we derive—that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
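A minimal sketch of ρDCCA using non-overlapping windows and linear detrending of the integrated profiles; the overlapping-window case and the significance thresholds discussed in the paper are omitted, and the window size is an illustrative choice.

```python
import numpy as np

def rho_dcca(x, y, n):
    """Detrended cross-correlation coefficient with non-overlapping windows of size n."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())   # integrated profiles
    covs, varx, vary = [], [], []
    for start in range(0, len(X) - n + 1, n):
        t = np.arange(n)
        xs, ys = X[start:start + n], Y[start:start + n]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)          # detrend each window
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        covs.append(np.mean(rx * ry))
        varx.append(np.mean(rx ** 2))
        vary.append(np.mean(ry ** 2))
    return np.mean(covs) / np.sqrt(np.mean(varx) * np.mean(vary))

rng = np.random.default_rng(8)
common = rng.normal(size=5000)
x = common + rng.normal(size=5000)
y = common + rng.normal(size=5000)
print(rho_dcca(x, y, n=100))
```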
Processing of meteorological data with ultrasonic thermoanemometers
NASA Astrophysics Data System (ADS)
Telminov, A. E.; Bogushevich, A. Ya.; Korolkov, V. A.; Botygin, I. A.
2017-11-01
The article describes a software system intended to support scientific research on the atmosphere during the processing of data gathered by multi-level ultrasonic complexes for automated monitoring of meteorological and turbulent parameters in the ground layer of the atmosphere. The system allows the user to process files containing data sets of instantaneous values of temperature, the three orthogonal components of wind speed, humidity, and pressure. The processing task is executed in multiple stages. During the first stage, the system executes the researcher's query for meteorological parameters. At the second stage, the system computes a series of standard statistical properties of the meteorological fields, such as averages, dispersion, standard deviation, skewness and excess kurtosis coefficients, correlation, etc. The third stage prepares for computing the parameters of atmospheric turbulence. The computation results are displayed to the user and stored on the hard drive.
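A hedged sketch of the second processing stage described above, computing the standard statistics of a meteorological series with NumPy and SciPy; the variable names and synthetic data are illustrative, not the system's actual interface.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
temperature = 15.0 + 2.0 * rng.normal(size=8640)     # e.g., one value every 0.1 s for ~14 min

print("mean:", temperature.mean())
print("dispersion (variance):", temperature.var(ddof=1))
print("standard deviation:", temperature.std(ddof=1))
print("skewness:", stats.skew(temperature))
print("excess kurtosis:", stats.kurtosis(temperature))
```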
NASA Astrophysics Data System (ADS)
Wibawa, Teja A.; Lehodey, Patrick; Senina, Inna
2017-02-01
Geo-referenced catch and fishing effort data of the bigeye tuna fisheries in the Indian Ocean over 1952-2014 were analyzed and standardized to facilitate population dynamics modeling studies. During this 62-year historical period of exploitation, many changes occurred both in the fishing techniques and the monitoring of activity. This study includes a series of processing steps used for standardization of spatial resolution, conversion and standardization of catch and effort units, raising of geo-referenced catch into nominal catch level, screening and correction of outliers, and detection of major catchability changes over long time series of fishing data, i.e., the Japanese longline fleet operating in the tropical Indian Ocean. A total of 30 fisheries were finally determined from longline, purse seine and other-gears data sets, from which 10 longline and 4 purse seine fisheries represented 96 % of the whole historical geo-referenced catch. Nevertheless, one-third of total nominal catch is still not included due to a total lack of geo-referenced information and would need to be processed separately, according to the requirements of the study. The geo-referenced records of catch, fishing effort and associated length frequency samples of all fisheries are available at doi:10.1594/PANGAEA.864154.
ERIC Educational Resources Information Center
Shore, Felice S.; Pascal, Matthew
2008-01-01
This article describes several distinct approaches taken by preservice elementary teachers to solving a classic rate problem. Their approaches incorporate a variety of mathematical concepts, ranging from proportions to infinite series, and illustrate the power of all five NCTM Process Standards. (Contains 8 figures.)
Systems Management of Air Force Standard Communications-Computer systems: There is a Better Way
1988-04-01
upgrade or replacement of systems. AFR 700-6, Information Systems Operation Management, AFR 700-7, Information Processing Center Operations Management...and AFR 700-8, Telephone Systems Operation Management provide USAF guidance, policy and procedures governing this phase. 2. 800-Series Regulations
The report is one in a six-volume series considering abnormal operating conditions (AOCs) in the primary section (sintering, blast furnace ironmaking, open hearth, electric furnace, and basic oxygen steelmaking) of an integrated iron and steel plant. Pollution standards, generall...
NCATE: Does it Matter? Research Series No. 92.
ERIC Educational Resources Information Center
Wheeler, Christopher W.
This study of the National Council for Accreditation of Teacher Education (NCATE) examines how NCATE applies its standards and the effect of its process on the quality of programs in professional education. The accreditation procedures are examined and criticism is leveled at the prevalence of an evaluation approach that frequently examines…
IDSP- INTERACTIVE DIGITAL SIGNAL PROCESSOR
NASA Technical Reports Server (NTRS)
Mish, W. H.
1994-01-01
The Interactive Digital Signal Processor, IDSP, consists of a set of time series analysis "operators" based on the various algorithms commonly used for digital signal analysis work. The processing of a digital time series to extract information is usually achieved by the application of a number of fairly standard operations. However, it is often desirable to "experiment" with various operations and combinations of operations to explore their effect on the results. IDSP is designed to provide an interactive and easy-to-use system for this type of digital time series analysis. The IDSP operators can be applied in any sensible order (even recursively), and can be applied to single time series or to simultaneous time series. IDSP is being used extensively to process data obtained from scientific instruments onboard spacecraft. It is also an excellent teaching tool for demonstrating the application of time series operators to artificially-generated signals. IDSP currently includes over 43 standard operators. Processing operators provide for Fourier transformation operations, design and application of digital filters, and Eigenvalue analysis. Additional support operators provide for data editing, display of information, graphical output, and batch operation. User-developed operators can be easily interfaced with the system to provide for expansion and experimentation. Each operator application generates one or more output files from an input file. The processing of a file can involve many operators in a complex application. IDSP maintains historical information as an integral part of each file so that the user can display the operator history of the file at any time during an interactive analysis. IDSP is written in VAX FORTRAN 77 for interactive or batch execution and has been implemented on a DEC VAX-11/780 operating under VMS. The IDSP system generates graphics output for a variety of graphics systems. The program requires the use of Versaplot and Template plotting routines and IMSL Math/Library routines. These software packages are not included in IDSP. The virtual memory requirement for the program is approximately 2.36 MB. The IDSP system was developed in 1982 and was last updated in 1986. Versaplot is a registered trademark of Versatec Inc. Template is a registered trademark of Template Graphics Software Inc. IMSL Math/Library is a registered trademark of IMSL Inc.
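The operator-chaining idea can be sketched as follows; the operator names and the Python/SciPy implementation are illustrative stand-ins for IDSP's FORTRAN operators, not the actual system:

```python
# Sketch: time-series "operators" applied in any sensible order, each carrying
# an operator history with its output, in the spirit of IDSP.
import numpy as np
from scipy import signal

class Series:
    def __init__(self, data, dt, history=()):
        self.data = np.asarray(data, dtype=float)
        self.dt = dt                      # sample interval in seconds
        self.history = list(history)      # operator history carried with the series

def op_detrend(s):
    return Series(signal.detrend(s.data), s.dt, s.history + ["detrend"])

def op_lowpass(s, cutoff_hz, order=4):
    sos = signal.butter(order, cutoff_hz, btype="low", fs=1.0 / s.dt, output="sos")
    return Series(signal.sosfiltfilt(sos, s.data), s.dt, s.history + [f"lowpass({cutoff_hz} Hz)"])

def op_spectrum(s):
    freqs, psd = signal.welch(s.data, fs=1.0 / s.dt)
    return freqs, psd, s.history + ["welch_spectrum"]

# Operators chained as in an interactive session
t = np.arange(0, 600, 0.5)
raw = Series(np.sin(2 * np.pi * 0.01 * t) + 0.1 * t
             + np.random.default_rng(2).standard_normal(t.size), dt=0.5)
freqs, psd, history = op_spectrum(op_lowpass(op_detrend(raw), cutoff_hz=0.05))
print(history)
```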
Joint Seasonal ARMA Approach for Modeling of Load Forecast Errors in Planning Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.
2014-04-14
To make informed and robust decisions in the probabilistic power system operation and planning process, it is critical to conduct multiple simulations of the generated combinations of wind and load parameters and their forecast errors to handle the variability and uncertainty of these time series. In order for the simulation results to be trustworthy, the simulated series must preserve the salient statistical characteristics of the real series. In this paper, we analyze day-ahead load forecast error data from multiple balancing authority locations and characterize statistical properties such as mean, standard deviation, autocorrelation, correlation between series, time-of-day bias, and time-of-day autocorrelation. We then construct and validate a seasonal autoregressive moving average (ARMA) model to model these characteristics, and use the model to jointly simulate day-ahead load forecast error series for all BAs.
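A hedged sketch of the modeling step, assuming hourly data with a 24-hour seasonal cycle; the synthetic series and the (1,0,1)x(1,0,1,24) order are illustrative choices, not the orders used in the paper:

```python
# Sketch: fit a seasonal ARMA model to a load forecast error series and
# simulate a new series with the same statistical characteristics.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
hours = 24 * 60
t = np.arange(hours)
# synthetic forecast error: time-of-day bias plus autocorrelated noise
error = 50 * np.sin(2 * np.pi * t / 24) + np.convolve(
    rng.standard_normal(hours), [1, 0.6, 0.3], mode="same")

model = SARIMAX(error, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
fit = model.fit(disp=False)
print(fit.summary().tables[1])

# Simulate a new error series from the fitted model
simulated = fit.simulate(nsimulations=hours)
print(simulated[:5])
```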
Mobile Visualization and Analysis Tools for Spatial Time-Series Data
NASA Astrophysics Data System (ADS)
Eberle, J.; Hüttich, C.; Schmullius, C.
2013-12-01
The Siberian Earth System Science Cluster (SIB-ESS-C) provides access and analysis services for spatial time-series data built on products from the Moderate Resolution Imaging Spectroradiometer (MODIS) and climate data from meteorological stations. A web portal for data access, visualization, and analysis with standard-compliant web services had already been developed for SIB-ESS-C. As a further enhancement, a mobile app was developed to provide easy access to these time-series data during field campaigns. The app sends the current position from the GPS receiver and a dataset selected by the user (such as land surface temperature or vegetation indices) to the SIB-ESS-C web service and receives the requested time-series data for the identified pixel in real time. The data are then plotted directly in the app. The user can also analyze the time-series data for breakpoints and other phenological values. This processing is executed on demand on the SIB-ESS-C web server and the results are transferred to the app. Any processing can also be done at the SIB-ESS-C web portal. The aim of this work is to make spatial time-series data and analysis functions available to end users without the need for data processing. In this presentation the author gives an overview of the new mobile app, its functionality, the technical infrastructure, and technological issues (how the app was developed and the experience gained).
NASA Satellite Data for Seagrass Health Modeling and Monitoring
NASA Technical Reports Server (NTRS)
Spiering, Bruce A.; Underwood, Lauren; Ross, Kenton
2011-01-01
Time series derived information for coastal waters will be used to provide input data for the Fong and Harwell model. The current MODIS land mask limits where the model can be applied; this project will: a) Apply MODIS data with resolution higher than the standard products (250-m vs. 1-km). b) Seek to refine the land mask. c) Explore nearby areas to use as proxies for time series directly over the beds. Novel processing approaches will be leveraged from other NASA projects and customized as inputs for seagrass productivity modeling
Life cycle assessment in market, research, and policy: Harmonization beyond standardization.
Zamagni, Alessandra; Cutaia, Laura
2015-07-01
This article introduces the special series "LCA in Market Research and Policy: Harmonization beyond standardization," which was generated from the 19th SETAC Life Cycle Assessment (LCA) Case Study Symposium held in November 2013 in Rome, Italy. This collection of invited articles reflects the purpose of the symposium and focuses on how LCA can support the decision-making process at all levels (i.e., in industry and policy contexts) and how LCA results can be efficiently communicated and used to support market strategies. © 2015 SETAC.
Multifractal Properties of Process Control Variables
NASA Astrophysics Data System (ADS)
Domański, Paweł D.
2017-06-01
A control system is an inevitable element of any industrial installation, and its quality significantly affects overall process performance. Assessing whether a control system needs improvement requires relevant and constructive measures. Various methods exist, such as time-domain measures, minimum variance, Gaussian and non-Gaussian statistical factors, and fractal and entropy indexes. The majority of approaches use time series of control variables and are able to cover many phenomena, but process complexities and human interventions cause effects that are hardly visible to standard measures. It is shown that signals originating from industrial installations have multifractal properties and that such an analysis may extend the standard approach to further observations. The work is based on industrial and simulation data. The analysis delivers additional insight into the properties of the control system and the process, and helps to discover internal dependencies and human factors that are otherwise hardly detectable.
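A minimal MFDFA-style sketch of the kind of multifractal check described above: if the generalized Hurst exponent h(q) varies with q, the signal is multifractal. This is a generic illustration, not the author's code:

```python
# Sketch: multifractal detrended fluctuation analysis (generalized Hurst exponent h(q)).
import numpy as np

def generalized_hurst(x, scales, q_values):
    profile = np.cumsum(x - np.mean(x))
    h = {}
    for q in q_values:
        log_s, log_f = [], []
        for s in scales:
            n_seg = len(profile) // s
            f2 = []
            for i in range(n_seg):
                seg = profile[i * s:(i + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, 1), t)
                f2.append(np.mean((seg - trend) ** 2))
            f2 = np.asarray(f2)
            if q != 0:
                fq = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
            else:
                fq = np.exp(0.5 * np.mean(np.log(f2)))
            log_s.append(np.log(s))
            log_f.append(np.log(fq))
        h[q] = np.polyfit(log_s, log_f, 1)[0]   # slope = generalized Hurst exponent h(q)
    return h

rng = np.random.default_rng(4)
control_error = rng.standard_normal(20000)      # stand-in for a control-variable time series
print(generalized_hurst(control_error, scales=[16, 32, 64, 128, 256], q_values=[-2, 2]))
```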
Report from the First CERT-RMM Users Group Workshop Series
2012-04-01
deploy processes to support our programs – benchmark our programs to determine current gaps – complements current work in CMMI® and ISO 27001 ... benchmarking program performance through process analytics and Lean/Six Sigma activities to ensure performance excellence. • Provides ISO Standards ... Office, www.cmu.edu/iso
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basso, T.; DeBlasio, R.
The IEEE American National Standards smart grid publications and standards development projects IEEE 2030™, which addresses smart grid interoperability, and IEEE 1547™, which addresses distributed resources interconnection with the grid, have made substantial progress since 2009. The IEEE 2030 and 1547 standards series focus on systems-level aspects and cover many of the technical integration issues involved in a mature smart grid. The status and highlights of these two IEEE series of standards, which are sponsored by IEEE Standards Coordinating Committee 21 (SCC21), are provided in this paper.
Compliance with minimum information guidelines in public metabolomics repositories
Spicer, Rachel A.; Salek, Reza; Steinbeck, Christoph
2017-01-01
The Metabolomics Standards Initiative (MSI) guidelines were first published in 2007. These guidelines provided reporting standards for all stages of metabolomics analysis: experimental design, biological context, chemical analysis and data processing. Since 2012, a series of public metabolomics databases and repositories, which accept the deposition of metabolomic datasets, have arisen. In this study, the compliance of 399 public data sets, from four major metabolomics data repositories, to the biological context MSI reporting standards was evaluated. None of the reporting standards were complied with in every publicly available study, although adherence rates varied greatly, from 0 to 97%. The plant minimum reporting standards were the most complied with and the microbial and in vitro were the least. Our results indicate the need for reassessment and revision of the existing MSI reporting standards. PMID:28949328
Compliance with minimum information guidelines in public metabolomics repositories.
Spicer, Rachel A; Salek, Reza; Steinbeck, Christoph
2017-09-26
The Metabolomics Standards Initiative (MSI) guidelines were first published in 2007. These guidelines provided reporting standards for all stages of metabolomics analysis: experimental design, biological context, chemical analysis and data processing. Since 2012, a series of public metabolomics databases and repositories, which accept the deposition of metabolomic datasets, have arisen. In this study, the compliance of 399 public data sets, from four major metabolomics data repositories, to the biological context MSI reporting standards was evaluated. None of the reporting standards were complied with in every publicly available study, although adherence rates varied greatly, from 0 to 97%. The plant minimum reporting standards were the most complied with and the microbial and in vitro were the least. Our results indicate the need for reassessment and revision of the existing MSI reporting standards.
Introduction to Communication, Grades PreK-2. The Math Process Standards Series
ERIC Educational Resources Information Center
O'Connell, Susan; O'Connor, Kelly
2007-01-01
In this book, the authors offer suggestions for teachers to help students explore, express, and better understand mathematical content through talking and writing. They offer an array of entry points for understanding, planning, and teaching, including strategies that help students put their ideas into words, clarify them, elaborate on them, and…
Align-and-shine photolithography
NASA Astrophysics Data System (ADS)
Petrusis, Audrius; Rector, Jan H.; Smith, Kristen; de Man, Sven; Iannuzzi, Davide
2009-10-01
At the beginning of 2009, our group introduced a new technique that allows fabrication of photolithographic patterns on the cleaved end of an optical fibre: the align-and-shine photolithography technique (see A. Petrušis et al., "The align-and-shine technique for series production of photolithography patterns on optical fibres", J. Micromech. Microeng. 19, 047001, 2009). Align-and-shine photolithography combines standard optical lithography with image-based active fibre alignment processes. The technique adapts well to series production, opening the way to batch fabrication of fibre-top devices (D. Iannuzzi et al., "Monolithic fibre-top cantilever for critical environments and standard applications", Appl. Phys. Lett. 88, 053501, 2006) and all other devices that rely on suitable machining of engineered parts on the tip of a fibre. In this paper we review our results and briefly discuss the technique's potential applications.
Estimating error statistics for Chambon-la-Forêt observatory definitive data
NASA Astrophysics Data System (ADS)
Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly
2017-08-01
We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week, i.e., within the daily-to-weekly measurement frequency recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that statistics are less favourable when this latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to have proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even on isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with spatial wavelengths of a few hundred metres and periods of less than a day.
USGS standard quadrangle maps for emergency response
Moore, Laurence R.
2009-01-01
The 1:24,000-scale topographic quadrangle was the primary product of the U.S. Geological Survey's (USGS) National Mapping Program from 1947-1992. This map series includes about 54,000 map sheets for the conterminous United States, and is the only uniform map series ever produced that covers this area at such a large scale. This map series was partially revised under several programs, starting as early as 1968, but these programs were not adequate to keep the series current. Through the 1990s the emphasis of the USGS mapping program shifted away from topographic maps and toward more specialized digital data products. Topographic map revision dropped off rapidly after 1999, and stopped completely by 2004. Since 2001, emergency-response and homeland security requirements have revived the question of whether a standard national topographic series is needed. Emergencies such as Hurricane Katrina in 2005 and the California wildfires in 2007-08 demonstrated that familiar maps are important to first responders. Maps that have a standard scale, extent, and grids help reduce confusion and save time in emergencies. Traditional maps are designed to allow the human brain to quickly process large amounts of information, and depend on artistic layout and design that cannot be fully automated. In spite of technical advances, creating a traditional, general-purpose topographic map is still expensive. Although the content and layout of traditional topographic maps probably is still desirable, the preferred packaging and delivery of maps has changed. Digital image files are now desired by most users, but to be useful to the emergency-response community, these files must be easy to view and easy to print without specialized geographic information system expertise or software.
User-Centered Design Practices to Redesign a Nursing e-Chart in Line with the Nursing Process.
Schachner, María B; Recondo, Francisco J; González, Zulma A; Sommer, Janine A; Stanziola, Enrique; Gassino, Fernando D; Simón, Mariana; López, Gastón E; Benítez, Sonia E
2016-01-01
As part of the user-centered design (UCD) practices carried out at Hospital Italiano de Buenos Aires, the nursing e-chart user interface was redesigned in order to improve the quality of nursing process records, based on an adapted Virginia Henderson theoretical model and on patient safety standards to fulfil Joint Commission accreditation requirements. UCD practices were applied as standardized and recommended for usability evaluation of electronic medical records. Implementation of these practices yielded a series of prototypes over 5 iterative cycles of incremental improvement to achieve usability goals; the resulting interface was used and perceived as satisfactory by general care nurses. Nurses' involvement allowed a balance between their needs and institutional requirements.
New insights into soil temperature time series modeling: linear or nonlinear?
NASA Astrophysics Data System (ADS)
Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram
2018-03-01
Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields including agriculture because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing in terms of four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, it appears that spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with two nonlinear methods in two forms: considering hydrological variables (HV) as input variables and DST modeling as a time series. In a previous study at the mentioned sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods and considered HV as input variables. The comparison results signify that the relative error projected in estimating DST by the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Due to these models' relatively inferior performance to the proposed methodology, two hybrid models were implemented: the weights and membership functions of MLP and ANFIS, respectively, were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform and nonlinear methods (Wavelet-MLP & Wavelet-ANFIS). A comparison of the proposed methodology with the individual and hybrid nonlinear models in predicting DST time series shows that it yields the lowest Akaike Information Criterion (AIC) value, a criterion that considers model simplicity and accuracy simultaneously, at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to complex nonlinear methods that are normally employed to examine DST.
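The two linear pre-processing schemes that the proposed methodology is compared against can be sketched as follows, using synthetic daily data in place of the Champaign/Springfield measurements:

```python
# Sketch: seasonal standardization versus seasonal differencing of a daily
# soil temperature series (synthetic stand-in data).
import numpy as np

rng = np.random.default_rng(5)
days = np.arange(10 * 365)
st = 12 + 10 * np.sin(2 * np.pi * days / 365.0) + rng.standard_normal(days.size)

doy = days % 365
clim_mean = np.array([st[doy == d].mean() for d in range(365)])       # day-of-year mean
clim_std = np.array([st[doy == d].std(ddof=1) for d in range(365)])   # day-of-year std

seasonally_standardized = (st - clim_mean[doy]) / clim_std[doy]
seasonally_differenced = st[365:] - st[:-365]                         # lag-365 differencing

print(seasonally_standardized.std(), seasonally_differenced.std())
```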
Hammerschmidt, Nikolaus; Tscheliessnig, Anne; Sommer, Ralf; Helk, Bernhard; Jungbauer, Alois
2014-06-01
Standard industry processes for recombinant antibody production employ protein A affinity chromatography in combination with other chromatography steps and ultra-/diafiltration. This study compares a generic antibody production process with a recently developed purification process based on a series of selective precipitation steps. The new process makes two of the usual three chromatographic steps obsolete and can be performed in a continuous fashion. Cost of Goods (CoGs) analyses were done for: (i) a generic chromatography-based antibody standard purification; (ii) the continuous precipitation-based purification process coupled to a continuous perfusion production system; and (iii) a hybrid process, coupling the continuous purification process to an upstream batch process. The results of this economic analysis show that the precipitation-based process offers cost reductions at all stages of the life cycle of a therapeutic antibody, (i.e. clinical phase I, II and III, as well as full commercial production). The savings in clinical phase production are largely attributed to the fact that expensive chromatographic resins are omitted. These economic analyses will help to determine the strategies that are best suited for small-scale production in parallel fashion, which is of importance for antibody production in non-privileged countries and for personalized medicine. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Salmon, B. P.; Kleynhans, W.; Olivier, J. C.; van den Bergh, F.; Wessels, K. J.
2018-05-01
Humans are transforming land cover at an ever-increasing rate. Accurate geographical maps of land cover, especially of rural and urban settlements, are essential to planning sustainable development. Time series extracted from MODerate resolution Imaging Spectroradiometer (MODIS) land surface reflectance products have been used to differentiate land cover classes by analyzing the seasonal patterns in reflectance values. The proper fitting of a parametric model to these time series usually requires several adjustments to the regression method. To reduce the workload, the regression method's parameters are usually set globally for a geographical area. In this work we have modified a meta-optimization approach to setting a regression method to extract the parameters on a per-time-series basis. The standard deviation of the model parameters and the magnitude of the residuals are used as the scoring function. We successfully fitted a triply modulated model to the seasonal patterns of our study area using a non-linear extended Kalman filter (EKF). The approach uses temporal information, which significantly reduces the processing time and storage requirements to process each time series. It also derives reliability metrics for each time series individually. The features extracted using the proposed method are classified with a support vector machine and the performance of the method is compared to the original approach on our ground truth data.
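A minimal sketch of fitting a triply modulated cosine, y_t = μ_t + a_t cos(ωt + φ_t), with an extended Kalman filter under a random-walk state model; the tuning values and synthetic series are illustrative assumptions, not the authors' settings:

```python
# Sketch: EKF estimation of a time-varying mean, amplitude and phase of a
# seasonal reflectance time series.
import numpy as np

def ekf_fit(y, omega, q_var=1e-4, r_var=1e-2):
    x = np.array([np.mean(y), np.std(y), 0.0])     # state: mean, amplitude, phase
    P = np.eye(3)
    Q = q_var * np.eye(3)
    states = []
    for t, obs in enumerate(y):
        P = P + Q                                  # predict (random-walk state model)
        c, s = np.cos(omega * t + x[2]), np.sin(omega * t + x[2])
        h = x[0] + x[1] * c                        # predicted observation
        H = np.array([1.0, c, -x[1] * s])          # Jacobian of h w.r.t. the state
        S = H @ P @ H + r_var
        K = P @ H / S                              # Kalman gain
        x = x + K * (obs - h)
        P = P - np.outer(K, H) @ P
        states.append(x.copy())
    return np.array(states)

rng = np.random.default_rng(6)
t = np.arange(8 * 46)                              # ~8 years of 8-day composites
omega = 2 * np.pi / 46.0
y = 0.3 + 0.1 * np.cos(omega * t + 1.0) + 0.02 * rng.standard_normal(t.size)
print(ekf_fit(y, omega)[-1])                       # final (mean, amplitude, phase) estimate
```

The spread of the estimated parameters and the residual magnitude could then serve as the per-series score described above.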
Occupational asthma in the commercial fishing industry: a case series and review of the literature.
Lucas, David; Lucas, Raymond; Boniface, Keith; Jegaden, Dominique; Lodde, Brice; Dewitte, Jean-Ariel
2010-01-01
We present a case series of snow crab-induced occupational asthma (OA) from a fishing and processing vessel, followed by a review of OA in the commercial fishing industry. OA is typically caused from an IgE-mediated hypersensitivity reaction after respiratory exposure to aerosolized fish and shellfish proteins. It more commonly occurs due to crustaceans, but molluscs and fin fish are implicated as well. Standard medical therapy for asthma may be used acutely; however, steps to reduce atmospheric allergen concentrations in the workplace have proven to be preventive for this disease.
NASA Astrophysics Data System (ADS)
Lindholm, D. M.; Wilson, A.
2010-12-01
The Laboratory for Atmospheric and Space Physics at the University of Colorado has developed an Open Source, OPeNDAP compliant, Java Servlet based, RESTful web service to serve time series data. In addition to handling OPeNDAP style requests and returning standard responses, existing modules for alternate output formats can be reused or customized. It is also simple to reuse or customize modules to directly read various native data sources and even to perform some processing on the server. The server is built around a common data model based on the Unidata Common Data Model (CDM) which merges the NetCDF, HDF, and OPeNDAP data models. The server framework features a modular architecture that supports pluggable Readers, Writers, and Filters via the common interface to the data, enabling a workflow that reads data from their native form, performs some processing on the server, and presents the results to the client in its preferred form. The service is currently being used operationally to serve time series data for the LASP Interactive Solar Irradiance Data Center (LISIRD, http://lasp.colorado.edu/lisird/) and as part of the Time Series Data Server (TSDS, http://tsds.net/). I will present the data model and how it enables reading, writing, and processing concerns to be separated into loosely coupled components. I will also share thoughts for evolving beyond the time series abstraction and providing a general purpose data service that can be orchestrated into larger workflows.
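The pluggable Reader → Filter → Writer idea can be sketched with plain callables; this is an illustration in Python, not the server's actual Java Servlet interfaces:

```python
# Sketch: read native data, process on the "server", present results in the
# client's preferred form, all through a common (toy) data model.
import json
from typing import Dict, List

TimeSeries = Dict[str, List[float]]   # toy stand-in for the common data model

def csv_reader(text: str) -> TimeSeries:
    times, values = [], []
    for line in text.strip().splitlines():
        t, v = line.split(",")
        times.append(float(t))
        values.append(float(v))
    return {"time": times, "value": values}

def subset_filter(ts: TimeSeries, t_min: float, t_max: float) -> TimeSeries:
    keep = [i for i, t in enumerate(ts["time"]) if t_min <= t <= t_max]
    return {k: [v[i] for i in keep] for k, v in ts.items()}

def json_writer(ts: TimeSeries) -> str:
    return json.dumps(ts)

# Reader -> Filter -> Writer workflow
raw = "0,1.2\n1,1.4\n2,1.1\n3,1.5"
print(json_writer(subset_filter(csv_reader(raw), t_min=1, t_max=2)))
```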
Godson, Richard H.
1974-01-01
GEOPAC consists of a series of subroutines primarily intended to process potential-field geophysical data, but other types of data can also be used with the program. The package contains routines to reduce, store, process and display information in two-dimensional or three-dimensional form. Input and output formats are standardized and temporary disk storage permits data sets to be processed by several subroutines in one job step. The subroutines are link-edited in an overlay mode to form one program and they can be executed by submitting a card containing the subroutine name in the input stream.
NASA Astrophysics Data System (ADS)
Osada, Y.; Ohta, Y.; Demachi, T.; Kido, M.; Fujimoto, H.; Azuma, R.; Hino, R.
2013-12-01
Large interplate earthquakes have repeatedly occurred along the Japan Trench. Recently, detailed crustal deformation has been revealed by the nationwide inland GPS network GEONET, operated by GSI. However, the region of maximum displacement for interplate earthquakes is mainly located offshore. GPS/Acoustic seafloor geodetic observation (hereafter GPS/A) is therefore quite important and useful for understanding the shallower part of interplate coupling between the subducting and overriding plates. GPS/A is typically conducted in a specific ocean area in a repeated campaign style using a research vessel or buoy; consequently, the temporal variation of seafloor crustal deformation cannot be monitored in real time. One of the technical issues for real-time observation is kinematic GPS analysis, because kinematic GPS analysis is based on reference and rover data. If precise kinematic GPS analysis becomes possible in the offshore region, it should be a promising method for real-time GPS/A using a USV (Unmanned Surface Vehicle) or a moored buoy. We assessed the stability, precision, and accuracy of the StarFire™ global satellite-based augmentation system. We first tested StarFire under static conditions. To assess coordinate precision and accuracy, we compared 1 Hz StarFire time series with post-processed precise point positioning (PPP) 1 Hz time series computed with the GIPSY-OASIS II processing software Ver. 6.1.2 using three different product types (ultra-rapid, rapid, and final orbits). We also used different clock-interval information (30 and 300 seconds) for the post-processed PPP. The standard deviation of the real-time StarFire time series is less than 30 mm (horizontal components) and 60 mm (vertical component) based on one month of continuous processing. We also assessed the noise spectra of the time series estimated by StarFire and by the post-processed GIPSY PPP. We found that the noise spectrum of the StarFire time series is similar to that of the GIPSY-OASIS II result based on JPL rapid orbit products with 300-second clock information. Finally, we report the stability, precision, and accuracy of StarFire under moving conditions.
Minding Impacting Events in a Model of Stochastic Variance
Duarte Queirós, Sílvio M.; Curado, Evaldo M. F.; Nobre, Fernando D.
2011-01-01
We introduce a generalization of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another one when the local standard deviation exceeds that threshold. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance, characterized by large values of the Hurst exponent, which are ubiquitous features in complex systems. PMID:21483864
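A hedged simulation sketch of such a two-regime heteroscedastic process; the specific recall rule below is a simplification, not the authors' exact update:

```python
# Sketch: ARCH(1) dynamics below a volatility threshold; above it, the variance
# also recalls past "impacting" returns that exceeded the threshold.
import numpy as np

def simulate(n, a0=0.2, a1=0.7, threshold=1.5, recall=0.3, seed=7):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    sigma2 = np.ones(n)
    impacting = []                                     # memory of events beyond the threshold
    for t in range(1, n):
        sigma2[t] = a0 + a1 * x[t - 1] ** 2            # standard ARCH(1) rule
        if np.sqrt(sigma2[t]) > threshold and impacting:
            sigma2[t] += recall * np.mean(impacting)   # regime 2: recall past impacting events
        x[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
        if abs(x[t]) > threshold:
            impacting.append(x[t] ** 2)
    return x

series = simulate(50000)
print(series.std(), np.max(np.abs(series)))            # fat tails show up as large extremes
```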
Audit of nuclear medicine scientific and technical standards.
Jarritt, Peter H; Perkins, Alan C; Woods, Sandra D
2004-08-01
The British Nuclear Medicine Society has developed a process for the service-specific organizational audit of nuclear medicine departments. This process identified the need for a scheme suitable for the audit of the scientific and technical standards of a department providing such a service. This document has evolved following audit visits of a number of UK departments. It is intended to be used as a written document to facilitate the audit procedure and may be used for both external and self-audit purposes. Scientific and technical standards have been derived from a number of sources, including regulatory documents, notes for guidance and peer-reviewed publications. The audit scheme is presented as a series of questions with responses graded according to legal and safety obligations (A), good practice (B) and desirable aspects of service delivery (C). This document should be regarded as part of an audit framework and should be kept under review as the process evolves to meet the future demands of this high-technology-based clinical service.
Network structure of multivariate time series.
Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito
2015-10-21
Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exist, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows us to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow us to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
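A minimal sketch of the mapping from a multivariate time series to a multilayer network, with each component forming one layer via a horizontal visibility graph (nodes are time points; two points are linked if every value between them is lower than both). This illustrates the general mapping, not the authors' full toolkit:

```python
# Sketch: build one horizontal visibility graph per component and compute a
# simple structural descriptor (mean degree) per layer.
import numpy as np

def horizontal_visibility_edges(x):
    edges = []
    n = len(x)
    for i in range(n - 1):
        edges.append((i, i + 1))                 # consecutive points always see each other
        blocker = x[i + 1]                       # max of intermediate values so far
        for j in range(i + 2, n):
            if blocker < x[i] and blocker < x[j]:
                edges.append((i, j))
            blocker = max(blocker, x[j])
            if blocker >= x[i]:                  # later points can no longer see i
                break
    return edges

rng = np.random.default_rng(8)
multivariate = {"series_A": rng.standard_normal(200), "series_B": rng.standard_normal(200)}
multiplex = {name: horizontal_visibility_edges(x) for name, x in multivariate.items()}
for name, edges in multiplex.items():
    print(name, 2 * len(edges) / 200)            # mean degree of the layer
```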
Statement Before the U. S. Senate Committee on Commerce Subcommittee on Communications.
ERIC Educational Resources Information Center
Schneider, Alfred R.
The American Broadcasting Company's (ABC) Department of Standards and Practices follows a precise and detailed series of steps in its review of material presented over the network, to assure its conformity with the Television Code of the National Association of Broadcasters. In this process, special attention is given to programs which contain…
Standard Operating Procedures for Collecting Data Requested by the Federal Government.
ERIC Educational Resources Information Center
New Jersey State Dept. of Education, Trenton. Office of Management Information.
Federal requests to collect data from local school systems originate at the federal program level; and through a series of processes, are placed before the Council of Chief State School Officers' (CCSSO) Committee on Evaluation and Information Systems (CEIS) for review. The New Jersey CEIS Coordinator is located within the Office of Management…
ERIC Educational Resources Information Center
Barth, John; Burk, Zona Sharp; Serfass, Richard; Harms, Barbara Ann; Houlihan, G. Thomas; Anderson, Gerald; Farley, Raymond P.; Rigsby, Ken; O'Rourke, John
This document, one of a series of reports, focuses on the adoption of principles of quality management, originally developed by W. Edwards Deming, and the Baldrige Criteria for use in education. These processes and tools for systemic organizational management, when comprehensively applied, produce performance excellence and continuous improvement.…
The Relationship between the Training Function and ISO-9000 Registration.
ERIC Educational Resources Information Center
Maxwell, Randy C.; Jost, Karen L.
ISO 9000 is one of a series of international standards providing guidelines and governing quality of products and services. The ISO 9000 certification demonstrates the capability of a supplier to control the processes that determine the acceptability of the product or service being supplied. This paper focuses on the training aspects of ISO 9000…
Individualized Program Planning (IPP): ECS to Grade 12. Programming for Students with Special Needs
ERIC Educational Resources Information Center
Online Submission, 2006
2006-01-01
This resource is a revision of the teaching resource Individualized Program Plans (1995), Book 3 (ED392232) in the Programming for Students with Special Needs series. It aims to create a bridge between the product, the process and the underlying vision of Individualized Program Planning (IPP). The Standards for Special Education (Amended June…
Neurons and the Process Standards
ERIC Educational Resources Information Center
Zambo, Ron; Zambo, Debby
2011-01-01
The classic Chickens and Pigs problem is considered to be an algebraic problem with two equations and two unknowns. In this article, the authors describe how third-grade teacher Maria is using it to develop a problem-based lesson because she is looking to her students' future needs. As Maria plans, she considers how a series of problems with the…
Adapting the CUAHSI Hydrologic Information System to OGC standards
NASA Astrophysics Data System (ADS)
Valentine, D. W.; Whitenack, T.; Zaslavsky, I.
2010-12-01
The CUAHSI Hydrologic Information System (HIS) provides web and desktop client access to hydrologic observations via water data web services using an XML schema called "WaterML". The WaterML 1.x specification and the corresponding Water Data Services have been the backbone of the HIS service-oriented architecture (SOA) and have been adopted for serving hydrologic data by several federal agencies and many academic groups. The central discovery service, HIS Central, is based on a metadata catalog that references 4.7 billion observations, organized as 23 million data series from 1.5 million sites from 51 organizations. Observations data are published using HydroServer nodes that have been deployed at 18 organizations. Usage of HIS increased eightfold from 2008 to 2010, and doubled from 1600 data series a day in 2009 to 3600 data series a day in the first half of 2010. The HIS central metadata catalog currently harvests information from 56 Water Data Services. We collaborate on the catalog updates with two federal partners, USGS and US EPA: their data series are periodically reloaded into the HIS metadata catalog. We are pursuing two main development directions in the HIS project: cloud-based computing, and further compliance with Open Geospatial Consortium (OGC) standards. The goal of moving to cloud computing is to provide a scalable collaborative system with simpler deployment and less dependence on hardware maintenance and staff. This move requires re-architecting the information models underlying the metadata catalog and Water Data Services to be independent of the underlying relational database model, allowing for implementation on both relational databases and cloud-based processing systems. Cloud-based HIS central resources can be managed collaboratively; partners share responsibility for their metadata by publishing data series information into the centralized catalog. Publishing data series will use REST-based service interfaces, like OData, as the basis for ingesting data series information into a cloud-hosted catalog. The future HIS services involve providing information via OGC standards that will allow for observational data access from commercial GIS applications. Use of standards will allow tools to access observational data from other projects using standards, such as the Ocean Observatories Initiative, and tools from such projects to be integrated into the HIS toolset. With international collaborators, we have been developing a water information exchange language called "WaterML 2.0" which will be used to deliver observations data over OGC Sensor Observation Services (SOS). A software stack of OGC standard services will provide access to HIS information. In addition to SOS, Web Mapping and Feature Services (WMS and WFS) will provide access to location information. Catalog Services for the Web (CSW) will provide a catalog for water information that is both centralized and distributed. We intend the OGC standards to supplement the existing HIS service interfaces, rather than replace them. The ultimate goal of this development is to expand access to hydrologic observations data and to create an environment where these data can be seamlessly integrated with standards-compliant data resources.
Romano, C; Carosso, A; Bosio, D; Chiesa, A; Gullino, A; Turrini, A
2003-01-01
The aim of the study was to verify the reliability, in clinical practice, of patch testing with the "standard" series and additional series of haptens for the diagnosis of occupational and non-occupational allergic contact dermatitis, evaluating positive reactions and relating them to professional categories. A total of 392 out of 937 patients (41.8%) showed at least one positive reaction to "standard" series testing; the hapten most frequently noted as the cause of a positive reaction was nickel sulphate. Professional categories that showed positive reactions to the "standard" series most frequently were clerks, hairdressers and hospital auxiliary workers. Among 897 patients tested with nonstandard allergens, only 124 (13.8%) elicited at least one positive reaction, ammonium persulphate being the most frequently positive hapten. A dominant percentage of positive results was seen in hairdressers and cleaning personnel. No positive reactions were observed for a large number of haptens, tested more than 200 times. Haptens of the "standard series" elicited a higher number of positive reactions than the additional series, even though a few additional-series haptens showed high specificity in some professional categories. The data suggest some caution in systematically testing additional series, despite a higher accuracy and diagnostic efficacy in some job categories.
FOOT ECZEMA: THE ROLE OF PATCH TEST IN DETERMINING THE CAUSATIVE AGENT USING STANDARD SERIES
Priya, K S; Kamath, Ganesh; Martis, Jacintha; D, Sukumar; Shetty, Narendra J; Bhat, Ramesh M; Kishore, B Nanda
2008-01-01
Foot dermatitis refers to the predominant involvement of feet in the eczematous process. This study is undertaken to determine the clinical pattern and causative agent in foot eczema and to evaluate the role of patch testing in determining the causative agent of foot eczema. Data was collected from 50 patients with foot eczema, who attended the out-patient department. The patch test was performed using Indian standard series. Patch test was positive in 88% of the patients. The most common site affected was the dorsal aspect of the foot (48%) and scaly plaque was the predominant morphological pattern. The highest number of patients (24%) showed positive reactions to mercaptobenzothiazole (MBT) and the lowest (4%) to neomycin sulfate. Rubber and rubber chemicals have been reported worldwide to be the most common sensitizer causing foot eczema. Thus, patch test has a major role in finding out the cause of foot eczema. PMID:19881990
1988-01-01
for this conference’s success goes to the panel chairmen and their panelists who gave generously of their time , effort, and talent, to the...Conference is the fourth in a series of bi-annual meetings convened to address timely issues affecting defense acquisition. Complementary goals are...the perspective on that issue has been clarified. o A lot of times we tend to look at acquisition, standardization, logistics, quality and other
NASA Technical Reports Server (NTRS)
Jester, Peggy L.; Lee, Jeffrey; Zukor, Dorothy J. (Technical Monitor)
2001-01-01
This document addresses the software requirements of the Geoscience Laser Altimeter System (GLAS) Standard Data Software (SDS) supporting the GLAS instrument on the EOS ICESat Spacecraft. This Software Requirements Document represents the initial collection of the technical engineering information for the GLAS SDS. This information is detailed within the second of four main volumes of the Standard documentation, the Product Specification volume. This document is a "roll-out" from the governing volume outline containing the Concept and Requirements sections.
FY 2016 Status Report on the Modeling of the M8 Calibration Series using MAMMOTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Benjamin Allen; Ortensi, Javier; DeHart, Mark David
2016-09-01
This report provides a summary of the progress made towards validating the multi-physics reactor analysis application MAMMOTH using data from measurements performed at the Transient Reactor Test facility, TREAT. The work completed consists of a series of comparisons of TREAT element types (standard and control rod assemblies) in small geometries as well as slotted mini-cores to reference Monte Carlo simulations to ascertain the accuracy of cross section preparation techniques. After the successful completion of these smaller problems, a full core model of the half slotted core used in the M8 Calibration series was assembled. Full core MAMMOTH simulations were compared to Serpent reference calculations to assess the cross section preparation process for this larger configuration. As part of the validation process the M8 Calibration series included a steady state wire irradiation experiment and coupling factors for the experiment region. The shape of the power distribution obtained from the MAMMOTH simulation shows excellent agreement with the experiment. Larger differences were encountered in the calculation of the coupling factors, but there is also great uncertainty on how the experimental values were obtained. Future work will focus on resolving some of these differences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford
The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level processing and network-level processing algorithms (e.g., Le Bras and Wuster (2002)). At the station-level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network-level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station-level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.
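As an illustration of the station-level stage, a generic STA/LTA detector sketch (a common choice, not necessarily the algorithm used in any given pipeline); its picks would then feed the network-level association and location stages:

```python
# Sketch: short-term-average / long-term-average (STA/LTA) onset detection
# on a single waveform.
import numpy as np

def sta_lta_picks(waveform, fs, sta_win=1.0, lta_win=30.0, threshold=4.0):
    nsta, nlta = int(sta_win * fs), int(lta_win * fs)
    energy = waveform ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))   # prefix sums for fast averaging
    picks = []
    for i in range(nlta, len(waveform) - nsta):
        sta = (csum[i + nsta] - csum[i]) / nsta          # short-term average ahead of i
        lta = (csum[i] - csum[i - nlta]) / nlta          # long-term average behind i
        if lta > 0 and sta / lta > threshold:
            picks.append(i / fs)                         # pick time in seconds
    return picks

rng = np.random.default_rng(9)
fs = 100.0
trace = rng.standard_normal(int(120 * fs))
trace[int(60 * fs):int(62 * fs)] += 8 * rng.standard_normal(int(2 * fs))   # synthetic arrival
print(sta_lta_picks(trace, fs)[:3])
```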
2011-01-01
Background: Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods: We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results: Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions: We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778
Kennedy, Curtis E; Turley, James P
2011-10-24
Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.
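A minimal sketch of steps 4 through 6 above, turning a window of raw samples into time-series features for a prediction model; the variable names, window size, and feature set are illustrative, not those of the study:

```python
# Sketch: compute simple time-series features (mean, variability, trend) over a
# sliding window of vital-sign samples and assemble a feature row.
import numpy as np

def window_features(values, window):
    """Return simple time-series features for the last `window` samples."""
    seg = np.asarray(values[-window:], dtype=float)
    t = np.arange(len(seg))
    slope = np.polyfit(t, seg, 1)[0]             # trend: direction of deterioration
    return {"mean": seg.mean(), "std": seg.std(ddof=1), "slope": slope, "last": seg[-1]}

heart_rate = [112, 115, 118, 124, 131, 140, 146, 151]   # hypothetical 5-min samples
feature_row = {f"hr_{k}": v for k, v in window_features(heart_rate, window=6).items()}
print(feature_row)
```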
Jabs, Douglas A; Nussenblatt, Robert B; Rosenbaum, James T
2005-09-01
To begin a process of standardizing the methods for reporting clinical data in the field of uveitis. Consensus workshop. Members of an international working group were surveyed about diagnostic terminology, inflammation grading schema, and outcome measures, and the results were used to develop a series of proposals to better standardize the use of these entities. Small groups employed nominal group techniques to achieve consensus on several of these issues. The group affirmed that an anatomic classification of uveitis should be used as a framework for subsequent work on diagnostic criteria for specific uveitic syndromes, and that the classification of uveitis entities should be on the basis of the location of the inflammation and not on the presence of structural complications. Issues regarding the use of the terms "intermediate uveitis," "pars planitis," "panuveitis," and descriptors of the onset and course of the uveitis were addressed. The following were adopted: standardized grading schema for anterior chamber cells, anterior chamber flare, and vitreous haze; standardized methods of recording structural complications of uveitis; standardized definitions of outcomes, including "inactive" inflammation, "improvement" and "worsening" of the inflammation, and "corticosteroid sparing"; and standardized guidelines for reporting visual acuity outcomes. A process of standardizing the approach to reporting clinical data in uveitis research has begun, and several terms have been standardized.
Duarte, Ida Alzira Gomes; Tanaka, Greta Merie; Suzuki, Nathalie Mie; Lazzarini, Rosana; Lopes, Andressa Sato de Aquino; Volpini, Beatrice Mussio Fornazier; Castro, Paulo Carrara de
2013-01-01
A retrospective study was carried out between 2006 and 2011. Six hundred and eighteen patients with suspected allergic contact dermatitis underwent the standard patch test series recommended by the Brazilian Contact Dermatitis Research Group. The aim of our study was to evaluate the variation of positive patch-test results from the standard series year by year. The most frequently positive allergens were: nickel sulfate, thimerosal and potassium bichromate. The decrease of positive patch-test results over the years was statistically significant for: lanolin (p=0.01), neomycin (p=0.01) and anthraquinone (p=0.04). A follow-up study should be useful in determining which allergens could be excluded from the standard series, as they may represent low sensitization risk.
The R-package eseis - A toolbox to weld geomorphic, seismologic, spatial, and time series analysis
NASA Astrophysics Data System (ADS)
Dietze, Michael
2017-04-01
Environmental seismology is the science of investigating the seismic signals that are emitted by Earth surface processes. This emerging field provides unique opportunities to identify, locate, track and inspect a wide range of the processes that shape our planet. Modern broadband seismometers are sensitive enough to detect signals from sources as weak as wind interacting with the ground and as powerful as collapsing mountains. This places the field of environmental seismology at the seams of many geoscientific disciplines and requires integration of a series of specialised analysis techniques. R provides the perfect environment for this challenge. The package eseis uses the foundations laid by a series of existing packages and data types tailored to solve specialised problems (e.g., signal, sp, rgdal, Rcpp, matrixStats) and thus provides access to efficiently handling large streams of seismic data (> 300 million samples per station and day). It supports standard data formats (mseed, sac), preparation techniques (deconvolution, filtering, rotation), processing methods (spectra, spectrograms, event picking, migration for localisation) and data visualisation. Thus, eseis provides a seamless approach to the entire workflow of environmental seismology and passes the output to related analysis fields with temporal, spatial and modelling focus in R.
Effect of Time Varying Gravity on DORIS processing for ITRF2013
NASA Astrophysics Data System (ADS)
Zelensky, N. P.; Lemoine, F. G.; Chinn, D. S.; Beall, J. W.; Melachroinos, S. A.; Beckley, B. D.; Pavlis, D.; Wimert, J.
2013-12-01
Computations are under way to develop a new time series of DORIS SINEX solutions to contribute to the development of the new realization of the terrestrial reference frame (cf. ITRF2013). One of the improvements envisaged is the application of improved models of time-variable gravity in the background orbit modeling. At GSFC we have developed a time series of spherical harmonics to degree and order 5 (using the GOCO02S model as a base), based on the processing of SLR and DORIS data to 14 satellites from 1993 to 2013. This is compared with the standard approach used in ITRF2008, based on the static model EIGEN-GL04S1, which included secular variations in only a few select coefficients. Previous work on altimeter satellite POD (cf. TOPEX/Poseidon, Jason-1, Jason-2) has shown that the standard model is not adequate and that orbit improvements are observed with application of more detailed models of time-variable gravity. In this study, we quantify the impact of TVG modeling on DORIS satellite POD, and ascertain the impact on DORIS station positions estimated weekly from 1993 to 2013. The numerous recent improvements to SLR and DORIS processing at GSFC include more complete compliance with IERS 2010 standards, improvements to SLR/DORIS measurement modeling, and improved non-conservative force modeling for DORIS satellites. These improvements will affect gravity coefficient estimates, POD, and the station solutions. Tests evaluate the impact of time-varying gravity on tracking data residuals, station consistency, and the geocenter and scale reference frame parameters.
Development of noise emission measurement specifications for color printing multifunctional devices
NASA Astrophysics Data System (ADS)
Kimizuka, Ikuo
2005-09-01
Color printing (including copying) is becoming a more popular application at home as well as in offices. Existing de jure and/or industrial standards (such as ISO 7779, ECMA-74, the ANSI S12.10 series, etc.), however, specify only monochrome patterns, which are mainly intended for acoustic noise testing of mechanical impact type printers. This paper discusses the key issues and corresponding resolutions in the development of color printing patterns for acoustic noise measurements. The results of this technical work will be published as JBMS-74 (a new JBMIA industry standard, within 2005) and will hopefully become the technical basis for updating the other standards mentioned above. This paper also presents the development process and key features of the proposed patterns.
WaterML, an Information Standard for the Exchange of in-situ hydrological observations
NASA Astrophysics Data System (ADS)
Valentine, D.; Taylor, P.; Zaslavsky, I.
2012-04-01
The WaterML 2.0 Standards Working Group (SWG), working within the Open Geospatial Consortium (OGC) and in cooperation with the joint OGC-World Meteorological Organization (WMO) Hydrology Domain Working Group (HDWG), has developed an open standard for the exchange of water observation data: WaterML 2.0. The focus of the standard is time-series data, commonly generated from in-situ monitoring. This is high-value data for hydrological applications such as flood forecasting, environmental reporting and supporting hydrological infrastructure (e.g. dams, supply systems); it is commonly exchanged, but a lack of standards inhibits efficient reuse and automation. The process of developing WaterML required a harmonization analysis of existing standards to identify overlapping concepts and reach agreement on harmonized definitions. Generally the formats captured similar requirements, all with subtle differences, such as how time-series point metadata was handled. The in-progress standard WaterML 2.0 incorporates the semantics of the hydrologic information: location, procedure, and observations, and is implemented as an application schema of the Geography Markup Language version 3.2.1, making use of the OGC Observations & Measurements standards. WaterML 2.0 is designed as an extensible schema to allow encoding of data to be used in a variety of exchange scenarios. Example areas of usage are: exchange of data for operational hydrological monitoring programs; supporting operation of infrastructure (e.g. dams, supply systems); cross-border exchange of observational data; release of data for public dissemination; enhancing disaster management through data exchange; and exchange in support of national reporting. The first phase of WaterML 2.0 focused on structural definitions allowing for the transfer of time series, with less work on harmonization of vocabulary items such as quality codes. Vocabularies from various organizations tend to be specific and take time to reach agreement on. This will be continued in future work by the HDWG, along with extending the information model to cover additional types of hydrologic information: rating and gauging information, and water quality. Rating curves, gaugings and river cross sections are commonly exchanged in addition to standard time-series data to convey information supporting conversions such as river level to discharge; members of the HDWG plan to initiate this work in early 2012. Water quality data is varied in the way it is processed and in the number of phenomena it measures; it will require specific extensions to the WaterML 2.0 model, most likely making use of the specimen types within O&M and extensive use of controlled vocabularies. Other future work involves different target encodings of the WaterML 2.0 conceptual model, such as JSON, netCDF and CSV; these are optimized for particular needs, such as efficiency in encoding size and ease of parsing, but may not be capable of representing the full extent of the WaterML 2.0 information model. Certain encodings are best matched to particular needs; the community has begun investigating when and how best to implement these.
Maximum likelihood estimation for periodic autoregressive moving average models
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
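To make the moment-based alternative mentioned above concrete, the following Python sketch simulates the simplest periodic model, a zero-mean PAR(1) with period S, and recovers its seasonal coefficients with the seasonal Yule-Walker (method-of-moments) estimator. This is an illustration of the comparison baseline, not the maximum likelihood algorithm developed in the paper.

import numpy as np

rng = np.random.default_rng(0)
S = 12                                                         # period (e.g., months)
phi_true = 0.3 + 0.5 * np.sin(2 * np.pi * np.arange(S) / S)    # seasonal AR(1) coefficients

# Simulate a zero-mean periodic AR(1): x[t] = phi[t mod S] * x[t-1] + e[t]
n = 12000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true[t % S] * x[t - 1] + rng.normal()

# Seasonal Yule-Walker estimate: regress x[t] on x[t-1] separately for each season
phi_hat = np.zeros(S)
for s in range(S):
    t_idx = np.arange(n)
    idx = t_idx[(t_idx % S == s) & (t_idx > 0)]
    phi_hat[s] = np.sum(x[idx] * x[idx - 1]) / np.sum(x[idx - 1] ** 2)

print("true phi:     ", np.round(phi_true, 2))
print("estimated phi:", np.round(phi_hat, 2))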
77 FR 19148 - Special Conditions: Airbus, A350-900 Series Airplane; Crew Rest Compartments
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-30
...-900 series airplanes. These airplanes will have novel or unusual design features associated with two... standards for this design feature. These proposed special conditions contain the additional safety standards... for FAA type certification to June 28, 2009. The A350-900 series has a conventional layout with twin...
Epoxy-based production of wind turbine rotor blades: occupational contact allergies.
Pontén, Ann; Carstensen, Ole; Rasmussen, Kurt; Gruvberger, Birgitta; Isaksson, Marléne; Bruze, Magnus
2004-03-01
An industry producing rotor blades for wind turbines with an epoxy-based technology had experienced an increasing number of workers with dermatitis, among whom the frequency of occupational contact allergy (OCA) was suspected to be underestimated. To investigate the frequency of OCA by patch-testing with a specially profiled occupational patch test series. In a blinded study design, 603 workers were first interviewed and thereafter clinically examined. Based on a history of work-related skin disease, clinical findings of dermatitis, or both, 325 (53.9%) of the workers were patch-tested with an occupational patch test series and the European Standard patch test series. Of the 603 investigated workers, 10.9% had OCA and 5.6% had contact allergy to epoxy resin in the standard test series. Contact allergy to amine hardeners/catalysts was found in 4.1% of the workers. Among the workers with OCA, 48.5% reacted to work material other than epoxy resin in the European Standard patch test series. Approximately 50% of the workers with OCA would not have been detected if only the European Standard patch test series had been used.
40 CFR 94.8 - Exhaust emission standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
...)(i) of this section. (f) The following define the requirements for low-emitting Blue Sky Series engines: (1) Voluntary standards. (i) Category 1 and Category 2 engines may be designated “Blue Sky Series... may be designated “Blue Sky Series” engines by meeting these voluntary standards that would apply to...
Crashworthiness of Small Poststandard School Buses: Safety Study.
ERIC Educational Resources Information Center
National Transportation Safety Board (DOT), Washington, DC.
In 1977, a series of Federal Motor Vehicle Safety Standards (FMVSS) for school buses became effective, mandating different performance standards for school buses compared to other buses. Because data on the crash performance of school buses built to these standards were lacking, the National Transportation Safety Board conducted a series of…
Using in-situ Glider Data to Improve the Interpretation of Time-Series Data in the San Pedro Channel
NASA Astrophysics Data System (ADS)
Teel, E.; Liu, X.; Seegers, B. N.; Ragan, M. A.; Jones, B. H.; Levine, N. M.
2016-02-01
Oceanic time-series have provided insight into biological, physical, and chemical processes and how these processes change over time. However, time-series data collected near coastal zones have not been used as broadly because of regional features that may prevent extrapolation of local results. Though these sites are inherently more affected by local processes, broadening the application of coastal data is crucial for improved modeling of processes such as total carbon drawdown and the development of oxygen minimum zones. Slocum gliders were deployed off the coast of Los Angeles from February to July of 2013 and 2014 providing high temporal and spatial resolution data of the San Pedro Channel (SPC), which includes the San Pedro Ocean Time Series (SPOT). The data were collapsed onto a standardized grid and primary and secondary characteristics of glider profiles were analyzed by principal component analysis to determine the processes impacting SPC and SPOT. The data fell into four categories: active upwelling, offshore intrusion, subsurface bloom, and surface bloom. Waters across the SPC were most similar to offshore water masses, even during the upwelling season when near-shore blooms are commonly observed. The SPOT site was found to be representative of the SPC 86% of the time, suggesting that the findings from SPOT are applicable for the entire SPC. Subsurface blooms were common in both years with co-located chlorophyll and particle maxima, and results suggested that these subsurface blooms contribute significantly to the local primary production. Satellite estimation of integrated chlorophyll was poor, possibly due to the prevalence of subsurface blooms and shallow optical depths during surface blooms. These results indicate that high resolution in-situ glider deployments can be used to determine the spatial domain of coastal time-series data, allowing for broader application of these datasets and greater integration into modeling efforts.
NASA Astrophysics Data System (ADS)
Antonik, Piotr; Haelterman, Marc; Massar, Serge
2017-05-01
Reservoir computing is a bioinspired computing paradigm for processing time-dependent signals. Its hardware implementations have received much attention because of their simplicity and remarkable performance on a series of benchmark tasks. In previous experiments, the output was uncoupled from the system and, in most cases, simply computed off-line on a postprocessing computer. However, numerical investigations have shown that feeding the output back into the reservoir opens the possibility of long-horizon time-series forecasting. Here, we present a photonic reservoir computer with output feedback, and we demonstrate its capacity to generate periodic time series and to emulate chaotic systems. We study in detail the effect of experimental noise on system performance. In the case of chaotic systems, we introduce several metrics, based on standard signal-processing techniques, to evaluate the quality of the emulation. Our work significantly enlarges the range of tasks that can be solved by hardware reservoir computers and, therefore, the range of applications they could potentially tackle. It also raises interesting questions in nonlinear dynamics and chaos theory.
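As a software analogue of the output-feedback idea (and not a model of the photonic hardware itself), the sketch below trains a small echo state network readout with ridge regression under teacher forcing and then runs the reservoir autonomously, feeding its own output back as input to generate a periodic time series. Reservoir size, spectral radius, and regularization are arbitrary assumptions for the example.

import numpy as np

rng = np.random.default_rng(0)
N = 200                                  # reservoir size
steps, washout = 2000, 200

# Target signal: a simple periodic time series
t = np.arange(steps + 1)
y = np.sin(2 * np.pi * t / 40)

# Random reservoir, rescaled to spectral radius 0.9
W = rng.normal(size=(N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
w_in = rng.uniform(-0.5, 0.5, size=N)

# Teacher forcing: drive the reservoir with the true signal and record states
x = np.zeros(N)
states = []
for k in range(steps):
    x = np.tanh(W @ x + w_in * y[k])
    states.append(x.copy())
X = np.array(states[washout:])
target = y[washout + 1: steps + 1]

# Linear readout trained by ridge regression
ridge = 1e-6
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ target)

# Autonomous generation: the readout output becomes the next input
x = states[-1].copy()
out = y[steps]
generated = []
for _ in range(200):
    x = np.tanh(W @ x + w_in * out)
    out = x @ w_out
    generated.append(out)
print("first generated values:", np.round(generated[:5], 3))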
NASA Astrophysics Data System (ADS)
Moser, Markus; Mehlhorn, Susanne; Rudolf-Miklau, Florian; Suda, Jürgen
2017-04-01
Since the beginning of systematic torrent control in Austria 130 years ago, barriers have been constructed for protection purposes. Until the end of the 1960s, solid barriers were built at the exits of depositional areas to prevent dangerous debris flows from reaching high-consequence areas. The development of solid barriers with large slots or slits to regulate sediment transport began with the use of reinforced concrete during the 1970s (Rudolf-Miklau, Suda 2011). To dissipate the energy of debris flows, debris flow breakers have been designed since the 1980s. By slowing and depositing the surge front of the debris flow, downstream reaches of the stream channel and settlement areas should be exposed to considerably lower dynamic impact. In the past, the technological development of these structures was steered only by engineering practice and experience; an institutionalized standardization process comparable to that in other engineering branches did not exist. In future, all structures have to be designed and dimensioned according to the EUROCODE standards. This was the reason to establish an interdisciplinary working group (ON-K 256) at the Austrian Standards Institute (ASI), which has developed comprehensive new technical standards for torrent control engineering, including load models, design, dimensioning and life cycle assessment of torrent control works (the technical standard ONR 24800 series). Extreme torrential events comprise four definable displacement processes: floods; fluvial solid transport; hyper-concentrated solid transport (debris floods); and debris flow (stony debris flow or mud-earth flow). As a rule, the design of a torrential barrier has to follow its function (Kettl, 1984). Modern protection concepts in torrent control are scenario-oriented and try to optimize different functions in a chain of protection structures (function chain). The first step in designing the optimal construction type is the definition of the displacement process for each torrent section; the criteria for each process are defined in the technical standard ONR 24800 series in Austria. According to ONR 24800, the functions of torrential barriers can be divided into process-control functional types (retention; dosing and filtering; energy dissipation). The last step is the design of the construction type. Bedload and debris events in Austria have demonstrated the functionality of the barriers. On the basis of these findings, some recommendations were derived to improve the function fulfilment of the technical protection measures.
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
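The selection logic described above can be sketched in a few lines of Python using ordinary least squares on synthetic data: fit the turbidity-only model first, and move to the turbidity-plus-streamflow model only if the simple model's error is unacceptable. The acceptance threshold and the simplified error statistic below are illustrative assumptions, not the USGS criteria.

import numpy as np

rng = np.random.default_rng(42)
n = 150
turb = rng.lognormal(mean=3.0, sigma=0.8, size=n)                # turbidity (FNU)
flow = rng.lognormal(mean=4.0, sigma=0.6, size=n)                # streamflow (cfs)
ssc = 2.0 * turb + 0.05 * flow + rng.normal(scale=10, size=n)    # suspended-sediment concentration (mg/L)

def ols(X, y):
    """Ordinary least squares; returns coefficients and a simplified percentage error
    (RMSE relative to the mean), used here as a stand-in for the MSPE criterion."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    pct_err = 100 * np.sqrt(np.mean(resid ** 2)) / np.mean(y)
    return beta, pct_err

beta_s, err_s = ols(turb[:, None], ssc)                          # simple: SSC ~ turbidity
if err_s <= 20:                                                  # illustrative acceptance threshold
    print(f"use the simple model, error = {err_s:.1f}%")
else:
    beta_m, err_m = ols(np.column_stack([turb, flow]), ssc)      # multiple: SSC ~ turbidity + streamflow
    chosen = "multiple" if err_m < err_s else "simple"
    print(f"simple = {err_s:.1f}%, multiple = {err_m:.1f}%, use the {chosen} model")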
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
ERIC Educational Resources Information Center
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
Planning and leading of the technological processes by mechanical working with microsoft project
NASA Astrophysics Data System (ADS)
Nae, I.; Grigore, N.
2016-08-01
Manufacturing systems and methods are changing: new processing technologies emerge, process flows are reduced to a minimum number of phases, the flexibility of technologies increases, and new methods and tools for monitoring and managing processing operations appear. The technological route (the sequence of operations, set-ups, and execution phases through which a part passes to obtain the final product from the blank) is a sequence of activities carried out in a logical order, on a well-defined schedule, with a defined budget and resources. A project, likewise, can be defined as a series of specific, methodically structured activities aimed at completing a specific objective within a fixed schedule and budget. Given this correspondence between a project and a technological route, this paper presents the definition of the technological route for a machining (chip-removal) process using Microsoft Project. The paper highlights the advantages of this approach: rapid evaluation of alternative technological variants to select the optimal process, job scheduling under various constraints, and the standardization of certain processing operations.
The GeoDataPortal: A Standards-based Environmental Modeling Data Access and Manipulation Toolkit
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Kunicki, T.; Booth, N.; Suftin, I.; Zoerb, R.; Walker, J.
2010-12-01
Environmental modelers from fields of study such as climatology, hydrology, geology, and ecology rely on many data sources and processing methods that are common across these disciplines. Interest in inter-disciplinary, loosely coupled modeling and data sharing is increasing among scientists from the USGS, other agencies, and academia. For example, hydrologic modelers need downscaled climate change scenarios and land cover data summarized for the watersheds they are modeling. Subsequently, ecological modelers are interested in soil moisture information for a particular habitat type as predicted by the hydrologic modeler. The USGS Center for Integrated Data Analytics Geo Data Portal (GDP) project seeks to facilitate this loose model coupling data sharing through broadly applicable open-source web processing services. These services simplify and streamline the time consuming and resource intensive tasks that are barriers to inter-disciplinary collaboration. The GDP framework includes a catalog describing projects, models, data, processes, and how they relate. Using newly introduced data, or sources already known to the catalog, the GDP facilitates access to sub-sets and common derivatives of data in numerous formats on disparate web servers. The GDP performs many of the critical functions needed to summarize data sources into modeling units regardless of scale or volume. A user can specify their analysis zones or modeling units as an Open Geospatial Consortium (OGC) standard Web Feature Service (WFS). Utilities to cache Shapefiles and other common GIS input formats have been developed to aid in making the geometry available for processing via WFS. Dataset access in the GDP relies primarily on the Unidata NetCDF-Java library’s common data model. Data transfer relies on methods provided by Unidata’s Thematic Real-time Environmental Data Distribution System Data Server (TDS). TDS services of interest include the Open-source Project for a Network Data Access Protocol (OPeNDAP) standard for gridded time series, the OGC’s Web Coverage Service for high-density static gridded data, and Unidata’s CDM-remote for point time series. OGC WFS and Sensor Observation Service (SOS) are being explored as mechanisms to serve and access static or time series data attributed to vector geometry. A set of standardized XML-based output formats allows easy transformation into a wide variety of “model-ready” formats. Interested users will have the option of submitting custom transformations to the GDP or transforming the XML output as a post-process. The GDP project aims to support simple, rapid development of thin user interfaces (like web portals) to commonly needed environmental modeling-related data access and manipulation tools. Standalone, service-oriented components of the GDP framework provide the metadata cataloging, data subset access, and spatial-statistics calculations needed to support interdisciplinary environmental modeling.
A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting
NASA Astrophysics Data System (ADS)
Kim, T.; Joo, K.; Seo, J.; Heo, J. H.
2016-12-01
Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach to time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied for their ability to discover relevant features in nonlinear relations among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, an ensemble of decision trees using a multiple-predictor approach, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for Chungju dam in South Korea were used for modeling and forecasting. In order to evaluate the performance of the models, one-step-ahead and multi-step-ahead forecasting was applied. The root mean squared error and mean absolute error of the two models were compared.
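A compact sketch of this kind of comparison, using a seasonal ARIMA model from statsmodels and a Random Forest with lagged inflows from scikit-learn on synthetic monthly data; the model orders, number of lags, and hold-out split are illustrative choices and not those of the study.

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 360                                   # 30 years of synthetic monthly "inflow"
m = np.arange(n)
y = 100 + 40 * np.sin(2 * np.pi * m / 12) + rng.normal(scale=15, size=n)

train, test = y[:-12], y[-12:]            # hold out the final year

# Stochastic approach: seasonal ARIMA
sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
pred_sarima = sarima.forecast(steps=12)

# Machine-learning approach: Random Forest on 12 lagged values
lags = 12
X = np.array([y[i - lags:i] for i in range(lags, n - 12)])
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y[lags:n - 12])

history = list(train[-lags:])
pred_rf = []
for _ in range(12):                       # recursive multi-step-ahead forecast
    yhat = rf.predict(np.array(history[-lags:]).reshape(1, -1))[0]
    pred_rf.append(yhat)
    history.append(yhat)

rmse = lambda p: np.sqrt(np.mean((test - np.asarray(p)) ** 2))
mae = lambda p: np.mean(np.abs(test - np.asarray(p)))
print(f"SARIMA RMSE={rmse(pred_sarima):.1f} MAE={mae(pred_sarima):.1f}")
print(f"RF     RMSE={rmse(pred_rf):.1f} MAE={mae(pred_rf):.1f}")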
Jadidi, Masoud; Båth, Magnus; Nyrén, Sven
2018-04-09
To compare the quality of images obtained with two protocols with different acquisition times, and the influence of image post-processing, in a chest digital tomosynthesis (DTS) system. 20 patients with suspected lung cancer were imaged with chest X-ray equipment with a tomosynthesis option. Two examination protocols with different acquisition times (6.3 and 12 s) were performed on each patient. Both protocols were presented with two different image post-processings (standard DTS processing and a more advanced processing optimised for chest radiography). Thus, 4 series from each patient, altogether 80 series, were presented anonymously and in a random order. Five observers rated the quality of the reconstructed section images according to predefined quality criteria in three different classes. Visual grading characteristics (VGC) was used to analyse the data and the area under the VGC curve (AUC_VGC) was used as the figure-of-merit. The 12 s protocol and the standard DTS processing were used as references in the analyses. The protocol with 6.3 s acquisition time had a statistically significant advantage over the vendor-recommended protocol with 12 s acquisition time for the classes of criteria Demarcation (AUC_VGC = 0.56, p = 0.009) and Disturbance (AUC_VGC = 0.58, p < 0.001). A similar value of AUC_VGC was found for the class Structure (definition of bone structures in the spine) (0.56), but it could not be statistically separated from 0.5 (p = 0.21). For the image processing, the VGC analysis showed a small but statistically significant advantage for the standard DTS processing over the more advanced processing for the classes of criteria Demarcation (AUC_VGC = 0.45, p = 0.017) and Disturbance (AUC_VGC = 0.43, p = 0.005). A similar value of AUC_VGC was found for the class Structure (0.46), but it could not be statistically separated from 0.5 (p = 0.31). The study indicates that the protocol with 6.3 s acquisition time yields slightly better image quality than the vendor-recommended protocol with an acquisition time of 12 s for several anatomical structures. Furthermore, the standard gradation processing (the vendor-recommended post-processing for DTS) yields to some extent an advantage over the gradation processing/multiobjective frequency processing/flexible noise control processing in terms of image quality for all classes of criteria. Advances in knowledge: The study shows that image quality may be strongly affected by the selection of DTS protocol and that the vendor-recommended protocol may not always be the optimal choice.
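Visual grading characteristics analysis compares ordinal image-quality ratings between two conditions much as an ROC analysis compares two rating distributions, and the area under the VGC curve can be estimated non-parametrically as the probability that a rating under one condition exceeds a rating under the other, counting ties as one half. The Python sketch below illustrates that rank-based estimate on made-up ratings; it is not the study's exact VGC procedure or its significance testing.

import numpy as np

# Made-up ordinal ratings (1-5) for the same criterion under two protocols
ratings_6s = np.array([4, 4, 5, 3, 4, 5, 4, 3, 4, 5, 4, 4])
ratings_12s = np.array([3, 4, 4, 3, 4, 4, 3, 3, 4, 4, 3, 4])

def auc_vgc(a, b):
    """P(rating in a > rating in b) + 0.5 * P(equal); 0.5 means no difference."""
    a = a[:, None]
    return ((a > b).sum() + 0.5 * (a == b).sum()) / (a.size * b.size)

print(f"AUC_VGC (6.3 s vs 12 s): {auc_vgc(ratings_6s, ratings_12s):.2f}")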
Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng
2017-12-01
Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
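One simple way to quantify the two properties discussed, repeatability and proportionality, from a dilution-series design is to compute the replicate coefficient of variation at each dilution and to compare a strictly proportional (zero-intercept) fit of mean count versus dilution fraction against an unconstrained linear fit. The Python sketch below is an illustrative simplification on made-up data, not the statistical procedure of the paper.

import numpy as np

# Dilution fractions and replicate counts (cells/mL), made-up data
frac = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
counts = np.array([
    [1.02e6, 0.98e6, 1.05e6],
    [0.81e6, 0.79e6, 0.84e6],
    [0.60e6, 0.63e6, 0.58e6],
    [0.41e6, 0.39e6, 0.42e6],
    [0.21e6, 0.20e6, 0.19e6],
])

means = counts.mean(axis=1)
cv = 100 * counts.std(axis=1, ddof=1) / means            # repeatability per dilution (%)

# Proportional model: count = k * frac (zero intercept)
k = np.sum(frac * means) / np.sum(frac ** 2)
# Unconstrained linear model: count = a * frac + b
a, b = np.polyfit(frac, means, 1)

print("replicate CV (%):", np.round(cv, 1))
print(f"proportional slope k = {k:.3g}, linear intercept b = {b:.3g}")
print("a small |b| relative to k suggests good proportionality")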
The Vanderbilt Professional Nursing Practice Program, part 3: managing an advancement process.
Steaban, Robin; Fudge, Mitzie; Leutgens, Wendy; Wells, Nancy
2003-11-01
Consistency of performance standards across multiple clinical settings is an essential component of a credible advancement system. Our advancement process incorporates a central committee, composed of nurses from all clinical settings within the institution, to ensure consistency of performance in inpatient, outpatient, and procedural settings. An analysis of nurses advanced during the first 18 months of the program indicates that performance standards are applicable to nurses in all clinical settings. The first article (September 2003) in this 3-part series described the foundation for and the philosophical background of the Vanderbilt Professional Nursing Practice Program (VPNPP), the career advancement program underway at Vanderbilt University Medical Center. Part 2 described the development of the evaluation tools used in the VPNPP, the implementation and management of this new system, program evaluation, and improvements since the program's inception. The purpose of this article is to review the advancement process, review the roles of those involved in the process, and to describe outcomes and lessons learned.
Hernández-Martin, Estefania; Marcano, Francisco; Casanova, Oscar; Modroño, Cristian; Plata-Bello, Julio; González-Mora, Jose Luis
2017-01-01
Diffuse optical tomography (DOT) measures concentration changes in both oxy- and deoxyhemoglobin, providing three-dimensional images of local brain activations. A pilot study is presented that compares DOT and functional magnetic resonance imaging (fMRI) volumes through t-maps given by canonical statistical parametric mapping (SPM) processing of both data modalities. The DOT series were processed using a method based on applying a Bayesian filter to raw DOT data to remove physiological changes and a minimum description length index to select the number of singular values, which reduces the data dimensionality during image reconstruction, followed by adaptation of the DOT volume series to normalized standard space. Statistical analysis is then performed with canonical SPM software in the same way as fMRI analysis, accepting DOT volumes as if they were fMRI volumes. The results show the reproducibility and ruggedness of the method for processing DOT series in group analyses using cognitive paradigms on the prefrontal cortex. Difficulties are considered, such as the fact that scalp–brain distances vary between subjects and that cerebral activations are difficult to reproduce owing to the strategies used by the subjects to solve arithmetic problems. T-images from fMRI and DOT volume series analyzed in SPM show that, at the functional level, both DOT and fMRI detect the same areas, although DOT provides information about cerebral activity complementary to fMRI signals. PMID:28386575
An array processing system for lunar geochemical and geophysical data
NASA Technical Reports Server (NTRS)
Eliason, E. M.; Soderblom, L. A.
1977-01-01
A computerized array processing system has been developed to reduce, analyze, display, and correlate a large number of orbital and earth-based geochemical, geophysical, and geological measurements of the moon on a global scale. The system supports the activities of a consortium of about 30 lunar scientists involved in data synthesis studies. The system was modeled after standard digital image-processing techniques but differs in that processing is performed with floating point precision rather than integer precision. Because of flexibility in floating-point image processing, a series of techniques that are impossible or cumbersome in conventional integer processing were developed to perform optimum interpolation and smoothing of data. Recently color maps of about 25 lunar geophysical and geochemical variables have been generated.
ERIC Educational Resources Information Center
Jones, Barbara; Tobiason, Glory; Chang, Sandy; Heritage, Margaret; Herman, Joan
2015-01-01
This resource is part of a series produced by the Center for Standards and Assessment Implementation (CSAI) to assist teachers and those who support teachers to plan teaching and learning from College and Career Ready Standards (CCRS) for all students, including students with disabilities, English learners, academically at-risk students, students…
2015-05-20
Joint Oil Analysis Program Spectrometer Standards: SCP Science (Conostan) Qualification Report for D19-0, D3-100, and D12-XXX Series Standards. [Front-matter excerpt; the original lists tables of candidate D19-0 ICP-AES results, D12-XXX physical property results, and D12-XXX rotrode-AES results.]
Implementation of a formulary management process.
Karel, Lauren I; Delisle, Dennis R; Anagnostis, Ellena A; Wordell, Cindy J
2017-08-15
The application of lean methodology in an initiative to redesign the formulary maintenance process at an academic medical center is described. Maintaining a hospital formulary requires clear communication and coordination among multiple members of the pharmacy department. Using principles of lean methodology, pharmacy department personnel within a multihospital health system launched a multifaceted initiative to optimize formulary management systemwide. The ongoing initiative began with creation of a formulary maintenance redesign committee consisting of pharmacy department personnel with expertise in informatics, automation, purchasing, drug information, and clinical pharmacy services. The committee met regularly and used lean methodology to design a standardized process for management of formulary additions and deletions and changes to medications' formulary status. Through value stream analysis, opportunities for process and performance improvement were identified; staff suggestions on process streamlining were gathered during a series of departmental kaizen events. A standardized template for development and dissemination of monographs associated with formulary additions and status changes was created. In addition, a shared Web-based checklist was developed to facilitate information sharing and timely initiation and completion of tasks involved in formulary status changes, and a permanent formulary maintenance committee was established to monitor and refine the formulary management process. A clearly defined, standardized process within the pharmacy department was developed for tracking necessary steps in enacting formulary changes to encourage safe and efficient workflow. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Satellite-motion Compensation for Monitoring Travelling Ionospheric Disturbances (TIDs) Using GPS
NASA Astrophysics Data System (ADS)
Jackson-Booth, N.; Penney, R.
2016-12-01
The ionosphere exerts a strong influence over a wide range of modern communications and navigation systems, but is subject to complex influences from both terrestrial and solar sources. Ionospheric disturbances can be triggered by lower-atmosphere phenomena such as hurricanes and by geophysical events such as earthquakes, and are also strongly influenced by cyclical and unpredictable solar behaviour. Dual-band GPS receivers provide a popular and convenient means of obtaining information about the ionosphere and ionospheric disturbances. While GPS measurements can provide clues about the state of the ionosphere, there are many challenges in obtaining reliable information from them. For example, drop-outs and carrier-phase cycle slips may have little influence on using GPS for (medium-precision) navigation, but can lead to signal-processing artefacts that would cause false alarms in detecting ionospheric disturbances. If one is interested in measuring the motion of travelling ionospheric disturbances (TIDs), one must also be able to disentangle the effects of satellite motion from the TID motion. We discuss a novel approach to robustly separating TID waveforms from background trends within GPS time-series of total electron content (TEC), as well as innovative techniques for estimating TID velocities using ideas from Synthetic Aperture Radar (SAR). Underpinning these, we consider how to robustly pre-process GPS time-series to reduce the influence of drop-outs while also reducing data volumes. We present comparisons of our TID velocity estimates with more standard "cross-correlation" techniques, including cases where these standard techniques produce pathological results. We also show results from simulated GPS time-series derived from modelled ionospheric disturbances.
GNSS station displacement analysis
NASA Astrophysics Data System (ADS)
Haritonova, Diana; Balodis, Janis; Janpaule, Inese; Normand, Madara
2013-04-01
Time series of GNSS station results for both the EUPOS®-Riga and LatPos networks have been developed at the Institute of Geodesy and Geoinformation (University of Latvia). Reference stations from the EUREF Permanent Network (EPN) in the surroundings of Latvia have been used, and the Bernese GPS Software, Version 5.0, was applied in both static and kinematic modes. The standard data sets were taken from the IGS data base. The time series results have been analysed, and distinctive daily and subdaily movements of the EUPOS®-Riga and LatPos stations were identified. Possible external factors behind the distribution of GNSS station coordinates, such as seismic activity in some areas of Latvia and periodic processes, are discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-21
... Evacuation Systems Approved Under Technical Standard Order (TSO) TSO-C69b and Installed on Airbus Model A330-200 and -300 Series Airplanes, Model A340-200 and -300 Series Airplanes, and Model A340-541 and -642... evacuation systems approved under TSO- C69b and installed on certain Model A330-200 and -300 series airplanes...
Smith, Roger; Patel, Vipul; Satava, Richard
2014-09-01
There is a need for a standardized curriculum for training and assessing robotic surgeons to proficiency, followed by high-stakes testing (HST) for certification. To standardize the curriculum and certification of robotic surgeons, a series of consensus conferences attended by 14 leading international surgical societies has been used to compile the outcomes measures and curriculum that should form the basis of a Fundamentals of Robotic Surgery (FRS) programme. A set of 25 outcomes measures and a curriculum for teaching the skills needed to safely use current-generation surgical robotic systems have been developed and accepted by a committee of experienced robotic surgeons across 14 specialties. A standardized process for certifying the skills of a robotic surgeon has begun to emerge. The work described here documents both the processes used for developing educational material and the educational content of a robotic curriculum. Copyright © 2013 John Wiley & Sons, Ltd.
Normative Databases for Imaging Instrumentation.
Realini, Tony; Zangwill, Linda M; Flanagan, John G; Garway-Heath, David; Patella, Vincent M; Johnson, Chris A; Artes, Paul H; Gaddie, Ian B; Fingeret, Murray
2015-08-01
To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer's database differs in size, eligibility criteria, and ethnic make-up, among other key features. The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments.
Hervella-Garcés, M; García-Gavín, J; Silvestre-Salvador, J F
2016-09-01
The Spanish standard patch test series, as recommended by the Spanish Contact Dermatitis and Skin Allergy Research Group (GEIDAC), has been updated for 2016. The new series replaces the 2012 version and contains the minimum set of allergens recommended for routine investigation of contact allergy in Spain from 2016 onwards. Four haptens (clioquinol, thimerosal, mercury, and primin) have been eliminated owing to a low frequency of relevant allergic reactions, while 3 new allergens (methylisothiazolinone, diazolidinyl urea, and imidazolidinyl urea) have been added. GEIDAC has also modified the recommended aqueous solution concentrations for the 2 classic, major haptens methylchloroisothiazolinone and methylisothiazolinone, which are now to be tested at 200 ppm in aqueous solution, and formaldehyde, which is now to be tested in a 2% aqueous solution. Updating the Spanish standard series is one of the functions of GEIDAC, which is responsible for ensuring that the standard series is suited to the country's epidemiological profile and pattern of contact sensitization. Copyright © 2016 AEDV. Published by Elsevier España, S.L.U. All rights reserved.
Defense Modeling and Simulation Initiative
1992-05-01
project solicitation and priority ranking process, and reviewing policy issues. The activities of the DMSO and MSWG are also supported by a series of... issues have been raised for discussion, including: promulgation of standards for the interoperability of models and simulations; modeling and... have been completed or will be completed in the near term. The policy issues should be defined at a high level in the near term, although their...
MTpy: A Python toolbox for magnetotellurics
Krieger, Lars; Peacock, Jared R.
2014-01-01
In this paper, we introduce the structure and concept of MTpy. Additionally, we show some examples from an everyday work-flow of MT data processing: the generation of standard EDI data files from raw electric (E-) and magnetic flux density (B-) field time series as input, the conversion into MiniSEED data format, as well as the generation of a graphical data representation in the form of a Phase Tensor pseudosection.
The Geostationary Operational Satellite R Series SpaceWire Based Data System
NASA Technical Reports Server (NTRS)
Anderson, William; Birmingham, Michael; Krimchansky, Alexander; Lombardi, Matthew
2016-01-01
The Geostationary Operational Environmental Satellite R-Series Program (GOES-R, S, T, and U) mission is a joint program between National Oceanic & Atmospheric Administration (NOAA) and National Aeronautics & Space Administration (NASA) Goddard Space Flight Center (GSFC). SpaceWire was selected as the science data bus as well as command and telemetry for the GOES instruments. GOES-R, S, T, and U spacecraft have a mission data loss requirement for all data transfers between the instruments and spacecraft requiring error detection and correction at the packet level. The GOES-R Reliable Data Delivery Protocol (GRDDP) [1] was developed in house to provide a means of reliably delivering data among various on board sources and sinks. The GRDDP was presented to and accepted by the European Cooperation for Space Standardization (ECSS) and is part of the ECSS Protocol Identification Standard [2]. GOES-R development and integration is complete and the observatory is scheduled for launch November 2016. Now that instrument to spacecraft integration is complete, GOES-R Project reviewed lessons learned to determine how the GRDDP could be revised to improve the integration process. Based on knowledge gained during the instrument to spacecraft integration process the following is presented to help potential GRDDP users improve their system designs and implementation.
Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford; ...
2016-01-01
The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level processing and network-level processing algorithms (e.g., Le Bras and Wuster (2002)). At the station-level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network-level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station-level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.
Evaluating the Effectiveness of the 2001-2002 NASA CONNECT(tm) Program
NASA Technical Reports Server (NTRS)
Pinelli, Thomas E.; Frank, Kari Lou; Lambert, Matthew A.; Williams, Amy C.
2002-01-01
NASA CONNECT(tm) is a research and standards-based, integrated mathematics, science, and technology series of 30-minute instructional distance learning (television and web-based) programs for students in grades 6-8. Respondents who evaluated the programs in the 2001-2002 NASA CONNECT(tm) series reported that (1) they used the programs in the series; (2) the goals and objectives for the series were met; (3) the programs were aligned with the national mathematics, science, and technology standards; (4) the program content was developmentally appropriate for grade level; and (5) the programs in the series enhanced and enriched the teaching of mathematics, science, and technology.
Evaluating the Effectiveness of the 2002-2003 NASA CONNECT(TM) Program
NASA Technical Reports Server (NTRS)
Pinelli, Thomas E.; Lambert, Matthew A.; Williams, Amy C.
2004-01-01
NASA CONNECT is a research-, inquiry-, and standards-based, integrated mathematics, science, and technology series of 30-minute instructional distance learning (television and web-based) programs for students in grades 6-8. Respondents who evaluated the programs in the 2002-2003 NASA CONNECT series reported that (1) they used the programs in the series; (2) the goals and objectives for the series were met; (3) the programs were aligned with the national mathematics, science, and technology standards; (4) the program content was developmentally appropriate for grade level; and (5) the programs in the series enhanced and enriched the teaching of mathematics, science, and technology.
NASA Technical Reports Server (NTRS)
Rowlands, D. D.; Luthcke, S. B.; McCarthy J. J.; Klosko, S. M.; Chinn, D. S.; Lemoine, F. G.; Boy, J.-P.; Sabaka, T. J.
2010-01-01
The differences between mass concentration (mascon) parameters and standard Stokes coefficient parameters in the recovery of gravity information from Gravity Recovery and Climate Experiment (GRACE) intersatellite K-band range rate data are investigated. First, mascons are decomposed into their Stokes coefficient representations to gauge the range of solutions available using each of the two types of parameters. Next, a direct comparison is made between two time series of unconstrained gravity solutions, one based on a set of global equal area mascon parameters (equivalent to 4° x 4° at the equator), and the other based on standard Stokes coefficients, with each time series using the same fundamental processing of the GRACE tracking data. It is shown that in unconstrained solutions, the type of gravity parameter being estimated does not qualitatively affect the estimated gravity field. It is also shown that many of the differences in mass flux derivations from GRACE gravity solutions arise from the type of smoothing being used and that the type of smoothing that can be embedded in mascon solutions has distinct advantages over postsolution smoothing. Finally, a 1 year time series based on global 2° equal area mascons estimated every 10 days is presented.
Ferrero, A; Campos, J; Rabal, A M; Pons, A; Hernanz, M L; Corróns, A
2011-09-26
The Bidirectional Reflectance Distribution Function (BRDF) is essential to characterize an object's reflectance properties. This function depends both on the various illumination-observation geometries and on the wavelength. As a result, the comprehensive interpretation of the data becomes rather complex. In this work we assess the use of the multivariable analysis technique of Principal Components Analysis (PCA) applied to the experimental BRDF data of a ceramic colour standard. It will be shown that the result may be linked to the various reflection processes occurring on the surface, assuming that the incoming spectral distribution is affected by each one of these processes in a specific manner. Moreover, this procedure facilitates the task of interpolating a series of BRDF measurements obtained for a particular sample. © 2011 Optical Society of America
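As an illustration of the kind of decomposition described (using synthetic data rather than the authors' measurements), the sketch below applies PCA to a matrix of spectral BRDF values arranged as geometries by wavelengths, reports how much variance the leading components capture, and reconstructs one measurement from them.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_geom, n_wl = 120, 31                          # geometries x wavelengths (400-700 nm)
wl = np.linspace(400, 700, n_wl)

# Synthetic BRDF: a diffuse spectral component plus a geometry-dependent specular-like term
diffuse = 0.4 + 0.3 * np.exp(-((wl - 600) / 80) ** 2)
specular_weight = rng.uniform(0, 1, size=n_geom)[:, None]
brdf = diffuse + specular_weight * 0.2 + rng.normal(scale=0.01, size=(n_geom, n_wl))

pca = PCA(n_components=3).fit(brdf)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))

# Reconstruct one measurement from the leading components
scores = pca.transform(brdf)
recon = pca.inverse_transform(scores)[0]
print("max reconstruction error:", float(np.max(np.abs(recon - brdf[0]))))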
Whole Building Efficiency for Whole Foods: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deru, M.; Doebber, I.; Hirsch, A.
2013-02-01
The National Renewable Energy Laboratory partnered with Whole Foods Market under the Commercial Building Partnership (CBP) program to design and implement a new store in Raleigh, North Carolina. The result was a design with a predicted energy savings of 40% over ASHRAE Standard 90.1-2004, and 25% energy savings over their standard design. Measured performance of the as-built building showed that the building did not achieve the predicted performance. A detailed review of the project several months after opening revealed a series of construction and controls items that were not implemented properly and were not fully corrected in the commissioning process.
FDA recognition of consensus standards in the premarket notification program.
Marlowe, D E; Phillips, P J
1998-01-01
"The FDA has long advocated the use of standards as a significant contributor to safety and effectiveness of medical devices," Center for Devices and Radiological Health's (CDRH) Donald E. Marlowe and Philip J. Phillips note in the following article, highlighting the latest U.S. Food and Drug Administration (FDA) plans for use of standards. They note that the important role standards can play has been reinforced as part of FDA reengineering efforts undertaken in anticipation of an increased regulatory work-load and declining agency resources. As part of its restructuring effort, the FDA announced last spring that it would recognize some consensus standards for use in the device approval process. Under the new 510(k) paradigm--the FDA's proposal to streamline premarket review, which includes incorporating the use of standards in the review of 510(k) submissions--the FDA will accept proof of compliance with standards as evidence of device safety and effectiveness. Manufacturers may submit declarations of conformity to standards instead of following the traditional review process. The International Electrotechnical Commission (IEC) 60601 series of consensus standards, which deals with many safety issues common to electrical medical devices, was the first to be chosen for regulatory review. Other standards developed by nationally or internationally recognized standards development organizations, such as AAMI, may be eligible for use to ensure review requirements. In the following article, Marlowe and Phillips describe the FDA's plans to use standards in the device review process. The article focuses on the use of standards for medical device review, the development of the standards recognition process for reviewing devices, and the anticipated benefits of using standards to review devices. One important development has been the recent implementation of the FDA Modernization Act of 1997 (FDAMA), which advocates the use of standards in the device review process. In implementing the legislation, the FDA published in the Federal Register a list of standards to which manufacturers may declare conformity. Visit AAMI's Web site at www.aami.org/news/fda.standards for a copy of the list and for information on nominating other standards for official recognition by the agency. The FDA expects that use of standards will benefit the agency and manufacturers alike: "We estimate that in time, reliance on declarations of conformity to recognized standards could save the agency considerable resources while reducing the regulatory obstacles to entry to domestic and international markets," state the authors.
ERIC Educational Resources Information Center
Tobiason, Glory; Heritage, Margaret; Chang, Sandy; Jones, Barbara; Herman, Joan
2014-01-01
This resource is part of a series produced by the Center for Standards and Assessment Implementation (CSAI) to assist teachers and those who support teachers to plan teaching and learning from College and Career Ready Standards (CCRS) for all students, including students with disabilities, English learners, academically at-risk students, students…
Vieux, Florent; Dubois, Christophe; Allegre, Laëtitia; Mandon, Lionel; Ciantar, Laurent; Darmon, Nicole
2013-01-01
To assess the impact on food-related cost of meals to fulfill the new compulsory dietary standards for primary schools in France. A descriptive study assessed the relationship between the level of compliance with the standards of observed school meals and their food-related cost. An analytical study assessed the cost of series of meals published in professional journals, and complying or not with new dietary standards. The costs were based on prices actually paid for food used to prepare school meals. Food-related cost of meals. Parametric and nonparametric tests from a total of 42 and 120 series of 20 meals in the analytical and descriptive studies, respectively. The descriptive study indicated that meeting the standards was not related to cost. The analytical study showed that fulfilling the frequency guidelines increased the cost, whereas fulfilling the portion sizes criteria decreased it. Series of meals fully respecting the standards (ie, frequency and portion sizes) cost significantly less (-0.10 €/meal) than series not fulfilling them, because the standards recommend smaller portion sizes. Introducing portion sizes rules in dietary standards for school catering may help increase dietary quality without increasing the food cost of meals. Copyright © 2013 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Concept of Draft International Standard for a Unified Approach to Space Program Quality Assurance
NASA Astrophysics Data System (ADS)
Stryzhak, Y.; Vasilina, V.; Kurbatov, V.
2002-01-01
In the absence of a unified approach to guaranteeing space project and product quality assurance, implementation of many international space programs has become a challenge. The globalization of the aerospace industry and the participation of various international ventures with diverse quality assurance requirements in large international space programs call for the urgent generation of unified international standards in this field. To ensure successful fulfillment of space missions, aerospace companies should design and produce reliable and safe products with properties complying with or exceeding the user's (or customer's) requirements. The quality of products designed or produced by subcontractors (or other suppliers) should also comply with the main user's (customer's) requirements. Implementation of this involved set of unified requirements will be made possible by creating and approving a system (series) of international standards under the generic title Space Product Quality Assurance, based on a system consensus principle. Conceptual features of the baseline standard in this system (series) should comprise: - Procedures for adapting ISO 9000, CEN and ECSS requirements and introducing them into space product creation, design, manufacture, testing and operation; - Procedures for quality assurance in the initial (design) phases of space programs, with a decision on the end product made based on the principle of independence; - Procedures to arrange incoming inspection of products delivered by subcontractors (including testing, audit of the supplier's procedures, and review of the supplier's documentation), and space product certification; - Procedures to identify the materials and primary products applied; - Procedures for quality system audits at the facilities of component part, primary product and materials suppliers; - Unified procedures to form a list of basic performances to be placed under configuration management; - Unified procedures to form a list of critical space product components, and unified procedures to define risks related to specific component applications and evaluate safety for the entire program implementation. In the eyes of the authors, these features, together with a number of other conceptual proposals, should constitute a unified standard-technical basis for implementing international space programs.
ERIC Educational Resources Information Center
Federal Highway Administration (DOT), Washington, DC. Offices of Research and Development.
Part of the series "Managing Highway Maintenance," the unit is designed for use with unit eight, level one, and unit 13, level two, and the certification tests for those units in the series. It contains typical management data and selected highway maintenance standards for the areas of: surface and shoulder; roadside and drainage; traffic…
ERIC Educational Resources Information Center
Pearson Education, Inc., 2011
2011-01-01
With the June 2, 2010, release of the Common Core State Standards, state-led education standards developed for K-12 English Language Arts and Mathematics, Pearson Learning Assessments and content experts conducted an in-depth study to analyze how the "Stanford 10 Achievement Test Series," Tenth Edition (Stanford 10) and Stanford 10…
Evaluation of scaling invariance embedded in short time series.
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent of a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that, by using the standard central moving average de-trending procedure, this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximately oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that by means of the proposed method one can precisely estimate the Shannon entropy from limited records.
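The core of the method above is the diffusion-entropy idea: build diffusion trajectories from the series, measure the Shannon entropy of their spread, and read the scaling exponent off the slope of entropy versus the logarithm of the window length. The sketch below implements only plain diffusion entropy analysis with a histogram entropy estimate, not the correlation-dependent balanced estimator introduced in the paper; the test series, window lengths, and bin count are illustrative assumptions.

```python
import numpy as np

def diffusion_entropy(xi, window_lengths, bins=50):
    """Plain diffusion entropy analysis: S(l) = A + delta*ln(l),
    where delta is the scaling exponent."""
    entropies = []
    for l in window_lengths:
        # overlapping diffusion trajectories: partial sums over windows of length l
        x = np.array([xi[n:n + l].sum() for n in range(len(xi) - l + 1)])
        # Shannon entropy of the displacement distribution (histogram estimate)
        p, edges = np.histogram(x, bins=bins, density=True)
        dx = edges[1] - edges[0]
        p = p[p > 0]
        entropies.append(-np.sum(p * np.log(p) * dx))
    return np.asarray(entropies)

rng = np.random.default_rng(0)
xi = rng.standard_normal(200)            # short series, length ~10^2
ls = np.arange(4, 40)
S = diffusion_entropy(xi, ls)
delta = np.polyfit(np.log(ls), S, 1)[0]  # slope of S versus ln(l)
print(f"estimated scaling exponent: {delta:.2f}")  # ~0.5 for uncorrelated noise
```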
Application of time-variable process noise in terrestrial reference frames determined from VLBI data
NASA Astrophysics Data System (ADS)
Soja, Benedikt; Gross, Richard S.; Abbondanza, Claudio; Chin, Toshio M.; Heflin, Michael B.; Parker, Jay W.; Wu, Xiaoping; Balidakis, Kyriakos; Nilsson, Tobias; Glaser, Susanne; Karbon, Maria; Heinkelmann, Robert; Schuh, Harald
2018-05-01
In recent years, Kalman filtering has emerged as a suitable technique to determine terrestrial reference frames (TRFs), a prime example being JTRF2014. The time series approach allows variations of station coordinates that are neither reduced by observational corrections nor considered in the functional model to be taken into account. These variations are primarily due to non-tidal geophysical loading effects that are not reduced according to the current IERS Conventions (2010). It is standard practice that the process noise models applied in Kalman filter TRF solutions are derived from time series of loading displacements and account for station dependent differences. So far, it has been assumed that the parameters of these process noise models are constant over time. However, due to the presence of seasonal and irregular variations, this assumption does not truly reflect reality. In this study, we derive a station coordinate process noise model allowing for such temporal variations. This process noise model and one that is a parameterized version of the former are applied in the computation of TRF solutions based on very long baseline interferometry data. In comparison with a solution based on a constant process noise model, we find that the station coordinates are affected at the millimeter level.
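To illustrate the idea of time-variable process noise, the following minimal sketch runs a one-dimensional random-walk (local-level) Kalman filter for a single station coordinate, with the process noise variance modulated by an annual cycle. It is not the JTRF/VLBI processing chain; the model, noise amplitudes, and epoch spacing are illustrative assumptions.

```python
import numpy as np

def kalman_coordinate(obs, obs_var, t_years, q0=1.0, seasonal_amp=0.5):
    """Random-walk Kalman filter for one station coordinate with a
    seasonally modulated (time-variable) process noise variance."""
    x, P = obs[0], obs_var[0]          # initialise from the first epoch
    states = []
    for k in range(1, len(obs)):
        # time-variable process noise: annual modulation (amplitude is illustrative)
        q_k = q0 * (1.0 + seasonal_amp * np.sin(2 * np.pi * t_years[k]))
        P = P + q_k                                # predict
        K = P / (P + obs_var[k])                   # update with epoch-k observation
        x = x + K * (obs[k] - x)
        P = (1 - K) * P
        states.append(x)
    return np.asarray(states)

t = np.arange(0, 5, 7 / 365.25)                    # weekly VLBI-like epochs, in years
truth = 2.0 * np.sin(2 * np.pi * t)                # unmodelled loading signal (mm)
obs = truth + np.random.default_rng(1).normal(0, 1, t.size)
est = kalman_coordinate(obs, np.ones_like(obs), t, q0=0.05)
```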
NASA Astrophysics Data System (ADS)
Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang
2017-10-01
Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics, and study climate change. However, several sensors with wide spatial coverage and high observation frequency are designed with a large field of view (FOV), which causes variations in the sun-target-sensor geometry of time-series reflectance data. In this study, on the basis of the semi-empirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectances under different canopy growth conditions simulated by the Discrete Anisotropic Radiative Transfer (DART) model were used. The semi-empirical model was first fitted using all simulated bidirectional reflectances. Experimental results showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated values. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry with the proposed model. The experimental results showed that the proposed model yielded good fits between the observed and estimated values. The noise-like fluctuations in the time-series reflectance data were also reduced after the sun-target-sensor normalization process.
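A minimal sketch of the kernel-driven idea that the proposed model builds on: given volumetric and geometric kernel values already computed for each observation geometry (assumed as inputs here), the three kernel weights are obtained by linear least squares and then used to predict reflectance at a common reference geometry. This is the generic Ross-Li-style formulation, not the specific semi-empirical model of the paper.

```python
import numpy as np

def fit_kernel_brdf(refl, k_vol, k_geo):
    """Least-squares fit of a kernel-driven BRDF model
        R = f_iso + f_vol * K_vol + f_geo * K_geo
    refl, k_vol, k_geo: 1-D arrays over the time series of observations."""
    A = np.column_stack([np.ones_like(refl), k_vol, k_geo])
    coeffs, *_ = np.linalg.lstsq(A, refl, rcond=None)
    return coeffs                      # (f_iso, f_vol, f_geo)

def normalize_to_reference(coeffs, k_vol_ref, k_geo_ref):
    """Predict reflectance at a common reference sun-target-sensor geometry."""
    f_iso, f_vol, f_geo = coeffs
    return f_iso + f_vol * k_vol_ref + f_geo * k_geo_ref
```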
ERIC Educational Resources Information Center
Tobiason, Glory; Chang, Sandy; Heritage, Margaret; Jones, Barbara; Herman, Joan
2014-01-01
This resource is part of a series produced by the Center for Standards and Assessment Implementation (CSAI) to assist teachers and those who support teachers to plan teaching and learning from College and Career Ready Standards (CCRS) for all students, including students with disabilities, English learners, academically at-risk students, students…
Nonlinear time-series analysis of current signal in cathodic contact glow discharge electrolysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allagui, Anis, E-mail: aallagui@sharjah.ac.ae; Abdelkareem, Mohammad Ali; Rojas, Andrea Espinel
In the standard two-electrode configuration employed in electrolytic processes, when the control dc voltage is brought to a critical value, the system undergoes a transition from conventional electrolysis to contact glow discharge electrolysis (CGDE), which has also been referred to as liquid-submerged micro-plasma, glow discharge plasma electrolysis, electrode effect, electrolytic plasma, etc. The light-emitting process is associated with the development of an irregular and erratic current time-series which has been arbitrarily labelled as “random,” and thus dissuaded further research in this direction. Here, we examine the current time-series signals measured in cathodic CGDE configuration in a concentrated KOH solution at different dc bias voltages greater than the critical voltage. We show that the signals are, in fact, not random according to the NIST SP. 800-22 test suite definition. We also demonstrate that post-processing low-pass filtered sequences requires less time than the native as-measured sequences, suggesting a superposition of low frequency chaotic fluctuations and high frequency behaviors (which may be produced by more than one possible source of entropy). Using an array of nonlinear time-series analyses for dynamical systems, i.e., the computation of largest Lyapunov exponents and correlation dimensions, and re-construction of phase portraits, we found that low-pass filtered datasets undergo a transition from quasi-periodic to chaotic to quasi-hyper-chaotic behavior, and back again to chaos when the voltage controlling-parameter is increased. The high frequency part of the signals is discussed in terms of highly nonlinear turbulent motion developed around the working electrode.
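Two of the diagnostics mentioned above, phase-portrait reconstruction and the correlation dimension, can be sketched compactly. The snippet below uses time-delay embedding and a Grassberger-Procaccia-style correlation sum; the embedding dimension, delay, and radius range are illustrative assumptions rather than the settings used in the study, and the pairwise-distance computation is only practical for series of modest length.

```python
import numpy as np
from scipy.spatial.distance import pdist

def delay_embed(x, dim, tau):
    """Phase-portrait reconstruction by time-delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def correlation_dimension(x, dim=4, tau=5, radii=None):
    """Grassberger-Procaccia estimate: slope of log C(r) vs log r,
    where C(r) is the fraction of point pairs closer than r."""
    emb = delay_embed(np.asarray(x, float), dim, tau)
    d = pdist(emb)                                   # all pairwise distances
    if radii is None:
        radii = np.logspace(np.log10(np.percentile(d, 1)),
                            np.log10(np.percentile(d, 50)), 15)
    C = np.array([(d < r).mean() for r in radii])
    return np.polyfit(np.log(radii), np.log(C), 1)[0]
```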
Zhao, Yan-Hong; Zhang, Xue-Fang; Zhao, Yan-Qiu; Bai, Fan; Qin, Fan; Sun, Jing; Dong, Ying
2017-08-01
Chronic myeloid leukemia (CML) is characterized by the accumulation of active BCR-ABL protein. Imatinib is the first-line treatment of CML; however, many patients are resistant to this drug. In this study, we aimed to compare the differences in expression patterns and functions of time-series genes in imatinib-resistant CML cells under different drug treatments. GSE24946 was downloaded from the GEO database, which included 17 samples of K562-r cells with (n=12) or without drug administration (n=5). Three drug treatment groups were considered for this study: arsenic trioxide (ATO), AMN107, and ATO+AMN107. Each group had one sample at each time point (3, 12, 24, and 48 h). Time-series genes with a ratio of standard deviation/average (coefficient of variation) >0.15 were screened, and their expression patterns were revealed based on Short Time-series Expression Miner (STEM). Then, the functional enrichment analysis of time-series genes in each group was performed using DAVID, and the genes enriched in the top ten functional categories were extracted to detect their expression patterns. Different time-series genes were identified in the three groups, and most of them were enriched in the ribosome and oxidative phosphorylation pathways. Time-series genes in the three treatment groups had different expression patterns and functions. Time-series genes in the ATO group (e.g. CCNA2 and DAB2) were significantly associated with cell adhesion, those in the AMN107 group were related to cellular carbohydrate metabolic process, while those in the ATO+AMN107 group (e.g. AP2M1) were significantly related to cell proliferation and antigen processing. In imatinib-resistant CML cells, ATO could influence genes related to cell adhesion, AMN107 might affect genes involved in cellular carbohydrate metabolism, and the combination therapy might regulate genes involved in cell proliferation.
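The screening step described above (keeping genes whose coefficient of variation across time points exceeds 0.15) reduces to a short routine on an expression matrix; a minimal pandas sketch, with the DataFrame layout assumed and expression values assumed positive, is:

```python
import pandas as pd

def screen_time_series_genes(expr, cv_threshold=0.15):
    """expr: DataFrame with genes as rows and time points (e.g. 3, 12, 24, 48 h)
    as columns. Keep genes whose coefficient of variation exceeds the threshold."""
    cv = expr.std(axis=1) / expr.mean(axis=1)   # standard deviation / average
    return expr.loc[cv > cv_threshold]
```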
NASA Technical Reports Server (NTRS)
Hancock, David W., III
1999-01-01
This document provides the Software Management Plan for the GLAS Standard Data Software (SDS) supporting the GLAS instrument of the EOS ICESat Spacecraft. The SDS encompasses the ICESat Science Investigator-led Processing System (I-SIPS) Software and the Instrument Support Terminal (IST) Software. For the I-SIPS Software, the SDS will produce Level 0, Level 1, and Level 2 data products as well as the associated product quality assessments and descriptive information. For the IST Software, the SDS will accommodate the GLAS instrument support areas of engineering status, command, performance assessment, and instrument health status.
Refined multiscale fuzzy entropy based on standard deviation for biomedical signal analysis.
Azami, Hamed; Fernández, Alberto; Escudero, Javier
2017-11-01
Multiscale entropy (MSE) has been a prevalent algorithm to quantify the complexity of biomedical time series. Recent developments in the field have tried to alleviate the problem of undefined MSE values for short signals. Moreover, there has been a recent interest in using other statistical moments than the mean, i.e., variance, in the coarse-graining step of the MSE. Building on these trends, here we introduce the so-called refined composite multiscale fuzzy entropy based on the standard deviation (RCMFE_σ) and mean (RCMFE_μ) to quantify the dynamical properties of spread and mean, respectively, over multiple time scales. We demonstrate the dependency of the RCMFE_σ and RCMFE_μ, in comparison with other multiscale approaches, on several straightforward signal processing concepts using a set of synthetic signals. The results evidenced that the RCMFE_σ and RCMFE_μ values are more stable and reliable than the classical multiscale entropy ones. We also inspect the ability of using the standard deviation as well as the mean in the coarse-graining process using magnetoencephalograms in Alzheimer's disease and publicly available electroencephalograms recorded from focal and non-focal areas in epilepsy. Our results indicated that when the RCMFE_μ cannot distinguish different types of dynamics of a particular time series at some scale factors, the RCMFE_σ may do so, and vice versa. The results showed that RCMFE_σ-based features lead to higher classification accuracies in comparison with the RCMFE_μ-based ones. We also made freely available all the Matlab codes used in this study at http://dx.doi.org/10.7488/ds/1477.
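The distinction between RCMFE_μ and RCMFE_σ lies in the coarse-graining statistic. A minimal sketch of that step is shown below (the subsequent fuzzy entropy computation is omitted); note that the standard-deviation variant needs a scale factor of at least 2.

```python
import numpy as np

def coarse_grain(x, scale, statistic="mean"):
    """Coarse-graining step of multiscale entropy: split the series into
    non-overlapping windows of length `scale` and summarise each window by
    its mean (classical MSE / RCMFE_mu) or its standard deviation
    (sigma-based variant; requires scale >= 2)."""
    x = np.asarray(x, float)
    n = (len(x) // scale) * scale
    windows = x[:n].reshape(-1, scale)
    if statistic == "mean":
        return windows.mean(axis=1)
    if statistic == "std":
        return windows.std(axis=1, ddof=1)
    raise ValueError("statistic must be 'mean' or 'std'")
```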
Manpower and Personnel Standardization Language for Army Systems
1989-01-01
completeness, the outline described a series of steps in a 'stand-alone' process. That is to say that it included all of the actions that a contractor would...duplication among MANPRINT domains and other programs, such as LSA, was deferred to later stages of the effort. Outlining the information that was needed...MANPRINT policy, emphasis was placed on initiating action during the earliest phases of development. These contractor tasks and the attendant quality
Open cohort ("time-series") studies of the adverse health effects of short-term exposures to ambient particulate matter and gaseous co-pollutants have been essential in the standard setting process. Last year, a number of serious issues were raised concerning the fitting of Gener...
1978 Army Library Institute, 22-26 May 1978. Fort Bliss, Texas. A report of the Proceedings
1978-10-01
functions of management, librarianship and information science. - To encourage greater self-appraisal and self-development efforts. - To explore some...series of manuals describing the process in detail. This study impacts heavily on Army libraries, which use ALA standards of service as well as... STUDY OF ARMY LIBRARIES: Today Where Do We Stand. MAJ Paul Tracy Girard The Office of the Adjutant General Plans and Operations Directorate
Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms perform equally well when the time series has multiple bursts of muscle activity.
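As a rough illustration of changepoint-based onset detection, the sketch below finds the single changepoint that maximises a Gaussian profile likelihood on the rectified EMG, with separate variances before and after the candidate onset. It is a simplified maximum-likelihood stand-in, not the Bayesian formulation with prior parameter p0 evaluated in the study; the minimum segment length is an arbitrary assumption.

```python
import numpy as np

def ml_onset(emg, fs):
    """Simplified single-changepoint onset detector: choose the sample that
    maximises the Gaussian log-likelihood of the rectified signal modelled
    with separate means/variances before and after the changepoint."""
    y = np.abs(np.asarray(emg, float))          # rectify
    n = len(y)
    best_k, best_ll = None, -np.inf
    for k in range(10, n - 10):                 # keep at least a few samples per segment
        s1, s2 = y[:k], y[k:]
        # profile log-likelihood with MLE variances (constants dropped)
        ll = (-0.5 * k * np.log(s1.var() + 1e-12)
              - 0.5 * (n - k) * np.log(s2.var() + 1e-12))
        if ll > best_ll:
            best_ll, best_k = ll, k
    return best_k / fs                           # onset time in seconds
```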
Evaluating the Effectiveness of the 2002-2003 NASA SCIence Files(TM) Program
NASA Technical Reports Server (NTRS)
Pinelli, Thomas E.; Lambert, Matthew A.; Williams, Amy C.
2004-01-01
NASA SCIence Files (tm) is a research-, inquiry-, and standards-based, integrated mathematics, science, and technology series of 60-minute instructional distance learning (television and web-based) programs for students in grades 3-5. Respondents who evaluated the programs in the 2002-2003 NASA SCIence Files (tm) series reported that (1) they used the programs in the series; (2) the goals and objectives for the series were met; (3) the programs were aligned with the national mathematics, science, and technology standards; (4) the program content was developmentally appropriate for grade level; and (5) the programs in the series enhanced and enriched the teaching of mathematics, science, and technology.
Parks, Colleen M
2013-07-01
Research examining the importance of surface-level information to familiarity in recognition memory tasks is mixed: Sometimes it affects recognition and sometimes it does not. One potential explanation of the inconsistent findings comes from the ideas of dual process theory of recognition and the transfer-appropriate processing framework, which suggest that the extent to which perceptual fluency matters on a recognition test depends in large part on the task demands. A test that recruits perceptual processing for discrimination should show greater perceptual effects and smaller conceptual effects than standard recognition, similar to the pattern of effects found in perceptual implicit memory tasks. This idea was tested in the current experiment by crossing a levels of processing manipulation with a modality manipulation on a series of recognition tests that ranged from conceptual (standard recognition) to very perceptually demanding (a speeded recognition test with degraded stimuli). Results showed that the levels of processing effect decreased and the effect of modality increased when tests were made perceptually demanding. These results support the idea that surface-level features influence performance on recognition tests when they are made salient by the task demands. PsycINFO Database Record (c) 2013 APA, all rights reserved.
García-de-León-Chocano, Ricardo; Sáez, Carlos; Muñoz-Soler, Verónica; García-de-León-González, Ricardo; García-Gómez, Juan M
2015-12-01
This is the first paper of a series of two regarding the construction of data quality (DQ) assured repositories for the reuse of information on infant feeding from birth until two years old. This first paper justifies the need for such repositories and describes the design of a process to construct them from Electronic Health Records (EHR). As a result, Part 1 proposes a computational process to obtain quality-assured datasets represented by a canonical structure extracted from raw data from multiple EHR. For this, 13 steps were defined to ensure the harmonization, standardization, completion, de-duplication, and consistency of the dataset content. Moreover, the quality of the input and output data for each of these steps is controlled according to eight DQ dimensions: predictive value, correctness, duplication, consistency, completeness, contextualization, temporal-stability and spatial-stability. The second paper of the series will describe the application of this computational process to construct the first quality-assured repository for the reuse of information on infant feeding in the perinatal period aimed at the monitoring of clinical activities and research. Copyright © 2015 Elsevier Ltd. All rights reserved.
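A few of the data-quality dimensions listed above (duplication, completeness, and consistency against plausible value ranges) can be screened with a short pandas routine. The column names and ranges below are hypothetical, and the real process described in the paper involves many more steps (harmonization, standardization, and de-duplication across EHR sources).

```python
import pandas as pd

def dq_report(df, required_cols, ranges):
    """Minimal data-quality screen: duplication, completeness and consistency.
    `ranges` maps a column name to a plausible (min, max) interval."""
    report = {
        "n_records": len(df),
        "duplicated_records": int(df.duplicated().sum()),
        # fraction of non-missing values per required column
        "completeness": (1 - df[required_cols].isna().mean()).round(3).to_dict(),
    }
    for col, (lo, hi) in ranges.items():
        # missing values are handled under completeness, not consistency
        valid = df[col].between(lo, hi) | df[col].isna()
        report[f"consistency_{col}"] = round(float(valid.mean()), 3)
    return report

# Illustrative use with hypothetical column names:
# dq_report(feeds, ["infant_id", "feed_type", "date"], {"birth_weight_g": (300, 6500)})
```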
MTpy: A Python toolbox for magnetotellurics
NASA Astrophysics Data System (ADS)
Krieger, Lars; Peacock, Jared R.
2014-11-01
We present the software package MTpy that allows handling, processing, and imaging of magnetotelluric (MT) data sets. Written in Python, the code is open source, containing sub-packages and modules for various tasks within the standard MT data processing and handling scheme. Besides the independent definition of classes and functions, MTpy provides wrappers and convenience scripts to call standard external data processing and modelling software. In its current state, modules and functions of MTpy work on raw and pre-processed MT data. However, rather than providing a static compilation of software, we prefer to introduce MTpy as a flexible software toolbox, whose contents can be combined and utilised according to the respective needs of the user. Just as the overall functionality of a mechanical toolbox can be extended by adding new tools, MTpy is a flexible framework, which will be dynamically extended in the future. Furthermore, it can help to unify and extend existing codes and algorithms within the (academic) MT community. In this paper, we introduce the structure and concept of MTpy. Additionally, we show some examples from an everyday work-flow of MT data processing: the generation of standard EDI data files from raw electric (E-) and magnetic flux density (B-) field time series as input, the conversion into MiniSEED data format, as well as the generation of a graphical data representation in the form of a Phase Tensor pseudosection.
Single-Event Transient Testing of Low Dropout PNP Series Linear Voltage Regulators
NASA Technical Reports Server (NTRS)
Adell, Philippe; Allen, Gregory
2013-01-01
As demand for high-speed, on-board, digital-processing integrated circuits on spacecraft increases (field-programmable gate arrays and digital signal processors in particular), the need for the next generation point-of-load (POL) regulator becomes a prominent design issue. Shrinking process nodes have resulted in core rails dropping to values close to 1.0 V, drastically reducing the margin available to standard switching converters or regulators that power digital ICs. The goal of this task is to perform SET characterization of several commercial POL converters and to discuss the impact of these results on state-of-the-art digital processing ICs, based on laser and heavy-ion testing.
Human Spaceflight Safety for the Next Generation on Orbital Space Systems
NASA Technical Reports Server (NTRS)
Mango, Edward J.
2011-01-01
The National Aeronautics and Space Administration (NASA) Commercial Crew Program (CCP) has been chartered to facilitate the development of a United States (U.S.) commercial crew space transportation capability with the goal of achieving safe, reliable, and cost effective access to and from low Earth orbit (LEO) and the International Space Station (ISS) as soon as possible. Once the capability is matured and is available to the Government and other customers, NASA expects to purchase commercial services to meet its ISS crew rotation and emergency return objectives. The primary role of the CCP is to enable and ensure safe human spaceflight and processes for the next generation of earth orbital space systems. The architecture of the Program delineates the process for investment performance in safe orbital systems, Crew Transportation System (CTS) certification, and CTS Flight Readiness. A series of six technical documents build up the architecture to address the top-level CTS requirements and standards. They include Design Reference Missions, with the near term focus on ISS crew services, Certification and Service Requirements, Technical Management Processes, and Technical and Operations Standards Evaluation Processes.
Falat, Lukas; Marcek, Dusan; Durisova, Maria
2016-01-01
This paper deals with application of quantitative soft computing prediction models into financial area as reliable and accurate prediction models can be very helpful in management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. Authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate genetic algorithm as an optimizing technique for adapting parameters of ANN which is then compared with standard backpropagation and backpropagation combined with K-means clustering algorithm. Finally, the authors find out that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making the bad decision in decision-making process.
Cruz, Márcio Freire; Cavalcante, Carlos Arthur Mattos Teixeira; Sá Barretto, Sérgio Torres
2018-05-30
Health Level Seven (HL7) is one of the standards most used to centralize data from different vital sign monitoring systems. This solution significantly limits the data available for historical analysis, because it typically uses databases that are not effective in storing large volumes of data. In industry, a specific Big Data historian, known as a Process Information Management System (PIMS), solves this problem. This work proposes the same solution to overcome the restriction on storing vital sign data. The PIMS needs a compatible communication standard to allow storage, and the one most commonly used is OLE for Process Control (OPC). This paper presents an HL7-OPC server that permits communication between vital sign monitoring systems and PIMS, thus allowing the storage of long historical series of vital signs. In addition, it reviews local and cloud-based Big Medical Data research, followed by an analysis of PIMS in a health IT environment. It then describes the architecture of the HL7 and OPC standards. Finally, it presents the HL7-OPC server and a sequence of tests that demonstrated its full operation and performance.
5 CFR 9901.332 - Standard and targeted local market supplements.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Standard local market supplements are not applicable to physicians and dentists (in occupational series... dentists paid under 38 U.S.C. chapter 74 and since their adjusted salary rates apply on a worldwide basis...) Except for physicians and dentists (in occupational series 0602 and 0680, respectively) or as otherwise...
5 CFR 9901.332 - Standard and targeted local market supplements.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Standard local market supplements are not applicable to physicians and dentists (in occupational series... dentists paid under 38 U.S.C. chapter 74 and since their adjusted salary rates apply on a worldwide basis...) Except for physicians and dentists (in occupational series 0602 and 0680, respectively) or as otherwise...
Gabard-Durnam, Laurel J; Mendez Leal, Adriana S; Wilkinson, Carol L; Levin, April R
2018-01-01
Electroenchephalography (EEG) recordings collected with developmental populations present particular challenges from a data processing perspective. These EEGs have a high degree of artifact contamination and often short recording lengths. As both sample sizes and EEG channel densities increase, traditional processing approaches like manual data rejection are becoming unsustainable. Moreover, such subjective approaches preclude standardized metrics of data quality, despite the heightened importance of such measures for EEGs with high rates of initial artifact contamination. There is presently a paucity of automated resources for processing these EEG data and no consistent reporting of data quality measures. To address these challenges, we propose the Harvard Automated Processing Pipeline for EEG (HAPPE) as a standardized, automated pipeline compatible with EEG recordings of variable lengths and artifact contamination levels, including high-artifact and short EEG recordings from young children or those with neurodevelopmental disorders. HAPPE processes event-related and resting-state EEG data from raw files through a series of filtering, artifact rejection, and re-referencing steps to processed EEG suitable for time-frequency-domain analyses. HAPPE also includes a post-processing report of data quality metrics to facilitate the evaluation and reporting of data quality in a standardized manner. Here, we describe each processing step in HAPPE, perform an example analysis with EEG files we have made freely available, and show that HAPPE outperforms seven alternative, widely-used processing approaches. HAPPE removes more artifact than all alternative approaches while simultaneously preserving greater or equivalent amounts of EEG signal in almost all instances. We also provide distributions of HAPPE's data quality metrics in an 867 file dataset as a reference distribution and in support of HAPPE's performance across EEG data with variable artifact contamination and recording lengths. HAPPE software is freely available under the terms of the GNU General Public License at https://github.com/lcnhappe/happe.
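For orientation only, the sketch below strings together the generic ingredients such a pipeline uses: band-pass filtering, segment-wise artifact rejection by amplitude, average re-referencing, and a simple data-quality tally, in plain NumPy/SciPy. It is not HAPPE's implementation or API (HAPPE is MATLAB-based and uses additional steps such as wavelet-enhanced ICA); the thresholds and segment length are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_eeg(data, fs, band=(1.0, 40.0), reject_uv=150.0, seg_sec=1.0):
    """data: channels x samples array (microvolts). Band-pass filter, segment,
    reject segments whose peak-to-peak amplitude exceeds the threshold on any
    channel, then re-reference surviving segments to the channel average."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, data, axis=1)

    seg_len = int(seg_sec * fs)
    n_seg = filtered.shape[1] // seg_len
    segs = filtered[:, :n_seg * seg_len].reshape(filtered.shape[0], n_seg, seg_len)

    ptp = segs.max(axis=2) - segs.min(axis=2)          # channels x segments
    keep = (ptp < reject_uv).all(axis=0)               # drop artifactual segments
    clean = segs[:, keep, :]

    clean = clean - clean.mean(axis=0, keepdims=True)  # average re-reference
    quality = {"segments_kept": int(keep.sum()), "segments_total": int(n_seg)}
    return clean, quality
```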
ISO 9000 quality standards: a model for blood banking?
Nevalainen, D E; Lloyd, H L
1995-06-01
The recent American Association of Blood Banks publications Quality Program and Quality Systems in the Blood Bank and Laboratory Environment, the FDA's draft guidelines, and recent changes in the GMP regulations all discuss the benefits of implementing quality systems in blood center and/or manufacturing operations. While the medical device GMPs in the United States have been rewritten to accommodate a quality system approach similar to ISO 9000, the Center for Biologics Evaluation and Research of the FDA is also beginning to make moves toward adopting "quality systems audits" as an inspection process rather than using the historical approach of record reviews. The approach is one of prevention of errors rather than detection after the fact (Tourault MA, oral communication, November 1994). The ISO 9000 series of standards is a quality system that has worldwide scope and can be applied in any industry or service. The use of such international standards in blood banking should raise the level of quality within an organization, among organizations on a regional level, within a country, and among nations on a worldwide basis. Whether an organization wishes to become registered to a voluntary standard or not, the use of such standards to become ISO 9000-compliant would be a move in the right direction and would be a positive sign to the regulatory authorities and the public that blood banking is making a visible effort to implement world-class quality systems in its operations. Implementation of quality system standards such as the ISO 9000 series will provide an organized approach for blood banks and blood bank testing operations. With the continued trend toward consolidation and mergers, resulting in larger operational units with more complexity, quality systems will become even more important as the industry moves into the future.(ABSTRACT TRUNCATED AT 250 WORDS)
USL/DBMS NASA/RECON working paper series. Standards
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Chum, Frank Y.
1984-01-01
The USL/DBMS NASA/RECON Working Paper Series contains a collection of reports representing results of activities being conducted by the Computer Science Department of the University of Southwestern Louisiana pursuant to the specifications of NASA Contract number NASw-3846. The work on this portion of the contract is being performed jointly by the University of Southwestern Louisiana and Southern University. This report contains the full set of standards for the development, formatting, reviewing, and issuance of entries within the USL/DBMS NASA/RECON Working Paper Series.
Hermes III endpoint energy calculation from photonuclear activation of 197Au and 58Ni foils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parzyck, Christopher Thomas
2014-09-01
A new process has been developed to characterize the endpoint energy of HERMES III on a shot-to-shot basis using standard dosimetry tools from the Sandia Radiation Measurements Laboratory. Photonuclear activation readings from nickel and gold foils are used in conjunction with calcium fluoride thermoluminescent dosimeters to derive estimated electron endpoint energies for a series of HERMES shots. The results are reasonably consistent with the expected endpoint voltages on those shots.
AGARD Flight Test Techniques Series. Volume 7. Air-to-Air Radar Flight Testing
1988-06-01
enters the beam), a different tilt angle should be used. The emphasis on setting the tilt angle may require a non-standard high accuracy tilt angle...is: the time from pilot designation on a non-maneuvering target to the time that the system achieves target range, range rate and angle tracking...minimal attenuation, distortion, or boresight shift effects on the radar beam. Thus, radome design for airborne application is largely a process of
Whitlock, J; Dixon, J; Sherlock, C; Tucker, R; Bolt, D M; Weller, R
2016-05-21
Since the 1950s, veterinary practitioners have included two separate dorsoproximal-palmarodistal oblique (DPr-PaDiO) radiographs as part of a standard series of the equine foot. One image is obtained to visualise the distal phalanx and the other to visualise the navicular bone. However, rapid development of computed radiography and digital radiography and their post-processing capabilities could mean that this practice is no longer required. The aim of this study was to determine differences in perceived image quality between DPr-PaDiO radiographs that were acquired with a computerised radiography system with exposures, centring and collimation recommended for the navicular bone versus images acquired for the distal phalanx but were subsequently manipulated post-acquisition to highlight the navicular bone. Thirty images were presented to four clinicians for quality assessment and graded using a 1-3 scale (1=textbook quality, 2=diagnostic quality, 3=non-diagnostic image). No significant difference in diagnostic quality was found between the original navicular bone images and the manipulated distal phalanx images. This finding suggests that a single DPr-PaDiO image of the distal phalanx is sufficient for an equine foot radiographic series, with appropriate post-processing and manipulation. This change in protocol will result in reduced radiographic study time and decreased patient/personnel radiation exposure. British Veterinary Association.
Carrer, Marco; von Arx, Georg; Castagneri, Daniele; Petit, Giai
2015-01-01
Trees are among the best natural archives of past environmental information. Xylem anatomy preserves information related to tree allometry and ecophysiological performance, which is not available from the more customary ring-width or wood-density proxy parameters. Recent technological advances make tree-ring anatomy very attractive because time frames of many centuries can now be covered. This calls for the proper treatment of time series of xylem anatomical attributes. In this article, we synthesize current knowledge on the biophysical and physiological mechanisms influencing the short- to long-term variation in the most widely used wood-anatomical feature, namely conduit size. We also clarify the strong mechanistic link between conduit-lumen size, tree hydraulic architecture and height growth. Among the key consequences of these biophysical constraints is the pervasive, increasing trend of conduit size during ontogeny. Such knowledge is required to process time series of anatomical parameters correctly in order to obtain the information of interest. An appropriate standardization procedure is fundamental when analysing long tree-ring-related chronologies. When dealing with wood-anatomical parameters, this is even more critical. Only an interdisciplinary approach involving ecophysiology, wood anatomy and dendrochronology will help to distill the valuable information about tree height growth and past environmental variability correctly. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Kazantseva, L.
2011-09-01
The collection of photographic images at Kiev University Observatory covers a period of almost a hundred years and is interesting from both a scientific and a historical point of view. The study of the contemporary observing techniques, the processing of negatives, the creation of copies, and the photometric standards used with various photographic emulsions and materials, in combination with the preserved photographic equipment and astronomical instruments (from telescopes and a unique home-made photometer to cassettes), reflects the long history of photographic astronomy. First, celestial objects, astronomical events, and star fields recorded over such a long time interval carry valuable information. Second, complete restoration of that information presents many difficulties. Even where the emulsion has been well preserved for a hundred years, the standards for describing photographs changed repeatedly, not all observation logs have been preserved, and sometimes it is not possible to establish which instrument was used. The stage of systematizing and cataloguing the collection is therefore very important and quite difficult. Conducting observations under expedition conditions with various instruments requires a comparative assessment of their accuracy. This work was performed on a series of collections: photographs were identified, certain standards were selected, the images of each series were scanned, and the results of the standard reduction method were compared with catalogue information. In the future, such work will enable quick searching and use of the images by more than object coordinates, date, and method of observation, and will support assessments of astrometric and photometric accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basso, T.
Public-private partnerships have been a mainstay of the U.S. Department of Energy and the National Renewable Energy Laboratory (DOE/NREL) approach to research and development. These partnerships also include technology development that enables grid modernization and distributed energy resources (DER) advancement, especially renewable energy systems integration with the grid. Through DOE/NREL and industry support of Institute of Electrical and Electronics Engineers (IEEE) standards development, the IEEE 1547 series of standards has helped shape the way utilities and other businesses have worked together to realize increasing amounts of DER interconnected with the distribution grid. And more recently, the IEEE 2030 series of standards is helping to further realize greater implementation of communications and information technologies that provide interoperability solutions for enhanced integration of DER and loads with the grid. For these standards development partnerships, for approximately $1 of federal funding, industry partnering has contributed $5. In this report, the status update is presented for the American National Standards IEEE 1547 and IEEE 2030 series of standards. A short synopsis of the history of the 1547 standards is first presented, then the current status and future direction of the ongoing standards development activities are discussed.
NASA Astrophysics Data System (ADS)
Helama, S.; Lindholm, M.; Timonen, M.; Eronen, M.
2004-12-01
Tree-ring standardization methods were compared. Traditional methods along with the recently introduced approaches of regional curve standardization (RCS) and power-transformation (PT) were included. The difficulty in removing non-climatic variation (noise) while simultaneously preserving the low-frequency variability in the tree-ring series was emphasized. The potential risk of obtaining inflated index values was analysed by comparing methods to extract tree-ring indices from the standardization curve. The material for the tree-ring series, previously used in several palaeoclimate predictions, came from living and dead wood of high-latitude Scots pine in northernmost Europe. This material provided a useful example of a long composite tree-ring chronology with the typical strengths and weaknesses of such data, particularly in the context of standardization. PT stabilized the heteroscedastic variation in the original tree-ring series more efficiently than any other standardization practice expected to preserve the low-frequency variability. RCS showed great potential in preserving variability in tree-ring series at centennial time scales; however, this method requires a homogeneous sample for reliable signal estimation. It is not recommended to derive indices by subtraction without first stabilizing the variance in the case of series of forest-limit tree-ring data. Index calculation by division did not seem to produce inflated chronology values for the past one and a half centuries of the chronology (where mean sample cambial age is high). On the other hand, potential bias of high RCS chronology values was observed during the period of anomalously low mean sample cambial age. An alternative technique for chronology construction was proposed based on series age decomposition, where indices in the young vigorously behaving part of each series are extracted from the curve by division and in the mature part by subtraction. Because of their specific nature, the dendrochronological data here should not be generalized to all tree-ring records. The examples presented should be used as guidelines for detecting potential sources of bias and as illustrations of the usefulness of tree-ring records as palaeoclimate indicators.
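The contrast between index calculation by division and by subtraction can be made concrete with a small sketch: fit a smooth growth curve to the ring-width series and derive both kinds of indices from it. A running mean stands in for the spline, negative-exponential, or regional curve used in practice, and the power transformation discussed above would normally be applied before subtraction; both are simplifying assumptions here.

```python
import numpy as np

def tree_ring_indices(rw, smooth_window=31):
    """Standardize a ring-width series against a smooth growth curve (a running
    mean here, as a stand-in for a spline or RCS curve) and derive indices both
    by division (ratios) and by subtraction (differences)."""
    rw = np.asarray(rw, float)
    pad = smooth_window // 2
    padded = np.pad(rw, pad, mode="edge")
    kernel = np.ones(smooth_window) / smooth_window
    curve = np.convolve(padded, kernel, mode="valid")   # same length as rw

    idx_division = rw / curve                            # ratio indices
    idx_subtraction = rw - curve                          # difference indices
    return idx_division, idx_subtraction
```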
Garbarino, J.R.; Taylor, Howard E.
1996-01-01
An inductively coupled plasma-mass spectrometry method was developed for the determination of dissolved Al, As, B, Ba, Be, Cd, Co, Cr, Cu, Li, Mn, Mo, Ni, Pb, Sr, Tl, U, V, and Zn in natural waters. Detection limits are generally in the 50-100 picogram per milliliter (pg/mL) range, with the exception of As which is in the 1 microgram per liter (ug/L) range. Interferences associated with spectral overlap from concomitant isotopes or molecular ions and sample matrix composition have been identified. Procedures for interference correction and reduction related to isotope selection, instrumental operating conditions, and mathematical data processing techniques are described. Internal standards are used to minimize instrumental drift. The average analytical precision attainable for 5 times the detection limit is about 16 percent. The accuracy of the method was tested using a series of U.S. Geological Survey Standard Reference Water Standards (SWRS), National Research Council Canada Riverine Water Standard, and National Institute of Standards and Technology (NIST) Trace Elements in Water Standards. Average accuracies range from 90 to 110 percent of the published mean values.
Moorman, J. Randall; Delos, John B.; Flower, Abigail A.; Cao, Hanqing; Kovatchev, Boris P.; Richman, Joshua S.; Lake, Douglas E.
2014-01-01
We have applied principles of statistical signal processing and non-linear dynamics to analyze heart rate time series from premature newborn infants in order to assist in the early diagnosis of sepsis, a common and potentially deadly bacterial infection of the bloodstream. We began with the observation of reduced variability and transient decelerations in heart rate interval time series for hours up to days prior to clinical signs of illness. We find that measurements of standard deviation, sample asymmetry and sample entropy are highly related to imminent clinical illness. We developed multivariable statistical predictive models, and an interface to display the real-time results to clinicians. Using this approach, we have observed numerous cases in which incipient neonatal sepsis was diagnosed and treated without any clinical illness at all. This review focuses on the mathematical and statistical time series approaches used to detect these abnormal heart rate characteristics and present predictive monitoring information to the clinician. PMID:22026974
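A rough sketch of three of the heart rate characteristics mentioned (standard deviation, an asymmetry measure, and sample entropy) is given below. The sample entropy routine is a straightforward quadratic-memory implementation suitable only for short RR-interval records, and the asymmetry ratio is an illustrative proxy rather than the exact sample asymmetry statistic used in the monitoring system.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): negative log of the conditional probability that sequences
    matching for m points also match for m + 1 points (tolerance r)."""
    x = np.asarray(x, float)
    r = r_factor * x.std(ddof=1)

    def count_matches(length):
        templ = np.array([x[i:i + length] for i in range(len(x) - length)])
        # quadratic memory: pairwise Chebyshev distances between all templates
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (d <= r).sum() - len(templ)       # exclude self-matches

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def heart_rate_characteristics(rr_ms):
    """Illustrative features: standard deviation, a simple asymmetry ratio
    around the median, and sample entropy of the RR-interval series."""
    rr = np.asarray(rr_ms, float)
    dev = rr - np.median(rr)
    asymmetry = np.sum(dev[dev > 0] ** 2) / np.sum(dev[dev < 0] ** 2)
    return {"sd": rr.std(ddof=1),
            "sample_asymmetry": asymmetry,
            "sample_entropy": sample_entropy(rr)}
```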
Streamlining the medication process improves safety in the intensive care unit.
Benoit, E; Eckert, P; Theytaz, C; Joris-Frasseren, M; Faouzi, M; Beney, J
2012-09-01
Multiple interventions were made to optimize the medication process in our intensive care unit (ICU): (1) transcriptions from the medical order form to the administration plan were eliminated by merging both into a single document; (2) the new form was built in a logical sequence and was highly structured to promote completeness and standardization of information; (3) frequently used drug names, approved units, and fixed routes were pre-printed; (4) physicians and nurses were trained with regard to the correct use of the new form. This study was aimed at evaluating the impact of these interventions on clinically significant types of medication errors. Eight types of medication errors were measured by a prospective chart review before and after the interventions in the ICU of a public tertiary care hospital. We used an interrupted time-series design to control for secular trends. Over 85 days, 9298 lines of drug prescription and/or administration to 294 patients, corresponding to 754 patient-days, were collected and analysed for the three series before and three series following the intervention. The global error rate decreased from 4.95 to 2.14% (-56.8%, P < 0.001). The safety of the medication process in our ICU was improved by simple and inexpensive interventions. In addition to the optimization of the prescription writing process, the documentation of intravenous preparation, and the scheduling of administration, the elimination of transcription in combination with the training of users contributed to reducing errors and carried an interesting potential to increase safety. © 2012 The Authors. Acta Anaesthesiologica Scandinavica © 2012 The Acta Anaesthesiologica Scandinavica Foundation.
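The interrupted time-series design mentioned above is usually analysed with segmented regression: a baseline level and trend plus a level change and a trend change at the intervention. A minimal least-squares sketch (ignoring the autocorrelation corrections a full analysis would include) is:

```python
import numpy as np

def segmented_regression(rate, intervention_idx):
    """Interrupted time-series (segmented) regression of an error rate:
    baseline level and trend, plus a level change and a trend change after
    the intervention. Returns the four coefficients."""
    rate = np.asarray(rate, float)
    t = np.arange(len(rate))
    post = (t >= intervention_idx).astype(float)          # 1 after the intervention
    t_post = np.where(post == 1, t - intervention_idx, 0.0)
    X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])
    beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
    return dict(zip(["level", "trend", "level_change", "trend_change"], beta))
```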
Mesoscale fabrication and design
NASA Astrophysics Data System (ADS)
Hayes, Gregory R.
A strong link between mechanical engineering design and materials science and engineering fabrication can facilitate an effective and adaptable prototyping process. In this dissertation, new developments in the lost mold-rapid infiltration forming (LM-RIF) process is presented which demonstrates the relationship between these two fields of engineering in the context of two device applications. Within the LM-RIF process, changes in materials processing and mechanical design are updated iteratively, often aided by statistical design of experiments (DOE). The LM-RIF process was originally developed by Antolino and Hayes et al to fabricate mesoscale components. In this dissertation the focus is on advancements in the process and underlying science. The presented advancements to the LM-RIF process include an augmented lithography procedure, the incorporation of engineered aqueous and non-aqueous colloidal suspensions, an assessment of constrained drying forces during LM-RIF processing, mechanical property evaluation, and finally prototype testing and validation. Specifically, the molding procedure within the LM-RIF process is capable of producing molds with thickness upwards of 1mm, as well as multi-layering to create three dimensional structures. Increasing the mold thickness leads to an increase in the smallest feature resolvable; however, the increase in mold thickness and three dimensional capability has expanded the mechanical design space. Tetragonally stabilized zirconia (3Y-TZP) is an ideal material for mesoscale instruments, as it is biocompatible, exhibits high strength, and is chemically stable. In this work, aqueous colloidal suspensions were formulated with two new gel-binder systems, increasing final natural orifice translumenal endoscopic surgery (NOTES) instrument yield from 0% to upwards of 40% in the best case scenario. The effects of the gel-binder system on the rheological behavior of the suspension along with the thermal characteristics of the gel-binder system were characterized. Finally, mechanical properties of ceramic specimens were obtained via 3-point bend testing. Another candidate material for NOTES devices as well as cellular contact aided compliant mechanisms (C3M) devices is 300 series stainless steel (300 series stainless steel). 300 series stainless steel is a common biocompatible material; it is used in surgical applications, exhibits a high corrosion resistance, and has high strength to failure. New, high solids loading, non-aqueous colloidal suspensions of 300 series stainless steel were formulated and incorporated into the LM-RIF process. The rheological behavior and thermal characteristics of the non-aqueous colloidal suspensions were analyzed and engineered to operate within the LM-RIF process. Final part yield with the non-aqueous colloidal suspensions was higher than that of the aqueous ceramic suspensions. Mechanical properties of 300 series stainless steel specimens were determined via 3-point bend testing. Furthermore, new composite non-aqueous colloidal suspensions of 3Y-TZP and 300 series stainless steel were formulated and incorporated into the LM-RIF process. The composite materials showed an increase in final part yield, and an increase in yield strength compared to pure 300 series stainless steel was determined by Vickers hardness testing. The successful incorporation of composite suspensions in the LM-RIF process was facilitated through an analysis of the rheological behavior as a function of solids loading and ceramic to metal ratio. 
Optimized designs of NOTES instruments, as well as C3M devices were manufactured using the LM-RIF process with the non-aqueous 300 series stainless steel suspension. The performance of the prototype NOTES instruments was evaluated and compared against the theoretically predicted performance results, showing good agreement. Similarly, good agreement was seen between the stress-displacement behavior of prototype C3M devices when compared to the theoretically calculated stress-displacement results. Finally, in a comparison by endoscopic surgeons at Hershey Medical Center between an existing industry standard endoscopic device and the mesoscale instrument prototypes fabricated via the LM-RIF process, the prototype design performed favorably in almost all categories. (Abstract shortened by UMI.)
NASA Technical Reports Server (NTRS)
Glassman, Nanci A.; Perry, Jeannine B.; Giersch, Christopher E.; Lambert, Matthew A.; Pinelli, Thomas E.
2004-01-01
NASA CONNECT is a research-, inquiry-, and standards-based, integrated mathematics, science, and technology series of 30-minute instructional distance learning (television and web-based) programs for students in grades 6-8. Respondents who evaluated the programs in the series over the first five seasons (1998-99 through 2002-03) reported that (1) they used the programs in the series; (2) the goals and objectives for the series were met; (3) the programs were aligned with the national mathematics, science, and technology standards; (4) the program content was developmentally appropriate for the grade level; and (5) the programs in the series enhanced and enriched the teaching of mathematics, science, and technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moryakov, A. V., E-mail: sailor@orc.ru
2016-12-15
An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for the data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by simplicity and the possibility to solve nonlinear problems with a correction of the operator in accordance with the solution obtained in the previous iterative process.
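As a toy illustration of representing the solution of a linear Cauchy problem by an orthogonal-polynomial series on [0, 1], the sketch below solves the scalar problem y' = a·y by least-squares collocation on a shifted Legendre basis and checks it against the exact exponential. It is only a one-dimensional cartoon of the idea; the EDELWEISS algorithm, its choice of polynomials, and its MPI parallelization are not reproduced.

```python
import numpy as np
from numpy.polynomial import legendre as L

def solve_linear_cauchy(a, y0, degree=15, n_colloc=40):
    """Expand y(t) on [0, 1] in Legendre polynomials of x = 2t - 1 and determine
    the coefficients by least-squares collocation of y' - a*y = 0 plus the
    initial condition y(0) = y0."""
    t = np.linspace(0.0, 1.0, n_colloc)
    x = 2.0 * t - 1.0
    V = L.legvander(x, degree)                    # V[i, k] = P_k(x_i)
    # derivative of each basis polynomial, with the chain-rule factor dx/dt = 2
    Vp = np.column_stack([L.legval(x, L.legder(np.eye(degree + 1)[k]))
                          for k in range(degree + 1)]) * 2.0
    rows = Vp - a * V                             # residual of y' - a*y at the nodes
    ic = L.legvander(np.array([-1.0]), degree)    # y at t = 0 (x = -1)
    A = np.vstack([rows, ic])
    b = np.concatenate([np.zeros(n_colloc), [y0]])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return lambda tt: L.legval(2.0 * np.asarray(tt) - 1.0, coeffs)

y = solve_linear_cauchy(a=-3.0, y0=1.0)
print(abs(y(1.0) - np.exp(-3.0)))                 # error vs exact solution (small)
```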
Arbitrary-order corrections for finite-time drift and diffusion coefficients
NASA Astrophysics Data System (ADS)
Anteneodo, C.; Riera, R.
2009-09-01
We address a standard class of diffusion processes with linear drift and quadratic diffusion coefficients. These contributions to the dynamic equations can be drawn directly from data time series. However, real data are constrained to finite sampling rates, and therefore it is crucial to establish a suitable mathematical description of the required finite-time corrections. Based on Itô-Taylor expansions, we present the exact corrections to the finite-time drift and diffusion coefficients. These results allow the real hidden coefficients to be reconstructed from the empirical estimates. We also derive higher-order finite-time expressions for the third and fourth conditional moments that furnish extra theoretical checks for this class of diffusion models. The analytical predictions are compared with the numerical outcomes of representative artificial time series.
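For context, the sketch below shows the naive (zeroth-order, sampling interval taken to zero) conditional-moment estimators of drift and diffusion to which such finite-time corrections apply; the Ornstein-Uhlenbeck test process, bin layout, and sample-count cutoff are assumptions of the example, not the paper's corrected estimators.

import numpy as np

rng = np.random.default_rng(0)

# Simulate an Ornstein-Uhlenbeck process dX = -gamma*X dt + sqrt(2*D) dW as test data
gamma, D, dt, n = 1.0, 0.5, 0.01, 100_000
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - gamma * x[i] * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()

# Zeroth-order conditional-moment estimates of drift D1(x) and diffusion D2(x)
dx = np.diff(x)
bins = np.linspace(-2.0, 2.0, 21)
centers = 0.5 * (bins[:-1] + bins[1:])
idx = np.digitize(x[:-1], bins) - 1

d1 = np.full(centers.size, np.nan)
d2 = np.full(centers.size, np.nan)
for b in range(centers.size):
    sel = idx == b
    if sel.sum() > 100:                              # require enough samples per bin
        d1[b] = dx[sel].mean() / dt                  # <dX | X=x> / dt    -> drift
        d2[b] = (dx[sel] ** 2).mean() / (2.0 * dt)   # <dX^2 | X=x> / 2dt -> diffusion

ok = ~np.isnan(d1)
print("drift slope (true -1.0):", np.polyfit(centers[ok], d1[ok], 1)[0])
print("mean diffusion (true 0.5):", np.nanmean(d2))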
High resolution, low cost solar cell contact development
NASA Technical Reports Server (NTRS)
Mardesich, N.
1979-01-01
The experimental work demonstrating the feasibility of the MIDFILM process as a low cost means of applying solar cell collector metallization is reported. Cell efficiencies above 14% (AM1, 28 C) were achieved with fritted silver metallization. Environmental tests suggest that the metallization is slightly humidity sensitive, and degradation is observed on cells with high series resistance. The major yield loss in the fabrication of cells was due to discontinuous grid lines, resulting in high series resistance. Standard lead-tin solder plated interconnections do not appear compatible with the MIDFILM contact. Copper, nickel and molybdenum base powders were investigated as low cost metallization systems. The copper based powder degraded the cell response. The nickel and molybdenum base powders oxidized when sintered in the oxidizing atmosphere necessary to ash the photoresin.
NASA Technical Reports Server (NTRS)
Molnar, Gyula I.; Susskind, Joel; Iredell, Lena
2011-01-01
In the beginning, a good measure of a GCM's performance was its ability to simulate the observed mean seasonal cycle. That is, a reasonable simulation of the means (i.e., small biases) and standard deviations of TODAY'S climate would suffice. Here, we argue that coupled GCM (CGCM for short) simulations of FUTURE climates should be evaluated in much more detail, both spatially and temporally. Arguably, it is not the bias, but rather the reliability of the model-generated anomaly time series, even down to the [C]GCM grid scale, which really matters. This statement is underlined by the social need to address potential REGIONAL climate variability and climate drifts/changes in a manner suitable for policy decisions.
Tweedell, Andrew J.; Haynes, Courtney A.
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated, while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60–90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine whether this class of algorithms performs equally well when the time series has multiple bursts of muscle activity. PMID:28489897
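A minimal sketch of one of the established threshold approaches mentioned above (a linear-envelope detector with a baseline-statistics threshold), not the better-performing Bayesian changepoint method; the simulated signal, sampling rate, window lengths, and threshold multiplier are assumptions of the example.

import numpy as np

rng = np.random.default_rng(1)
fs = 1000                                    # sampling rate in Hz (assumed)
t = np.arange(0.0, 3.0, 1.0 / fs)

# Simulated EMG: baseline noise with a higher-amplitude burst starting at 1.5 s
emg = 0.05 * rng.standard_normal(t.size)
emg[t >= 1.5] += 0.4 * rng.standard_normal((t >= 1.5).sum())

# Linear-envelope onset detection: rectify, smooth, then threshold on baseline statistics
rectified = np.abs(emg)
win = int(0.05 * fs)                         # 50 ms moving-average window
envelope = np.convolve(rectified, np.ones(win) / win, mode="same")

baseline = envelope[: int(0.5 * fs)]         # first 500 ms assumed quiescent
threshold = baseline.mean() + 3.0 * baseline.std()

# Onset = first sample where the envelope stays above threshold for at least 25 ms
hold = int(0.025 * fs)
above = envelope > threshold
onset_idx = next(i for i in range(above.size - hold) if above[i:i + hold].all())
print(f"detected onset: {t[onset_idx]:.3f} s (true onset 1.500 s)")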
Lenga, L; Czwikla, R; Wichmann, J L; Leithner, D; Albrecht, M H; D'Angelo, T; Arendt, C T; Booz, C; Hammerstingl, R; Vogl, T J; Martin, S S
2018-06-05
To investigate the impact of noise-optimised virtual monoenergetic imaging (VMI+) reconstructions on quantitative and qualitative image parameters in patients with malignant lymphoma at dual-energy computed tomography (DECT) examinations of the abdomen. Thirty-five consecutive patients (mean age, 53.8±18.6 years; range, 21-82 years) with histologically proven malignant lymphoma of the abdomen were included retrospectively. Images were post-processed with standard linear blending (M_0.6), traditional VMI, and VMI+ technique at energy levels ranging from 40 to 100 keV in 10 keV increments. Signal-to-noise (SNR) and contrast-to-noise ratios (CNR) were objectively measured in lymphoma lesions. Image quality, lesion delineation, and image noise were rated subjectively by three blinded observers using five-point Likert scales. Quantitative image quality parameters peaked at 40-keV VMI+ (SNR, 15.77±7.74; CNR, 18.27±8.04) with significant differences compared to standard linearly blended M_0.6 (SNR, 7.96±3.26; CNR, 13.55±3.47) and all traditional VMI series (p<0.001). Qualitative image quality assessment revealed significantly superior ratings for image quality at 60-keV VMI+ (median, 5) in comparison with all other image series (p<0.001). Assessment of lesion delineation showed the highest rating scores for 40-keV VMI+ series (median, 5), while lowest subjective image noise was found for 100-keV VMI+ reconstructions (median, 5). Low-keV VMI+ reconstructions led to improved image quality and lesion delineation of malignant lymphoma lesions compared to standard image reconstruction and traditional VMI at abdominal DECT examinations. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1988-01-01
This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
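A brief sketch of the kind of descriptive trend fitting the Standard refers to (linear, quadratic, and exponential models fit to time-series data); the synthetic data, SciPy dependency, and starting values for the exponential fit are assumptions of the example rather than prescriptions of the Standard.

import numpy as np
from scipy.optimize import curve_fit

def exp_model(t, a, b):
    """Exponential trend model y = a * exp(b * t)."""
    return a * np.exp(b * t)

rng = np.random.default_rng(2)
t = np.arange(48, dtype=float)                  # 48 "monthly" observations (synthetic)
y = 10.0 + 0.3 * t + rng.normal(0.0, 1.5, t.size)

lin = np.polyfit(t, y, 1)                       # linear trend
quad = np.polyfit(t, y, 2)                      # quadratic trend
(a, b), _ = curve_fit(exp_model, t, y, p0=(y[0], 0.01))

for name, fitted in (("linear", np.polyval(lin, t)),
                     ("quadratic", np.polyval(quad, t)),
                     ("exponential", exp_model(t, a, b))):
    rmse = np.sqrt(np.mean((y - fitted) ** 2))
    print(f"{name:12s} RMSE = {rmse:.2f}")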
NASPE Developed Informational Products and Applications. Article #4 in a 4-Part Series
ERIC Educational Resources Information Center
Fisette, Jennifer L.; Placek, Judith H.; Avery, Marybell; Dyson, Ben; Fox, Connie; Franck, Marian; Graber, Kim; Rink, Judith; Zhu, Weimo
2009-01-01
This is the fourth and final article in the "PE Metrics" series that focuses on assessing the National Standards for Physical Education (NASPE) for Standard 1. The first article focused on assessment of student learning. The second described formative and summative assessments and provided considerations on how to implement assessment within…
Explorations in Statistics: Standard Deviations and Standard Errors
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2008-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…
NASA Astrophysics Data System (ADS)
Moeeni, Hamid; Bonakdari, Hossein; Fatemi, Seyed Ehsan
2017-04-01
Because time series stationarization has a key role in stochastic modeling results, three methods are analyzed in this study. The methods are seasonal differencing, seasonal standardization and spectral analysis to eliminate the periodic effect on time series stationarity. First, six time series including 4 streamflow series and 2 water temperature series are stationarized. The stochastic term of these series is subsequently modeled with ARIMA. For the analysis, 9228 models are introduced. It is observed that seasonal standardization and spectral analysis eliminate the periodic term completely, while seasonal differencing maintains seasonal correlation structures. The obtained results indicate that all three methods present acceptable performance overall. However, model accuracy in monthly streamflow prediction is higher with seasonal differencing than with the other two methods. Another advantage of seasonal differencing over the other methods is that the monthly streamflow is never estimated as negative. Standardization is the best method for predicting monthly water temperature although it is quite similar to seasonal differencing, while spectral analysis performed the weakest in all cases. It is concluded that for each monthly seasonal series, seasonal differencing is the best stationarization method in terms of periodic effect elimination. Moreover, the monthly water temperature is predicted with more accuracy than the monthly streamflow. The ratios of the average stochastic term to the amplitude of the periodic term obtained for monthly streamflow and monthly water temperature were 0.19 and 0.30, 0.21 and 0.13, and 0.07 and 0.04, respectively. As a result, the periodic term is more dominant than the stochastic term for water temperature in the monthly water temperature series compared to the streamflow series.
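As an illustration of the first two stationarization methods compared above, the sketch below applies seasonal differencing and seasonal standardization to a synthetic monthly series and reports the lag-12 autocorrelation as a crude check on the remaining periodic component; the synthetic series, the period of 12, and the diagnostic are assumptions of the example, not the paper's data or criteria.

import numpy as np

rng = np.random.default_rng(3)
period, n_years = 12, 30
months = np.arange(n_years * period)

# Synthetic monthly series: seasonal cycle plus slowly varying noise
seasonal = 50.0 + 30.0 * np.sin(2.0 * np.pi * months / period)
q = seasonal + 0.1 * rng.normal(0.0, 5.0, months.size).cumsum()

# (1) Seasonal differencing: x_t - x_{t-12}
differenced = q[period:] - q[:-period]

# (2) Seasonal standardization: remove each calendar month's mean and divide by its std
x = q.reshape(n_years, period)
standardized = ((x - x.mean(axis=0)) / x.std(axis=0)).ravel()

for name, series in (("differenced", differenced), ("standardized", standardized)):
    s = series - series.mean()
    r12 = np.corrcoef(s[:-period], s[period:])[0, 1]   # lag-12 autocorrelation
    print(f"{name:12s} lag-12 autocorrelation = {r12:+.2f}")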
Handley, J; Burrows, D
1994-11-01
The case is reported of a 28-year-old man who developed allergic contact dermatitis from 2 synthetic fragrance ingredients, Lyral (3- and 4-(4-hydroxy-4-methylpentyl)-3-cyclohexene-1-aldehyde) and acetyl cedrene, in separate underarm deodorant preparations. The implications of the patient's negative patch test reactions to the European standard series (Trolab) and cosmetics and fragrance series (both Chemotechnique Diagnostics) are discussed. The importance is stressed of patch testing with the patient's own preparations when cosmetic dermatitis is suspected, and of identifying and reporting offending fragrance ingredients, with a view possibly to updating the European standard series and commercially available cosmetics and fragrance series.
NASA Technical Reports Server (NTRS)
Melton, John E.
1994-01-01
EGADS is a comprehensive preliminary design tool for estimating the performance of light, single-engine general aviation aircraft. The software runs on the Apple Macintosh series of personal computers and assists amateur designers and aeronautical engineering students in performing the many repetitive calculations required in the aircraft design process. The program makes full use of the mouse and standard Macintosh interface techniques to simplify the input of various design parameters. Extensive graphics, plotting, and text output capabilities are also included.
Standard Specimen Reference Set: Lung — EDRN Public Portal
The NCI/EDRN/SPORE Lung Cancer Biomarkers Group (LCBG) began its activities back in November 2004 and developed clear objectives and strategies on how to begin validating a series of candidate biomarkers for the early detection of lung cancer. The initial goal of the LCBG is to develop the requisite sample resources to validate serum/plasma biomarkers for the early diagnosis of lung cancer. Researchers may use these resources and process for continued biomarker refinement but this is not the primary activity of the LCBG.
Reference Manual on Interference Seals and Connectors for Undersea Electrical Applications
1976-07-01
processes. It has a standard line of metal shell connectors, the ER and EB series, which are available with braided and laced harness work and breakouts, and ... connectors according to material composition. The classes of connectors include: Rubber Molded (RM), Plastic Molded (PM), Metal Shell (MS), and Pressure-Balanced Oil-Filled assemblies.
Colloidal Fouling of Nanofiltration Membranes: Development of a Standard Operating Procedure
Al Mamun, Md Abdullaha; Bhattacharjee, Subir; Pernitsky, David; Sadrzadeh, Mohtada
2017-01-01
Fouling of nanofiltration (NF) membranes is the most significant obstacle to the development of a sustainable and energy-efficient NF process. Colloidal fouling and performance decline in NF processes is complex due to the combination of cake formation and salt concentration polarization effects, which are influenced by the properties of the colloids and the membrane, the operating conditions of the test, and the solution chemistry. Although numerous studies have been conducted to investigate the influence of these parameters on the performance of the NF process, the importance of membrane preconditioning (e.g., compaction and equilibrating with salt water), as well as the determination of key parameters (e.g., critical flux and trans-membrane osmotic pressure) before the fouling experiment have not been reported in detail. The aim of this paper is to present a standard experimental and data analysis protocol for NF colloidal fouling experiments. The developed methodology covers preparation and characterization of water samples and colloidal particles, pre-test membrane compaction and critical flux determination, measurement of experimental data during the fouling test, and the analysis of that data to determine the relative importance of various fouling mechanisms. The standard protocol is illustrated with data from a series of flat sheet, bench-scale experiments. PMID:28106775
NASA Astrophysics Data System (ADS)
Evans, K. D.; Early, A. B.; Northup, E. A.; Ames, D. P.; Teng, W. L.; Olding, S. W.; Krotkov, N. A.; Arctur, D. K.; Beach, A. L., III; Silverman, M. L.
2017-12-01
The role of NASA's Earth Science Data Systems Working Groups (ESDSWG) is to make recommendations relevant to NASA's Earth science data systems from users' experiences and community insight. Each group works independently, focusing on a unique topic. Progress of two of the 2017 Working Groups will be presented. In a single airborne field campaign, there can be several different instruments and techniques that measure the same parameter on one or more aircraft platforms. Many of these same parameters are measured during different airborne campaigns using similar or different instruments and techniques. The Airborne Composition Standard Variable Name Working Group is working to create a list of variable standard names that can be used across all airborne field campaigns in order to assist in the transition to the ICARTT Version 2.0 file format. The overall goal is to enhance the usability of ICARTT files and the search ability of airborne field campaign data. The Time Series Working Group (TSWG) is a continuation of the 2015 and 2016 Time Series Working Groups. In 2015, we started TSWG with the intention of exploring the new OGC (Open Geospatial Consortium) WaterML 2 standards as a means for encoding point-based time series data from NASA satellites. In this working group, we realized that WaterML 2 might not be the best solution for this type of data, for a number of reasons. Our discussion with experts from other agencies, who have worked on similar issues, identified several challenges that we would need to address. As a result, we made the recommendation to study the new TimeseriesML 1.0 standard of OGC as a potential NASA time series standard. The 2016 TSWG examined closely the TimeseriesML 1.0 and, in coordination with the OGC TimeseriesML Standards Working Group, identified certain gaps in TimeseriesML 1.0 that would need to be addressed for the standard to be applicable to NASA time series data. An engineering report was drafted based on the OGC Engineering Report template, describing recommended changes to TimeseriesML 1.0, in the form of use cases. In 2017, we are conducting interoperability experiments to implement the use cases and demonstrate the feasibility and suitability of these modifications for NASA and related user communities. The results will be incorporated into the existing draft engineering report.
NASA Technical Reports Server (NTRS)
Evans, Keith D.; Early, Amanda; Northup, Emily; Ames, Dan; Teng, William; Arctur, David; Beach, Aubrey; Olding, Steve; Krotkov, Nickolay A.
2017-01-01
The role of NASA's Earth Science Data Systems Working Groups (ESDSWG) is to make recommendations relevant to NASA's Earth science data systems from users' experiences and community insight. Each group works independently, focusing on a unique topic. Progress of two of the 2017 Working Groups will be presented. In a single airborne field campaign, there can be several different instruments and techniques that measure the same parameter on one or more aircraft platforms. Many of these same parameters are measured during different airborne campaigns using similar or different instruments and techniques. The Airborne Composition Standard Variable Name Working Group is working to create a list of variable standard names that can be used across all airborne field campaigns in order to assist in the transition to the ICARTT Version 2.0 file format. The overall goal is to enhance the usability of ICARTT files and the search ability of airborne field campaign data. The Time Series Working Group (TSWG) is a continuation of the 2015 and 2016 Time Series Working Groups. In 2015, we started TSWG with the intention of exploring the new OGC (Open Geospatial Consortium) WaterML 2 standards as a means for encoding point-based time series data from NASA satellites. In this working group, we realized that WaterML 2 might not be the best solution for this type of data, for a number of reasons. Our discussion with experts from other agencies, who have worked on similar issues, identified several challenges that we would need to address. As a result, we made the recommendation to study the new TimeseriesML 1.0 standard of OGC as a potential NASA time series standard. The 2016 TSWG examined closely the TimeseriesML 1.0 and, in coordination with the OGC TimeseriesML Standards Working Group, identified certain gaps in TimeseriesML 1.0 that would need to be addressed for the standard to be applicable to NASA time series data. An engineering report was drafted based on the OGC Engineering Report template, describing recommended changes to TimeseriesML 1.0, in the form of use cases. In 2017, we are conducting interoperability experiments to implement the use cases and demonstrate the feasibility and suitability of these modifications for NASA and related user communities. The results will be incorporated into the existing draft engineering report.
Morgan, Lauren; New, Steve; Robertson, Eleanor; Collins, Gary; Rivero-Arias, Oliver; Catchpole, Ken; Pickering, Sharon P; Hadi, Mohammed; Griffin, Damian; McCulloch, Peter
2015-02-01
Standard operating procedures (SOPs) should improve safety in the operating theatre, but controlled studies evaluating the effect of staff-led implementation are needed. In a controlled interrupted time series, we evaluated three team process measures (compliance with WHO surgical safety checklist, non-technical skills and technical performance) and three clinical outcome measures (length of hospital stay, complications and readmissions) before and after a 3-month staff-led development of SOPs. Process measures were evaluated by direct observation, using Oxford Non-Technical Skills II for non-technical skills and the 'glitch count' for technical performance. All staff in two orthopaedic operating theatres were trained in the principles of SOPs and then assisted to develop standardised procedures. Staff in a control operating theatre underwent the same observations but received no training. The change in difference between active and control groups was compared before and after the intervention using repeated measures analysis of variance. We observed 50 operations before and 55 after the intervention and analysed clinical data on 1022 and 861 operations, respectively. The staff chose to structure their efforts around revising the 'whiteboard' which documented and prompted tasks, rather than directly addressing specific task problems. Although staff preferred and sustained the new system, we found no significant differences in process or outcome measures before/after intervention in the active versus the control group. There was a secular trend towards worse outcomes in the postintervention period, seen in both active and control theatres. SOPs when developed and introduced by frontline staff do not necessarily improve operative processes or outcomes. The inherent tension in improvement work between giving staff ownership of improvement and maintaining control of direction needs to be managed, to ensure staff are engaged but invest energy in appropriate change. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jafarov, E. E.; Parsekian, A. D.; Schaefer, K.
Ground penetrating radar (GPR) has emerged as an effective tool for estimating active layer thickness (ALT) and volumetric water content (VWC) within the active layer. In August 2013, we conducted a series of GPR and probing surveys using a 500 MHz antenna and metallic probe around Barrow, Alaska. Here, we collected about 15 km of GPR data and 1.5 km of probing data. We describe the GPR data processing workflow from raw GPR data to the estimated ALT and VWC. We then include the corresponding uncertainties for each measured and estimated parameter. The estimated average GPR-derived ALT was 41 cm, with a standard deviation of 9 cm. The average probed ALT was 40 cm, with a standard deviation of 12 cm. The average GPR-derived VWC was 0.65, with a standard deviation of 0.14.
Jafarov, E. E.; Parsekian, A. D.; Schaefer, K.; ...
2018-01-09
Ground penetrating radar (GPR) has emerged as an effective tool for estimating active layer thickness (ALT) and volumetric water content (VWC) within the active layer. In August 2013, we conducted a series of GPR and probing surveys using a 500 MHz antenna and metallic probe around Barrow, Alaska. Here, we collected about 15 km of GPR data and 1.5 km of probing data. We describe the GPR data processing workflow from raw GPR data to the estimated ALT and VWC. We then include the corresponding uncertainties for each measured and estimated parameter. The estimated average GPR-derived ALT was 41 cm, with a standard deviation of 9 cm. The average probed ALT was 40 cm, with a standard deviation of 12 cm. The average GPR-derived VWC was 0.65, with a standard deviation of 0.14.
A method for the geometric and densitometric standardization of intraoral radiographs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duckworth, J.E.; Judy, P.F.; Goodson, J.M.
1983-07-01
The interpretation of dental radiographs for the diagnosis of periodontal disease conditions poses several difficulties. These include the inability to adequately reproduce the projection geometry and optical density of the exposures. In order to improve the ability to extract accurate quantitative information from a radiographic survey of periodontal status, a method was developed which provided for consistent reproduction of both geometric and densitometric exposure parameters. This technique employed vertical bitewing projections in holders customized to individual segments of the dentition. A copper stepwedge was designed to provide densitometric standardization, and wire markers were included to permit measurement of angular variation. In a series of 53 paired radiographs, measurement of alveolar crest heights was found to be reproducible within approximately 0.1 mm. This method provided a full mouth radiographic survey using seven films, each complete with internal standards suitable for computer-based image processing.
ERIC Educational Resources Information Center
Afterschool Alliance, 2014
2014-01-01
The Afterschool Alliance, in partnership with MetLife Foundation, is proud to present the first in their latest series of four issue briefs examining critical issues facing middle school youth and the vital role afterschool programs play in addressing these issues. This series explores afterschool and: the Common Core State Standards, students…
ERIC Educational Resources Information Center
Meiring, Steven P.; And Others
The 1989 document, "Curriculum and Evaluation Standards for School Mathematics," provides a vision and a framework for revising and strengthening the K-12 mathematics curriculum in North American schools and for evaluating both the mathematics curriculum and students' progress. When completed, it is expected that the Addenda Series will…
Assessing the catchment's filtering effect on the propagation of meteorological anomalies
NASA Astrophysics Data System (ADS)
di Domenico, Antonella; Laguardia, Giovanni; Margiotta, Maria Rosaria
2010-05-01
The characteristics of drought propagation within a catchment are evaluated by means of the analysis of time series of water fluxes and storage states. The study area is the Agri basin, Southern Italy, closed at the Tarangelo gauging station (507 km²). Once the IRP weather generator (Veneziano and Iacobellis, 2002) had been calibrated on observed data, a 100-year time series of precipitation was produced. The drought statistics obtained from the synthetic data have been compared to the ones obtained from the limited observations available. The DREAM hydrological model has been calibrated based on observed precipitation and discharge. From the model run on the synthetic precipitation we have obtained the time series of variables relevant for assessing the status of the catchment, namely total runoff and its components, actual evapotranspiration, and soil moisture. The Standardized Precipitation Index (SPI; McKee et al., 1993) has been calculated for different averaging periods. The modelled data have been processed for the calculation of drought indices. In particular, we have chosen to use their transformation into standardized variables. We have performed autocorrelation analysis for assessing the characteristic time scales of the variables. Moreover, we have investigated their relationships through cross correlation, assessing also the SPI averaging period for which the maximum correlation is reached. The variables' drought statistics, namely number of events, duration, and deficit volumes, have been assessed. As a result of the filtering effect exerted by the different catchment storages, the characteristic time scale and the maximum-correlation SPI averaging periods for the different time series tend to increase. Thus, the number of drought events tends to decrease and their duration to increase under increasing storage.
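A minimal sketch of the cross-correlation step described above: a standardized anomaly index of precipitation (a simple stand-in for the gamma-fitted SPI) is computed for several averaging periods and correlated against a toy runoff series at increasing lags. The synthetic data, the exponential catchment filter, and the lag range are assumptions of the example, not the Agri basin analysis.

import numpy as np

rng = np.random.default_rng(4)
n = 1200                                          # 100 years of synthetic monthly data
precip = rng.gamma(shape=2.0, scale=30.0, size=n)

# Toy catchment filter: runoff responds to precipitation with a lagged, smoothed kernel
kernel = np.exp(-np.arange(6) / 2.0)
runoff = np.convolve(precip, kernel / kernel.sum(), mode="full")[:n]

def standardized_index(x, window):
    """Standardized anomaly of the trailing moving average (a simple SPI stand-in)."""
    avg = np.convolve(x, np.ones(window) / window, mode="valid")
    return (avg - avg.mean()) / avg.std()

def max_crosscorr(a, b, max_lag=12):
    """Lag (b behind a) with the largest cross correlation, and that correlation."""
    pairs = [(lag, np.corrcoef(a[:a.size - lag], b[lag:])[0, 1]) for lag in range(max_lag + 1)]
    return max(pairs, key=lambda p: p[1])

for window in (1, 3, 6, 12):
    spi = standardized_index(precip, window)
    flow = runoff[window - 1:]
    flow = (flow - flow.mean()) / flow.std()
    lag, r = max_crosscorr(spi, flow)
    print(f"averaging period {window:2d} months: max r = {r:.2f} at lag {lag} months")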
Aberer, W; Komericki, P; Uter, W; Hausen, B M; Lessmann, H; Kränke, B; Geier, J; Schnuch, A
2003-08-01
The selection of the most important contact allergens is subject to continuous change. Several factors may influence the sensitization rates and thus the decision as to which substances to include in the standard series of the most frequent allergens. The Information Network of Departments of Dermatology adds substances of interest for a certain time period to the standard series in order to evaluate parameters such as sensitization rate, grade of reaction, and clinical relevance of positive reactions. In 6 testing periods starting in 1996, 13 test substances were evaluated. Based on the results, propolis, compositae mix, and bufexamac were included in the standard series in 1999, while Lyral was added in 2002. Sorbitan sesquioleate, disperse blue mix, and iodopropynyl butylcarbamate are under further discussion. Substances such as glutaraldehyde and p-aminoazobenzene should be tested in certain risk groups only, whereas the steroids budesonide and tixocortol should be tested when clinically suspected.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-14
... appropriate safety standards for the 767-400ER series airplanes because of a novel or unusual design feature ... -1106; Special Conditions No. 25-448-SC] Special Conditions: Boeing Model 767-400ER Series Airplanes ... -400ER series airplane. These airplanes, as modified by Continental Airlines, will have a novel or ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-11
... appropriate safety standards for the C-series airplanes because of a novel or unusual design feature, special ... Features: The C-series airplanes will incorporate the following novel or unusual design features: new ... Series Airplanes; Flight Envelope Protection: General Limiting Requirements. AGENCY: Federal Aviation ...
Is the European standard series suitable for patch testing in Riyadh, Saudi Arabia?
el-Rab, M O; al-Sheikh, O A
1995-11-01
Due to the lack of a regional patch test series in our geographical area, the suitability of the European standard series was evaluated by patch testing dermatitis patients in Riyadh, Saudi Arabia. Of 240 consecutive patients with various forms of dermatitis, 136 (57%) showed 1 or more positive patch tests, with women, 74 (54%), slightly outnumbering men, 62 (46%). Positive reactions were found to 21 of the 22 items in the test series. Sensitization was most common to nickel sulfate (51 = 37.5%), potassium dichromate (48 = 35%) and cobalt chloride (43 = 32%). The frequency of sensitization to nickel was higher in women (41 = 30%), while that to dichromate was higher in men (39 = 29%). Fewer reactions were found to fragrance mix (21 = 15%), formaldehyde (15 = 11%) and neomycin sulfate (15 = 11%). Sensitization to other allergens ranged between 10 and 1%. Less than 1% of patients (0.7%) reacted to benzocaine and none to primin. The frequency of occurrence of multiple sensitivities is also presented. We conclude that the European standard series is suitable for patch testing dermatitis patients in our region, with the exception of benzocaine and primin. The addition of 3 allergens that could be of local relevance is discussed.
Chen, Kewei; Ríos, José Julián; Pérez-Gálvez, Antonio; Roca, María
2015-08-07
Phytylated chlorophyll derivatives undergo specific oxidative reactions through natural metabolism or during food processing or storage, and consequently pyro-, 13(2)-hydroxy-, 15(1)-hydroxy-lactone chlorophylls, and pheophytins (a and b) are originated. New analytical procedures have been developed here to reproduce controlled oxidation reactions that specifically, and in reasonable amounts, produce those natural target standards. At the same time and under the same conditions, 16 natural chlorophyll derivatives have been analyzed by APCI-HPLC-hrMS(2), most of them for the first time. The combination of the high-resolution MS mode with powerful post-processing software has allowed the identification of new fragmentation patterns, characterizing specific product ions for some particular standards. In addition, new hypotheses and reaction mechanisms for the established MS(2)-based reactions have been proposed. As a general rule, the main product ions involve the phytyl and propionic chains, but the introduction of oxygenated functional groups at the isocyclic ring produces new and specific product ions and at the same time inhibits some particular fragmentations. It is noteworthy that all b derivatives, except 15(1)-hydroxy-lactone compounds, undergo specific CO losses. We propose a new reaction mechanism, based on the structural configuration of the a and b chlorophyll derivatives, that explains the exclusive CO fragmentation in all b series compounds except 15(1)-hydroxy-lactone b, and its absence in all a series compounds. Copyright © 2015 Elsevier B.V. All rights reserved.
Standard map in magnetized relativistic systems: fixed points and regular acceleration.
de Sousa, M C; Steffens, F M; Pakter, R; Rizzato, F B
2010-08-01
We investigate the concept of a standard map for the interaction of relativistic particles and electrostatic waves of arbitrary amplitudes, under the action of external magnetic fields. The map is adequate for physical settings where waves and particles interact impulsively, and allows a series of analytical results to be obtained exactly. Unlike the traditional form of the standard map, the present map is nonlinear in the wave amplitude and displays a series of peculiar properties. Among these properties we discuss the relation involving fixed points of the maps and accelerator regimes.
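For orientation, the sketch below iterates the traditional (Chirikov) standard map against which the paper's amplitude-nonlinear map is contrasted; the parameter values and the momentum-spread diagnostic are assumptions of the example, and the relativistic map itself is not reproduced here.

import numpy as np

def standard_map(theta, p, K, n_steps):
    """Iterate the traditional Chirikov standard map (periodic kick plus free rotation)."""
    orbit = np.empty((n_steps, 2))
    for i in range(n_steps):
        p = (p + K * np.sin(theta)) % (2.0 * np.pi)
        theta = (theta + p) % (2.0 * np.pi)
        orbit[i] = theta, p
    return orbit

# Below the chaos threshold orbits stay on invariant curves; well above it they wander.
for K in (0.5, 2.5):
    orbit = standard_map(theta=1.0, p=0.5, K=K, n_steps=5000)
    print(f"K = {K}: momentum spread = {orbit[:, 1].std():.2f}")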
ERIC Educational Resources Information Center
Linn, Robert L.
The New Standards Project conducted a pilot test of a series of performance-based assessment tasks in mathematics and English language arts at Grades 4 and 8 in the spring of 1993. This paper reports the results of a series of generalizability analyses conducted for a subset of the 1993 pilot study data in mathematics. Generalizability analyses…
Testing for intracycle determinism in pseudoperiodic time series.
Coelho, Mara C S; Mendes, Eduardo M A M; Aguirre, Luis A
2008-06-01
A determinism test is proposed based on the well-known method of the surrogate data. Assuming predictability to be a signature of determinism, the proposed method checks for intracycle (e.g., short-term) determinism in the pseudoperiodic time series for which standard methods of surrogate analysis do not apply. The approach presented is composed of two steps. First, the data are preprocessed to reduce the effects of seasonal and trend components. Second, standard tests of surrogate analysis can then be used. The determinism test is applied to simulated and experimental pseudoperiodic time series and the results show the applicability of the proposed test.
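A minimal sketch of the two-step idea described above, under assumptions beyond the abstract: the trend/seasonal component is first removed (here simply the mean of a synthetic pseudoperiodic signal), then a nearest-neighbour one-step prediction error is compared between the data and phase-randomized surrogates. The test signal, surrogate scheme, and predictability statistic are generic choices, not necessarily those used by the authors.

import numpy as np

rng = np.random.default_rng(5)

def phase_randomized_surrogate(x):
    """Surrogate series with the same power spectrum but randomized Fourier phases."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    phases[0] = 0.0                      # keep the mean
    phases[-1] = 0.0                     # keep the Nyquist bin real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

def one_step_prediction_error(x, n_neighbors=10):
    """Crude nonlinear predictability: predict x[t+1] from successors of x[t]'s neighbours."""
    dist = np.abs(x[:-1, None] - x[None, :-1])
    np.fill_diagonal(dist, np.inf)
    nbrs = np.argsort(dist, axis=1)[:, :n_neighbors]
    pred = x[1:][nbrs].mean(axis=1)
    return np.mean((x[1:] - pred) ** 2)

# Deterministic pseudoperiodic test signal with a small intracycle nonlinearity plus noise
t = np.arange(600)
x = np.sin(2.0 * np.pi * t / 50) + 0.3 * np.sin(2.0 * np.pi * t / 50) ** 2
x = x + 0.05 * rng.standard_normal(t.size)
x = x - x.mean()                         # stand-in for the trend/seasonal removal step

err_data = one_step_prediction_error(x)
err_surr = [one_step_prediction_error(phase_randomized_surrogate(x)) for _ in range(19)]
print(f"prediction error: data {err_data:.4f}, surrogates {np.mean(err_surr):.4f} +/- {np.std(err_surr):.4f}")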
Impervious surfaces mapping using high resolution satellite imagery
NASA Astrophysics Data System (ADS)
Shirmeen, Tahmina
In recent years, impervious surfaces have emerged not only as an indicator of the degree of urbanization, but also as an indicator of environmental quality. As impervious surface area increases, storm water runoff increases in velocity, quantity, temperature and pollution load. Any of these attributes can contribute to the degradation of natural hydrology and water quality. Various image processing techniques have been used to identify impervious surfaces; however, most of the existing impervious surface mapping tools used moderate resolution imagery. In this project, the potential of standard image processing techniques to generate impervious surface data for change detection analysis using high-resolution satellite imagery was evaluated. The city of Oxford, MS was selected as the study site for this project. Standard image processing techniques, including Normalized Difference Vegetation Index (NDVI), Principal Component Analysis (PCA), a combination of NDVI and PCA, and image classification algorithms, were used to generate impervious surfaces from multispectral IKONOS and QuickBird imagery acquired in both leaf-on and leaf-off conditions. Accuracy assessments were performed, using truth data generated by manual classification, with Kappa statistics and zonal statistics to select the most appropriate image processing techniques for impervious surface mapping. The performance of the selected image processing techniques was enhanced by incorporating the Soil Brightness Index (SBI) and Greenness Index (GI) derived from Tasseled Cap Transformed (TCT) IKONOS and QuickBird imagery. A time series of impervious surfaces for the time frame between 2001 and 2007 was generated using the refined image processing techniques to analyze the changes in IS in Oxford. It was found that NDVI and the combined NDVI-PCA methods are the most suitable image processing techniques for mapping impervious surfaces in leaf-off and leaf-on conditions, respectively, using high resolution multispectral imagery. It was also found that IS data generated by these techniques can be refined by removing the conflicting dry soil patches using SBI and GI obtained from TCT of the same imagery used for IS data generation. The change detection analysis of the IS time series shows that Oxford experienced the major changes in IS from 2001 to 2004 and from 2006 to 2007.
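A minimal sketch of the NDVI step used above, with a threshold to flag low-vegetation (impervious-surface candidate) pixels; the random reflectance arrays and the 0.2 cutoff are placeholders for real IKONOS/QuickBird bands and a calibrated threshold.

import numpy as np

rng = np.random.default_rng(6)

# Stand-in reflectance bands in [0, 1]; real inputs would be IKONOS/QuickBird red and NIR rasters
red = rng.uniform(0.05, 0.40, size=(100, 100))
nir = rng.uniform(0.05, 0.60, size=(100, 100))

# NDVI = (NIR - Red) / (NIR + Red)
ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)

# Candidate impervious-surface mask: low-NDVI pixels (threshold value is an assumption)
impervious = ndvi < 0.2
print(f"impervious-surface candidate fraction: {impervious.mean():.1%}")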
Gearbox Reliability Collaborative Phase 3 Gearbox 2 Test Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Link, H.; Keller, J.; Guo, Y.
2013-04-01
Gearboxes in wind turbines have not been achieving their expected design life even though they commonly meet or exceed the design criteria specified in current design standards. One of the basic premises of the National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) is that the low gearbox reliability results from the absence of critical elements in the design process or insufficient design tools. Key goals of the GRC are to improve design approaches and analysis tools and to recommend practices and test methods resulting in improved design standards for wind turbine gearboxes that lower the cost of energy (COE) through improved reliability. The GRC uses a combined gearbox testing, modeling and analysis approach, along with a database of information from gearbox failures collected from overhauls and investigation of gearbox condition monitoring techniques, to improve wind turbine operations and maintenance practices. This test plan covers testing of Gearbox 2 (GB2) using the two-speed turbine controller that has been used in prior testing. This test series will investigate non-torque loads, high-speed shaft misalignment, and reproduction of field conditions in the dynamometer. This test series will also include vibration testing using an eddy-current brake on the gearbox's high speed shaft.
Psychological and Neural Mechanisms of Subjective Time Dilation
van Wassenhove, Virginie; Wittmann, Marc; Craig, A. D. (Bud); Paulus, Martin P.
2011-01-01
For a given physical duration, certain events can be experienced as subjectively longer in duration than others. Try this for yourself: take a quick glance at the second hand of a clock. Immediately, the tick will pause momentarily and appear to be longer than the subsequent ticks. Yet, they all last exactly 1 s. By and large, a deviant or an unexpected stimulus in a series of similar events (same duration, same features) can elicit a relative overestimation of subjective time (or “time dilation”) but, as is shown here, this is not always the case. We conducted an event-related functional magnetic resonance imaging study on the time dilation effect. Participants were presented with a series of five visual discs, all static and of equal duration (standards) except for the fourth one, a looming or a receding target. The duration of the target was systematically varied and participants judged whether it was shorter or longer than all other standards in the sequence. Subjective time dilation was observed for the looming stimulus but not for the receding one, which was estimated to be of equal duration to the standards. The neural activation for targets (looming and receding) contrasted with the standards revealed increased activation of the anterior insula and of the anterior cingulate cortex. Contrasting the looming with the receding targets (i.e., capturing the time dilation effect proper) revealed a specific activation of cortical midline structures. The implication of midline structures in the time dilation illusion is here interpreted in the context of self-referential processes. PMID:21559346
One nanosecond time synchronization using SERIES and GPS
NASA Technical Reports Server (NTRS)
Buennagel, A. A.; Spitzmesser, D. J.; Young, L. E.
1983-01-01
Subnanosecond time synchronization between two remote rubidium frequency standards is verified by a traveling clock comparison. Using a novel, code-ignorant Global Positioning System (GPS) receiver developed at JPL, the SERIES geodetic baseline measurement system is applied to establish the offset between the 1 Hz outputs of the remote standards. Results of the two intercomparison experiments to date are presented, as well as experimental details.
PREFACE: 6th International Workshop on Multi-Rate Processes and Hysteresis (MURPHYS2012)
NASA Astrophysics Data System (ADS)
Dimian, Mihai; Rachinskii, Dmitrii
2015-02-01
The International Workshop on Multi-Rate Processes and Hysteresis (MURPHYS) conference series focuses on multiple scale systems, singular perturbation problems, phase transitions and hysteresis phenomena occurring in physical, biological, chemical, economical, engineering and information systems. The 6th edition was hosted by Stefan cel Mare University in the city of Suceava located in the beautiful multicultural land of Bukovina, Romania, from May 21 to 24, 2012. This continued the series of biennial multidisciplinary conferences organized in Cork, Ireland from 2002 to 2008 and in Pécs, Hungary in 2010. The MURPHYS 2012 Workshop brought together more than 50 researchers in hysteresis and multi-scale phenomena from the United States of America, the United Kingdom, France, Germany, Italy, Ireland, Czech Republic, Hungary, Greece, Ukraine, and Romania. Participants shared and discussed new developments of analytical techniques and numerical methods along with a variety of their applications in various areas, including material sciences, electrical and electronics engineering, mechanical engineering and civil structures, biological and eco-systems, economics and finance. The Workshop was sponsored by the European Social Fund through Sectoral Operational Program Human Resources 2007-2013 (PRO-DOCT) and Stefan cel Mare University, Suceava. The Organizing Committee was co-chaired by Mihai Dimian from Stefan cel Mare University, Suceava (Romania), Amalia Ivanyi from the University of Pecs (Hungary), and Dmitrii Rachinskii from the University College Cork (Ireland). All papers published in this volume of Journal of Physics: Conference Series have been peer reviewed through processes administered by the Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing. The Guest Editors wish to place on record their sincere gratitude to Miss Sarah Toms for the assistance she provided during the publication process. More information about the Workshop can be found at http://www.murphys.usv.ro/ Mihai Dimian and Dmitrii Rachinskii Guest Editors for Journal of Physics: Conference Series Proceedings of the 6th International Workshop on Multi-Rate Processes and Hysteresis
Jeffery, A.; Elmquist, R. E.; Cage, M. E.
1995-01-01
Precision tests verify the dc equivalent circuit used by Ricketts and Kemeny to describe a quantum Hall effect device in terms of electrical circuit elements. The tests employ the use of cryogenic current comparators and the double-series and triple-series connection techniques of Delahaye. Verification of the dc equivalent circuit in double-series and triple-series connections is a necessary step in developing the ac quantum Hall effect as an intrinsic standard of resistance. PMID:29151768
Gomes de Souza, Rafael
2018-03-07
The literature on crocodylian anatomy presents the transverse process with an ambiguous meaning, which could represent all lateral expansions derived from the neural arch, including vertebrae from the cervical to the caudal series, or, in a more restrictive meaning, be applied only to lumbar vertebrae. The lateral expansion of sacral and caudal vertebrae usually referred to as the transverse process has been discovered to be fused ribs, bringing more ambiguity to this term. Therefore, given the lack of a definition for the transverse process and other associated terms, the present work aims to propose a nomenclatural standardization, as well as definitions and biological meaning, for vertebral rib-related structures. Vertebrae obtained from museum collections from a total of 87 specimens of 22 species of all extant Crocodylia genera were studied. All vertebrae, except the cervical and first three dorsal vertebrae, exhibit transverse processes. The transverse process is more developed in dorsal and lumbar vertebrae than in sacral and caudal vertebrae, in which it is suppressed by the fused ribs. The serial homology hypotheses proposed here can also be applied to other Crurotarsi and saurischian dinosaur specimens. This standardization clarifies the understanding of the serial homology among these homotypes, and reduces ambiguity and misleading comparisons in future work. Anat Rec, 2018. © 2018 Wiley Periodicals, Inc.
Process capability determination of new and existing equipment
NASA Technical Reports Server (NTRS)
Mcclelland, H. T.; Su, Penwen
1994-01-01
The objective of this paper is to illustrate a method of determining the process capability of new or existing equipment. The method may also be modified to apply to testing laboratories. Long term changes in the system may be determined by periodically making new test parts or submitting samples from the original set to the testing laboratory. The technique described has been developed through a series of projects in special topics manufacturing courses and graduate student projects. It will be implemented as a standard experiment in an advanced manufacturing course in a new Manufacturing Engineering program at the University of Wisconsin-Stout campus. Before starting a project of this nature, it is important to decide on the exact question to be answered. In this case, it is desired to know what variation can be reasonably expected in the next part, feature, or test result produced. Generally, this question is answered by providing the process capability or the average value of a measured characteristic of the part or process plus or minus three standard deviations. There are two general cases to be considered: the part or test is made in large quantities with little change, or the process is flexible and makes a large variety of parts. Both cases can be accommodated; however, the emphasis in this report is on short run situations.
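A brief sketch of the capability calculation described above (the average value of a measured characteristic plus or minus three standard deviations), with optional Cp/Cpk indices against specification limits; the measurement values and the limits are assumed for illustration and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(7)

# Measured feature (e.g., a diameter in mm) from a short pilot run of parts -- assumed values
parts = rng.normal(loc=25.01, scale=0.02, size=30)

mean, sd = parts.mean(), parts.std(ddof=1)
lower, upper = mean - 3.0 * sd, mean + 3.0 * sd
print(f"process capability band: {mean:.3f} +/- {3.0 * sd:.3f} mm "
      f"(next part expected in [{lower:.3f}, {upper:.3f}])")

# Optional capability indices against specification limits (limits assumed for illustration)
lsl, usl = 24.95, 25.05
cp = (usl - lsl) / (6.0 * sd)
cpk = min(usl - mean, mean - lsl) / (3.0 * sd)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")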
An Integrated Product Environment
NASA Technical Reports Server (NTRS)
Higgins, Chuck
1997-01-01
Mechanical Advantage is a mechanical design decision support system. Unlike our CAD/CAM cousins, Mechanical Advantage addresses true engineering processes, not just the form and fit of geometry. If we look at a traditional engineering environment, we see that an engineer starts with two things - performance goals and design rules. The intent is to have a product perform specific functions and accomplish that within a designated environment. Geometry should be a simple byproduct of that engineering process - not the controller of it. Mechanical Advantage is a performance modeler allowing engineers to consider all these criteria in making their decisions by providing such capabilities as critical parameter analysis, tolerance and sensitivity analysis, math-driven geometry, and automated design optimizations. If you should desire an industry standard solid model, we would produce an ACIS-based solid model. If you should desire an ANSI/ISO standard drawing, we would produce this as well with a virtual push of the button. For more information on this and other Advantage Series products, please contact the author.
A study of software standards used in the avionics industry
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.
1994-01-01
Within the past decade, software has become an increasingly common element in computing systems. In particular, the role of software used in the aerospace industry, especially in life- or safety-critical applications, is rapidly expanding. This intensifies the need to use effective techniques for achieving and verifying the reliability of avionics software. Although certain software development processes and techniques are mandated by government regulating agencies, no one methodology has been shown to consistently produce reliable software. The knowledge base for designing reliable software simply has not reached the maturity of its hardware counterpart. In an effort to increase our understanding of software, the Langley Research Center conducted a series of experiments over 15 years with the goal of understanding why and how software fails. As part of this program, the effectiveness of current industry standards for the development of avionics is being investigated. This study involves the generation of a controlled environment to conduct scientific experiments on software processes.
Study of oxygen gas production phenomenon during stand and discharge in silver-zinc batteries
NASA Technical Reports Server (NTRS)
1973-01-01
The effects of a number of cell process and performance variables upon the oxygen evolution rate of silver/silver oxide cathodes are studied to predict and measure the conditions which would result in the production of a minimum of oxygen. The following five tasks comprise the study: the design and fabrication of two pilot test cells to be used for electrode testing; the determination of the sensitivity and accuracy of the test cell; the determination of total volumes and rates of oxygen generation by cathodes made by standard production procedures; the construction of a sequential test plan; and the construction of a series of positive formation cells in which formation process factors can be controlled.
Improvements in the malaxation process to enhance the aroma quality of extra virgin olive oils.
Reboredo-Rodríguez, P; González-Barreiro, C; Cancho-Grande, B; Simal-Gándara, J
2014-09-01
The influence of olive paste preparation conditions on the standard quality parameters, as well as volatile profiles of extra virgin olive oils (EVOOs) from Morisca and Manzanilla de Sevilla cultivars produced in an emerging olive growing area in north-western Spain and processed in an oil mill plant were investigated. For this purpose, two malaxation temperatures (20/30 °C), and two malaxation times (30/90 min) selected in accordance with the customs of the area producers were tested. The volatile profile of the oils underwent a substantial change in terms of odorant series when different malaxation parameters were applied. Copyright © 2014 Elsevier Ltd. All rights reserved.
Toxic anterior segment syndrome following penetrating keratoplasty.
Maier, Philip; Birnbaum, Florian; Böhringer, Daniel; Reinhard, Thomas
2008-12-01
To describe an outbreak of toxic anterior segment syndrome (TASS) following penetrating keratoplasty (PK) and to examine its possible causes. Owing to a series of TASS following PK between June 6, 2007, and October 2, 2007, we reviewed the records of all patients who had undergone PK during that time. In addition to routine microbial tests on organ culture media, we looked for specific pathogens and endotoxins in all of the materials used for organ culture or PK. Furthermore, we analyzed all of the perioperative products and instrument processing. Of the 94 patients who underwent PK, we observed 24 cases of postoperative sterile keratitis. Causal research revealed that the accumulation of cleaning substances or heat-stable endotoxins on the surface of the routinely used guided trephine system was most likely responsible for the TASS. To our knowledge, this is the first report on TASS following PK. Suboptimal reprocessing of surgical instruments may be an important cause of TASS as in this series the TASS-like symptoms resolved after modified instrument-cleaning procedures. The standardization of protocols for processing reusable trephine systems might prevent outbreaks of TASS following PK.
Long time-series of turbid coastal water using AVHRR: An example from Florida Bay, USA
Stumpf, R.P.; Frayer, M.L.
1997-01-01
The AVHRR can provide information on the reflectance of turbid case II water, permitting examination of large estuaries and plumes from major rivers. The AVHRR has been onboard several NOAA satellites, with afternoon overpasses since 1981, offering a long time-series to examine changes in coastal water. We are using AVHRR data starting in December 1989, to examine water clarity in Florida Bay, which has undergone a decline since the late 1980's. The processing involves obtaining a nominal reflectance for red light with standard corrections including those for Rayleigh and aerosol path radiances. Established relationships between reflectance and the water properties being measured in the Bay provide estimates of diffuse attenuation and light limitation for phytoplankton and seagrass productivity studies. Processing also includes monthly averages of reflectance and attenuation. The AVHRR data set describes spatial and temporal patterns, including resuspension of bottom sediments in the winter, and changes in water clarity. The AVHRR also indicates that Florida Bay has much higher reflectivity relative to attenuation than other southeastern US estuaries. © 2005 SPIE - The International Society for Optical Engineering.
Long time-series of turbid coastal water using AVHRR: an example from Florida Bay, USA
NASA Astrophysics Data System (ADS)
Stumpf, Richard P.; Frayer, M. L.
1997-02-01
The AVHRR can provide information on the reflectance of turbid case II water, permitting examination of large estuaries and plumes from major rivers. The AVHRR has been onboard several NOAA satellites, with afternoon overpasses since 1981, offering a long time-series to examine changes in coastal water. We are using AVHRR data starting in December 1989, to examine water clarity in Florida Bay, which has undergone a decline since the late 1980's. The processing involves obtaining a nominal reflectance for red light with standard corrections including those for Rayleigh and aerosol path radiances. Established relationships between reflectance and the water properties being measured in the Bay provide estimates of diffuse attenuation and light limitation for phytoplankton and seagrass productivity studies. Processing also includes monthly averages of reflectance and attenuation. The AVHRR data set describes spatial and temporal patterns, including resuspension of bottom sediments in the winter, and changes in water clarity. The AVHRR also indicates that Florida Bay has much higher reflectivity relative to attenuation than other southeastern US estuaries.
Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras
Harris, A.J.L.; Thornber, C.R.
1999-01-01
GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) Gradual variations in radiance reveal steady flow-field extension and tube development. (b) Discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m2 of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video camera and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these are correlated with video-recorded short-burst effusive events. Less ambiguous events are interpreted, assessed and related to specific volcanic events by simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near-real-time, so such time series can contribute to three main monitoring functions: (a) automatically alerting major effusive events; (b) event confirmation and assessment; and (c) establishing effusive event chronology.
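A minimal sketch of the spike-flagging criterion described above (samples elevated more than two standard deviations above the series mean); the synthetic radiance record, burst amplitude, and burst count are assumptions standing in for the GOES data, and, as in the study, flagged spikes would still need confirmation against independent observations.

import numpy as np

rng = np.random.default_rng(8)

# Synthetic radiance record standing in for the GOES time series, with a few short bursts
n = 4000
radiance = 10.0 + rng.normal(0.0, 0.5, n)
burst_idx = rng.choice(n, size=5, replace=False)
radiance[burst_idx] += 5.0

# Flag samples elevated more than two standard deviations above the series mean
threshold = radiance.mean() + 2.0 * radiance.std()
spikes = np.flatnonzero(radiance > threshold)

# A plain 2-sigma cut also flags ordinary noise excursions, which is why flagged spikes
# are cross-checked against video-camera and ground-observer records before interpretation.
flagged_bursts = np.isin(burst_idx, spikes).sum()
print(f"{spikes.size} samples exceed the 2-sigma threshold; {flagged_bursts} of 5 injected bursts flagged")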
NASA Astrophysics Data System (ADS)
Horvath, Alexander; Horwath, Martin; Pail, Roland
2014-05-01
The Release-05 monthly solutions by the three centers of the GRACE Science and Data System are a significant improvement with respect to the previous Release 4. Meanwhile, previous assessments have revealed different noise levels between the solutions by CSR, GFZ and JPL, and also different amplitudes of interannual signal in the solutions by GFZ as compared to the two other centers. Encouraged by the science community, GFZ and CSR have kindly provided additional sets of time series. GFZ has reprocessed the RL05 monthly solutions (up to degree and order 90) with revised processing. CSR has made available monthly solutions with standard processing up to degree and order 96, in addition to their solutions up to degree and order 60. We compare these different time series with respect to their signal and noise content and analyze them on global and regional scales. At the regional scale our particular interest is Antarctica and the recovery of polar signals such as ice mass trends and GIA. Given the necessity of destriping, an optimal choice for the setup of the Swenson & Wahr filter approach is evaluated to adapt to the specific signal and noise level in Antarctica. Furthermore, we analyze the potential benefit of mixed time series solutions in order to combine the strengths of the available solutions. Concerning the question of an optimal maximum degree, we suggest that for resolving large polar ice mass changes it would be beneficial to provide gravity field variations even beyond degree 90.
NASA Astrophysics Data System (ADS)
Mercadante, Katie Lynn
The Next Generation Science Standards (NGSS) are the culmination of reform efforts spanning more than three decades and are the first major reform movement in science education since Sputnik. When implementing these new standards, teachers are faced with many barriers. NGSS requires critical thinking, cross-curricular learning, and key changes in teaching, learning, and assessment. Implementation nationwide has been slow due to the sweeping changes and controversial content within the standards. Resistance to implementation occurs at nearly all levels for these reasons. The purpose of this descriptive study was to determine the perceptions of in-service teachers of the NGSS Framework, to identify barriers that inhibit implementation, and to identify commonalities among teachers who have successfully implemented the Framework, as well as to assist others who may do the same in the future. Teachers from public, private, and charter schools from across the United States participated in the study. Based upon teacher responses, a three-stage action plan and a series of necessary recommendations were developed to assist teachers and administrators in K-12 schools to develop plans to implement the NGSS.
Waddell, J Neil; Payne, Alan G T; Swain, Michael V; Kieser, Jules A
2010-03-01
Soldered or cast bars are used as a standard of care in attachment systems supporting maxillary and mandibular implant overdentures. When failures of these bars occur, there is currently a lack of evidence regarding their specific etiology, location, or nature. The aim was to investigate the failure process in a case series of six failed soldered bars, four intact soldered bars, and one intact cast milled bar, all of which had been supporting implant overdentures. A total of 11 different overdenture bars were removed from patients with different configurations of opposing arches. A failed bar (FB) group (n = 6) had failed soldered overdenture bars, which were recovered from patients following up to 2 years of wear before requiring prosthodontic maintenance and repair. An intact bar (IB) group (n = 5) had both soldered bars and a single cast milled bar, which had been worn by patients for 2 to 5 years prior to receiving other aspects of prosthodontic maintenance. All bars were examined using scanning electron microscopy to establish the possible mode of failure (FB) or to identify evidence of potential failure in the future (IB). Evidence of a progressive failure mode of corrosion fatigue and creep was observed on all the FB and IB, usually around the solder areas and the nonoxidizing gold cylinder. Fatigue and creep were also observed in all the IB. Where the level of corrosion was substantial, there was no evidence of wear from the matrices of the attachment system. Evidence of an instantaneous failure mode, ductile and brittle overload, was observed on the fracture surfaces of all the FB, within the solder and the nonoxidizing gold cylinders, at the solder/cylinder interface. Corrosion, followed by corrosion fatigue, appears to be a key factor in the onset of the failure process for overdenture bars in this case series of both maxillary and mandibular overdentures. The limited sample size and lack of standardization identify trends only and prevent broad interpretation of the findings.
Education Requirements for Natural Resource Based Outdoor Recreation Professionals.
ERIC Educational Resources Information Center
Elsner, Gary; And Others
The Office of Personnel Management should designate a new professional series for hiring individuals in outdoor recreational management. A new professional series would help set a standard for professionals with training in both resource management and the social sciences. Recommended educational requirements for the series include: (1) natural…
Shassere, Benjamin A.; Yamamoto, Yukinori; Babu, Sudarsanam Suresh
2016-02-23
Detailed microstructure characterization of Grade 91 (Modified 9Cr-1Mo, ASTM A387) steel subjected to a thermo-mechanical treatment (TMT) process was performed to rationalize the cross-weld creep properties. A series of thermo-mechanical processing in the austenite phase region, followed by isothermal aging at temperatures of 973 to 1173 K (700 to 900 °C), was applied to the Grade 91 steel to promote precipitation kinetics of MX (M: Nb and V, X: C and N) in the austenite matrix. Detailed characterization of the base metals after standard tempering confirmed the presence of fine MX dispersion within the tempered martensitic microstructure in steels processed at and above 1073 K (800 °C). A relatively low volume fraction of M23C6 precipitates was observed after processing at 1073 K (800 °C). The cross-weld creep strength after processing was increased with respect to the increase of MX dispersion, indicating that these MX precipitates are maintained during weld thermal cycles in the fine grained heat affected zone (FGHAZ) region and thereby contribute to the improved creep resistance of welds in comparison to the welds made with the standard "normalization and tempering" processes. Lastly, the steels processed in this specific processing condition showed improved cross-weld creep resistance and sufficient room-temperature toughness. The above data are also analyzed based on existing theories of creep deformation governed by the dislocation climb mechanism.
NASA Astrophysics Data System (ADS)
Shassere, Benjamin A.; Yamamoto, Yukinori; Babu, Sudarsanam Suresh
2016-05-01
Detailed microstructure characterization of Grade 91 (Modified 9Cr-1Mo, ASTM A387) steel subjected to a thermo-mechanical treatment process was performed to rationalize the cross-weld creep properties. A series of thermo-mechanical processing in the austenite phase region, followed by isothermal aging at temperatures of 973 K to 1173 K (700 °C to 900 °C), was applied to the Grade 91 steel to promote precipitation kinetics of MX (M: Nb and V, X: C and N) in the austenite matrix. Detailed characterization of the base metals after standard tempering confirmed the presence of fine MX dispersion within the tempered martensitic microstructure in steels processed at and above 1073 K (800 °C). A relatively low volume fraction of M23C6 precipitates was observed after processing at 1073 K (800 °C). The cross-weld creep strength after processing was increased with respect to the increase of MX dispersion, indicating that these MX precipitates are maintained during weld thermal cycles in the fine-grained heat-affected zone region and thereby contribute to the improved creep resistance of welds in comparison to the welds made with the standard "normalization and tempering" processes. The steels processed in this specific processing condition showed improved cross-weld creep resistance and sufficient room temperature toughness. The above data are also analyzed based on existing theories of creep deformation governed by the dislocation climb mechanism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shassere, Benjamin A.; Yamamoto, Yukinori; Babu, Sudarsanam Suresh
Detailed microstructure characterization of Grade 91 (Modified 9Cr-1Mo, ASTM A387) steel subjected to a thermo-mechanical treatment (TMT) process was performed to rationalize the cross-weld creep properties. A series of thermo-mechanical processing in the austenite phase region, followed by isothermal aging at temperatures of 973 to 1173 K (700 to 900 °C), was applied to the Grade 91 steel to promote precipitation kinetics of MX (M: Nb and V, X: C and N) in the austenite matrix. Detailed characterization of the base metals after standard tempering confirmed the presence of fine MX dispersion within the tempered martensitic microstructure in steels processed at and above 1073 K (800 °C). A relatively low volume fraction of M23C6 precipitates was observed after processing at 1073 K (800 °C). The cross-weld creep strength after processing was increased with respect to the increase of MX dispersion, indicating that these MX precipitates are maintained during weld thermal cycles in the fine grained heat affected zone (FGHAZ) region and thereby contribute to the improved creep resistance of welds in comparison to the welds made with the standard "normalization and tempering" processes. Lastly, the steels processed in this specific processing condition showed improved cross-weld creep resistance and sufficient room-temperature toughness. The above data are also analyzed based on existing theories of creep deformation governed by the dislocation climb mechanism.
Shanks, Orin C; Kelty, Catherine A; Oshiro, Robin; Haugland, Richard A; Madi, Tania; Brooks, Lauren; Field, Katharine G; Sivaganesan, Mano
2016-05-01
There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria proposed in this study should help scientists, managers, reviewers, and the public evaluate the technical quality of future findings against an established benchmark. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
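The calibration model performance measures listed above (correlation coefficient, amplification efficiency, lower limit of quantification) are conventionally derived from a dilution-series standard curve. The sketch below computes the first two from synthetic Cq values using the standard relation E = 10^(-1/slope) - 1; the acceptance thresholds of the study itself are not reproduced here.

# Sketch of calibration-curve metrics commonly reported for qPCR (not the study's
# specific acceptance criteria): slope/intercept by least squares, correlation
# coefficient, and amplification efficiency.
import numpy as np

def calibration_metrics(log10_copies, cq_values):
    log10_copies = np.asarray(log10_copies, float)
    cq = np.asarray(cq_values, float)
    slope, intercept = np.polyfit(log10_copies, cq, 1)
    r = np.corrcoef(log10_copies, cq)[0, 1]
    efficiency = 10.0 ** (-1.0 / slope) - 1.0      # 1.0 corresponds to 100% efficiency
    return {"slope": slope, "intercept": intercept, "r2": r ** 2,
            "efficiency": efficiency}

if __name__ == "__main__":
    dilutions = [5, 4, 3, 2, 1]                    # log10 target copies per reaction
    cq = [18.1, 21.5, 24.9, 28.3, 31.8]            # synthetic Cq values
    print(calibration_metrics(dilutions, cq))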
3D-printed devices for continuous-flow organic chemistry.
Dragone, Vincenza; Sans, Victor; Rosnes, Mali H; Kitson, Philip J; Cronin, Leroy
2013-01-01
We present a study in which the versatility of 3D-printing is combined with the processing advantages of flow chemistry for the synthesis of organic compounds. Robust and inexpensive 3D-printed reactionware devices are easily connected using standard fittings, resulting in complex, custom-made flow systems, including multiple reactors in series with in-line, real-time analysis using an ATR-IR flow cell. As a proof of concept, we utilized two types of organic reactions, imine syntheses and imine reductions, to show how different reactor configurations and substrates give different products.
Why Are Chemists and Other Scientists Afraid of the Peer Review of Teaching?
NASA Astrophysics Data System (ADS)
Atwood, Charles H.; Taylor, James W.; Hutchings, Pat A.
2000-02-01
This paper presents a series of arguments that teaching should be subjected to review standards similar to those that chemical research employs. Through peer review, the hope is to elevate the status of teaching in academe. The paper also describes a national effort through the American Association for Higher Education and the Carnegie Foundation for the Advancement of Teaching to establish a peer-review process appropriate for teaching. Finally, an examination of some of the perceived barriers to peer review, including fear, is detailed.
2016-11-01
The modernization strategy of traditional Chinese medicine (TCM) has been implemented for 20 years. A great deal of basic and innovative research has been done on the basic theory of TCM, effective substances, efficacy evaluation, mechanisms of action, intracorporal metabolic processes, safety evaluation, clinical evaluation and quality standards. As a result, a series of remarkable scientific achievements has been generated, promoting the interpretation of the connotations of TCM, supporting the industrial development of TCM and accelerating the internationalization of TCM. Copyright© by the Chinese Pharmaceutical Association.
Gang, Wei-juan; Wang, Xin; Wang, Fang; Dong, Guo-feng; Wu, Xiao-dong
2015-08-01
The national standard "Regulations of Acupuncture-needle Manipulating Techniques" is one of the national Criteria of Acupuncturology, for which a total of 22 items have already been established. In the process of formulation, a series of common and specific problems were encountered. In the present paper, the authors expound these problems from three aspects, namely principles for formulation, methods for formulating criteria, and considerations about particular problems. The formulation principles include the selection and regulation of principles for technique classification and technique-related key factors. The main methods for formulating criteria are 1) taking the literature as the theoretical foundation, 2) taking clinical practice as the supporting evidence, and 3) subjecting the expounded suggestions and conclusions to peer review.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
... the test program. This would include the costs for a current or comparable pre-test or pre-standard... Partner; Testing of Patient Litters and Patient Restraints to Proposed Test Standard Authority: 29 U.S.C... developed a series of proposed ambulance component test standards. One such standard, AMD STANDARD 004...
Nonlinear Analysis of Surface EMG Time Series
NASA Astrophysics Data System (ADS)
Zurcher, Ulrich; Kaufman, Miron; Sung, Paul
2004-04-01
Applications of nonlinear analysis of surface electromyography time series of patients with and without low back pain are presented. Limitations of the standard methods based on the power spectrum are discussed.
Weak ergodicity breaking, irreproducibility, and ageing in anomalous diffusion processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metzler, Ralf
2014-01-14
Single particle traces are standardly evaluated in terms of time averages of the second moment of the position time series r(t). For ergodic processes, one can interpret such results in terms of the known theories for the corresponding ensemble averaged quantities. In anomalous diffusion processes, which are widely observed in nature over many orders of magnitude, the equivalence between (long) time and ensemble averages may be broken (weak ergodicity breaking), and these time averages may no longer be interpreted in terms of ensemble theories. Here we detail some recent results on weakly non-ergodic systems with respect to the time averaged mean squared displacement, the inherent irreproducibility of individual measurements, and methods to determine the exact underlying stochastic process. We also address the phenomenon of ageing, the dependence of physical observables on the time span between initial preparation of the system and the start of the measurement.
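The central observable discussed above, the time-averaged mean squared displacement of a single trace, can be written down in a few lines. The following sketch computes it for an ordinary Brownian trajectory; for a weakly non-ergodic process the same estimator would scatter from trace to trace.

# Sketch of the time-averaged mean squared displacement (TA-MSD) of one trajectory.
import numpy as np

def ta_msd(positions, max_lag=None):
    """positions: array (T,) or (T, d); returns TA-MSD for lags 1..max_lag."""
    x = np.atleast_2d(np.asarray(positions, float).T).T   # ensure shape (T, d)
    T = x.shape[0]
    max_lag = max_lag or T // 4
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = x[lag:] - x[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    traj = np.cumsum(rng.normal(size=(10000, 2)), axis=0)   # ordinary Brownian trace
    print(ta_msd(traj, max_lag=5))                          # grows ~linearly with lag here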
IDCDACS: IDC's Distributed Application Control System
NASA Astrophysics Data System (ADS)
Ertl, Martin; Boresch, Alexander; Kianička, Ján; Sudakov, Alexander; Tomuta, Elena
2015-04-01
The Preparatory Commission for the CTBTO is an international organization based in Vienna, Austria. Its mission is to establish a global verification regime to monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), which bans all nuclear explosions. For this purpose time series data from a global network of seismic, hydro-acoustic and infrasound (SHI) sensors are transmitted to the International Data Centre (IDC) in Vienna in near-real-time, where they are processed to locate events that may be nuclear explosions. We have designed a new distributed application control system that glues together the various components of the automatic waveform data processing system at the IDC (IDCDACS). Our highly scalable solution preserves the existing architecture of the IDC processing system that proved successful over many years of operational use, but replaces proprietary components with open-source solutions and custom developed software. Existing code was refactored and extended to obtain a reusable software framework that is flexibly adaptable to different types of processing workflows. Automatic data processing is organized in series of self-contained processing steps, each series being referred to as a processing pipeline. Pipelines process data by time intervals, i.e. the time-series data received from monitoring stations are organized in segments based on the time when the data were recorded. So-called data monitor applications queue the data for processing in each pipeline based on specific conditions, e.g. data availability, elapsed time or completion states of preceding processing pipelines. IDCDACS consists of a configurable number of distributed monitoring and controlling processes, a message broker and a relational database. All processes communicate through message queues hosted on the message broker. Persistent state information is stored in the database. A configurable processing controller instantiates and monitors all data processing applications. Due to decoupling by message queues the system is highly versatile and failure tolerant. The implementation utilizes the RabbitMQ open-source messaging platform that is based upon the Advanced Message Queuing Protocol (AMQP), an on-the-wire protocol (like HTTP) and open industry standard. IDCDACS uses high availability capabilities provided by RabbitMQ and is equipped with failure recovery features to survive network and server outages. It is implemented in C and Python and is operated in a Linux environment at the IDC. Although IDCDACS was specifically designed for the existing IDC processing system, its architecture is generic and reusable for different automatic processing workflows, e.g. similar to those described in (Olivieri et al. 2012, Kværna et al. 2012). Major advantages are its independence of the specific data processing applications used and the possibility to reconfigure IDCDACS for different types of processing, data and trigger logic. A possible future development would be to use the IDCDACS framework for different scientific domains, e.g. for processing of Earth observation satellite data, extending the one-dimensional time-series intervals to spatio-temporal data cubes. REFERENCES Olivieri, M., and J. Clinton (2012), An almost fair comparison between Earthworm and SeisComp3, Seismological Research Letters, 83(4), 720-727. Kværna, T., S. J. Gibbons, D. B. Harris, and D. A. Dodge (2012), Adapting pipeline architectures to track developing aftershock sequences and recurrent explosions, Proceedings of the 2012 Monitoring Research Review: Ground-Based Nuclear Explosion Monitoring Technologies, 776-785.
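The data-monitor/pipeline pattern described above maps naturally onto a producer and a consumer sharing a queue on the message broker. The sketch below uses the pika RabbitMQ client to illustrate that pattern only; the queue name and message format are hypothetical and are not taken from the actual IDCDACS configuration.

# Minimal sketch (hypothetical queue name and message format, not IDCDACS itself):
# a "data monitor" publishes a processing interval to RabbitMQ and a pipeline worker
# consumes it.  Requires a reachable RabbitMQ broker and the pika client library.
import json
import pika

QUEUE = "pipeline.intervals"          # hypothetical queue name

def publish_interval(start_iso, end_iso):
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.basic_publish(exchange="", routing_key=QUEUE,
                          body=json.dumps({"start": start_iso, "end": end_iso}))
    conn.close()

def run_worker():
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue=QUEUE, durable=True)

    def on_message(ch, method, properties, body):
        interval = json.loads(body)
        print("processing interval", interval)      # stand-in for a pipeline step
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
    channel.start_consuming()

if __name__ == "__main__":
    publish_interval("2015-04-01T00:00:00Z", "2015-04-01T00:10:00Z")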
Managing Highway Maintenance: Standards for Maintenance Work, Part 1, Unit 8, Level 2.
ERIC Educational Resources Information Center
Federal Highway Administration (DOT), Washington, DC. Offices of Research and Development.
Part of the series "Managing Highway Maintenance," the unit is about maintenance standards and is designed for superintendents and senior foremen who are responsible for scheduling and controlling routine maintenance. It describes different kinds of standards, why and how standards are developed, and how standards are to be used and…
ERIC Educational Resources Information Center
Kouba, Vicky L.; Champagne, Audrey B.; Piscitelli, Michael; Havasy, Monique; White, Kara; Hurley, Marlene
A study analyzed in detail the perspectives in science and mathematics literacy found in the national standards for science and mathematics. The National Science Education Standards (NSES), the Benchmarks for Science Literacy, the Curriculum and Evaluation Standards for School Mathematics, and the Professional Teaching Standards for School…
System 80+™ standard design: CESSAR design certification. Volume 5: Amendment I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report has been prepared in support of the industry effort to standardize nuclear plant designs. The documents in this series describe the Combustion Engineering, Inc. System 80+™ Standard Design.
A method for analyzing temporal patterns of variability of a time series from Poincare plots.
Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E
2012-07-01
The Poincaré plot is a popular two-dimensional, time series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variabilities. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding a richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis and can be applied more generally, including Poincaré plots with multiple clusters, and more consistently than the conventional measures and can address questions regarding potential structure underlying the variability of a data set.
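The conventional analysis that TPV builds on is the lagged return map: the point cloud (x_n, x_{n+tau}) together with its SD1/SD2 descriptors. The sketch below reproduces only that conventional part at several delays; the temporal quantification introduced by the paper is not implemented here.

# Sketch of the conventional lagged Poincare (return-map) analysis of R-R intervals.
import numpy as np

def poincare_points(rr_intervals, tau=1):
    x = np.asarray(rr_intervals, float)
    return x[:-tau], x[tau:]

def sd1_sd2(rr_intervals, tau=1):
    x1, x2 = poincare_points(rr_intervals, tau)
    sd1 = np.std((x2 - x1) / np.sqrt(2.0), ddof=1)   # dispersion across the identity line
    sd2 = np.std((x2 + x1) / np.sqrt(2.0), ddof=1)   # dispersion along the identity line
    return sd1, sd2

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    rr = 0.8 + 0.05 * rng.standard_normal(500)       # synthetic R-R intervals (s)
    for tau in (1, 2, 5):                            # multiple delays, as in the paper
        print(tau, sd1_sd2(rr, tau))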
SURMODERR: A MATLAB toolbox for estimation of velocity uncertainties of a non-permanent GPS station
NASA Astrophysics Data System (ADS)
Teza, Giordano; Pesci, Arianna; Casula, Giuseppe
2010-08-01
SURMODERR is a MATLAB toolbox intended for the estimation of reliable velocity uncertainties of a non-permanent GPS station (NPS), i.e. a GPS receiver used in campaign-style measurements. The implemented method is based on the subsampling of daily coordinate time series of one or more continuous GPS stations located inside or close to the area where the NPSs are installed. The continuous time series are subsampled according to real or planned occupation tables and random errors occurring in antenna replacement on different surveys are taken into account. In order to overcome the uncertainty underestimation that typically characterizes short duration GPS time series, statistical analysis of the simulated data is performed to estimate the velocity uncertainties of this real NPS. The basic hypotheses required are: (i) the signal must be a long-term linear trend plus seasonal and colored noise for each coordinate; (ii) the standard data processing should have already been performed to provide daily data series; and (iii) if the method is applied to survey planning, the future behavior should not be significantly different from the past behavior. In order to show the strength of the approach, two case studies with real data are presented and discussed (Central Apennine and Panarea Island, Italy).
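The core idea, subsampling a continuous series at campaign epochs and measuring the spread of fitted velocities, can be illustrated without the toolbox itself. The Monte Carlo sketch below uses a synthetic trend-plus-seasonal series and hypothetical occupation epochs and setup error; it is not SURMODERR code.

# Illustrative Monte Carlo sketch of campaign-velocity uncertainty estimation.
import numpy as np

def simulate_daily_series(days, velocity=2.0, annual_amp=3.0, noise=1.5, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(days) / 365.25                              # time in years
    y = velocity * t + annual_amp * np.sin(2 * np.pi * t) + noise * rng.standard_normal(days)
    return t, y

def campaign_velocity_spread(t, series, occupation_epochs, setup_error=1.0,
                             n_trials=500, seed=1):
    rng = np.random.default_rng(seed)
    vels = []
    for _ in range(n_trials):
        ts = t[occupation_epochs]
        ys = series[occupation_epochs] + setup_error * rng.standard_normal(len(occupation_epochs))
        vels.append(np.polyfit(ts, ys, 1)[0])                 # fitted velocity per trial
    return np.std(vels)

if __name__ == "__main__":
    t, series = simulate_daily_series(10 * 365)
    epochs = np.array([100, 500, 1200, 2100, 3000])           # hypothetical campaign days
    print("velocity sigma (units/yr):", campaign_velocity_spread(t, series, epochs))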
InSAR Deformation Time Series Processed On-Demand in the Cloud
NASA Astrophysics Data System (ADS)
Horn, W. B.; Weeden, R.; Dimarchi, H.; Arko, S. A.; Hogenson, K.
2017-12-01
During this past year, ASF has developed a cloud-based on-demand processing system known as HyP3 (http://hyp3.asf.alaska.edu/), the Hybrid Pluggable Processing Pipeline, for Synthetic Aperture Radar (SAR) data. The system makes it easy for a user who doesn't have the time or inclination to install and use complex SAR processing software to leverage SAR data in their research or operations. One such processing workflow is the generation of a deformation time series product: a series of images representing ground displacements over time, computed from a time series of interferometric SAR (InSAR) products. The set of software tools necessary to generate this useful product is difficult to install, configure, and use. Moreover, for a long time series with many images, the processing of just the interferograms can take days. Principally built by three undergraduate students at the ASF DAAC, the deformation time series processing relies on the new Amazon Batch service, which enables processing of jobs with complex interconnected dependencies in a straightforward and efficient manner. In the case of generating a deformation time series product from a stack of single-look complex SAR images, the system uses Batch to serialize the up-front processing, interferogram generation, optional tropospheric correction, and deformation time series generation. The most time consuming portion is the interferogram generation, because even for a fairly small stack of images many interferograms need to be processed. By using AWS Batch, the interferograms are all generated in parallel; the entire process completes in hours rather than days. Additionally, the individual interferograms are saved in Amazon's cloud storage, so that when new data are acquired in the stack, an updated time series product can be generated with minimal additional processing. This presentation will focus on the development techniques and enabling technologies that were used in developing the time series processing in the ASF HyP3 system. Data and process flow from job submission through to order completion will be shown, highlighting the benefits of the cloud for each step.
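Chaining parallel interferogram jobs into a final time-series job is what Batch job dependencies provide. The sketch below shows that pattern through boto3; the region, job queue, job definition, and commands are hypothetical stand-ins rather than the actual HyP3 configuration, and running it requires valid AWS credentials and an existing Batch setup.

# Hedged sketch of job chaining with Amazon Batch dependencies via boto3.
import boto3

batch = boto3.client("batch", region_name="us-east-1")   # hypothetical region

def submit(name, command, depends_on=None):
    resp = batch.submit_job(
        jobName=name,
        jobQueue="insar-queue",                    # hypothetical queue name
        jobDefinition="insar-processor",           # hypothetical job definition
        containerOverrides={"command": command},
        dependsOn=depends_on or [],
    )
    return resp["jobId"]

# Interferogram jobs run in parallel; the time-series job starts only after all finish.
pairs = [("S1A_20170101", "S1A_20170113"), ("S1A_20170113", "S1A_20170125")]
ifg_ids = [submit(f"ifg-{a}-{b}", ["make_ifg", a, b]) for a, b in pairs]
submit("deformation-time-series", ["make_timeseries"],
       depends_on=[{"jobId": jid} for jid in ifg_ids])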
Scaling properties of Polish rain series
NASA Astrophysics Data System (ADS)
Licznar, P.
2009-04-01
Scaling properties as well as the multifractal nature of precipitation time series had not been studied for local Polish conditions until recently due to the lack of long series of high-resolution data. The first Polish study of precipitation time series scaling phenomena was based on pluviograph data from the Wroclaw University of Environmental and Life Sciences meteorological station located in the south-western part of the country. The 38 annual rainfall records from the years 1962-2004 were converted into digital form and transformed into standard 5-minute time series. The scaling properties and multifractal character of this material were studied by means of several different techniques: power spectral density analysis, functional box-counting, probability distribution/multiple scaling and trace moment methods. The results demonstrated the general scaling character of the time series over time scales ranging from 5 minutes up to at least 24 hours. At the same time, some characteristic breaks in scaling behavior were recognized. It is believed that the breaks were artificial, arising from the precision limitations of the pluviograph rain gauge. Especially strong limitations in the recording precision of low-intensity precipitation were found to be the main reason for the artificial break in the energy spectra, as reported by other authors before. The analysis of codimension and moment scaling functions showed signs of a first-order multifractal phase transition. Such behavior is typical for dressed multifractal processes that are observed by spatial or temporal averaging on scales larger than the inner scale of those processes. The fractal dimension of the rainfall process support derived from the geometry of the codimension and moment scaling functions was found to be 0.45. The same fractal dimension estimated by means of the functional box-counting method was equal to 0.58. In the final part of the study, implementation of the double trace moment method allowed estimation of local universal multifractal rainfall parameters (α=0.69; C1=0.34; H=-0.01). The research demonstrated the fractal character of the rainfall process support and the multifractal character of the rainfall intensity variability in the analyzed time series. It is believed that the scaling of local Wroclaw rainfall over timescales from 24 hours down to 5 minutes opens the door to future research concerning, for example, random cascade implementations for disaggregating daily precipitation totals into smaller time intervals. The output of such random cascades, in the form of 5-minute synthetic rainfall scenarios, could be of great practical use for urban hydrology and for the design and hydrodynamic modeling of storm water and combined sewage conveyance systems.
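One of the scaling diagnostics used above, the power spectral density and its log-log slope, is straightforward to reproduce. The sketch below estimates the spectral exponent of a 5-minute series with Welch's method over an assumed frequency band; the synthetic input and the band limits are placeholders, not the Wroclaw data or the breaks reported in the study.

# Sketch of spectral-slope estimation for a 5-minute precipitation series.
import numpy as np
from scipy.signal import welch

def spectral_slope(rain, fs=1.0 / 300.0, fmin=1e-5, fmax=1e-3):
    """rain: 5-min intensities; fs in Hz (one sample per 300 s); band limits assumed."""
    f, pxx = welch(rain, fs=fs, nperseg=4096)
    sel = (f >= fmin) & (f <= fmax) & (pxx > 0)
    slope, _ = np.polyfit(np.log10(f[sel]), np.log10(pxx[sel]), 1)
    return -slope                                   # convention: P(f) ~ f**(-beta)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    synthetic = np.abs(np.cumsum(rng.standard_normal(2 ** 16)))   # toy correlated series
    print("beta ~", spectral_slope(synthetic))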
76 FR 17191 - Staff Accounting Bulletin No. 114
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-28
...This Staff Accounting Bulletin (SAB) revises or rescinds portions of the interpretive guidance included in the codification of the Staff Accounting Bulletin Series. This update is intended to make the relevant interpretive guidance consistent with current authoritative accounting guidance issued as part of the Financial Accounting Standards Board's Accounting Standards Codification. The principal changes involve revision or removal of accounting guidance references and other conforming changes to ensure consistency of referencing throughout the SAB Series.
Fisher, William A; Gruenwald, Ilan; Jannini, Emmanuele A; Lev-Sagie, Ahinoam; Lowenstein, Lior; Pyke, Robert E; Reisman, Yakov; Revicki, Dennis A; Rubio-Aurioles, Eusebio
2017-01-01
This series of articles, Standards for Clinical Trials in Male and Female Sexual Dysfunction, began with the discussion of a common expected standard for clinical trial design in male and female sexual dysfunction, a common rationale for the design of phase I to IV clinical trials, and common considerations for the selection of study population and study duration in male and female sexual dysfunction. The second article in this series discussed fundamental principles in development, validation, and selection of patient- (and partner-) reported outcome assessment. The third and present article in this series discusses selected aspects of sexual dysfunction that are unique to male sexual dysfunctions and relevant to the conduct of clinical trials of candidate treatments for men. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
Functional DNA: Teaching Infinite Series through Genetic Analogy
ERIC Educational Resources Information Center
Kowalski, R. Travis
2011-01-01
This article presents an extended analogy that connects infinite sequences and series to the science of genetics, by identifying power series as "DNA for a function." This analogy allows standard topics such as convergence tests or Taylor approximations to be recast in a "forensic" light as mathematical analogs of genetic concepts such as DNA…
FATS: Feature Analysis for Time Series
NASA Astrophysics Data System (ADS)
Nun, Isadora; Protopapas, Pavlos; Sim, Brandon; Zhu, Ming; Dave, Rahul; Castro, Nicolas; Pichara, Karim
2017-11-01
FATS facilitates and standardizes feature extraction for time series data; it quickly and efficiently calculates a compilation of many existing light curve features. Users can characterize or analyze an astronomical photometric database, though this library is not necessarily restricted to the astronomical domain and can also be applied to any kind of time series data.
Desperately Seeking Standards: Bridging the Gap from Concept to Reality.
ERIC Educational Resources Information Center
Jones, A. James; Gardner, Carrie; Zaenglein, Judith L.
1998-01-01
Discussion of national standards for information-and-technology literacy focuses on experiences at one school where national standards were synthesized by library media specialists to develop local standards as well as a series of benchmarks by which student achievement could be measured. (Author/LRW)
Simel, David L; Rennie, Drummond; Bossuyt, Patrick M M
2008-06-01
The Standards for Reporting of Diagnostic Accuracy (STARD) statement provided guidelines for investigators conducting diagnostic accuracy studies. We reviewed each item in the statement for its applicability to clinical examination diagnostic accuracy research, viewing each discrete aspect of the history and physical examination as a diagnostic test. Nonsystematic review of the STARD statement. Two former STARD Group participants and 1 editor of a journal series on clinical examination research reviewed each STARD item. Suggested interpretations and comments were shared to develop consensus. The STARD Statement applies generally well to clinical examination diagnostic accuracy studies. Three items are the most important for clinical examination diagnostic accuracy studies, and investigators should pay particular attention to their requirements: describe carefully the patient recruitment process, describe participant sampling and address if patients were from a consecutive series, and describe whether the clinicians were masked to the reference standard tests and whether the interpretation of the reference standard test was masked to the clinical examination components or overall clinical impression. The consideration of these and the other STARD items in clinical examination diagnostic research studies would improve the quality of investigations and strengthen conclusions reached by practicing clinicians. The STARD statement provides a very useful framework for diagnostic accuracy studies. The group correctly anticipated that there would be nuances applicable to studies of the clinical examination. We offer guidance that should enhance their usefulness to investigators embarking on original studies of a patient's history and physical examination.
Tracheobronchial Mycosis in a Retrospective Case-Series Study of Five Status Asthmaticus Patients
Mak, Garbo; Porter, Paul C.; Bandi, Venkata; Kheradmand, Farrah; Corry, David B.
2013-01-01
The aetiology of status asthmaticus (SA), a complication of severe asthma, is unknown. Fungal exposure, as measured by fungal atopy, is a major risk factor for developing asthma, but the relationship of fungi to SA per se has not previously been reported. In this five-patient retrospective case series study, lower respiratory tract cultures were performed on bronchoalveolar lavage or tracheal aspirate fluid, comparing standard clinical laboratory cultures with a specialized technique in which respiratory mucus was removed prior to culture. We show that mucolytic treatment allows increased detection of fungal growth, especially yeast, from the lower airways of all SA patients. We also demonstrate that inhalation of the yeast Candida albicans readily induces asthma-like disease in mice. Our observations suggest that SA may represent a fungal infectious process, and they support additional prospective studies utilizing anti-fungal therapy to supplement conventional therapy, broad-spectrum antibiotics and high-dose glucocorticoids, which can promote fungal overgrowth. PMID:23280490
Workshop on Algorithms for Time-Series Analysis
NASA Astrophysics Data System (ADS)
Protopapas, Pavlos
2012-04-01
This Workshop covered the four major subjects listed below in two 90-minute sessions. Each talk or tutorial allowed questions, and concluded with a discussion. Classification: Automatic classification using machine-learning methods is becoming a standard in surveys that generate large datasets. Ashish Mahabal (Caltech) reviewed various methods, and presented examples of several applications. Time-Series Modelling: Suzanne Aigrain (Oxford University) discussed autoregressive models and multivariate approaches such as Gaussian Processes. Meta-classification/mixture of expert models: Karim Pichara (Pontificia Universidad Católica, Chile) described the substantial promise which machine-learning classification methods are now showing in automatic classification, and discussed how the various methods can be combined together. Event Detection: Pavlos Protopapas (Harvard) addressed methods of fast identification of events with low signal-to-noise ratios, enlarging on the characterization and statistical issues of low signal-to-noise ratios and rare events.
NASA Astrophysics Data System (ADS)
Hooper, Richard; Zaslavsky, Ilya; Parodi, Antonio; Gochis, David; Jha, Shantenu; Whitenack, Thomas; Valentine, David; Caumont, Olivier; Dekic, Ljiljana; Ivkovic, Marija; Molini, Luca; Bedrina, Tatiana; Gijsbers, Peter J. A.; de Rooij, Erik; Rebora, Nicola
2013-04-01
To enable a plug-and-play infrastructure, the European DRIHM (Distributed Research Infrastructure for Hydro-Meteorology) project aims to develop a comprehensive data publication and sharing system presenting uniform standards-based data discovery and access interfaces for hydrometeorological data collected by DRIHM partners in several European countries. This is a challenging task due to heterogeneity in types of data being collected and organized for modeling, and different semantic and structural conventions adopted by different data publishers. To meet this goal, the DRIHM project, and its DRIHM2US extension, collaborates with the recently funded US SCIHM (Standards-based Cyberinfrastructure for HydroMeteorology) project to develop a data sharing infrastructure for time series information. We report initial results of the application of the data integrating technologies developed by the NSF-funded CUAHSI HIS (Consortium of Universities for the Advancement of Hydrologic Data, Inc., Hydrologic Information System) project, to information collected within DRIHM. The CUAHSI HIS system has been widely used in the US; it provides access to about a hundred water data collections that can be queried via uniform web services. The DRIHM partners initially implementing the system, include the CIMA Research Foundation (Italy), the French National Center for Scientific Research (CNRS), and the Republic Hydrometeorological Service of Serbia. The collected time series information was ingested into CUAHSI Observations Data Model databases, and water data services were created for each of the partners. At the time of writing, the water data services include SOAP and REST endpoints that provide access to the time series in WaterML 1 and WaterML 2.0 formats. The former encoding, developed by CUAHSI HIS, has been adopted by a number of federal agencies and research groups in the US, while the latter, created by an international group of experts under the aegis of the Hydrology Domain Working Group of the Open Geospatial Consortium and the World Meteorological Organization, has been recently adopted as an international standard for exchanging hydrologic data. The services are registered at the central catalog, which supports discovery of time series based on their spatio-temporal characteristics and on variable semantics. The semantics of the measurements is aligned , in the process of publication and service registration, with a set of centrally managed controlled vocabularies and a parameter vocabulary. Data from multiple collections can be discovered, accessed, visualized and analyzed using CUAHSI HydroDesktop software, or other clients that support water data service interfaces. While HydroDesktop relies on WaterML 1 format, the Delft FEWS system, maintained by a DRIHM partner Deltares, is already capable of ingesting WaterML 2.0 services, which enables interfacing WaterML 2.0-formatted streams with forecast models. Development of a consistent data sharing system is one of the first steps in realizing DRIHM objectives. Next steps include further integration of water data services with simulation models, WRF and WRF-Hydro in particular, and integrating point time series with gridded information to be used as model input. We discuss these additional steps and associated challenges, and demonstrate the complementarity and benefits of US-EU collaboration in the development of global standards-based infrastructure for hydrometeorological data.
Concept Study on a Flexible Standard Bus for Small Scientific Satellites
NASA Astrophysics Data System (ADS)
Fukuda, Seisuke; Sawai, Shujiro; Sakai, Shin-Ichiro; Saito, Hirobumi; Tohma, Takayuki; Takahashi, Junko; Toriumi, Tsuyoshi; Kitade, Kenji
In this paper, a new standard bus system for a series of small scientific satellites in the Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency (ISAS/JAXA) is described. Since each mission proposed for the series has a wide variety of requirements, substantial effort is needed to enhance the flexibility of the standard bus. Some concepts from different viewpoints are proposed. First, standardization layers concerning satellite configuration, instruments, interfaces, and design methods are defined. Methods of product platform engineering, which classify specifications of the bus system into a core platform, alternative variants, and selectable variants, are also investigated in order to realize a semi-custom-made bus. Furthermore, a tradeoff between integration and modularization architectures is fully considered.
A Systematic Evaluation of Blood Serum and Plasma Pre-Analytics for Metabolomics Cohort Studies
Jobard, Elodie; Trédan, Olivier; Postoly, Déborah; André, Fabrice; Martin, Anne-Laure; Elena-Herrmann, Bénédicte; Boyault, Sandrine
2016-01-01
The recent thriving development of biobanks and associated high-throughput phenotyping studies requires the elaboration of large-scale approaches for monitoring biological sample quality and compliance with standard protocols. We present a metabolomic investigation of human blood samples that delineates pitfalls and guidelines for the collection, storage and handling procedures for serum and plasma. A series of eight pre-processing technical parameters is systematically investigated along variable ranges commonly encountered across clinical studies. While metabolic fingerprints, as assessed by nuclear magnetic resonance, are not significantly affected by altered centrifugation parameters or delays between sample pre-processing (blood centrifugation) and storage, our metabolomic investigation highlights that both the delay and storage temperature between blood draw and centrifugation are the primary parameters impacting serum and plasma metabolic profiles. Storing the blood drawn at 4 °C is shown to be a reliable routine to confine variability associated with idle time prior to sample pre-processing. Based on their fine sensitivity to pre-analytical parameters and protocol variations, metabolic fingerprints could be exploited as valuable ways to determine compliance with standard procedures and quality assessment of blood samples within large multi-omic clinical and translational cohort studies. PMID:27929400
Synthesis of β-Peptide Standards for Use in Model Prebiotic Reactions
NASA Astrophysics Data System (ADS)
Forsythe, Jay G.; English, Sloane L.; Simoneaux, Rachel E.; Weber, Arthur L.
2018-05-01
A one-pot method was developed for the preparation of a series of β-alanine standards of moderate size (2 to ≥12 residues) for studies concerning the prebiotic origins of peptides. The one-pot synthesis involved two sequential reactions: (1) dry-down self-condensation of β-alanine methyl ester, yielding β-alanine peptide methyl ester oligomers, and (2) subsequent hydrolysis of β-alanine peptide methyl ester oligomers, producing a series of β-alanine peptide standards. These standards were then spiked into a model prebiotic product mixture to confirm by HPLC the formation of β-alanine peptides under plausible reaction conditions. The simplicity of this approach suggests it can be used to prepare a variety of β-peptide standards for investigating differences between α- and β-peptides in the context of prebiotic chemistry.
Introducing Python tools for magnetotellurics: MTpy
NASA Astrophysics Data System (ADS)
Krieger, L.; Peacock, J.; Inverarity, K.; Thiel, S.; Robertson, K.
2013-12-01
Within the framework of geophysical exploration techniques, the magnetotelluric method (MT) is relatively immature: it is still not as widespread as other geophysical methods such as seismology, and its processing schemes and data formats are not thoroughly standardized. As a result, the file handling and processing software within the academic community is mainly based on a loose collection of codes, which are sometimes highly adapted to the respective local specifications. Although tools for the estimation of the frequency-dependent MT transfer function, as well as inversion and modelling codes, are available, the standards and software for handling MT data are generally not unified throughout the community. To overcome problems that arise from missing standards, and to simplify the general handling of MT data, we have developed the software package "MTpy", which allows the handling, processing, and imaging of magnetotelluric data sets. It is written in Python and the code is open-source. The setup of this package follows the modular approach of successful software packages like GMT or Obspy. It contains sub-packages and modules for various tasks within the standard MT data processing and handling scheme. Besides pure Python classes and functions, MTpy provides wrappers and convenience scripts to call external software, e.g. modelling and inversion codes. Even though still under development, MTpy already contains ca. 250 functions that work on raw and preprocessed data. However, as our aim is not to produce a static collection of software, we rather introduce MTpy as a flexible framework, which will be dynamically extended in the future. It then has the potential to help standardise processing procedures and at the same time be a versatile supplement to existing algorithms. We introduce the concept and structure of MTpy, and we illustrate the workflow of MT data processing utilising MTpy on an example data set collected over a geothermal exploration site in South Australia. Figure caption: Workflow of MT data processing. Within the structural diagram, the MTpy sub-packages are shown in red (time series data processing), green (handling of EDI files and impedance tensor data), yellow (connection to modelling/inversion algorithms), black (impedance tensor interpretation, e.g. by Phase Tensor calculations), and blue (generation of visual representations, e.g. pseudo-sections or resistivity models).
Using XML Configuration-Driven Development to Create a Customizable Ground Data System
NASA Technical Reports Server (NTRS)
Nash, Brent; DeMore, Martha
2009-01-01
The Mission data Processing and Control Subsystem (MPCS) is being developed as a multi-mission Ground Data System with the Mars Science Laboratory (MSL) as the first fully supported mission. MPCS is a fully featured, Java-based Ground Data System (GDS) for telecommand and telemetry processing based on Configuration-Driven Development (CDD). The eXtensible Markup Language (XML) is the ideal language for CDD because it is easily readable and editable by all levels of users and is also backed by a World Wide Web Consortium (W3C) standard and numerous powerful processing tools that make it uniquely flexible. The CDD approach adopted by MPCS minimizes changes to compiled code by using XML to create a series of configuration files that provide both coarse and fine grained control over all aspects of GDS operation.
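Configuration-driven development of the kind described above means that behaviour is defined in XML read at start-up rather than in compiled code. The sketch below illustrates the idea with an invented channel-table fragment parsed by Python's ElementTree; the element and attribute names are hypothetical and do not reflect MPCS's actual dictionary schema.

# Hypothetical illustration of configuration-driven behaviour (invented schema, not MPCS):
# a telemetry channel table is read from XML at start-up instead of being compiled in.
import xml.etree.ElementTree as ET

CONFIG = """
<gds_config>
  <channel id="THRM-001" name="battery_temp" type="float" units="degC"/>
  <channel id="PWR-010"  name="bus_voltage"  type="float" units="V"/>
</gds_config>
"""

def load_channels(xml_text):
    root = ET.fromstring(xml_text)
    return {c.get("id"): {"name": c.get("name"),
                          "type": c.get("type"),
                          "units": c.get("units")}
            for c in root.findall("channel")}

if __name__ == "__main__":
    channels = load_channels(CONFIG)
    print(channels["THRM-001"])    # behaviour changes by editing XML, not recompiling code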
Measurement of cardiac output from dynamic pulmonary circulation time CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yee, Seonghwan, E-mail: Seonghwan.Yee@Beaumont.edu; Scalzetti, Ernest M.
Purpose: To introduce a method of estimating cardiac output from the dynamic pulmonary circulation time CT that is primarily used to determine the optimal time window of CT pulmonary angiography (CTPA). Methods: Dynamic pulmonary circulation time CT series, acquired for eight patients, were retrospectively analyzed. The dynamic CT series was acquired, prior to the main CTPA, in cine mode (1 frame/s) for a single slice at the level of the main pulmonary artery covering the cross sections of ascending aorta (AA) and descending aorta (DA) during the infusion of iodinated contrast. The time series of contrast changes obtained for DA, which is downstream of AA, was assumed to be related to the time series for AA by convolution with a delay function. The delay time constant in the delay function, representing the average time interval between the cross sections of AA and DA, was determined by least-squares error fitting between the convolved AA time series and the DA time series. The cardiac output was then calculated by dividing the volume of the aortic arch between the cross sections of AA and DA (estimated from the single slice CT image) by the average time interval, and multiplying the result by a correction factor. Results: The mean cardiac output value for the six patients was 5.11 (l/min) (with a standard deviation of 1.57 l/min), which is in good agreement with the literature value; the data for the other two patients were too noisy for processing. Conclusions: The dynamic single-slice pulmonary circulation time CT series also can be used to estimate cardiac output.
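The delay-fitting step can be illustrated with a simplified forward model: treat the DA curve as the AA curve convolved with a normalized exponential kernel, fit the kernel time constant by least squares, and divide an assumed arch volume by the fitted transit time. This is a sketch of the idea only, not the authors' exact kernel, volume estimate, or correction factor.

# Simplified sketch of fitting the AA-to-DA delay constant and deriving cardiac output.
import numpy as np
from scipy.optimize import least_squares

def delayed(aa, tau, dt=1.0):
    """Convolve the AA curve with a normalized exponential delay kernel (time constant tau)."""
    t = np.arange(len(aa)) * dt
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()
    return np.convolve(aa, kernel)[: len(aa)]

def fit_delay(aa, da, dt=1.0):
    res = least_squares(lambda p: delayed(aa, p[0], dt) - da, x0=[5.0], bounds=(0.1, 60.0))
    return res.x[0]

if __name__ == "__main__":
    dt = 1.0                                   # 1 frame per second, as in the cine series
    t = np.arange(40) * dt
    aa = np.exp(-0.5 * ((t - 12) / 4.0) ** 2)  # synthetic AA enhancement curve
    da = delayed(aa, tau=4.0, dt=dt) + 0.01 * np.random.default_rng(5).standard_normal(40)
    tau_fit = fit_delay(aa, da, dt)
    arch_volume_ml = 40.0                      # assumed AA-to-DA arch segment volume (placeholder)
    print("cardiac output (l/min) ~", 60.0 * arch_volume_ml / tau_fit / 1000.0)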
NASA Astrophysics Data System (ADS)
Duchko, Andrey; Bykov, Alexandr
2015-06-01
The task of spectrum processing is as relevant as ever in molecular spectroscopy. Nevertheless, existing techniques for computing vibrational energy levels and wave functions often reach a dead end. Application of standard quantum-mechanical approaches often faces inextricable difficulties. The variational method requires enormous computational resources. Perturbational approaches, on the other hand, run up against divergent series. This is why the problem urgently requires specific resummation techniques. In this research, Rayleigh-Schrödinger perturbation theory is applied to the calculation of the energies of excited vibrational states of H_2CO. It is known that perturbation series diverge in the case of anharmonic resonance coupling between vibrational states [1]. Nevertheless, application of advanced divergent series summation techniques makes it possible to calculate the value of energy with high precision (more than 10 true digits) even for highly excited states of the molecule [2]. For this purpose we have applied several summation techniques based on high-order Padé-Hermite approximations. Our research shows that the series behaviour depends entirely on the singularities of the complex energy function inside the unit circle. Choosing an approximating function that models these singularities therefore allows the sum of the divergent series to be calculated. Our calculations for the formaldehyde molecule show that the efficiency of each summation technique depends on the resonance type. REFERENCES 1. J. Cizek, V. Spirko, and O. Bludsky, On the use of divergent series in vibrational spectroscopy. Two- and three-dimensional oscillators, J. Chem. Phys. 99, 7331 (1993). 2. A. V. Sergeev and D. Z. Goodson, Singularity analysis of fourth-order Møller-Plesset perturbation theory, J. Chem. Phys. 124, 4111 (2006).
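A minimal version of the resummation idea can be shown with ordinary Padé approximants, which scipy exposes directly (the Padé-Hermite approximants used in the work above are a higher-order generalization and are not implemented here). The example applies them to Euler's classic divergent factorial series.

# Sketch of simple Pade resummation of a divergent series using scipy.interpolate.pade.
import numpy as np
from math import factorial
from scipy.interpolate import pade

def pade_sum(coeffs, x, m):
    """Evaluate the Pade approximant (denominator order m) of sum(c_k x**k) at x."""
    p, q = pade(coeffs, m)
    return p(x) / q(x)

if __name__ == "__main__":
    # Euler's example: sum_k (-1)**k k! x**k diverges for any x > 0,
    # yet successive Pade approximants settle toward a well-defined value.
    k = np.arange(12)
    coeffs = (-1.0) ** k * np.array([factorial(int(i)) for i in k])
    for m in (2, 3, 4, 5):
        print(m, pade_sum(coeffs, 0.1, m))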
The Recalibrated Sunspot Number: Impact on Solar Cycle Predictions
NASA Astrophysics Data System (ADS)
Clette, F.; Lefevre, L.
2017-12-01
Recently and for the first time since their creation, the sunspot number and group number series were entirely revisited and a first fully recalibrated version was officially released in July 2015 by the World Data Center SILSO (Brussels). Those reference long-term series are widely used as input data or as a calibration reference by various solar cycle prediction methods. Therefore, past predictions may now need to be redone using the new sunspot series, and methods already used for predicting cycle 24 will require adaptations before attempting predictions of the next cycles. In order to clarify the nature of the applied changes, we describe the different corrections applied to the sunspot and group number series, which affect extended time periods and can reach up to 40%. While some changes simply involve constant scale factors, other corrections vary with time or follow the solar cycle modulation. Depending on the prediction method and on the selected time interval, this can lead to different responses and biases. Moreover, together with the new series, standard error estimates are also progressively added to the new sunspot numbers, which may help derive more accurate uncertainties for predicted activity indices. We conclude with the new round of recalibration now being undertaken in the framework of a broad multi-team collaboration articulated around upcoming ISSI workshops. We outline the corrections that can still be expected, as part of a permanent upgrading and quality-control process. From now on, sunspot-based predictive models should thus be made more adaptable, and regular updates of predictions should become common practice in order to track periodic upgrades of the sunspot number series, just as is done with other modern solar observational series.
The State's Formula for Success 2006
ERIC Educational Resources Information Center
Colorado Department of Education, 2006
2006-01-01
This standards review is the second in a series of annual reviews of the Colorado Model Content Standards. Its purpose is to identify student performance over time on measures of existing science standards, identify ways to affirm and strengthen standards and more clearly articulate the practices used by Colorado schools to promote student…
EPA STANDARDS NETWORK FACT SHEET: ISO 14000: INTERNATIONAL ENVIRONMENTAL MANAGEMENT STANDARDS
This flyer provides an overview of the ISO 14000 series of International standards, supplying a brief history, structure of the Technical Committee (TC) 207, structure of the U.S. Technical Advisory Group (TAG) to ISO TC-207, status of the Standards development as of June 1997, w...
GIAnT - Generic InSAR Analysis Toolbox
NASA Astrophysics Data System (ADS)
Agram, P.; Jolivet, R.; Riel, B. V.; Simons, M.; Doin, M.; Lasserre, C.; Hetland, E. A.
2012-12-01
We present a computing framework for studying the spatio-temporal evolution of ground deformation from interferometric synthetic aperture radar (InSAR) data. Several open-source tools including Repeat Orbit Interferometry PACkage (ROI-PAC) and InSAR Scientific Computing Environment (ISCE) from NASA-JPL, and Delft Object-oriented Repeat Interferometric Software (DORIS), have enabled scientists to generate individual interferograms from raw radar data with relative ease. Numerous computational techniques and algorithms that reduce phase information from multiple interferograms to a deformation time-series have been developed and verified over the past decade. However, the sharing and direct comparison of products from multiple processing approaches has been hindered by - 1) absence of simple standards for sharing of estimated time-series products, 2) use of proprietary software tools with license restrictions and 3) the closed source nature of the exact implementation of many of these algorithms. We have developed this computing framework to address all of the above issues. We attempt to take the first steps towards creating a community software repository for InSAR time-series analysis. To date, we have implemented the short baseline subset algorithm (SBAS), NSBAS and multi-scale interferometric time-series (MInTS) in this framework and the associated source code is included in the GIAnT distribution. A number of the associated routines have been optimized for performance and scalability with large data sets. Some of the new features in our processing framework are - 1) the use of daily solutions from continuous GPS stations to correct for orbit errors, 2) the use of meteorological data sets to estimate the tropospheric delay screen and 3) a data-driven bootstrapping approach to estimate the uncertainties associated with estimated time-series products. We are currently working on incorporating tidal load corrections for individual interferograms and propagation of noise covariance models through the processing chain for robust estimation of uncertainties in the deformation estimates. We will demonstrate the ease of use of our framework with results ranging from regional scale analysis around Long Valley, CA and Parkfield, CA to continental scale analysis in Western South America. We will also present preliminary results from a new time-series approach that simultaneously estimates deformation over the complete spatial domain at all time epochs on a distributed computing platform. GIAnT has been developed entirely using open source tools and uses Python as the underlying platform. We build on the extensive numerical (NumPy) and scientific (SciPy) computing Python libraries to develop an object-oriented, flexible and modular framework for time-series InSAR applications. The toolbox is currently configured to work with outputs from ROI-PAC, ISCE and DORIS, but can easily be extended to support products from other SAR/InSAR processors. The toolbox libraries include support for hierarchical data format (HDF5) memory mapped files, parallel processing with Python's multi-processing module and support for many convex optimization solvers like CSDP, CVXOPT etc. An extensive set of routines to deal with ASCII and XML files has also been included for controlling the processing parameters.
Automatic loudness control in short-form content for broadcasting.
Pires, Leandro da S; Vieira, Maurílio N; Yehia, Hani C
2017-03-01
During the early years of the International Telecommunication Union (ITU) loudness calculation standard for sound broadcasting [ITU-R (2006), Rec. BS Series, 1770], the need for additional loudness descriptors to evaluate short-form content, such as commercials and live inserts, was identified. This work proposes a loudness control scheme to prevent loudness jumps, which can bother audiences. It employs short-form content audio detection and dynamic range processing methods for the maximum loudness level criteria. Detection is achieved by combining principal component analysis for dimensionality reduction and support vector machines for binary classification. Subsequent processing is based on short-term loudness integrators and Hilbert transformers. The performance was assessed using quality classification metrics and demonstrated through a loudness control example.
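As a rough sketch of the detection stage described above (dimensionality reduction followed by binary classification), the snippet below chains PCA and an SVM with scikit-learn. The feature vectors and labels are synthetic placeholders; the actual audio features and classifier settings of the paper are not reproduced here.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-segment audio feature vectors (e.g. loudness statistics).
X = rng.normal(size=(200, 20))
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # 1 = short-form content, 0 = other (toy labels)

# Reduce dimensionality with PCA, then classify with an SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))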
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
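For illustration, a first-order (Gauss-Markov) process has the one-sided power spectral density S(f) = 2*sigma^2*tau / (1 + (2*pi*f*tau)^2), and a superposition of a few such processes with logarithmically spaced correlation times can approximate a power-law spectrum of the kind implied by an Allan variance model. The sketch below uses hypothetical variances and correlation times, not the values derived in the report.

import numpy as np

def gauss_markov_psd(f, sigma2, tau):
    """One-sided power spectral density of a first-order Gauss-Markov process."""
    return 2.0 * sigma2 * tau / (1.0 + (2.0 * np.pi * f * tau) ** 2)

# Five hypothetical processes: equal variance, correlation times spaced by decades.
taus = np.array([1e1, 1e2, 1e3, 1e4, 1e5])   # seconds
sigma2 = 1.0

f = np.logspace(-5, -2, 200)                  # band covered by the chosen correlation times
psd_sum = sum(gauss_markov_psd(f, sigma2, t) for t in taus)

# A superposition of this kind approximates a 1/f (flicker) spectrum across the band:
slope = np.polyfit(np.log10(f), np.log10(psd_sum), 1)[0]
print("log-log slope:", slope)                # expect a value near -1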
NASA Astrophysics Data System (ADS)
Steenberg, Ryan
Advancements in medicine have allowed surgeons a menu of options in post-mastectomy breast reconstruction. A conundrum exists, however, in flap selection when faced with varying patient body types. In the case of the athletic patient who does not have the appropriate amount of donor site tissue to warrant a Transverse Rectus Abdominus Musculocutaneous Flap (TRAM), the Transverse Musculocutaneous Gracilis Flap (TMG) is an appropriate alternative due to its functional and aesthetic benefits. An intricate and time-consuming process, the TMG procedure can be difficult to understand for the layperson. Therefore, a need for a condensed and standardized description exists. By breaking the process down and illustrating the procedure, one can effectively deliver the information for use across all realms of publication and education.
The Effect of Geocenter Motion on Jason-2 and Jason-1 Orbits and the Mean Sea Level
NASA Technical Reports Server (NTRS)
Melachroinos, Stavros A.; Beckley, Brian D.; Lemoine, Frank G.; Zelensky, Nikita P.; Rowlands, David D.; Luthcke, Scott B.
2012-01-01
We have investigated the impact of geocenter motion on Jason-2 orbits. This was accomplished by computing a series of Jason-1 and Jason-2 GPS-based and SLR/DORIS-based orbits using ITRF2008 and the IGS repro1 framework based on the most recent GSFC standards. From these orbits, we extract the Jason-2 orbit frame translational parameters per cycle by means of a Helmert transformation between a set of reference orbits and a set of test orbits. The fitted annual and seasonal terms of these time-series are compared to two different geocenter motion models. Subsequently, we included the geocenter motion corrections in the POD process as a degree-1 loading displacement correction to the tracking network. The analysis suggested that GSFC's Jason-2 std0905 GPS-based orbits are closely tied to the center of mass (CM) of the Earth, whereas the SLR/DORIS std0905 orbits are tied to the center of figure (CF) of the ITRF2005 (Melachroinos et al., 2012). In this study we extend the investigation to the centering of the GPS constellation and the way it is tied into the Jason-1 and Jason-2 POD process. With a new set of standards, we quantify the GPS and SLR/DORIS-based orbit centering during the Jason-1 and Jason-2 inter-calibration period, and how this impacts the orbit radial error over the globe, and hence the mean sea level (MSL) error, when the full geocenter motion correction is omitted.
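For readers unfamiliar with the procedure, estimating per-cycle translation parameters between a test and a reference orbit reduces, when rotations and scale are neglected, to a simple least-squares fit. The sketch below uses synthetic orbit positions and only the three translations of a full seven-parameter Helmert transformation.

import numpy as np

rng = np.random.default_rng(1)
ref = rng.normal(scale=7e6, size=(500, 3))               # reference orbit positions (m), synthetic
true_t = np.array([0.004, -0.002, 0.007])                # imposed geocenter-like offset (m)
test = ref + true_t + rng.normal(scale=0.001, size=ref.shape)

# With rotations and scale neglected, the translation estimate reduces to the
# mean coordinate difference between test and reference orbits.
t_hat = (test - ref).mean(axis=0)
print("estimated translation (mm):", 1e3 * t_hat)

In a full seven-parameter fit, the design matrix would additionally contain small-rotation and scale columns, but the translation terms remain the quantities of interest for geocenter-motion comparisons.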
Managing Highway Maintenance: Standards for Maintenance Work, Part 3, Unit 8, Level 2.
ERIC Educational Resources Information Center
Federal Highway Administration (DOT), Washington, DC. Offices of Research and Development.
Part of the series "Managing Highway Maintenance," the unit explains various uses of maintenance standards and how standards should be interpreted and communicated to foremen and crew leaders. Several examples are given of the decisions made when applying the standards to routine work. The preceding units on standards (parts 1 and 2)…
ERIC Educational Resources Information Center
La Marca, Paul M.; Redfield, Doris; Winter, Phoebe C.
Alignment of content standards, performance standards, and assessments is crucial. This guide contains information to assist states and districts in aligning their assessment systems to their content and performance standards. It includes a review of current literature, both published and fugitive. The research is woven together with a few basic…
Adaptive Filtering Using Recurrent Neural Networks
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.
2005-01-01
A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
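For contrast with the neural-network filter described above, the standard linear Kalman filter whose assumptions the method relaxes can be written in a few lines. The scalar random-walk example below is purely illustrative (hypothetical noise variances), showing the predict/update cycle that relies on linearity and Gaussian noise.

import numpy as np

rng = np.random.default_rng(2)
q, r = 1e-4, 1e-2                 # process and measurement noise variances (assumed known)
x_true, x_hat, p = 0.0, 0.0, 1.0  # true state, estimate, estimate variance
estimates = []

for _ in range(200):
    # Simulate a scalar random-walk state observed in white Gaussian noise.
    x_true += rng.normal(scale=np.sqrt(q))
    z = x_true + rng.normal(scale=np.sqrt(r))

    # Predict (state transition is identity for a random walk), then update.
    p = p + q
    k = p / (p + r)               # Kalman gain
    x_hat = x_hat + k * (z - x_hat)
    p = (1.0 - k) * p
    estimates.append(x_hat)

print("final estimate vs. truth:", estimates[-1], x_true)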
Symposium on Toxic Substance Control: Decontamination, April 22 - 24, 1980, Columbus, Ohio.
1981-06-01
standard decontaminants is used.
Table 1. Standard Chemical Decontaminants (Decontaminant: Agents Used On)
STB: Blister and nerve agents
DS-2: All chemical ... agents
M258 Kit (Sodium Hydroxide, Ethanol, Phenol, Water; Chloramine B, ZnCl2, Ethanol, Water): G-Series nerve agents, Blister and V-Series nerve agents
... is a point source alarm that actively samples ambient air and reacts to low concentrations of nerve agents. The M-8 alarm detector also detects several
Vasileva, Liliya V; Getova, Damianka P; Doncheva, Nina D; Marchev, Andrey S; Georgiev, Milen I
2016-12-04
Rhodiola rosea L., family Crassulaceae, also known as Golden Root or Arctic root, is one of the most widely used medicinal plants with effects on cognitive dysfunction, psychological stress and depression. The aim of the study was to examine the effect of a standardized commercial Rhodiola extract on learning and memory processes in naive rats as well as its effects in rats with scopolamine-induced memory impairment. Sixty male Wistar rats were used in the study. The experiment was conducted in two series: on naive rats and on rats with a scopolamine-induced model of memory impairment. The active avoidance test was performed in an automatic conventional shuttle box set-up. The criteria used were the number of conditioned stimuli (avoidances), the number of unconditioned stimuli (escapes) as well as the number of intertrial crossings. The chemical fingerprinting of the standardized commercial Rhodiola extract was performed by means of nuclear magnetic resonance (NMR). Naive rats treated with standardized Rhodiola extract increased the number of avoidances during the learning session and memory retention test compared to the controls. Rats with scopolamine-induced memory impairment treated with Rhodiola extract showed an increase in the number of avoidances during the learning session and on the memory tests compared to the scopolamine group. The other two parameters were not changed in rats treated with the extract of Rhodiola in the two series. It was found that the studied Rhodiola extract exerts a beneficial effect on learning and memory processes in naive rats and rats with scopolamine-induced memory impairment. The observed effect is probably due to multiple underlying mechanisms including its modulating effect on acetylcholine levels in the brain and MAO-inhibitory activity leading to stimulation of the monoamines' neurotransmission. In addition, the pronounced stress-protective properties of Rhodiola rosea L. could also play a role in the improvement of cognitive functions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
May, T; Koch-Singenstreu, M; Ebling, J; Stantscheff, R; Müller, L; Jacobi, F; Polag, D; Keppler, F; König, H
2015-08-01
A synthetic DNA fragment containing primer binding sites for the quantification of ten different microbial groups was constructed and evaluated as a reliable enumeration standard for quantitative real-time PCR (qPCR) analyses. This approach was verified, by way of example, for the quantification of several methanogenic orders and families in a series of samples drawn from a mesophilic biogas plant. Furthermore, the total amount of bacteria as well as the number of sulfate-reducing and propionic acid bacteria as potential methanogenic interaction partners were successfully determined. The obtained results indicated a highly dynamic microbial community structure which was distinctly affected by the organic loading rate, the substrate selection, and the amount of free volatile fatty acids in the fermenter. Methanosarcinales was the most predominant methanogenic order during the 3 months of observation despite fluctuating process conditions. During all trials, the modified quantification standard showed high reproducibility and efficiency, enabling this method to open up a wide range of novel application options.
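Enumeration against a synthetic DNA standard of this kind is usually done through a standard curve: quantification cycle (Cq) values measured on a dilution series of the standard are regressed against log copy number, and unknowns are read off the fitted line. The sketch below uses hypothetical Cq values and is not taken from the study.

import numpy as np

# Hypothetical dilution series of the synthetic standard: copies per reaction and measured Cq.
copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
cq_std = np.array([12.1, 15.5, 18.9, 22.3, 25.7])

# Linear standard curve: Cq = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(copies), cq_std, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0          # amplification efficiency (1.0 = 100 %)

def quantify(cq):
    """Copy number of an unknown sample from its measured Cq."""
    return 10 ** ((cq - intercept) / slope)

print("efficiency:", efficiency, "copies at Cq 20:", quantify(20.0))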
NASA Technical Reports Server (NTRS)
Goodman, Michael L.; Kwan, Chiman; Ayhan, Bulent; Shang, Eric L.
2017-01-01
A data driven, near photospheric, 3-D, non-force-free magnetohydrodynamic model predicts time series of the complete current density, and the resistive heating rate Q at the photosphere in neutral line regions (NLRs) of 14 active regions (ARs). The model is driven by time series of the magnetic field B observed by the Helioseismic & Magnetic Imager on the Solar Dynamics Observatory (SDO) satellite. Spurious Doppler periods due to SDO orbital motion are filtered out of the time series for B in every AR pixel. Errors in B due to these periods can be significant. The number of occurrences N(q) of values of Q ≥ q for each AR time series is found to be a scale invariant power law distribution, N(Q) ∝ Q^-s, above an AR dependent threshold value of Q, where 0.3952 ≤ s ≤ 0.5298 with mean and standard deviation of 0.4678 and 0.0454, indicating little variation between ARs. Observations show that the number of occurrences N(E) of coronal flares with a total energy released ≥ E obeys the same type of distribution, N(E) ∝ E^-S, above an AR dependent threshold value of E, with 0.38 ≲ S ≲ 0.60, also with little variation among ARs. Within error margins the ranges of s and S are nearly identical. This strong similarity between N(Q) and N(E) suggests a fundamental connection between the process that drives coronal flares and the process that drives photospheric NLR heating rates in ARs. In addition, results suggest it is plausible that spikes in Q, several orders of magnitude above background values, are correlated with times of the subsequent occurrence of M or X flares.
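For a distribution whose survival function follows N(≥q) ∝ q^-s above a threshold q_min, the exponent can be estimated with the maximum-likelihood (Hill-type) formula s = n / Σ ln(q_i / q_min) over the n values above the threshold. The sketch below applies this to synthetic Pareto-distributed data, not to the SDO/HMI-derived heating rates.

import numpy as np

rng = np.random.default_rng(3)
s_true, q_min = 0.47, 1.0
# Draw samples whose survival function is (q/q_min)**(-s_true), i.e. a Pareto tail.
q = q_min * (1.0 - rng.random(5000)) ** (-1.0 / s_true)

# Hill-type maximum-likelihood estimate of the tail exponent s in N(>=q) ~ q**(-s).
s_hat = q.size / np.sum(np.log(q / q_min))
print("estimated s:", s_hat)   # should be close to 0.47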
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-29
... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration 14 CFR Part 39 [Docket No. FAA-2011... installed on Airbus Model A330-200 and -300 series airplanes, Model A340-200 and -300 series airplanes, and Model A340-500 and -600 series airplanes. That NPRM proposed to supersede an existing AD. That NPRM...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
... the time-series of costs and benefits into annualized values. First, DOE calculated a present value... the present value, DOE then calculated the corresponding time-series of fixed annual payments over a... does not imply that the time-series of cost and benefits from which the annualized values were...
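The annualization described in the excerpt is a two-step calculation: discount the time series of costs and benefits to a present value, then convert that present value to a fixed annual payment with the capital recovery factor. A small illustration with hypothetical cash flows and discount rate:

# Hypothetical annual net benefits over a 5-year period and a 7 % discount rate.
benefits = [100.0, 120.0, 90.0, 110.0, 105.0]
rate, years = 0.07, len(benefits)

# Step 1: present value of the time series.
pv = sum(b / (1.0 + rate) ** (t + 1) for t, b in enumerate(benefits))

# Step 2: equivalent fixed annual payment (capital recovery factor).
crf = rate * (1.0 + rate) ** years / ((1.0 + rate) ** years - 1.0)
annualized = pv * crf
print(pv, annualized)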
Pearson correlation estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
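A minimal sketch of the kernel idea described above, assuming zero lag: products of standardized observations from the two irregularly sampled series are averaged with Gaussian weights that decay with the mismatch of their observation times. The sample data and bandwidth are illustrative, not the simulation settings of the study.

import numpy as np

def kernel_pearson(tx, x, ty, y, h):
    """Gaussian-kernel estimate of zero-lag Pearson correlation for irregular series."""
    xc = (x - x.mean()) / x.std()
    yc = (y - y.mean()) / y.std()
    dt = tx[:, None] - ty[None, :]                 # all pairwise time differences
    w = np.exp(-0.5 * (dt / h) ** 2)               # Gaussian weights
    return np.sum(w * np.outer(xc, yc)) / np.sum(w)

rng = np.random.default_rng(4)
tx = np.sort(rng.uniform(0, 100, 80))              # irregular sampling times of series x
ty = np.sort(rng.uniform(0, 100, 70))              # irregular sampling times of series y
signal = lambda t: np.sin(0.3 * t)                 # common underlying signal
x = signal(tx) + 0.3 * rng.normal(size=tx.size)
y = signal(ty) + 0.3 * rng.normal(size=ty.size)
print(kernel_pearson(tx, x, ty, y, h=2.0))

For the lagged correlation function, the weights would instead depend on the difference between the observation time separation and the estimation lag, as described in the abstract.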
The examination of headache activity using time-series research designs.
Houle, Timothy T; Remble, Thomas A; Houle, Thomas A
2005-05-01
The majority of research conducted on headache has utilized cross-sectional designs, which preclude the examination of dynamic factors and principally rely on group-level effects. The present article describes the application of an individual-oriented process model using time-series analytical techniques. The blending of a time-series approach with an interactive process model allows consideration of the relationships of intra-individual dynamic processes, while not precluding the researcher from examining inter-individual differences. The authors explore the nature of time-series data and present two necessary assumptions underlying the time-series approach. The concept of shock and its contribution to headache activity is also presented. The time-series approach is not without its problems, and two such problems are specifically reported: autocorrelation and the distribution of daily observations. The article concludes with the presentation of several analytical techniques suited to examine the time-series interactive process model.
NASA Astrophysics Data System (ADS)
Vecchio, Kenneth S.; Fitzpatrick, Michael D.; Klarstrom, Dwaine
1995-03-01
Strain-controlled low-cycle fatigue tests have been conducted in air at elevated temperature to determine the influence of subsolvus thermomechanical processing on the low-cycle fatigue (LCF) behavior of HAYNES 230 alloy. A series of tests at various strain ranges was conducted on material experimentally processed at 1121 °C, which is below the M23C6 carbide solvus temperature, and on material fully solution annealed at 1232 °C. A comparative strain-life analysis was performed on the LCF results, and the cyclic hardening/softening characteristics were examined. At 760 °C and 871 °C, the fatigue life of the experimental 230/1121 material was improved relative to the standard 230/1232 material up to a factor of 3. The fatigue life advantage of the experimental material was related primarily to a lower plastic (inelastic) strain amplitude response for a given imposed total strain range. It appears the increase in monotonic flow stress exhibited by the finer grain size experimental material has been translated into an increase in cyclic flow stress at the 760 °C and 871 °C test temperatures. Both materials exhibited entirely transgranular fatigue crack initiation and propagation modes at these temperatures. The LCF performance of the experimental material in tests performed at 982 °C was improved relative to the standard material up to a factor as high as 2. The life advantage of the 230/1121 material occurred despite having a larger plastic strain amplitude than the standard 230/1232 material for a given total strain range. Though not fully understood at present, it is suspected that this behavior is related to the deleterious influence of grain boundaries in the fatigue crack initiations of the standard processed material relative to the experimental material, and ultimately to differences in carbide morphology as a result of thermomechanical processing.
Selecting clinical quality indicators for laboratory medicine.
Barth, Julian H
2012-05-01
Quality in laboratory medicine is often described as doing the right test at the right time for the right person. Laboratory processes currently operate under the oversight of an accreditation body which gives confidence that the process is good. However, there are aspects of quality that are not measured by these processes. These are largely focused on ensuring that the most clinically appropriate test is performed and interpreted correctly. Clinical quality indicators were selected through a two-phase process. Firstly, a series of focus groups of clinical scientists were held with the aim of developing a list of quality indicators. These were subsequently ranked in order by an expert panel of primary and secondary care physicians. The 10 top indicators included the communication of critical results, comprehensive education to all users and adequate quality assurance for point-of-care testing. Laboratories should ensure their tests are used to national standards, that they have clinical utility, are calibrated to national standards and have long-term stability for chronic disease management. Laboratories should have error logs and demonstrate evidence of measures introduced to reduce chances of similar future errors. Laboratories should make a formal scientific evaluation of analytical quality. This paper describes the process of selection of quality indicators for laboratory medicine that have been validated sequentially by deliverers and users of the service. They now need to be converted into measureable variables related to outcome and validated in practice.
Beda, Alessandro; Simpson, David M; Faes, Luca
2017-01-01
The growing interest in personalized medicine requires making inferences from descriptive indexes estimated from individual recordings of physiological signals, with statistical analyses focused on individual differences between/within subjects, rather than comparing supposedly homogeneous cohorts. To this end, methods to compute confidence limits of individual estimates of descriptive indexes are needed. This study introduces numerical methods to compute such confidence limits and perform statistical comparisons between indexes derived from autoregressive (AR) modeling of individual time series. Analytical approaches are generally not viable, because the indexes are usually nonlinear functions of the AR parameters. We exploit Monte Carlo (MC) and Bootstrap (BS) methods to reproduce the sampling distribution of the AR parameters and indexes computed from them. Here, these methods are implemented for spectral and information-theoretic indexes of heart-rate variability (HRV) estimated from AR models of heart-period time series. First, the MC and BS methods are tested in a wide range of synthetic HRV time series, showing good agreement with a gold-standard approach (i.e. multiple realizations of the "true" process driving the simulation). Then, real HRV time series measured from volunteers performing cognitive tasks are considered, documenting (i) the strong variability of confidence limits' width across recordings, (ii) the diversity of individual responses to the same task, and (iii) frequent disagreement between the cohort-average response and that of many individuals. We conclude that MC and BS methods are robust in estimating confidence limits of these AR-based indexes and thus recommended for short-term HRV analysis. Moreover, the strong inter-individual differences in the response to tasks shown by AR-based indexes evidence the need of individual-by-individual assessments of HRV features. Given their generality, MC and BS methods are promising for applications in biomedical signal processing and beyond, providing a powerful new tool for assessing the confidence limits of indexes estimated from individual recordings.
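As a sketch of the residual-bootstrap idea (not the authors' exact implementation), one can fit an AR model to an individual series, resample its residuals to rebuild surrogate series, refit, and take percentiles of the re-estimated index. Below, the index is the low-frequency spectral power of the fitted AR model and the heart-period series is synthetic.

import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns lag coefficients and residuals."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a, x[p:] - X @ a

def ar_lf_power(a, sigma2, band=(0.04, 0.15), fs=1.0, nf=512):
    """Spectral power of the fitted AR model in a low-frequency band (the index)."""
    f = np.linspace(1e-4, fs / 2, nf)
    H = 1 - sum(ak * np.exp(-2j * np.pi * f * (k + 1) / fs) for k, ak in enumerate(a))
    psd = sigma2 / np.abs(H) ** 2
    sel = (f >= band[0]) & (f <= band[1])
    return np.trapz(psd[sel], f[sel])

rng = np.random.default_rng(5)
p, n = 4, 300
# Synthetic stand-in for a zero-mean, detrended heart-period series: an AR(2) process.
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.02)

a, resid = fit_ar(x, p)
index_hat = ar_lf_power(a, resid.var())

# Residual bootstrap: rebuild surrogate series from the fitted model, refit, re-estimate.
boot = []
for _ in range(500):
    e = rng.choice(resid, size=n - p, replace=True)
    xb = list(x[:p])
    for t in range(n - p):
        xb.append(sum(a[k] * xb[-k - 1] for k in range(p)) + e[t])
    ab, rb = fit_ar(np.array(xb), p)
    boot.append(ar_lf_power(ab, rb.var()))

lo, hi = np.percentile(boot, [2.5, 97.5])
print("index:", index_hat, "95% CI:", (lo, hi))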
NASA Astrophysics Data System (ADS)
Jiang, Weiping; Deng, Liansheng; Zhou, Xiaohui; Ma, Yifang
2014-05-01
Higher-order ionospheric (HIO) corrections are proposed to become a standard part of precise GPS data analysis. For this study, we investigate in depth the impacts of the HIO corrections on coordinate time series by re-processing GPS data from the Crustal Movement Observation Network of China (CMONOC). Nearly 13 years of data are used in our three processing runs: (a) run NO, without HIO corrections; (b) run IG, in which both second- and third-order corrections are modeled using the International Geomagnetic Reference Field 11 (IGRF11) for the magnetic field; and (c) run ID, the same as IG but with a dipole magnetic model applied. Both spectral analysis and noise analysis are adopted to investigate these effects. Results show that for CMONOC stations, HIO corrections bring an overall improvement. After the corrections are applied, the noise amplitudes decrease, with the white noise amplitudes showing the more remarkable variation. Low-latitude sites are more affected. For different coordinate components, the impacts vary. The results of an analysis of stacked periodograms show that there is a good match between the seasonal amplitudes and the HIO corrections, and that the observed variations in the coordinate time series are related to HIO effects. HIO delays partially explain the seasonal amplitudes in the coordinate time series, especially for the U component. The annual amplitudes for all components are decreased for over one-half of the selected CMONOC sites. Additionally, the semi-annual amplitudes for the sites are much more strongly affected by the corrections. However, when the dipole model is used, the results are not as good as with the IGRF model. Analysis of the dipole model indicates that HIO delays modeled this way lead to an increase in noise amplitudes and can generate false periodic signals. When the dipole model is used in modeling the HIO terms, larger residuals and noise are introduced rather than effective improvements.
NASA Technical Reports Server (NTRS)
Smith, Elizabeth A.
2001-01-01
Standard, text-book based learning for earth, ocean, and atmospheric sciences has been limited by the unavailability of quantitative teaching materials. While a descriptive presentation, in a lecture format, of discrete satellite images is often adequate for high school classrooms, this is seldom the case at the undergraduate level. In order to address these concerns, a series of numerical exercises for the Macintosh was developed for use with satellite-derived Sea Surface Temperature, pigment and sea ice concentration data. Using a modified version of NIH Image, to analyze actual satellite data, students are able to better understand ocean processes, such as circulation, upwelling, primary production, and ocean/atmosphere coupling. Graphical plots, image math, and numerical comparisons are utilized to substantiate temporal and spatial trends in sea surface temperature and ocean color. Particularly for institutions that do not offer a program in remote sensing, the subject matter is presented as modular units, each of which can be readily incorporated into existing curricula. These materials have been produced in both CD-ROM and WWW format, making them useful for classroom or lab setting. Depending upon the level of available computer support, graphics can be displayed directly from the CD-ROM, or as a series of color view graphs for standard overhead projection.
NASA Astrophysics Data System (ADS)
Witt, Thomas J.; Fletcher, N. E.
2010-10-01
We investigate some statistical properties of ac voltages from a white noise source measured with a digital lock-in amplifier equipped with finite impulse response output filters which introduce correlations between successive voltage values. The main goal of this work is to propose simple solutions to account for correlations when calculating the standard deviation of the mean (SDM) for a sequence of measurement data acquired using such an instrument. The problem is treated by time series analysis based on a moving average model of the filtering process. Theoretical expressions are derived for the power spectral density (PSD), the autocorrelation function, the equivalent noise bandwidth and the Allan variance; all are related to the SDM. At most three parameters suffice to specify any of the above quantities: the filter time constant, the time between successive measurements (both set by the lock-in operator) and the PSD of the white noise input, h0. Our white noise source is a resistor so that the PSD is easily calculated; there are no free parameters. Theoretical expressions are checked against their respective sample estimates and, with the exception of two of the bandwidth estimates, agreement to within 11% or better is found.
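A generic way to account for filter-induced correlations when quoting the standard deviation of the mean is to rescale the naive value by an effective number of independent samples computed from the sample autocorrelation; the sketch below does this for synthetic moving-average-filtered white noise. It illustrates the general principle only, not the specific moving-average expressions derived in the paper.

import numpy as np

rng = np.random.default_rng(6)
white = rng.normal(size=5000)
# Emulate an output filter with a short moving average, which correlates successive samples.
x = np.convolve(white, np.ones(8) / 8, mode="valid")
n = x.size

# Sample autocorrelation up to a modest maximum lag.
xc = x - x.mean()
acf = np.array([np.dot(xc[:n - k], xc[k:]) / np.dot(xc, xc) for k in range(50)])

# Effective number of independent samples and corrected standard deviation of the mean.
n_eff = n / (1.0 + 2.0 * np.sum((1 - np.arange(1, 50) / n) * acf[1:]))
sdm_naive = x.std(ddof=1) / np.sqrt(n)
sdm_corrected = x.std(ddof=1) / np.sqrt(n_eff)
print(sdm_naive, sdm_corrected)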
Energy availabilities for state and local development: projected energy patterns for 1985 and 1990
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogt, D. P.; Rice, P. L.; Corey, T. A.
1979-11-01
This report (one of a series) presents projections of the energy supply, demand, and net imports of seven fuel types (gasoline, distillates, residual oil, crude, natural gas, coal, electricity) and four final consuming sectors. To facilitate detailed regional analysis these projections have been prepared for Bureau of Economic Analysis (BEA) areas, states, census regions, and the nation for 1985 and 1990. The data are formatted to present regional energy availability from primary extraction, as well as from energy-transformation processes. The tables depict energy balances between availability and use for each specific fuel. The objective of this series is to provide a consistent base of historic and projected energy information within a standard format. Such a framework should aid regional policymakers in their consideration of regional growth issues that may be influenced by the regional energy system. However, for analysis of specific regions, this basic data should be supplemented by additional information which only the local policy analyst can bring to bear in his or her assessment of the energy conditions that characterize the region. Earlier volumes in this series have proved useful for both specific and general analysis of this type, and it is hoped that the current volume will prove equally so. This volume presents an updated benchmark projection series, which captures recent developments in the business as usual projections of energy supply and consumption due to national policy developments since the 1976 National Energy Outlook projection series were prepared.
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-07-25
This paper presents four algorithms to generate random forecast error time series, and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve other statistical characteristics as well. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
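The general idea behind such generators can be illustrated with a first-order autoregressive model whose parameters are chosen so that the synthetic error series matches the standard deviation and lag-1 autocorrelation of a historical error series. This non-seasonal AR(1) sketch with made-up data is far simpler than the four algorithms compared in the paper.

import numpy as np

rng = np.random.default_rng(7)
hist_err = rng.normal(scale=50.0, size=1000)                # stand-in for historical forecast errors (MW)
hist_err = np.convolve(hist_err, [0.6, 0.4], mode="valid")  # give it some autocorrelation

# Target statistics to reproduce.
sigma = hist_err.std()
rho1 = np.corrcoef(hist_err[:-1], hist_err[1:])[0, 1]       # lag-1 autocorrelation

# AR(1) generator: e_t = rho1 * e_{t-1} + w_t, with w_t scaled to hit the target variance.
n = 2000
w_scale = sigma * np.sqrt(1.0 - rho1 ** 2)
err = np.zeros(n)
for t in range(1, n):
    err[t] = rho1 * err[t - 1] + rng.normal(scale=w_scale)

print("target (sigma, rho1):", sigma, rho1)
print("generated           :", err.std(), np.corrcoef(err[:-1], err[1:])[0, 1])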
Certification standards transfer: from committee to laboratory.
Lehmann, H P
1998-12-01
The ISO 9000 Standards series were developed to provide the international manufacturing industry with a framework to ensure purchased products meet quality criteria. Section 4 of ISO 9001, Quality System Model for Quality Assurance in Design, Development, Production, Installation and Servicing, contains 20 aspects of a quality system that must be addressed by an organization in order to receive ISO 9001 certification. This concept is extended to the clinical laboratory, where a quality system program establishes for the customer (patient/clinician) that the purchased product (requested information on a submitted specimen-test result) meets established quality norms. In order to satisfy the customer, the providing organization must have policies and procedures in place that ensure a quality product, and be certified. To become certified the organization must, through an inspection process, demonstrate to an independent accrediting agency that it meets defined standards. In the United States, the government through the Clinical Laboratory Improvement Amendment (CLIA) 1988 established quality standards for the clinical laboratory. The College of American Pathologists (CAP), through its Laboratory Accreditation Program (LAP), serves as an independent agency that certifies that laboratories meet standards. To demonstrate the applicability of an established clinical laboratory accreditation program to ISO 9001 certification, the standards and checklists of CLIA 1988 and the CAP LAP will be examined to determine their conformance to ISO 9001, Section 4.
Contact sensitization to cosmetic series of allergens in a general population in Beijing.
Zhao, Jian; Li, Lin-Feng
2014-03-01
Cosmetic allergic contact dermatitis (CACD) due to common cosmetic allergens in standard series has been extensively studied; however, the prevalence of contact allergy to cosmetic allergens other than those in standard series is largely unknown. In this study, the frequency of contact sensitization to a European cosmetic series of allergens (Chemotechnique Diagnostics, Vellinge, Sweden) was determined in healthy university student volunteers in Beijing. Of 201 students studied, fifty-eight exhibited positive results, and 9 of them reported having had cosmetics-related dermatitis previously. The total positivity rate was not correlated with gender. The leading allergens were thimerosal (19.4%), shellac (3.0%), cocamidopropyl betaine (2.0%), hexamethylenetetramine (1.5%), dodecyl gallate (1.5%), hexahydro-1,3,5-tris-(2-hydroxyethyl)triazine (1.0%) and methyldibromo glutaronitrile (1.0%). The positivity rate of the thimerosal patch test in men (9.8%) was lower than that of women (23.6%, P < 0.05, Chi square test), but no difference could be found between the prevalence of other cosmetic allergens in men and women (P > 0.05, Chi square test). These results suggest that some cosmetic-related contact allergies may be missed by testing patients with the European standard series or T.R.U.E. test system only; we recommend shellac, cocamidopropyl betaine, hexamethylenetetramine and dodecyl gallate as additional candidates for patch testing in patients with suspected CACD. © 2014 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Wei; Jarnagin, Ronald E.; Gowri, Krishnan
2008-09-30
This Technical Support Document (TSD) describes the process and methodology for development of the Advanced Energy Design Guide for Highway Lodgings (AEDG-HL or the Guide), a design guidance document intended to provide recommendations for achieving 30% energy savings in highway lodging properties over levels contained in ANSI/ASHRAE/IESNA Standard 90.1-1999, Energy Standard for Buildings Except Low-Rise Residential Buildings. The AEDG-HL is the fifth in a series of guides being developed by a partnership of organizations, including the American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (ASHRAE), the American Institute of Architects (AIA), the Illuminating Engineering Society of North America (IESNA), the United States Green Buildings Council (USGBC), and the U.S. Department of Energy (DOE).
Three-dimensional measurement system for crime scene documentation
NASA Astrophysics Data System (ADS)
Adamczyk, Marcin; Hołowko, Elwira; Lech, Krzysztof; Michoński, Jakub; Mączkowski, Grzegorz; Bolewicki, Paweł; Januszkiewicz, Kamil; Sitnik, Robert
2017-10-01
Three-dimensional measurements (such as photogrammetry, Time of Flight, Structure from Motion or Structured Light techniques) are becoming a standard in the crime scene documentation process. The use of 3D measurement techniques provides an opportunity to prepare a more insightful investigation and helps to show every trace in the context of the entire crime scene. In this paper we present a hierarchical, three-dimensional measurement system designed for the crime scene documentation process. Our system reflects current standards in crime scene documentation: it is designed to perform measurements in two stages. The first stage of documentation, the most general, is performed with a scanner with relatively low spatial resolution but a large measuring volume, and is used to document the whole scene. The second stage is much more detailed: higher resolution but a smaller measuring volume, for areas that require a more detailed approach. The documentation process is supervised by a specialised application, CrimeView3D, which is a software platform for measurement management (connecting with scanners and carrying out measurements, with automatic or semi-automatic data registration in real time) and data visualisation (3D visualisation of documented scenes). It also provides a series of useful tools for forensic technicians: a virtual measuring tape, searching for sources of blood spatter, a virtual walk through the crime scene, and many others. In this paper we present our measuring system and the developed software. We also report on the metrological validation of the scanners, performed according to the VDI/VDE standard, and present results from measurement sessions conducted on real crime scenes in cooperation with technicians from the Central Forensic Laboratory of the Police.
NASA Astrophysics Data System (ADS)
Bernatowicz, P.; Czerski, I.; Jaźwiński, J.; Szymański, S.
2004-08-01
In the standard NMR spectra, the lineshape patterns produced by a molecular rate process are often poorly structured. When alternative theoretical models of such a process are to be compared, even quantitative lineshape fits may then give inconclusive results. A detailed description is presented of an approach involving fits of the competing models to series of Carr-Purcell echo spectra. Its high discriminative power has already been exploited in a number of cases of practical significance. An explanation is given why it can be superior to methods based on the standard spectra. Its applicability in practice is now illustrated on example of the methyl proton spectra in 1,2,3,4-tetrachloro-9,10-dimethyltriptycene TCDMT. It is shown that, in the echo spectra, the recently discovered effect of nonclassical stochastic reorientation of the methyl group can be identified clearly while it is practically nondiscernible in the standard spectra of TCDMT. This is the first detection of the effect at temperatures above 200 K. It is also shown that in computer-assisted interpretation of exchange-broadened echo spectra, the usual description of the stimulating radiofrequency pulses in terms of rotation operators ought to be replaced by a more realistic pulse model.
ERIC Educational Resources Information Center
Fretwell, David H.; Lewis, Morgan V.; Deij, Arjen
The key issues, alternatives, and implications for developing countries to consider when designing systems to define occupational standards, related training standards, and assessments were analyzed. The analysis focused on the following issues: the rationale for development of standards; clarification of definitions, terminology, and assumptions;…
Systematic Evaluation and Uncertainty Analysis of the Refuse-Derived Fuel Process in Taiwan.
Chang, Ying-Hsi; Chang, Ni-Bin; Chen, W C
1998-06-01
In the last few years, Taiwan has set a bold agenda in solid waste recycling and incineration programs. Not only were recycling activities and incineration projects promoted by government agencies, but the related laws and regulations were continuously promulgated by the Legislative Yuan. The solid waste presorting process that is to be considered prior to the existing incineration facilities has received wide attention. This paper presents a thorough evaluation of the first refuse-derived fuel pilot process from both quantitative and qualitative aspects. The process is to be installed and integrated with a large-scale municipal incinerator. This pilot process, developed by an engineering firm in Tainan County, consists of standard unit operations of shredding, magnetic separation, trommel screening, and air classification. A series of sampling and analyses was initiated in order to characterize its potential in the solid waste management system. The probabilistic modeling of various types of waste properties derived in this analysis may provide a basic understanding of system reliability.
The Impact of Proposed Radio Frequency Radiation Standards on Military Operations.
1985-03-01
psychological testing has been accomplished on a number of the overexposees. The evaluators have, on occasion, attempted to draw some conclusions, but ... series lasting approximately 3 to 4 minutes. Since the average of the staircase current during the period of each series of tests was slightly below the
Guede-Fernandez, F; Ferrer-Mileo, V; Ramos-Castro, J; Fernandez-Chimeno, M; Garcia-Gonzalez, M A
2015-01-01
The aim of this paper is to present a smartphone-based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from the built-in rear camera at the maximum available rate (30 Hz), and the smartphone GPU is used through the Renderscript API for high-performance frame-by-frame image acquisition and computing in order to obtain the PPG signal and PP interval time series. The relative error of mean heart rate is negligible. In addition, the influence of measurement posture and smartphone model on the beat-to-beat error of heart rate and HRV indices has been analyzed. The standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in the supine measurement posture, a significant device influence on the SDE was found, the SDE being lower with the Samsung S5 than with the Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time processing of Android camera frames.
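Conceptually, the processing chain reduces to extracting a mean intensity per camera frame (the PPG sample), detecting pulse peaks, and differencing the peak times to obtain PP intervals. The sketch below assumes a hypothetical 30 Hz per-frame intensity series and uses SciPy's peak detector, which is not necessarily the method used in the paper.

import numpy as np
from scipy.signal import find_peaks

fs = 30.0                                        # camera frame rate (Hz)
t = np.arange(0, 60, 1 / fs)
# Hypothetical per-frame mean intensities: a 72 bpm pulse plus noise.
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(8).normal(size=t.size)

# Detect pulse peaks at least 0.4 s apart, then form the PP interval series (ms).
peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))
pp_ms = np.diff(peaks) / fs * 1000.0
print("mean heart rate (bpm):", 60000.0 / pp_ms.mean())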
Examination of a size-change test for photovoltaic encapsulation materials
NASA Astrophysics Data System (ADS)
Miller, David C.; Gu, Xiaohong; Ji, Liang; Kelly, George; Nickel, Nichole; Norum, Paul; Shioda, Tsuyoshi; Tamizhmani, Govindasamy; Wohlgemuth, John H.
2012-10-01
We examine a proposed test standard that can be used to evaluate the maximum representative change in linear dimensions of sheet encapsulation products for photovoltaic modules (resulting from their thermal processing). The proposed protocol is part of a series of material-level tests being developed within Working Group 2 of the Technical Committee 82 of the International Electrotechnical Commission. The characterization tests are being developed to aid module design (by identifying the essential characteristics that should be communicated on a datasheet), quality control (via internal material acceptance and process control), and failure analysis. Discovery and interlaboratory experiments were used to select particular parameters for the size-change test. The choice of a sand substrate and aluminum carrier is explored relative to other options. The temperature uniformity of +/-5°C for the substrate was confirmed using thermography. Considerations related to the heating device (hot-plate or oven) are explored. The time duration of 5 minutes was identified from the time-series photographic characterization of material specimens (EVA, ionomer, PVB, TPO, and TPU). The test procedure was revised to account for observed effects of size and edges. The interlaboratory study identified typical size-change characteristics, and also verified the absolute reproducibility of +/-5% between laboratories.
Kernel methods and flexible inference for complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Capobianco, Enrico
2008-07-01
Approximation theory suggests that series expansions and projections represent standard tools for random process applications from both numerical and statistical standpoints. Such instruments emphasize the role of both sparsity and smoothness for compression purposes, the decorrelation power achieved in the expansion coefficients space compared to the signal space, and the reproducing kernel property when some special conditions are met. We consider these three aspects central to the discussion in this paper, and attempt to analyze the characteristics of some known approximation instruments employed in a complex application domain such as financial market time series. Volatility models are often built ad hoc, parametrically and through very sophisticated methodologies. But they can hardly deal with stochastic processes with regard to non-Gaussianity, covariance non-stationarity or complex dependence without paying a big price in terms of either model mis-specification or computational efficiency. It is thus a good idea to look at other more flexible inference tools; hence the strategy of combining greedy approximation and space dimensionality reduction techniques, which are less dependent on distributional assumptions and more targeted to achieve computationally efficient performances. Advantages and limitations of their use will be evaluated by looking at algorithmic and model building strategies, and by reporting statistical diagnostics.
Copula-based prediction of economic movements
NASA Astrophysics Data System (ADS)
García, J. E.; González-López, V. A.; Hirsh, I. D.
2016-06-01
In this paper we model the discretized returns of two paired time series, the BM&FBOVESPA Dividend Index and the BM&FBOVESPA Public Utilities Index, using multivariate Markov models. The discretization corresponds to three categories: high losses, high profits, and the complementary periods of the series. In technical terms, the maximal memory that can be considered for a Markov model can be derived from the size of the alphabet and the dataset. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain. In this case the size of the database is not large enough for a consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achievable using standard procedures. The new strategy consists of obtaining a partition of the state space constructed from a combination of the partitions corresponding to the two marginal processes and the partition corresponding to the multivariate Markov chain. In order to estimate the transition probabilities, all the partitions are linked using a copula. In our application this strategy provides a significant improvement in the movement predictions.
De, Dipankar; Khullar, Geeti; Handa, Sanjeev
2015-01-01
Parthenium hysterophorus is the leading cause of phytogenic allergic contact dermatitis in India. The Indian Standard Series currently supplied by Systopic Laboratories Ltd and manufactured by Chemotechnique Diagnostics ® contains parthenolide as the only allergen representing plant allergens. The study was conducted to assess the performance of the Chemotechnique plant series (PL-1000), consisting of 14 allergens, in patients with clinically suspected occupational contact dermatitis to plant allergens. Ninety patients were patch tested with the Chemotechnique plant series from 2011 to 2013. Demographic details, clinical diagnosis and patch test results were recorded in the contact dermatitis clinic proforma. Of 90 patients, 24 (26.7%) showed positive reactions to one or more allergens in the plant series. Positive patch tests were elicited most commonly by sesquiterpene lactone mix in 19 (78.6%) patients, followed by parthenolide in 14 (57.1%), Achillea millefolium in 10 (42.9%) and others in decreasing order. The plant allergen series prepared by Chemotechnique Diagnostics is possibly not optimal for diagnosing suspected allergic contact dermatitis to plants in north Indians. Sesquiterpene lactone mix should replace parthenolide as the plant allergen in the Indian Standard Series until relevant native plant extracts are commercially available for patch testing.
Efficient Bayesian inference for natural time series using ARFIMA processes
NASA Astrophysics Data System (ADS)
Graves, Timothy; Gramacy, Robert; Franzke, Christian; Watkins, Nicholas
2016-04-01
Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. We present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators [1]. In addition we show how the method can be used to perform joint inference of the stability exponent and the memory parameter when ARFIMA is extended to allow for alpha-stable innovations. Such models can be used to study systems where heavy tails and long range memory coexist. [1] Graves et al, Nonlin. Processes Geophys., 22, 679-700, 2015; doi:10.5194/npg-22-679-2015.
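A common frequentist baseline against which such Bayesian ARFIMA inference is compared is the GPH log-periodogram regression, in which the log periodogram at the lowest Fourier frequencies is regressed on log(4 sin^2(lambda/2)) and the negative slope estimates the memory parameter d. The sketch below applies it to a synthetic ARFIMA(0,d,0) series; it is not the estimator used in the paper.

import numpy as np

def gph_estimate(x, frac=0.5):
    """GPH log-periodogram estimate of the long-memory parameter d."""
    n = x.size
    xc = x - x.mean()
    freqs = 2 * np.pi * np.arange(1, n // 2 + 1) / n
    periodogram = np.abs(np.fft.rfft(xc)[1:n // 2 + 1]) ** 2 / (2 * np.pi * n)
    m = int(n ** frac)                          # number of low frequencies used
    y = np.log(periodogram[:m])
    reg = np.log(4 * np.sin(freqs[:m] / 2) ** 2)
    slope = np.polyfit(reg, y, 1)[0]
    return -slope

rng = np.random.default_rng(9)
# Fractionally integrated noise with d = 0.3, built by applying (1-B)^d as a truncated filter.
d, n = 0.3, 4096
w = rng.normal(size=n)
k = np.arange(1, n)
pi_k = np.cumprod((k - 1 - d) / k)              # coefficients of (1-B)^d, excluding the leading 1
x = np.empty(n)
for t in range(n):
    x[t] = w[t] - np.dot(pi_k[:t], x[t - 1::-1][:t])
print("estimated d:", gph_estimate(x))          # should be roughly 0.3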
NASA Astrophysics Data System (ADS)
Strang, C.; Lemus, J.; Schoedinger, S.
2006-12-01
Ocean sciences were idiosyncratically left out of the National Science Education Standards and most state standards, resulting in a decline in the public's attention to ocean issues. Concepts about the ocean are hardly taught in K-12 schools, and hardly appear in K-12 curriculum materials, textbooks, assessments or standards. NGS, COSEE, NMEA, NOAA, the US Commission on Ocean Policy, the Pew Ocean Commission have all urgently called for inclusion of the ocean in science standards as a means to increase ocean literacy nationwide. There has never been consensus, however, about what ocean literacy is or what concepts should be included in future standards. Scientists interested in education and outreach activities have not had a framework to guide them in prioritizing the content they present or in determining how that content fits into the context of what K-12 students and the public need to know about science in general. In 2004, an on-line workshop on Ocean Literacy Through Science Standards began the process of developing consensus about what that framework should include. Approximately 100 ocean scientists and educators participated in the workshop, followed by a series of meetings and extensive review by leading scientists, resulting in a series of draft documents and statements. The importance of community-wide involvement and consensus was reinforced through circulation of the draft documents for public comment April -May, 2005. The community agreed on an Ocean Literacy definition, tagline, seven ocean principles, 44 concepts and a matrix aligning the concepts to the National Science Education Standards (NSES). The elements are described in more detail in the final Ocean Literacy brochure. Broad ownership of the resulting documents is a tribute to the inclusiveness of the process used to develop them. The emerging consensus on Ocean Literacy has become an instrument for change, and has served as an important tool guiding the ocean sciences education efforts of scientists, educators, and most importantly, has provided a common language for scientists and educators working together. In this past year, a similar community-wide effort has been mounted to develop an "Ocean Literacy Scope and Sequence" to serve as a critical companion to "Ocean Literacy: The Essential Principles of Ocean Sciences Grades K-12." The Scope and Sequence shows how the principles and concepts develop and build in logical and developmentally sound learning progressions across grade spans K-12. This document will provide further guidance to teachers, curriculum developers, textbook writers, and ocean scientists, as to what concepts about the ocean are appropriate to introduce at various grade spans. It will show the relationship between the new discoveries of cutting edge science and the basic science concepts on which they are built and which students are accountable to understand. Those concerned about science education and about the future health of the ocean must be poised to influence the development of science standards by local educational agencies, state departments of education and professional societies and associations. In order to be effective, we must have tools, products, documents, web sites that contain agreed upon science content and processes related to the ocean.
R-Area Reactor 1993 annual groundwater monitoring report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-09-01
Groundwater was sampled and analyzed during 1993 from wells monitoring the following locations in R Area: Well cluster P20 east of R Area (one well each in the water table and the McBean formation), the R-Area Acid/Caustic Basin (the four water-table wells of the RAC series), the R-Area Ash Basin/Coal Pile (one well of the RCP series in the Congaree formation and one in the water table), the R-Area Disassembly Basin (the three water-table wells of the RDB series), the R-Area Burning/Rubble Pits (the four water-table wells of the RRP series), and the R-Area Seepage Basins (numerous water-table wells in the RSA, RSB, RSC, RSD, RSE, and RSF series). Lead was the only constituent detected above its 50 µg/L standard in any but the seepage basin wells; it exceeded that level in one B well and in 23 of the seepage basin wells. Cadmium exceeded its drinking water standard (DWS) in 30 of the seepage basin wells, as did mercury in 10. Nitrate-nitrite was above DWS once each in two seepage basin wells. Tritium was above DWS in six seepage basin wells, as was gross alpha activity in 22. Nonvolatile beta exceeded its screening standard in 29 wells. Extensive radionuclide analyses were requested during 1993 for the RCP series and most of the seepage basin wells. Strontium-90 in eight wells was the only specific radionuclide other than tritium detected above DWS; it accounted for about one-half of the nonvolatile beta activity in those wells.
Blind guidance system based on laser triangulation
NASA Astrophysics Data System (ADS)
Wu, Jih-Huah; Wang, Jinner-Der; Fang, Wei; Lee, Yun-Parn; Shan, Yi-Chia; Kao, Hai-Ko; Ma, Shih-Hsin; Jiang, Joe-Air
2012-05-01
We propose a new guidance system for the blind. An optical triangulation method is used in the system. The main components of the proposed system are a notebook computer, a camera, and two laser modules. The track image of the light beam on the ground or on an object is captured by the camera, and the image is then sent to the notebook computer for further processing and analysis. Using a developed signal-processing algorithm, our system can determine the object width and the distance between the object and the blind person from the positions of the light lines in the image. A series of feasibility tests of the developed blind guidance system were conducted. The experimental results show that the distance between the test object and the blind person can be measured with a standard deviation of less than 8.5% within the range of 40 to 130 cm, while the test object width can be measured with a standard deviation of less than 4.5% over the same range. The designed system therefore shows clear potential for application to blind guidance.
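The distance recovery described above relies on optical triangulation. As a hedged illustration only (a simple parallel-axis pinhole model, not the authors' calibration procedure), the distance follows from similar triangles:

def triangulation_distance(pixel_offset, focal_length_px, baseline_m):
    # Laser emitter offset laterally from the camera axis by baseline_m and
    # projecting parallel to it: a spot at distance Z images at an offset of
    # focal_length_px * baseline_m / Z pixels, so invert for Z.
    return focal_length_px * baseline_m / pixel_offset

# e.g. an 800 px focal length, a 5 cm baseline and a 40 px spot offset give 1.0 m
print(triangulation_distance(40, 800, 0.05))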
Encryption key distribution via chaos synchronization
NASA Astrophysics Data System (ADS)
Keuninckx, Lars; Soriano, Miguel C.; Fischer, Ingo; Mirasso, Claudio R.; Nguimdo, Romain M.; van der Sande, Guy
2017-02-01
We present a novel encryption scheme, wherein an encryption key is generated by two distant complex nonlinear units, forced into synchronization by a chaotic driver. The concept is sufficiently generic to be implemented on photonic, optoelectronic, or electronic platforms. The method for generating the key bitstream from the chaotic signals is reconfigurable. Although derived from a deterministic process, the obtained bit series fulfill the randomness conditions defined by the National Institute of Standards and Technology (NIST) statistical test suite. We demonstrate the feasibility of our concept on an electronic delay-oscillator circuit and test the robustness against attacks using a state-of-the-art system identification method.
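The randomness claim above refers to the NIST statistical test suite (SP 800-22). As a minimal illustration of the kind of check involved (only the first test of the suite, not the full battery used in the paper), the frequency/monobit test can be sketched as:

import math, random

def monobit_pvalue(bits):
    # NIST SP 800-22 frequency (monobit) test: the +1/-1 sum of the bits,
    # normalized by sqrt(n), should be approximately standard normal.
    n = len(bits)
    s_obs = abs(sum(2 * b - 1 for b in bits)) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

bits = [random.getrandbits(1) for _ in range(10_000)]
print(monobit_pvalue(bits))   # values >= 0.01 are conventionally taken as a pass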
3D-printed devices for continuous-flow organic chemistry
Dragone, Vincenza; Sans, Victor; Rosnes, Mali H; Kitson, Philip J
2013-01-01
We present a study in which the versatility of 3D-printing is combined with the processing advantages of flow chemistry for the synthesis of organic compounds. Robust and inexpensive 3D-printed reactionware devices are easily connected using standard fittings, resulting in complex, custom-made flow systems, including multiple reactors in series with in-line, real-time analysis using an ATR-IR flow cell. As a proof of concept, we utilized two types of organic reactions, imine syntheses and imine reductions, to show how different reactor configurations and substrates give different products. PMID:23766811
A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
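The geometric reading described above can be made concrete in a few lines of Python (a sketch of the interpretation only, using the population form that divides by n):

def variance_and_sd(data):
    mean = sum(data) / len(data)
    areas = [(x - mean) ** 2 for x in data]   # each squared deviation is the area of a square
    variance = sum(areas) / len(areas)        # area of the "average square"
    sd = variance ** 0.5                      # side length of that average square
    return variance, sd

print(variance_and_sd([2, 4, 4, 4, 5, 5, 7, 9]))   # (4.0, 2.0)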
Recommendations for Selecting Drug-Drug Interactions for Clinical Decision Support
Tilson, Hugh; Hines, Lisa E.; McEvoy, Gerald; Weinstein, David M.; Hansten, Philip D.; Matuszewski, Karl; le Comte, Marianne; Higby-Baker, Stefanie; Hanlon, Joseph T.; Pezzullo, Lynn; Vieson, Kathleen; Helwig, Amy L.; Huang, Shiew-Mei; Perre, Anthony; Bates, David W.; Poikonen, John; Wittie, Michael A.; Grizzle, Amy J.; Brown, Mary; Malone, Daniel C.
2016-01-01
Purpose To recommend principles for including drug-drug interactions (DDIs) in clinical decision support. Methods A conference series was conducted to improve clinical decision support (CDS) for DDIs. The Content Workgroup met monthly by webinar from January 2013 to February 2014, with two in-person meetings to reach consensus. The workgroup consisted of 20 experts in pharmacology, drug information, and CDS from academia, government agencies, health information technology (IT) vendors, and healthcare organizations. Workgroup members addressed four key questions: (1) What process should be used to develop and maintain a standard set of DDIs?; (2) What information should be included in a knowledgebase of standard DDIs?; (3) Can/should a list of contraindicated drug pairs be established?; and (4) How can DDI alerts be more intelligently filtered? Results To develop and maintain a standard set of DDIs for CDS in the United States, we recommend a transparent, systematic, and evidence-driven process with graded recommendations by a consensus panel of experts and oversight by a national organization. We outline key DDI information needed to help guide clinician decision-making. We recommend judicious classification of DDIs as contraindicated, as only a small set of drug combinations are truly contraindicated. Finally, we recommend more research to identify methods to safely reduce repetitive and less relevant alerts. Conclusion A systematic ongoing process is necessary to select DDIs for alerting clinicians. We anticipate that our recommendations can lead to consistent and clinically relevant content for interruptive DDIs, and thus reduce alert fatigue and improve patient safety. PMID:27045070
Bayesian methods for outliers detection in GNSS time series
NASA Astrophysics Data System (ADS)
Qianqian, Zhang; Qingming, Gui
2013-07-01
This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. Firstly, a new model is proposed to simultaneously detect different types of outliers by introducing a classification variable for each outlier type; the problem of outlier detection is converted into the computation of the corresponding posterior probabilities, and an algorithm for computing these posterior probabilities based on a standard Gibbs sampler is designed. Secondly, we analyze in detail the causes of masking and swamping when detecting patches of additive outliers, and an unmasking Bayesian method for detecting additive outlier patches is proposed based on an adaptive Gibbs sampler. Thirdly, the correctness of the proposed theory and methods is illustrated with simulated data and then by analyzing real GNSS observations, such as cycle slip detection in carrier phase data. The examples illustrate that the Bayesian methods for outlier detection in GNSS time series proposed in this paper are capable of detecting not only isolated outliers but also additive outlier patches. Furthermore, they can be successfully used to process cycle slips in phase data, which solves the problem of small cycle slips.
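The article's samplers are developed for GNSS-specific error models; as a heavily simplified, hedged illustration of the basic idea (posterior outlier probabilities from a Gibbs sampler over indicator variables, here for independent Gaussian noise with a single mean level rather than a GNSS time-series model), one might write:

import numpy as np

def gibbs_outlier_probs(y, sigma=1.0, tau=5.0, p=0.05, n_iter=2000, burn=500, seed=0):
    # Model: y_t = mu + delta_t * beta_t + eps_t, eps_t ~ N(0, sigma^2),
    # delta_t ~ Bernoulli(p), beta_t ~ N(0, tau^2) integrated out analytically,
    # so the sampler alternates between the indicators delta and the level mu.
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    mu = np.median(y)
    var0, var1 = sigma**2, sigma**2 + tau**2
    counts = np.zeros(len(y))
    for it in range(n_iter):
        # full conditional of each indicator (the shared 2*pi factor cancels)
        logp1 = np.log(p) - 0.5 * np.log(var1) - (y - mu)**2 / (2 * var1)
        logp0 = np.log(1 - p) - 0.5 * np.log(var0) - (y - mu)**2 / (2 * var0)
        delta = rng.random(len(y)) < 1.0 / (1.0 + np.exp(logp0 - logp1))
        # full conditional of mu under a flat prior, precision-weighted by delta
        w = 1.0 / np.where(delta, var1, var0)
        mu = rng.normal((w * y).sum() / w.sum(), np.sqrt(1.0 / w.sum()))
        if it >= burn:
            counts += delta
    return counts / (n_iter - burn)          # posterior outlier probabilities

y = np.random.default_rng(1).normal(size=200)
y[[50, 120]] += [8.0, -7.0]
print(np.where(gibbs_outlier_probs(y) > 0.5)[0])   # should flag indices 50 and 120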
Efficient Bayesian inference for natural time series using ARFIMA processes
NASA Astrophysics Data System (ADS)
Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.
2015-11-01
Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. In this paper we present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators. For CET we also extend our method to seasonal long memory.
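The paper develops its own approximate likelihood; a more basic frequency-domain stand-in, sketched here for orientation only (it is the classical Whittle profile objective for an ARFIMA(0,d,0) spectrum, assumes numpy, and uses a grid search to keep the example short), is:

import numpy as np

def whittle_estimate_d(x, d_grid=np.linspace(-0.49, 0.49, 99)):
    # Periodogram at the Fourier frequencies; constant factors cancel in the
    # profile objective, so the normalization does not matter here.
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    m = (n - 1) // 2
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / n
    best_d, best_obj = None, np.inf
    for d in d_grid:
        g = (2 * np.sin(lam / 2)) ** (-2 * d)   # ARFIMA(0,d,0) spectral shape
        sigma2 = np.mean(I / g)                 # innovation scale, profiled out
        obj = m * np.log(sigma2) + np.sum(np.log(g))
        if obj < best_obj:
            best_d, best_obj = d, obj
    return best_d

print(whittle_estimate_d(np.random.default_rng(0).normal(size=4096)))  # ~0 for white noise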
Programs for Testing Processor-in-Memory Computing Systems
NASA Technical Reports Server (NTRS)
Katz, Daniel S.
2006-01-01
The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.
Sornborger, Andrew T; Lauderdale, James D
2016-11-01
Neural data analysis has increasingly incorporated causal information to study circuit connectivity. Dimensional reduction forms the basis of most analyses of large multivariate time series. Here, we present a new, multitaper-based decomposition for stochastic, multivariate time series that acts on the covariance of the time series at all lags, C(τ), as opposed to standard methods that decompose the time series, X(t), using only information at zero lag. In both simulated and neural imaging examples, we demonstrate that methods that neglect the full causal structure may be discarding important dynamical information in a time series.
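The contrast drawn above is between the zero-lag covariance and the full lag structure C(τ). A small numpy sketch of the quantity involved (illustrative only; the paper's multitaper decomposition is a further step beyond computing C(τ)):

import numpy as np

def lagged_covariances(X, max_lag):
    # X has shape (T, channels); returns C[tau] ~ Cov(X_t, X_{t+tau})
    X = np.asarray(X, float)
    X = X - X.mean(axis=0)
    T = X.shape[0]
    return np.array([X[:T - tau].T @ X[tau:] / (T - tau) for tau in range(max_lag + 1)])

X = np.random.default_rng(0).normal(size=(1000, 3))
C = lagged_covariances(X, max_lag=10)
print(C.shape)   # (11, 3, 3); C[0] is the zero-lag covariance used by standard methods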
Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data
NASA Astrophysics Data System (ADS)
Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti
2018-03-01
In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, control charts have been developed for correlated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) charts for autocorrelated data. Earlier work has noted that such charts are not suitable if the control limits derived for independent observations are applied unchanged. For this reason, it is necessary to apply a time series model when building the control chart. A classical control chart for independent observations is usually applied to the residual process; this procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value relative to the standard deviation of the autocorrelated process. In this paper we examine the modified EWMA chart for autocorrelated processes derived from Montgomery and Patel. Chart performance is evaluated by examining the average run length (ARL) based on the Markov chain method.
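For reference, the classical EWMA chart for independent observations (the baseline that the modified chart adjusts for autocorrelation) can be sketched as follows; the formulas follow the standard textbook form, and the parameter choices are illustrative:

import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    # EWMA statistic z_i = lam*x_i + (1-lam)*z_{i-1}, started at the target mu0,
    # with limits mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2i))).
    x = np.asarray(x, float)
    z = np.empty(len(x))
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev
        z[i] = prev
    i = np.arange(1, len(x) + 1)
    half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, mu0 + half, mu0 - half

x = np.random.default_rng(0).normal(10.0, 1.0, 100)
x[60:] += 1.5                                       # simulated mean shift
z, ucl, lcl = ewma_chart(x, mu0=10.0, sigma=1.0)
print(np.flatnonzero((z > ucl) | (z < lcl))[:5])    # first out-of-control signals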
Managing Highway Maintenance: Standards for Maintenance Work, Part 2, Unit 8, Level 2.
ERIC Educational Resources Information Center
Federal Highway Administration (DOT), Washington, DC. Offices of Research and Development.
Part of the series "Managing Highway Maintenance," the unit describes the ways maintenance standards are developed and some of the factors which are considered in setting standards; the preceding unit on standards (part 1) should be completed before reading this unit. The format is a programed, self-instruction approach in which…
ERIC Educational Resources Information Center
National Sanitation Foundation, Ann Arbor, MI.
This standard for soda fountain-luncheonette equipment is the first in a series of National Sanitation Foundation standards. These standards are issued in recognition of the long-felt need for a common understanding of the problems of sanitation involving industrial and administrative health officials whose obligation it is to enforce regulations.…
Energy production advantage of independent subcell connection for multijunction photovoltaics
Warmann, Emily C.; Atwater, Harry A.
2016-07-07
Increasing the number of subcells in a multijunction or "spectrum splitting" photovoltaic improves efficiency under the standard AM1.5D design spectrum, but it can lower efficiency under spectra that differ from the standard if the subcells are connected electrically in series. Using atmospheric data and the SMARTS multiple scattering and absorption model, we simulated sunny day spectra over 1 year for five locations in the United States and determined the annual energy production of spectrum splitting ensembles with 2-20 subcells connected electrically in series or independently. While electrically independent subcells have a small efficiency advantage over series-connected ensembles under the AM1.5D design spectrum, they have a pronounced energy production advantage under realistic spectra over 1 year. Simulated energy production increased with subcell number for the electrically independent ensembles, but it peaked at 8-10 subcells for those connected in series. As a result, electrically independent ensembles with 20 subcells produce up to 27% more energy annually than the series-connected 20-subcell ensemble. This energy production advantage persists when clouds are accounted for.
The effect of clumped population structure on the variability of spreading dynamics.
Black, Andrew J; House, Thomas; Keeling, Matt J; Ross, Joshua V
2014-10-21
Processes that spread through local contact, including outbreaks of infectious diseases, are inherently noisy, and are frequently observed to be far noisier than predicted by standard stochastic models that assume homogeneous mixing. One way to reproduce the observed levels of noise is to introduce significant individual-level heterogeneity with respect to infection processes, such that some individuals are expected to generate more secondary cases than others. Here we consider a population where individuals can be naturally aggregated into clumps (subpopulations) with stronger interaction within clumps than between them. This clumped structure induces significant increases in the noisiness of a spreading process, such as the transmission of infection, despite complete homogeneity at the individual level. Given the ubiquity of such clumped aggregations (such as homes, schools and workplaces for humans or farms for livestock) we suggest this as a plausible explanation for noisiness of many epidemic time series. Copyright © 2014 Elsevier Ltd. All rights reserved.
The Effect of Improved Sub-Daily Earth Rotation Models on Global GPS Data Processing
NASA Astrophysics Data System (ADS)
Yoon, S.; Choi, K. K.
2017-12-01
Throughout the various International GNSS Service (IGS) products, strong periodic signals have been observed around the 14-day period. This signal is clearly visible in all IGS time series, such as those related to orbit ephemerides, Earth rotation parameters (ERP), and ground station coordinates. Recent studies show that errors in the sub-daily Earth rotation models are the main factors inducing such noise. Current IGS orbit processing standards adopted the IERS 2010 convention and its sub-daily Earth rotation model. Since the IERS convention was published, advances in VLBI analysis have contributed updated sub-daily Earth rotation models. We have compared several proposed sub-daily Earth rotation models and show the effect of using those models on the orbit ephemerides, Earth rotation parameters, and ground station coordinates generated by the NGS global GPS data processing strategy.
Nanoscopic length scale dependence of hydrogen bonded molecular associates’ dynamics in methanol
Bertrand, C. E.; Self, J. L.; Copley, J. R. D.; Faraone, A.
2017-01-01
In a recent paper [C. E. Bertrand et al., J. Chem. Phys. 145, 014502 (2016)], we have shown that the collective dynamics of methanol shows a fast relaxation process related to the standard density-fluctuation heat mode and a slow non-Fickian mode originating from the hydrogen bonded molecular associates. Here we report on the length scale dependence of this slow relaxation process. Using quasielastic neutron scattering and molecular dynamics simulations, we show that the dynamics of the slow process is affected by the structuring of the associates, which is accessible through polarized neutron diffraction experiments. Using a series of partially deuterated samples, the dynamics of the associates is investigated and is found to have a similar time scale to the lifetime of hydrogen bonding in the system. Both the structural relaxation and the dynamics of the associates are thermally activated by the breaking of hydrogen bonding. PMID:28527447
Study of Track Irregularity Time Series Calibration and Variation Pattern at Unit Section
Jia, Chaolong; Wei, Lili; Wang, Hanning; Yang, Jiulin
2014-01-01
Focusing on data-quality problems in track irregularity time series, this paper first presents algorithms for abnormal data identification, data offset correction, local outlier identification, and noise cancellation. It then proposes track irregularity time series decomposition and reconstruction using a wavelet decomposition and reconstruction approach. Finally, the patterns and features of the track irregularity standard deviation sequence in unit sections are studied, and the changing trend of the track irregularity time series is discovered and described. PMID:25435869
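The decomposition/reconstruction step described above can be sketched generically with PyWavelets (an assumed dependency; the wavelet, level, and thresholding rule here are illustrative defaults, not the paper's exact choices):

import numpy as np
import pywt   # PyWavelets

def wavelet_denoise(x, wavelet='db4', level=4):
    # Decompose, soft-threshold the detail coefficients, reconstruct.
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise scale from finest details
    thresh = sigma * np.sqrt(2 * np.log(len(x)))         # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

t = np.linspace(0, 1, 2048)
noisy = np.sin(6 * np.pi * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
print(np.std(noisy - wavelet_denoise(noisy)))            # magnitude of the removed component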
ISO 9000: The Librarian's Role.
ERIC Educational Resources Information Center
Dobson, Chris; Ernst, Carolyn
1999-01-01
Describes the special library's role in implementing ISO 9000 (i.e., a series of international quality-assurance standards developed by the International Organization for Standardization). Topics discussed include document and data control, keeping the standards current, documentation of procedures, the ISO 9000 audit, and benefits for the library. (MES)
Rennie, Drummond; Bossuyt, Patrick M. M.
2008-01-01
Summary Objective The Standards for Reporting of Diagnostic Accuracy (STARD) statement provided guidelines for investigators conducting diagnostic accuracy studies. We reviewed each item in the statement for its applicability to clinical examination diagnostic accuracy research, viewing each discrete aspect of the history and physical examination as a diagnostic test. Setting Nonsystematic review of the STARD statement. Interventions Two former STARD Group participants and 1 editor of a journal series on clinical examination research reviewed each STARD item. Suggested interpretations and comments were shared to develop consensus. Measurements and Main Results The STARD Statement applies generally well to clinical examination diagnostic accuracy studies. Three items are the most important for clinical examination diagnostic accuracy studies, and investigators should pay particular attention to their requirements: describe carefully the patient recruitment process, describe participant sampling and address if patients were from a consecutive series, and describe whether the clinicians were masked to the reference standard tests and whether the interpretation of the reference standard test was masked to the clinical examination components or overall clinical impression. The consideration of these and the other STARD items in clinical examination diagnostic research studies would improve the quality of investigations and strengthen conclusions reached by practicing clinicians. Conclusions The STARD statement provides a very useful framework for diagnostic accuracy studies. The group correctly anticipated that there would be nuances applicable to studies of the clinical examination. We offer guidance that should enhance their usefulness to investigators embarking on original studies of a patient’s history and physical examination. PMID:18347878
Paradoxical Behavior of Granger Causality
NASA Astrophysics Data System (ADS)
Witt, Annette; Battaglia, Demian; Gail, Alexander
2013-03-01
Granger causality is a standard tool for the description of directed interactions among network components and is popular in many scientific fields, including econometrics, neuroscience, and climate science. For time series that can be modeled as bivariate autoregressive processes, we analytically derive an expression for spectrally decomposed Granger causality (SDGC) and show that this quantity depends only on two out of four groups of model parameters. We then present examples of such processes whose SDGC exposes paradoxical behavior, in the sense that causality is high for frequency ranges with low spectral power. To avoid misinterpretations of Granger causality analysis, we propose to complement it with partial spectral analysis. Our findings are illustrated by an example from brain electrophysiology. Finally, we draw implications for the conventional definition of Granger causality.
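The quantity discussed above is the spectral decomposition of Granger causality; the plain time-domain test for a bivariate autoregressive pair is available in statsmodels and is sketched here for orientation only (it does not reproduce the paper's SDGC derivation):

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n); y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t-1] + rng.normal()
    y[t] = 0.5 * y[t-1] + 0.8 * x[t-1] + rng.normal()   # x drives y with a one-step delay

# Tests whether the series in the second column Granger-causes the first column.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
f_stat, p_value, _, _ = res[2][0]['ssr_ftest']
print(f_stat, p_value)   # a very small p-value: x Granger-causes y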
NASA Technical Reports Server (NTRS)
Kapoor, V. J.; Valco, G. J.; Skebe, G. G.; Evans, J. C., Jr.
1985-01-01
Integrated circuit technology has been successfully applied to the design and fabrication of 0.5 x 0.5-cm planar multijunction solar-cell chips. Each of these solar cells consisted of six voltage-generating unit cells monolithically connected in series and fabricated on a 75-micron-thick, p-type, single-crystal silicon substrate. A contact photolithographic process employing five photomask levels, together with a standard microelectronics batch-processing technique, was used to construct the solar-cell chip. The open-circuit voltage increased rapidly with increasing illumination up to 5 AM1 suns, where it began to saturate at the sum of the individual unit-cell voltages at a maximum of 3.0 V. A short-circuit current density per unit cell of 240 mA/sq cm was observed at 10 AM1 suns.
SpcAudace: Spectroscopic processing and analysis package of Audela software
NASA Astrophysics Data System (ADS)
Mauclaire, Benjamin
2017-11-01
SpcAudace processes long-slit spectra with automated pipelines and performs astrophysical analysis of the resulting data. These powerful pipelines carry out all the required steps in one pass: standard preprocessing, masking of bad pixels, geometric corrections, registration, optimized spectrum extraction, wavelength calibration, and instrumental response computation and correction. Both high- and low-resolution long-slit spectra are managed for stellar and non-stellar targets. Many types of publication-quality figures can be easily produced: pdf and png plots or annotated time series plots. Astrophysical quantities can be derived from individual spectra or from large sets of spectra with advanced functions: from line profile characteristics to equivalent width and periodogram. More than 300 documented functions are available and can be used in TCL scripts for automation. SpcAudace is based on the Audela open source software.
On-line process control monitoring system
O'Rourke, Patrick E.; Van Hare, David R.; Prather, William S.
1992-01-01
An on-line, fiber-optic based apparatus for monitoring the concentration of a chemical substance at a plurality of locations in a chemical processing system comprises a plurality of probes, each of which is at a different location in the system, a light source, optic fibers for carrying light to and from the probes, a multiplexer for switching light from the source from one probe to the next in series, a diode array spectrophotometer for producing a spectrum from the light received from the probes, and a computer programmed to analyze the spectra so produced. The probes allow the light to pass through the chemical substance so that a portion of the light is absorbed before being returned to the multiplexer. A standard and a reference cell are included for data validation and error checking.
FREIGHT CONTAINER LIFTING STANDARD
DOE Office of Scientific and Technical Information (OSTI.GOV)
POWERS DJ; SCOTT MA; MACKEY TC
2010-01-13
This standard details the correct methods of lifting and handling Series 1 freight containers following ISO-3874 and ISO-1496. The changes within RPP-40736 will allow better reading comprehension, as well as correcting editorial errors.
Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation
NASA Astrophysics Data System (ADS)
Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno
2014-05-01
A universal problem of the calibration of hydrological models is the equifinality of different parameter sets derived from the calibration of models against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation by the model. However, discharge data contain additional information which can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is the Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium-sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components which are interpreted as base flow, inter-flow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim to identify structural model deficiencies, assess the internal process representation and tackle equifinality. We developed a model-dependent approach (MDA) calibrating the model runoff components against the FSD components, and a model-independent approach (MIA) comparing the FSD of the model results with the FSD of the calibration data. The results indicate that the decomposition provides valuable information for the calibration. In particular, MDA highlights and discards a number of standard GLUE behavioural models that underestimate the contribution of soil water to river discharge. Both MDA and MIA lead to a reduction of the parameter ranges by a factor of up to 3 in comparison to standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest for MDA and shows only minor reductions for MIA. Besides further validation of FSD, the next steps include an extension of the study to different catchments and other hydrological models with a similar structure.
Sea change: Charting the course for biogeochemical ocean time-series research in a new millennium
NASA Astrophysics Data System (ADS)
Church, Matthew J.; Lomas, Michael W.; Muller-Karger, Frank
2013-09-01
Ocean time-series provide vital information needed for assessing ecosystem change. This paper summarizes the historical context, major program objectives, and future research priorities for three contemporary ocean time-series programs: The Hawaii Ocean Time-series (HOT), the Bermuda Atlantic Time-series Study (BATS), and the CARIACO Ocean Time-Series. These three programs operate in physically and biogeochemically distinct regions of the world's oceans, with HOT and BATS located in the open-ocean waters of the subtropical North Pacific and North Atlantic, respectively, and CARIACO situated in the anoxic Cariaco Basin of the tropical Atlantic. All three programs sustain near-monthly shipboard occupations of their field sampling sites, with HOT and BATS beginning in 1988, and CARIACO initiated in 1996. The resulting data provide some of the only multi-disciplinary, decadal-scale determinations of time-varying ecosystem change in the global ocean. Facilitated by a scoping workshop (September 2010) sponsored by the Ocean Carbon Biogeochemistry (OCB) program, leaders of these time-series programs sought community input on existing program strengths and for future research directions. Themes that emerged from these discussions included: 1. Shipboard time-series programs are key to informing our understanding of the connectivity between changes in ocean-climate and biogeochemistry 2. The scientific and logistical support provided by shipboard time-series programs forms the backbone for numerous research and education programs. Future studies should be encouraged that seek mechanistic understanding of ecological interactions underlying the biogeochemical dynamics at these sites. 3. Detecting time-varying trends in ocean properties and processes requires consistent, high-quality measurements. Time-series must carefully document analytical procedures and, where possible, trace the accuracy of analyses to certified standards and internal reference materials. 4. Leveraged implementation, testing, and validation of autonomous and remote observing technologies at time-series sites provide new insights into spatiotemporal variability underlying ecosystem changes. 5. The value of existing time-series data for formulating and validating ecosystem models should be promoted. In summary, the scientific underpinnings of ocean time-series programs remain as strong and important today as when these programs were initiated. The emerging data inform our knowledge of the ocean's biogeochemistry and ecology, and improve our predictive capacity about planetary change.
Fisher, William A; Gruenwald, Ilan; Jannini, Emmanuele A; Lev-Sagie, Ahinoam; Lowenstein, Lior; Pyke, Robert E; Reisman, Yakov; Revicki, Dennis A; Rubio-Aurioles, Eusebio
2016-12-01
This series of articles outlines standards for clinical trials of treatments for male and female sexual dysfunctions, with a focus on research design and patient-reported outcome assessment. These articles consist of revision, updating, and integration of articles on standards for clinical trials in male and female sexual dysfunction from the 2010 International Consultation on Sexual Medicine developed by the authors as part of the 2015 International Consultation on Sexual Medicine. We are guided in this effort by several principles. In contrast to previous versions of these guidelines, we merge discussion of standards for clinical trials in male and female sexual dysfunction in an integrated approach that emphasizes the common foundational practices that underlie clinical trials in the two settings. We present a common expected standard for clinical trial design in male and female sexual dysfunction, a common rationale for the design of phase I to IV clinical trials, and common considerations for selection of study population and study duration in male and female sexual dysfunction. We present a focused discussion of fundamental principles in patient- (and partner-) reported outcome assessment and complete this series of articles with specific discussions of selected aspects of clinical trials that are unique to male and to female sexual dysfunction. Our consideration of standards for clinical trials in male and female sexual dysfunction attempts to embody sensitivity to existing and new regulatory guidance and to address implications of the evolution of the diagnosis of sexual dysfunction that have been brought forward in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition. The first article in this series focuses on phase I to phase IV clinical trial design considerations. Subsequent articles in this series focus on the measurement of patient-reported outcomes, unique aspects of clinical trial design for men, and unique aspects of clinical trial design for women. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
Bradywood, Alison; Farrokhi, Farrokh; Williams, Barbara; Kowalczyk, Mark; Blackmore, C Craig
2017-02-01
Quality improvement with before and after evaluation of the intervention. To improve lumbar spine postoperative care and quality outcomes through a series of Lean quality improvement events designed to address root causes of error and variation. Lumbar spine fusion procedures are common, but highly variable in process of care, outcomes, and cost. We implemented a standardized lumbar spine fusion clinical care pathway through a series of Lean quality improvement events. The pathway included an evidence-based electronic order set, a patient visual tool, and multidisciplinary communication, and was designed to delineate expectations for patients, staff, and providers. To evaluate the effectiveness of the intervention, we performed a quality improvement study with before and after evaluation of consecutive patients from January 2012 to September 2014. Outcomes were hospital length of stay and quality measures before and after the April 1, 2013 intervention. Data were analyzed with chi-square and t tests for before and after comparisons, and were explored graphically for temporal trends with statistical process control charts. Our study population was 458 patients (mean 65 years, 65% women). Length of stay decreased from 3.9 to 3.4 days, a difference of 0.5 days (CI 0.3, 0.8, P < 0.001). Discharge disposition also improved, with 75% (183/244) being discharged to home postintervention versus 64% (136/214) preintervention (P = 0.002). Urinary catheter removal also improved (P = 0.003). Patient satisfaction scores were not significantly changed. Applying Lean methods to produce standardized clinical pathways is an effective way of improving quality and reducing waste for lumbar spine fusion patients. We believe that quality improvements of this type are valuable for all spine patients, to provide best care outcomes at lowest cost. Level of Evidence: 4.
Herscovici, Sarah; Pe'er, Avivit; Papyan, Surik; Lavie, Peretz
2007-02-01
Scoring of REM sleep based on polysomnographic recordings is a laborious and time-consuming process. The growing number of ambulatory devices designed for cost-effective home-based diagnostic sleep recordings necessitates the development of a reliable automatic REM sleep detection algorithm that is not based on the traditional trio of electroencephalographic, electrooculographic and electromyographic recordings. This paper presents an automatic REM detection algorithm based on the peripheral arterial tone (PAT) signal and actigraphy, which are recorded with an ambulatory wrist-worn device (Watch-PAT100). The PAT signal is a measure of the pulsatile volume changes at the finger tip reflecting sympathetic tone variations. The algorithm was developed using a training set of 30 patients recorded simultaneously with polysomnography and Watch-PAT100. Sleep records were divided into 5 min intervals and two time series were constructed from the PAT amplitudes and PAT-derived inter-pulse periods in each interval. A prediction function that determines the likelihood of detecting a REM epoch was developed, based on 16 features extracted from the above time series. The coefficients of the prediction function were determined using a genetic algorithm (GA) optimization process tuned to maximize a price function depending on the sensitivity, specificity and agreement of the algorithm in comparison with the gold standard of polysomnographic manual scoring. Based on a separate validation set of 30 patients, the overall sensitivity, specificity and agreement of the automatic algorithm in identifying standard 30 s epochs of REM sleep were 78%, 92% and 89%, respectively. Deploying this REM detection algorithm in a wrist-worn device could be very useful for unattended ambulatory sleep monitoring. The innovative method of optimization using a genetic algorithm has been proven to yield robust results in the validation set.
rasdaman Array Database: current status
NASA Astrophysics Data System (ADS)
Merticariu, George; Toader, Alexandru
2015-04-01
rasdaman (Raster Data Manager) is a Free Open Source Array Database Management System which provides functionality for storing and processing massive amounts of raster data in the form of multidimensional arrays. The user can access, process and delete the data using SQL. The key features of rasdaman are: flexibility (datasets of any dimensionality can be processed with the help of SQL queries), scalability (rasdaman's distributed architecture enables it to seamlessly run on cloud infrastructures while offering an increase in performance with the increase of computation resources), performance (real-time access, processing, mixing and filtering of arrays of any dimensionality) and reliability (legacy communication protocol replaced with a new one based on cutting edge technology - Google Protocol Buffers and ZeroMQ). Among the data with which the system works, we can count 1D time series, 2D remote sensing imagery, 3D image time series, 3D geophysical data, and 4D atmospheric and climate data. Most of these representations cannot be stored only in the form of raw arrays, as the location information of the contents is also important for having a correct geoposition on Earth. This is defined by ISO 19123 as coverage data. rasdaman provides coverage data support through the Petascope service. Extensions were added on top of rasdaman in order to provide support for the Geoscience community. The following OGC standards are currently supported: Web Map Service (WMS), Web Coverage Service (WCS), and Web Coverage Processing Service (WCPS). The Web Map Service is an extension which provides zoom and pan navigation over images provided by a map server. Starting with version 9.1, rasdaman supports WMS version 1.3. The Web Coverage Service provides capabilities for downloading multi-dimensional coverage data. Support is also provided for several extensions of this service: Subsetting Extension, Scaling Extension, and, starting with version 9.1, Transaction Extension, which defines request types for inserting, updating and deleting coverages. A web client, designed for both novice and experienced users, is also available for the service and its extensions. The client offers an intuitive interface that allows users to work with multi-dimensional coverages by abstracting the specifics of the standard definitions of the requests. The Web Coverage Processing Service defines a language for on-the-fly processing and filtering multi-dimensional raster coverages. rasdaman exposes this service through the WCS processing extension. Demonstrations are provided online via the Earthlook website (earthlook.org) which presents use-cases from a wide variety of application domains, using the rasdaman system as processing engine.
Modeling turbidity and flow at daily steps in karst using ARIMA/ARFIMA-GARCH error models
NASA Astrophysics Data System (ADS)
Massei, N.
2013-12-01
Hydrological and physico-chemical variations recorded at karst springs usually reflect highly non-linear processes, and the corresponding time series are then very often also highly non-linear. Among others, turbidity, as an important parameter for water quality and management, is a very complex response of karst systems to rain events, involving direct transfer of particles from point-source recharge as well as resuspension of particles previously deposited and stored within the system. For these reasons, turbidity has so far not been well handled in karst hydrological models. Most of the time, the modeling approaches involve stochastic linear models such as ARIMA-type models and their derivatives (ARMA, ARMAX, ARIMAX, ARFIMA...). Yet, linear models usually fail to represent the whole (stochastic) process variability well, and their residuals still contain useful information that can be used either to understand the whole variability or to enhance short-term predictability and forecasting. Model residuals are actually not i.i.d., which can be identified by the fact that squared residuals still present clear and significant serial correlation. Indeed, high (low) amplitudes are followed in time by high (low) amplitudes, which can be seen in residual time series as periods of time during which amplitudes are higher (lower) than the mean amplitude. This is known as the ARCH effect (AutoRegressive Conditional Heteroskedasticity), and the corresponding non-linear process affecting the residuals of a linear model can be modeled using ARCH or generalized ARCH (GARCH) non-linear modeling, approaches that are very well known in econometrics. Here we investigated the capability of ARIMA-GARCH error models to represent a ~20-yr daily turbidity time series recorded at a karst spring used for the water supply of the city of Le Havre (Upper Normandy, France). ARIMA and ARFIMA models were used to represent the mean behavior of the time series, and the residuals clearly appeared to present a pronounced ARCH effect, as confirmed by Ljung-Box and McLeod-Li tests. We then identified and fitted GARCH models to the residuals of the ARIMA and ARFIMA models in order to model the conditional variance and volatility of the turbidity time series. The results eventually showed that serial correlation was successfully removed in the final standardized residuals of the GARCH model, and hence that the ARIMA-GARCH error model appears consistent for modeling such time series. The approach finally improved short-term (e.g., a few steps ahead) turbidity forecasting.
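A generic version of the two-stage fit described above (an ARMA-type mean model, then a GARCH(1,1) on its residuals) can be sketched with statsmodels and the arch package, both assumed available; the orders, options, and stand-in data are illustrative, not the study's calibrated choices:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox
from arch import arch_model   # 'arch' package, assumed installed

def arima_garch(y, order=(1, 0, 1)):
    mean_fit = ARIMA(y, order=order).fit()                  # mean behaviour
    resid = mean_fit.resid
    lb = acorr_ljungbox(resid**2, lags=[10])                 # small p-values: ARCH effect
    vol_fit = arch_model(resid, vol='GARCH', p=1, q=1, mean='Zero').fit(disp='off')
    return mean_fit, vol_fit, lb

rng = np.random.default_rng(0)
y = np.zeros(1000)
for t in range(1, 1000):                                     # simple AR(1) stand-in series
    y[t] = 0.6 * y[t-1] + rng.normal()
mean_fit, vol_fit, lb = arima_garch(y)
print(lb)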
Saunders, Luke J; Russell, Richard A; Crabb, David P
2015-01-01
Swedish Interactive Thresholding Algorithm (SITA) testing strategies for the Humphrey Field Analyzer have become a clinical standard. Measurements from SITA Fast are thought to be more variable than SITA Standard, yet some clinics routinely use SITA Fast because it is quicker. To examine the measurement precision of the 2 SITA strategies across a range of sensitivities using a large number of visual field (VF) series from 4 glaucoma clinics in England. Retrospective cohort study at Moorfields Eye Hospital in London, England; Gloucestershire Eye Unit at Cheltenham General Hospital; Queen Alexandra Hospital in Portsmouth, England; and the Calderdale and Huddersfield National Health Service Foundation Trust that included 66,974 Humphrey 24-2 SITA Standard VFs (10,124 eyes) and 19,819 Humphrey 24-2 SITA Fast VFs (3654 eyes) recorded between May 20, 1997, and September 20, 2012. Pointwise ordinary least squares linear regression of measured sensitivity over time was conducted using VF series of 1 random eye from each patient. Residuals from the regression were pooled according to fitted sensitivities. For each sensitivity (decibel) level, the standard deviation of the residuals was used to estimate measurement precision and were compared for SITA Standard and SITA Fast. Simulations of progression from different VF baselines were used to evaluate how different levels of precision would affect time to detect VF progression. Median years required to detect progression. Median (interquartile range) patient age, follow-up, and series lengths for SITA Standard were 64 (53-72) years, 6.0 (4.0-8.5) years, and 6 (4-8) VFs, respectively; for SITA Fast, medians (interquartile range) were 70 (61-78) years, 5.1 (3.2-7.3) years, and 5 (4-6) VFs. Measurement precision worsened as sensitivity decreased for both test strategies. In the 20 to 5 dB range, SITA Fast was less precise than SITA Standard; this difference was largest between 15 to 10 dB, where variability in both methods peaked. Translated to median time to detection, differences in measurement precision were negligible, suggesting minimal effects on time to detect progression. Although SITA Standard is a more precise testing algorithm than SITA Fast at lower VF sensitivities, it is unlikely to make a sizeable difference to improving the time to detect VF progression.
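The precision estimate described above (pointwise ordinary least squares per VF location, residuals pooled by fitted sensitivity) can be sketched generically in Python; the data layout and the rounding used for pooling are assumptions for illustration, not the study's processing code:

import numpy as np

def residual_sd_by_sensitivity(series_list):
    # series_list: iterable of (years, sensitivities) pairs, one per VF location series.
    pooled = {}
    for years, sens in series_list:
        years = np.asarray(years, float)
        sens = np.asarray(sens, float)
        slope, intercept = np.polyfit(years, sens, 1)        # pointwise OLS over time
        fitted = slope * years + intercept
        for f, r in zip(np.round(fitted).astype(int), sens - fitted):
            pooled.setdefault(f, []).append(r)               # pool residuals by fitted dB
    return {db: float(np.std(res, ddof=1))
            for db, res in sorted(pooled.items()) if len(res) > 1}

example = [((0, 1, 2, 3, 4), (30, 29, 31, 28, 30)),
           ((0, 1, 2, 3, 4), (12, 15, 9, 13, 10))]
print(residual_sd_by_sensitivity(example))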
Towards Semantic Web Services on Large, Multi-Dimensional Coverages
NASA Astrophysics Data System (ADS)
Baumann, P.
2009-04-01
Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open GeoSpatial Consortium (OGC) and ISO. Among such data are 1-D time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known as advantageous from databases: declarativeness (describe results rather than the algorithms), safe in evaluation (no request can keep a server busy infinitely), and optimizable (enable the server to rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This allows to webify any program, but does not allow for semantic interoperability: a function is identified only by its function name and parameters while the semantics is encoded in the (only human readable) title and abstract. Hence, another desirable property is missing, namely an explicit semantics which allows for machine-machine communication and reasoning a la Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, which has been adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages. It is abstract in that it does not anticipate any particular protocol. One such protocol is given by the OGC Web Coverage Service (WCS) Processing Extension standard which ties WCPS into WCS. Another protocol which makes WCPS an OGC Web Processing Service (WPS) Profile is under preparation. Thereby, WCPS bridges WCS and WPS. The conceptual model of WCPS relies on the coverage model of WCS, which in turn is based on ISO 19123. WCS currently addresses raster-type coverages where a coverage is seen as a function mapping points from a spatio-temporal extent (its domain) into values of some cell type (its range). A retrievable coverage has an identifier associated, further the CRSs supported and, for each range field (aka band, channel), the interpolation methods applicable. The WCPS language offers access to one or several such coverages via a functional, side-effect free language. The following example, which derives the NDVI (Normalized Difference Vegetation Index) from given coverages C1, C2, and C3 within the regions identified by the binary mask R, illustrates the language concept: for c in ( C1, C2, C3 ), r in ( R ) return encode( (char) (c.nir - c.red) / (c.nir + c.red), "HDF-EOS" ) The result is a list of three HDF-EOS encoded images containing masked NDVI values. Note that the same request can operate on coverages of any dimensionality. The expressive power of WCPS includes statistics, image, and signal processing up to recursion, to maintain safe evaluation.
As both syntax and semantics of any WCPS expression is well known the language is Semantic Web ready: clients can construct WCPS requests on the fly, servers can optimize such requests (this has been investigated extensively with the rasdaman raster database system) and automatically distribute them for processing in a WCPS-enabled computing cloud. The WCPS Reference Implementation is being finalized now that the standard is stable; it will be released in open source once ready. Among the future tasks is to extend WCPS to general meshes, in synchronization with the WCS standard. In this talk WCPS is presented in the context of OGC standardization. The author is co-chair of OGC's WCS Working Group (WG) and Coverages WG.
Kay, Jack F
2016-05-01
The Codex Committee on Residues of Veterinary Drugs in Food (CCRVDF) fulfils a number of functions revolving around standard setting. The core activities of the CCRVDF include agreeing on priorities for assessing veterinary drug residues, recommending maximum residue limits for veterinary drugs in foods of animal origin, considering methods of sampling and analysis, and developing codes of practice. Draft standards are developed and progress through an agreed series of steps common to all Codex Alimentarius Commission committees. Meetings of the CCRVDF are held at approximately 18-month intervals. To ensure effective progress is made with meetings at this frequency, the CCRVDF makes use of a number of management tools. These include circular letters to interested parties, physical and electronic drafting groups between plenary sessions, meetings of interested parties immediately prior to sessions, as well as breakout groups within sessions and detailed discussions within the CCRVDF plenary sessions. A range of these approaches is required to assist advances within the standards-setting process and can be applied to other Codex areas and to international standard setting more generally. Copyright © 2016 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Schoechle, Timothy, Ed.
2009-01-01
Recent trends have shown increasing privatization of standardization activities under various corporations, trade associations, and consortia, raising significant public policy issues about how the public interest may be represented. This book establishes a framework of analysis for public policy discussion and debate. Discussing topics such as…
Streamlining environmental product declarations: a stage model
NASA Astrophysics Data System (ADS)
Lefebvre, Elisabeth; Lefebvre, Louis A.; Talbot, Stephane; Le Hen, Gael
2001-02-01
General public environmental awareness and education is increasing, stimulating the demand for reliable, objective and comparable information about products' environmental performance. The recently published standard series ISO 14040 and ISO 14025 normalize the preparation of Environmental Product Declarations (EPDs) containing comprehensive information relevant to a product's environmental impact during its life cycle. So far, only a few environmentally leading manufacturing organizations, mostly from Europe, have experimented with the preparation of EPDs, demonstrating its great potential as a marketing weapon. However, the preparation of EPDs is a complex process, requiring collection and analysis of massive amounts of information coming from disparate sources (suppliers, sub-contractors, etc.). In the foreseeable future, streamlining the EPD preparation process will require product manufacturers to adapt their information systems (ERP, MES, SCADA) in order to make them capable of gathering and transmitting the appropriate environmental information. It also requires strong functional integration all along the product supply chain in order to ensure that all the information is made available in a standardized and timely manner. The goal of the present paper is twofold: first, to propose a transitional model towards green supply chain management and EPD preparation; second, to identify key technologies and methodologies that allow the EPD process, and subsequently the transition toward sustainable product development, to be streamlined.
ERIC Educational Resources Information Center
Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu
2013-01-01
Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
International Space Station (ISS)
2001-05-14
Astronaut James S. Voss, Expedition Two flight engineer, works with a series of cables on the EXPRESS Rack in the United States' Destiny laboratory on the International Space Station (ISS). The EXPRESS Rack is a standardized payload rack system that transports, stores, and supports experiments aboard the ISS. EXPRESS stands for EXpedite the PRocessing of Experiments to the Space Station, reflecting the fact that this system was developed specifically to maximize the Station's research capabilities. The EXPRESS Rack system supports science payloads in several disciplines, including biology, chemistry, physics, ecology, and medicine. With the EXPRESS Rack, getting experiments to space has never been easier or more affordable. With its standardized hardware interfaces and streamlined approach, the EXPRESS Rack enables quick, simple integration of multiple payloads aboard the ISS. The system comprises elements that remain on the ISS, as well as elements that travel back and forth between the ISS and Earth via the Space Shuttle.
Astronaut James S. Voss Performs Tasks in the Destiny Laboratory
NASA Technical Reports Server (NTRS)
2001-01-01
Astronaut James S. Voss, Expedition Two flight engineer, works with a series of cables on the EXPRESS Rack in the United States' Destiny laboratory on the International Space Station (ISS). The EXPRESS Rack is a standardized payload rack system that transports, stores, and supports experiments aboard the ISS. EXPRESS stands for EXpedite the PRocessing of Experiments to the Space Station, reflecting the fact that this system was developed specifically to maximize the Station's research capabilities. The EXPRESS Rack system supports science payloads in several disciplines, including biology, chemistry, physics, ecology, and medicine. With the EXPRESS Rack, getting experiments to space has never been easier or more affordable. With its standardized hardware interfaces and streamlined approach, the EXPRESS Rack enables quick, simple integration of multiple payloads aboard the ISS. The system comprises elements that remain on the ISS, as well as elements that travel back and forth between the ISS and Earth via the Space Shuttle.
Refractory Metal Heat Pipe Life Test - Test Plan and Standard Operating Procedures
NASA Technical Reports Server (NTRS)
Martin, J. J.; Reid, R. S.
2010-01-01
Refractory metal heat pipes developed during this project shall be subjected to various operating conditions to evaluate life-limiting corrosion factors. To accomplish this objective, various parameters shall be investigated, including the effect of temperature and mass fluence on long-term corrosion rate. The test series will begin with a performance test of one module to establish the temperature and power settings for the remaining modules. The performance test will be followed by round-the-clock testing of 16 heat pipes. All heat pipes shall be nondestructively inspected at 6-month intervals. At longer intervals, specific modules will be destructively evaluated. Both the nondestructive and destructive evaluations shall be coordinated with Los Alamos National Laboratory. During the processing, setup, and testing of the heat pipes, standard operating procedures shall be developed. Initial procedures are listed here and, as hardware is developed, will be updated to incorporate findings and lessons learned.
Craven, Galen T; Nitzan, Abraham
2018-01-28
Statistical properties of Brownian motion that arise by analyzing, separately, trajectories over which the system energy increases (upside) or decreases (downside) with respect to a threshold energy level are derived. This selective analysis is applied to examine transport properties of a nonequilibrium Brownian process that is coupled to multiple thermal sources characterized by different temperatures. Distributions, moments, and correlation functions of a free particle that occur during upside and downside events are investigated for energy activation and energy relaxation processes and also for positive and negative energy fluctuations from the average energy. The presented results are sufficiently general and can be applied without modification to the standard Brownian motion. This article focuses on the mathematical basis of this selective analysis. In subsequent articles in this series, we apply this general formalism to processes in which heat transfer between thermal reservoirs is mediated by activated rate processes that take place in a system bridging them.
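As an illustration of the upside/downside partition described above, the following minimal Python sketch simulates a free Brownian particle coupled to two thermal baths and computes statistics separately for energy samples above and below a threshold. The Langevin dynamics, parameter values, and the per-sample (rather than per-segment) partition are simplifying assumptions for demonstration, not the authors' formalism.

```python
import numpy as np

# Minimal sketch (illustrative only): a free Brownian particle coupled to two
# thermal baths at different temperatures, with its kinetic energy time series
# split into "upside" (above threshold) and "downside" (below threshold) samples.
rng = np.random.default_rng(0)
m, dt, nsteps = 1.0, 1e-3, 200_000
gammas, temps = np.array([1.0, 1.0]), np.array([1.0, 3.0])  # two baths (assumed units, k_B = 1)
gamma_tot = gammas.sum()

v = 0.0
energy = np.empty(nsteps)
for i in range(nsteps):
    # Langevin update: each bath contributes friction and its own thermal noise
    noise = np.sum(np.sqrt(2.0 * gammas * temps * dt) * rng.standard_normal(2))
    v += (-gamma_tot * v * dt + noise) / m
    energy[i] = 0.5 * m * v**2

threshold = energy.mean()            # threshold energy separating upside/downside samples
up, down = energy[energy > threshold], energy[energy <= threshold]
print(f"mean E = {energy.mean():.3f}")
print(f"upside   fraction {up.size / energy.size:.2f}, mean {up.mean():.3f}")
print(f"downside fraction {down.size / energy.size:.2f}, mean {down.mean():.3f}")
```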
NASA Astrophysics Data System (ADS)
Craven, Galen T.; Nitzan, Abraham
2018-01-01
Statistical properties of Brownian motion that arise by analyzing, separately, trajectories over which the system energy increases (upside) or decreases (downside) with respect to a threshold energy level are derived. This selective analysis is applied to examine transport properties of a nonequilibrium Brownian process that is coupled to multiple thermal sources characterized by different temperatures. Distributions, moments, and correlation functions of a free particle that occur during upside and downside events are investigated for energy activation and energy relaxation processes and also for positive and negative energy fluctuations from the average energy. The presented results are sufficiently general and can be applied without modification to the standard Brownian motion. This article focuses on the mathematical basis of this selective analysis. In subsequent articles in this series, we apply this general formalism to processes in which heat transfer between thermal reservoirs is mediated by activated rate processes that take place in a system bridging them.
Curtailing Perovskite Processing Limitations via Lamination at the Perovskite/Perovskite Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Hest, Marinus F; Moore, David; Klein, Talysa
Standard layer-by-layer solution processing methods constrain lead-halide perovskite device architectures. The layer below the perovskite must be robust to the strong organic solvents used to form the perovskite, while the layer above has a limited thermal budget and must be processed in nonpolar solvents to prevent perovskite degradation. To circumvent these limitations, we developed a procedure where two transparent conductive oxide/transport material/perovskite half stacks are independently fabricated and then laminated together at the perovskite/perovskite interface. Using ultraviolet-visible absorption spectroscopy, external quantum efficiency, X-ray diffraction, and time-resolved photoluminescence spectroscopy, we show that this procedure improves the photovoltaic properties of the perovskite layer. Applying this procedure, semitransparent devices employing two high-temperature oxide transport layers were fabricated, which realized an average efficiency of 9.6% (maximum: 10.6%) despite series resistance limitations from the substrate design. Overall, the developed lamination procedure curtails processing constraints, enables new device designs, and affords new opportunities for optimization.
NASA Astrophysics Data System (ADS)
Bai, Weihua; Liu, Congliang; Meng, Xiangguang; Sun, Yueqiang; Kirchengast, Gottfried; Du, Qifei; Wang, Xianyi; Yang, Guanglin; Liao, Mi; Yang, Zhongdong; Zhao, Danyang; Xia, Junming; Cai, Yuerong; Liu, Lijun; Wang, Dongwei
2018-02-01
The Global Navigation Satellite System (GNSS) Occultation Sounder (GNOS) is one of the new-generation payloads onboard the Chinese FengYun 3 (FY-3) series of operational meteorological satellites for sounding the Earth's neutral atmosphere and ionosphere. The GNOS was designed for acquiring setting and rising radio occultation (RO) data by using GNSS signals from both the Chinese BeiDou System (BDS) and the US Global Positioning System (GPS). An ultra-stable oscillator with 1 s stability (Allan deviation) at the level of 10⁻¹² was installed on the FY-3C GNOS, and thus both zero-difference and single-difference excess phase processing methods should be feasible for FY-3C GNOS observations. In this study we focus on evaluating zero-difference processing of BDS RO data vs. single-difference processing, in order to investigate the zero-difference feasibility for this new instrument, which after its launch in September 2013 started to use BDS signals from five geostationary orbit (GEO) satellites, five inclined geosynchronous orbit (IGSO) satellites and four medium Earth orbit (MEO) satellites. We used a 3-month set of GNOS BDS RO data (October to December 2013) for the evaluation and compared atmospheric bending angle and refractivity profiles, derived from single- and zero-difference excess phase data, against co-located profiles from European Centre for Medium-Range Weather Forecasts (ECMWF) analyses. We also compared against co-located refractivity profiles from radiosondes. The statistical evaluation against these reference data shows that the results from single- and zero-difference processing are reasonably consistent in both bias and standard deviation, clearly demonstrating the feasibility of zero differencing for GNOS BDS RO observations. The average bias (and standard deviation) of the bending angle and refractivity profiles were found to be about 0.05 to 0.2 % (and 0.7 to 1.6 %) over the upper troposphere and lower stratosphere. Zero differencing was found to perform slightly better, as may be expected from its lower vulnerability to noise. The validation results indicate that GNOS can provide, on top of GPS RO profiles, accurate and precise BDS RO profiles both from single- and zero-difference processing. The GNOS observations by the series of FY-3 satellites are thus expected to provide important contributions to numerical weather prediction and global climate change analysis.
Simulation of time series by distorted Gaussian processes
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1977-01-01
Distorted stationary Gaussian processes can be used to provide computer-generated imitations of experimental time series. A method of analyzing a source time series and synthesizing an imitation is shown, and an example using X-band radiometer data is given.
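The Python sketch below illustrates the general idea: a stationary Gaussian process is passed through a memoryless distortion so that its marginal distribution matches that of a source series. The AR(1) generator, the quantile mapping, and the synthetic gamma-distributed "source" are illustrative assumptions, not the specific method of the report.

```python
import numpy as np
from scipy import stats

# Minimal sketch of the idea (not the report's algorithm): synthesize an
# imitation series by distorting a stationary Gaussian process so that its
# marginal distribution matches that of a "source" series.
rng = np.random.default_rng(1)
source = rng.gamma(shape=2.0, scale=1.5, size=5000)      # stand-in for measured data

# 1) stationary Gaussian AR(1) process with a chosen lag-1 correlation
phi, n = 0.8, 5000
g = np.empty(n)
g[0] = rng.standard_normal()
for k in range(1, n):
    g[k] = phi * g[k - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()

# 2) memoryless distortion: map Gaussian quantiles onto the source's empirical quantiles
u = stats.norm.cdf(g)                                    # uniform marginals
imitation = np.quantile(source, u)                       # imposes the source marginal

print("source mean/std    :", source.mean(), source.std())
print("imitation mean/std :", imitation.mean(), imitation.std())
```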
Framework for Design of Traceability System on Organic Rice Certification
NASA Astrophysics Data System (ADS)
Purwandoko, P. B.; Seminar, K. B.; Sutrisno; Sugiyanta
2018-05-01
Nowadays, the preference for organic products such as organic rice has increased because people's awareness of healthy and eco-friendly food consumption has grown. It is therefore very important to ensure the organic quality of the product that is produced. Certification is a series of processes carried out to ensure that product quality meets all criteria of the organic standards. Currently, a traceability information system for organic rice certification is not available; the process is still conducted manually, which causes loss of information during storage. This paper aims to develop a traceability framework for the organic rice certification process. First, the organic certification process is discussed. Second, the Unified Modeling Language (UML) is used to model user requirements in order to develop a traceability system for all actors in the certification process. Furthermore, the information-capture model along the certification process is explained; the model shows the information flow that has to be recorded for each actor. Finally, the challenges in implementing the system are discussed.
Comparison of ITRF2014 station coordinate input time series of DORIS, VLBI and GNSS
NASA Astrophysics Data System (ADS)
Tornatore, Vincenza; Tanır Kayıkçı, Emine; Roggero, Marco
2016-12-01
In this paper station coordinate time series from three space geodesy techniques that have contributed to the realization of the International Terrestrial Reference Frame 2014 (ITRF2014) are compared. In particular the height component time series extracted from official combined intra-technique solutions submitted for ITRF2014 by DORIS, VLBI and GNSS Combination Centers have been investigated. The main goal of this study is to assess the level of agreement among these three space geodetic techniques. A novel analytic method, modeling time series as discrete-time Markov processes, is presented and applied to the compared time series. The analysis method has proven to be particularly suited to obtain quasi-cyclostationary residuals which are an important property to carry out a reliable harmonic analysis. We looked for common signatures among the three techniques. Frequencies and amplitudes of the detected signals have been reported along with their percentage of incidence. Our comparison shows that two of the estimated signals, having one-year and 14 days periods, are common to all the techniques. Different hypotheses on the nature of the signal having a period of 14 days are presented. As a final check we have compared the estimated velocities and their standard deviations (STD) for the sites that co-located the VLBI, GNSS and DORIS stations, obtaining a good agreement among the three techniques both in the horizontal (1.0 mm/yr mean STD) and in the vertical (0.7 mm/yr mean STD) component, although some sites show larger STDs, mainly due to lack of data, different data spans or noisy observations.
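As a small illustration of the kind of harmonic analysis described above, the Python sketch below estimates the amplitudes of an annual and a 14-day signal in a daily station height series by ordinary least squares. The synthetic data, trend term, and noise level are assumptions for demonstration only, not the ITRF2014 input series.

```python
import numpy as np

# Minimal sketch (assumed synthetic data): least-squares estimation of the
# amplitudes of an annual and a 14-day periodic signal in a daily station
# height series, the kind of harmonic analysis used to compare techniques.
rng = np.random.default_rng(2)
t = np.arange(0, 6 * 365.25)                              # days
truth = 3.0 * np.sin(2 * np.pi * t / 365.25) + 1.0 * np.cos(2 * np.pi * t / 14.0)
h = truth + 0.2 * t / 365.25 + rng.normal(0, 2.0, t.size)  # mm, with trend and noise

periods = [365.25, 14.0]
cols = [np.ones_like(t), t]                               # offset and linear rate
for p in periods:
    cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, h, rcond=None)

for i, p in enumerate(periods):
    s, c = coef[2 + 2 * i], coef[3 + 2 * i]
    print(f"period {p:6.2f} d  amplitude {np.hypot(s, c):.2f} mm")
```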
Generalised Pareto distribution: impact of rounding on parameter estimation
NASA Astrophysics Data System (ADS)
Pasarić, Z.; Cindrić, K.
2018-05-01
Problems that occur when common methods (e.g. maximum likelihood and L-moments) for fitting a generalised Pareto (GP) distribution are applied to discrete (rounded) data sets are revealed by analysing real dry-spell duration series. The analysis is subsequently performed on generalised Pareto time series obtained by systematic Monte Carlo (MC) simulations. The solution depends on the following: (1) the actual amount of rounding, as determined by the actual data range (measured by the scale parameter, σ) vs. the rounding increment (Δx), combined with (2) applying a certain (sufficiently high) threshold and considering the series of excesses instead of the original series. For a moderate amount of rounding (e.g. σ/Δx ≥ 4), which is commonly met in practice (at least regarding the dry spell data), and where no threshold is applied, the classical methods work reasonably well. If cutting at the threshold is applied to rounded data—which is actually essential when dealing with a GP distribution—then classical methods applied in a standard way can lead to erroneous estimates, even if the rounding itself is moderate. In this case, it is necessary to adjust the theoretical location parameter for the series of excesses. The other solution is to add appropriate uniform noise to the rounded data (so-called "jittering"). This, in a sense, reverses the process of rounding; thereafter, it is straightforward to apply the common methods. Finally, if the rounding is too coarse (e.g. σ/Δx ≤ 1), then none of the above recipes works, and specific methods for rounded data should be applied.
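The "jittering" recipe is easy to demonstrate. In the hedged sketch below, synthetic GP data are rounded to a fixed increment, fitted naively by maximum likelihood, and then refitted after adding uniform noise of width equal to the rounding increment; all parameter values are illustrative only.

```python
import numpy as np
from scipy.stats import genpareto, uniform

# Minimal sketch of the "jittering" recipe: rounded data are de-rounded by
# adding uniform noise of width equal to the rounding increment before a
# standard maximum-likelihood GP fit. Parameter values are illustrative only.
rng = np.random.default_rng(3)
shape_true, scale_true, dx = 0.1, 4.0, 1.0

x = genpareto.rvs(shape_true, scale=scale_true, size=5000, random_state=rng)
x_rounded = np.round(x / dx) * dx                  # discrete (rounded) data

# naive fit on rounded data (location fixed at 0)
c1, _, s1 = genpareto.fit(x_rounded, floc=0)

# jittered fit: add U(-dx/2, dx/2) noise, clip at zero to stay in the support
x_jit = np.clip(x_rounded + uniform.rvs(-dx / 2, dx, size=x.size, random_state=rng), 0, None)
c2, _, s2 = genpareto.fit(x_jit, floc=0)

print(f"true         shape {shape_true:.2f}  scale {scale_true:.2f}")
print(f"rounded fit  shape {c1:.2f}  scale {s1:.2f}")
print(f"jittered fit shape {c2:.2f}  scale {s2:.2f}")
```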
Mishra, Alok; Swati, D
2015-09-01
Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we propose a combination filter of a recursive impulse rejection filter and a recursive 20% filter, applied recursively and preferring replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We tested this novel recursive combinational method, with median replacement, to estimate the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing, over a single use of the impulse rejection filter and removal of abnormal beats, for estimating SDNN and the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We found an SDNN of 22 ms and an SD2 of 36 ms to be clinical indicators for discriminating normal cases from CHF cases. The pre-processing is also useful in calculating the Lyapunov exponent, a nonlinear index: Lyapunov exponents calculated after the proposed pre-processing change in a way that follows the expected, less complex behaviour of the diseased state.
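The descriptors themselves are straightforward to compute once the beat series is cleaned. The Python sketch below uses a simple local-median replacement filter (a loose stand-in for the combination filter described above, not the authors' exact algorithm) on synthetic RR intervals and then computes SDNN and the Poincaré descriptors SD1 and SD2.

```python
import numpy as np

# Minimal sketch (synthetic RR intervals): SDNN and the Poincaré descriptors
# SD1, SD2 computed after a pre-processing step that replaces abnormal beats
# by the local median (assumed filter, not the paper's combination filter).
rng = np.random.default_rng(4)
rr = rng.normal(800, 40, 600)                       # ms, normal-to-normal intervals
rr[rng.integers(0, rr.size, 10)] = 400              # inject ectopic-like outliers

def replace_outliers(rr, window=11, tol=0.2):
    """Replace beats deviating more than `tol` from the local median."""
    out = rr.copy()
    for i in range(rr.size):
        lo, hi = max(0, i - window // 2), min(rr.size, i + window // 2 + 1)
        med = np.median(rr[lo:hi])
        if abs(rr[i] - med) > tol * med:
            out[i] = med
    return out

nn = replace_outliers(rr)
sdnn = np.std(nn, ddof=1)
diff = np.diff(nn)
sd1 = np.sqrt(np.var(diff, ddof=1) / 2.0)                             # short-term variability
sd2 = np.sqrt(2.0 * np.var(nn, ddof=1) - np.var(diff, ddof=1) / 2.0)  # long-term variability
print(f"SDNN {sdnn:.1f} ms, SD1 {sd1:.1f} ms, SD2 {sd2:.1f} ms, SD1/SD2 {sd1 / sd2:.2f}")
```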
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-09
Manufacture and quality control of interconnecting wire harnesses
NASA Technical Reports Server (NTRS)
1973-01-01
Four-volume series of documents has been prepared as standard reference. Each volume may be used separately and covers wire and cable preparation as well as harness fabrication and installation. Series should be useful addition to libraries of manufacturers of electrical and electronic equipment.
Recommendations for selecting drug-drug interactions for clinical decision support.
Tilson, Hugh; Hines, Lisa E; McEvoy, Gerald; Weinstein, David M; Hansten, Philip D; Matuszewski, Karl; le Comte, Marianne; Higby-Baker, Stefanie; Hanlon, Joseph T; Pezzullo, Lynn; Vieson, Kathleen; Helwig, Amy L; Huang, Shiew-Mei; Perre, Anthony; Bates, David W; Poikonen, John; Wittie, Michael A; Grizzle, Amy J; Brown, Mary; Malone, Daniel C
2016-04-15
Recommendations for including drug-drug interactions (DDIs) in clinical decision support (CDS) are presented. A conference series was conducted to improve CDS for DDIs. A work group consisting of 20 experts in pharmacology, drug information, and CDS from academia, government agencies, health information vendors, and healthcare organizations was convened to address (1) the process to use for developing and maintaining a standard set of DDIs, (2) the information that should be included in a knowledge base of standard DDIs, (3) whether a list of contraindicated drug pairs can or should be established, and (4) how to more intelligently filter DDI alerts. We recommend a transparent, systematic, and evidence-driven process with graded recommendations by a consensus panel of experts and oversight by a national organization. We outline key DDI information needed to help guide clinician decision-making. We recommend judicious classification of DDIs as contraindicated and more research to identify methods to safely reduce repetitive and less-relevant alerts. An expert panel with a centralized organizer or convener should be established to develop and maintain a standard set of DDIs for CDS in the United States. The process should be evidence driven, transparent, and systematic, with feedback from multiple stakeholders for continuous improvement. The scope of the expert panel's work should be carefully managed to ensure that the process is sustainable. Support for research to improve DDI alerting in the future is also needed. Adoption of these steps may lead to consistent and clinically relevant content for interruptive DDIs, thus reducing alert fatigue and improving patient safety. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Geologic map of the Valjean Hills 7.5' quadrangle, San Bernardino County, California
Calzia, J.P.; Troxel, Bennie W.; digital database by Raumann, Christian G.
2003-01-01
FGDC-compliant metadata for the ARC/INFO coverages. The Correlation of Map Units and Description of Map Units is in the editorial format of USGS Geologic Investigations Series (I-series) maps but has not been edited to comply with I-map standards. Within the geologic map data package, map units are identified by standard geologic map criteria such as formation-name, age, and lithology. Even though this is an Open-File Report and includes the standard USGS Open-File disclaimer, the report closely adheres to the stratigraphic nomenclature of the U.S. Geological Survey. Descriptions of units can be obtained by viewing or plotting the .pdf file (3 above) or plotting the postscript file (2 above).
Rapid variability of Antarctic Bottom Water transport into the Pacific Ocean inferred from GRACE
NASA Astrophysics Data System (ADS)
Mazloff, Matthew R.; Boening, Carmen
2016-04-01
Air-ice-ocean interactions in the Antarctic lead to formation of the densest waters on Earth. These waters convect and spread to fill the global abyssal oceans. The heat and carbon storage capacity of these water masses, combined with their abyssal residence times that often exceed centuries, makes this circulation pathway the most efficient sequestering mechanism on Earth. Yet monitoring this pathway has proven challenging due to the nature of the formation processes and the depth of the circulation. The Gravity Recovery and Climate Experiment (GRACE) gravity mission is providing a time series of ocean mass redistribution and offers a transformative view of the abyssal circulation. Here we use the GRACE measurements to infer, for the first time, a 2003-2014 time series of Antarctic Bottom Water export into the South Pacific. We find this export highly variable, with a standard deviation of 1.87 sverdrup (Sv) and a decorrelation timescale of less than 1 month. A significant trend is undetectable.
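To make the two statistics quoted above concrete, the Python sketch below computes the standard deviation and a simple e-folding decorrelation timescale of a monthly transport anomaly series. The AR(1) generator and the 1.87 Sv scaling are assumptions for illustration; this is not the GRACE-derived series itself.

```python
import numpy as np

# Minimal sketch (synthetic transport series): standard deviation and an
# e-folding decorrelation timescale from the lagged autocorrelation.
# Monthly sampling and AR(1) behaviour are assumed.
rng = np.random.default_rng(5)
n, phi = 144, 0.3                                   # 12 years of monthly values
q = np.empty(n)
q[0] = rng.standard_normal()
for k in range(1, n):
    q[k] = phi * q[k - 1] + rng.standard_normal()
q = 1.87 * q / q.std()                              # scale to ~1.87 Sv anomalies

def autocorr(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x) if lag else 1.0

acf = np.array([autocorr(q, L) for L in range(12)])
decorr = np.argmax(acf < 1 / np.e)                  # first lag below 1/e (months)
print(f"std = {q.std(ddof=1):.2f} Sv, decorrelation ~ {decorr} month(s)")
```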
Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques.
Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; De Gaetani, Carlo Iapige; Pinto, Livio; Tornatore, Vincenza
2018-03-02
Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is largely due to recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D'Arborea (Cantoniera) dam, in Sardinia. Time series of pendulum and GNSS data over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit both pendulum and GNSS data, with standard deviations of the residuals smaller than one millimeter. These encouraging results led to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description of dam displacements, in both space and time, than that based on pendulum observations.
Picheny, Victor; Trépos, Ronan; Casadebaig, Pierre
2017-01-01
Accounting for interannual climatic variations is a well-known issue for simulation-based studies of environmental systems. It often requires intensive sampling (e.g., averaging the simulation outputs over many climatic series), which hinders many sequential processes, in particular optimization algorithms. We propose here an approach based on subset selection in a large basis of climatic series, using an ad-hoc similarity function and clustering. A non-parametric reconstruction technique is introduced to estimate accurately the distribution of the output of interest using only the subset sampling. The proposed strategy is non-intrusive and generic (i.e., transposable to most models with climatic data inputs), and can be combined with most "off-the-shelf" optimization solvers. We apply our approach to sunflower ideotype design using the crop model SUNFLO. The underlying optimization problem is formulated as a multi-objective one to account for risk-aversion. Our approach achieves good performance even for limited computational budgets, significantly outperforming standard strategies. PMID:28542198
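The subset-selection step can be sketched generically as clustering a basis of climatic series and keeping one representative per cluster. The Python example below uses simple summary features and k-means; the features, the synthetic series, and the cluster count are assumptions, not the authors' ad-hoc similarity function.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Minimal sketch of the subset-selection idea: cluster a basis of climatic
# series on simple summary features and keep one representative per cluster.
rng = np.random.default_rng(6)
n_series, n_days = 200, 365
basis = (rng.normal(20, 5, (n_series, 1))
         + 8 * np.sin(2 * np.pi * np.arange(n_days) / 365)
         + rng.normal(0, 2, (n_series, n_days)))     # synthetic daily temperature series

# features: mean, std, and summer mean of each series (assumed similarity space)
feats = np.column_stack([basis.mean(1), basis.std(1), basis[:, 150:240].mean(1)])
feats = (feats - feats.mean(0)) / feats.std(0)

k = 10
centroids, labels = kmeans2(feats, k, minit='++')

subset = []
for c in range(k):
    members = np.where(labels == c)[0]
    if members.size:                                  # skip empty clusters
        d = np.linalg.norm(feats[members] - centroids[c], axis=1)
        subset.append(int(members[np.argmin(d)]))    # series closest to the centroid
print("representative series indices:", subset)
```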
Mazzini, Virginia; Craig, Vincent S J
2017-10-01
The importance of electrolyte solutions cannot be overstated. Beyond the ionic strength of electrolyte solutions, the specific nature of the ions present is vital in controlling a host of properties. Therefore, ion specificity is fundamentally important in physical chemistry, engineering and biology. The observation that the strengths of the effect of ions often follow well-established series suggests that a single predictive and quantitative description of specific-ion effects covering a wide range of systems is possible. Such a theory would revolutionise applications of physical chemistry from polymer precipitation to drug design. Current approaches to understanding specific-ion effects involve consideration of the ions themselves, the solvent and relevant interfaces and the interactions between them. Here we investigate the specific-ion effects trends of standard partial molar volumes and electrostrictive volumes of electrolytes in water and eleven non-aqueous solvents. We choose these measures as they relate to bulk properties at infinite dilution and are therefore the simplest electrolyte systems. This is done to test the hypothesis that the ions alone exhibit a specific-ion effect series that is independent of the solvent and unrelated to surface properties. The specific-ion effects trends of standard partial molar volumes and normalised electrostrictive volumes examined in this work show a fundamental ion-specific series that is reproduced across the solvents, which is the Hofmeister series for anions and the reverse lyotropic series for cations, supporting the hypothesis. This outcome is important in demonstrating that ion specificity is observed at infinite dilution and demonstrates that the complexity observed in the manifestation of specific-ion effects in a very wide range of systems is due to perturbations of solvent, surfaces and concentration on the underlying fundamental series. This knowledge will guide a general understanding of specific-ion effects and assist in the development of a quantitative predictive theory of ion specificity.
Web service activities at the IRIS DMC to support federated and multidisciplinary access
NASA Astrophysics Data System (ADS)
Trabant, Chad; Ahern, Timothy K.
2013-04-01
At the IRIS Data Management Center (DMC) we have developed a suite of web service interfaces to access our large archive of, primarily seismological, time series data and related metadata. The goals of these web services include providing: a) next-generation and easily used access interfaces for our current users, b) access to data holdings in a form usable for non-seismologists, c) programmatic access to facilitate integration into data processing workflows and d) a foundation for participation in federated data discovery and access systems. To support our current users, our services provide access to the raw time series data and metadata or conversions of the raw data to commonly used formats. Our services also support simple, on-the-fly signal processing options that are common first steps in many workflows. Additionally, high-level data products derived from raw data are available via service interfaces. To support data access by researchers unfamiliar with seismic data we offer conversion of the data to broadly usable formats (e.g. ASCII text) and data processing to convert the data to Earth units. By their very nature, web services are programmatic interfaces. Combined with ubiquitous support for web technologies in programming & scripting languages and support in many computing environments, web services are very well suited for integrating data access into data processing workflows. As programmatic interfaces that can return data in both discipline-specific and broadly usable formats, our services are also well suited for participation in federated and brokered systems either specific to seismology or multidisciplinary. Working within the International Federation of Digital Seismograph Networks, the DMC collaborated on the specification of standardized web service interfaces for use at any seismological data center. These data access interfaces, when supported by multiple data centers, will form a foundation on which to build discovery and access mechanisms for data sets spanning multiple centers. To promote the adoption of these standardized services the DMC has developed portable implementations of the software needed to host these interfaces, minimizing the work required at each data center. Within the COOPEUS project framework, the DMC is working with EU partners to install web services implementations at multiple data centers in Europe.
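As a concrete example of the programmatic access these services enable, the sketch below uses the community ObsPy FDSN client to request waveforms and metadata from the IRIS DMC and performs the common first steps (detrending and conversion to Earth units). The station, channel, and time window are illustrative choices and may not return data exactly as shown.

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

# Minimal sketch of programmatic access through the standardized FDSN web
# services, using the ObsPy client (station/channel/time are illustrative).
client = Client("IRIS")
t0 = UTCDateTime("2013-04-01T00:00:00")

# fetch one hour of broadband vertical data and the matching metadata
stream = client.get_waveforms(network="IU", station="ANMO", location="00",
                              channel="BHZ", starttime=t0, endtime=t0 + 3600)
inventory = client.get_stations(network="IU", station="ANMO", level="response")

# a common "first step" workflow: detrend, then convert counts to Earth units
stream.detrend("linear")
stream.remove_response(inventory=inventory, output="VEL")
print(stream)
```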
CMOS compatible fabrication process of MEMS resonator for timing reference and sensing application
NASA Astrophysics Data System (ADS)
Huynh, Duc H.; Nguyen, Phuong D.; Nguyen, Thanh C.; Skafidas, Stan; Evans, Robin
2015-12-01
Frequency reference and timing control devices are ubiquitous in electronic applications, and each such device requires at least one resonator. Currently, electromechanical resonators such as crystal and ceramic resonators are the standard choice, and this will probably remain true for many more years. However, current market demands for small, low-power, cheap, and reliable products have exposed many limitations of this type of resonator. They cannot be integrated into standard CMOS (complementary metal-oxide-semiconductor) ICs (integrated circuits) because of material and fabrication process incompatibility. Currently, these devices are off-chip and require external circuitry to interface with the ICs. This configuration significantly increases the overall size and cost of the entire electronic system. In addition, extra external connections, especially at high frequency, can degrade the performance of the entire system through signal degradation and parasitic effects. Furthermore, because of their off-chip packaging, these devices are quite expensive, particularly for high-frequency, high-quality-factor parts. To address these issues, researchers have been intensively studying an alternative type of resonator based on the emerging MEMS (micro-electro-mechanical systems) technology. Recent progress in this field has demonstrated a MEMS resonator with a resonant frequency of 2.97 GHz and a quality factor (measured in vacuum) of 42900. Despite this achievement, the prototype is still far from full integration into a CMOS system because of fabrication process incompatibility and its high series motional impedance. Fully integrated MEMS resonators have been demonstrated, but at lower frequency and quality factor. We propose a design and fabrication process for a low-cost, high-frequency, high-quality MEMS resonator that can be integrated into a standard CMOS IC. This device is expected to operate in the hundreds-of-MHz frequency range, with a quality factor surpassing 10000 and a series motional impedance low enough to be matched to conventional systems without enormous effort. Such a MEMS resonator can be used in the design of many blocks of wireless and RF (radio frequency) systems, such as low-phase-noise oscillators, band-pass filters, and power amplifiers, and in many sensing applications.
Inquiry and Learning: Realizing Science Standards in the Classroom. The Thinking Series.
ERIC Educational Resources Information Center
Layman, John W.; And Others
This book provides a focused, extended response to the question How does standards-based science instruction look and feel in the classroom? This question is addressed by considering two related issues: (1) "How can teachers cultivate the quality of scientific thinking and understanding defined by standards?" and (2) "How can…
Spotlight on General Music: Teaching Toward the Standards
ERIC Educational Resources Information Center
Rowman & Littlefield Education, 2005
2005-01-01
General music teachers at all levels--elementary, middle school, and high school--will find ideas, suggestions, and lesson plans for teaching to the National Standards in this new addition to the popular Spotlight series. It includes sections on teaching each of the nine standards, as well as chapters about secondary general music, assessment, and…
Wang, Wenguang; Ma, Xiaoli; Guo, Xiaoyu; Zhao, Mingbo; Tu, Pengfei; Jiang, Yong
2015-09-18
In order to solve the bottleneck of reference standards shortage for comprehensive quality control of traditional Chinese medicines (TCMs), a series of strategies, including one single reference standard to determine multi-compounds (SSDMC), quantitative analysis by standardized reference extract (QASRE), and quantitative nuclear magnetic resonance spectroscopy (qNMR) were proposed, and Mahoniae Caulis was selected as an example to develop and validate these methods for simultaneous determination of four alkaloids, columbamine, jatrorrhizine, palmatine, and berberine. Comprehensive comparisons among these methods and with the conventional external standard method (ESM) were carried out. The relative expanded uncertainty of measurement was firstly used to compare their credibility. The results showed that all these three new developed methods can accurately accomplish the quantification by using only one purified reference standard, but each of them has its own advantages and disadvantages as well as the specific application scope, which were also discussed in detail in this paper. Copyright © 2015 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Neri, Rebecca; Lozano, Maritza; Chang, Sandy; Herman, Joan
2016-01-01
New college and career ready standards (CCRS) have established more rigorous expectations of learning for all learners, including English learner (EL) students, than what was expected in previous standards. A common feature in these new content-area standards, such as the Common Core State Standards in English language arts and mathematics and the…
ERIC Educational Resources Information Center
NASA Educator Resource Center at Marshall Space Flight Center, 2007
2007-01-01
The Human Exploration Project (HEP) units have several common characteristics. All units: (1) Are based upon the Technological Literacy standards (ITEA, 2000/2002); (2) Coordinate with Science (AAAS, 1993) and Mathematics standards (NCTM, 2000); (3) Utilize a standards-based development approach (ITEA, 2005); (4) Stand alone and coordinate with…
Local contrast-enhanced MR images via high dynamic range processing.
Chandra, Shekhar S; Engstrom, Craig; Fripp, Jurgen; Neubert, Ales; Jin, Jin; Walker, Duncan; Salvado, Olivier; Ho, Charles; Crozier, Stuart
2018-09-01
To develop a local contrast-enhancing and feature-preserving high dynamic range (HDR) image processing algorithm for multichannel and multisequence MR images of multiple body regions and tissues, and to evaluate its performance for structure visualization, bias field (correction) mitigation, and automated tissue segmentation. A multiscale-shape and detail-enhancement HDR-MRI algorithm is applied to data sets of multichannel and multisequence MR images of the brain, knee, breast, and hip. In multisequence 3T hip images, agreement between automatic cartilage segmentations and corresponding synthesized HDR-MRI series were computed for mean voxel overlap established from manual segmentations for a series of cases. Qualitative comparisons between the developed HDR-MRI and standard synthesis methods were performed on multichannel 7T brain and knee data, and multisequence 3T breast and knee data. The synthesized HDR-MRI series provided excellent enhancement of fine-scale structure from multiple scales and contrasts, while substantially reducing bias field effects in 7T brain gradient echo, T1 and T2 breast images and 7T knee multichannel images. Evaluation of the HDR-MRI approach on 3T hip multisequence images showed superior outcomes for automatic cartilage segmentations with respect to manual segmentation, particularly around regions with hyperintense synovial fluid, across a set of 3D sequences. The successful combination of multichannel/sequence MR images into a single fused HDR-MR image format provided consolidated visualization of tissues within one omnibus image, enhanced definition of thin, complex anatomical structures in the presence of variable or hyperintense signals, and improved tissue (cartilage) segmentation outcomes. © 2018 International Society for Magnetic Resonance in Medicine.
MIMO model of an interacting series process for Robust MPC via System Identification.
Wibowo, Tri Chandra S; Saad, Nordin
2010-07-01
This paper discusses empirical modeling using system identification techniques, with a focus on an interacting series process. The study is carried out experimentally using a gaseous pilot plant whose dynamics exhibit the typical behaviour of an interacting series process. Three practical approaches are investigated and their performances evaluated. The models developed are also examined in a real-time implementation of linear model predictive control. The selected model is able to reproduce the main dynamic characteristics of the plant in open loop and produces zero steady-state error in the closed-loop control system. Several issues concerning the identification process and the construction of a MIMO state space model for an interacting series process are discussed. 2010 ISA. Published by Elsevier Ltd. All rights reserved.
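For readers unfamiliar with black-box identification, the sketch below shows one generic approach: fitting a first-order multivariable ARX model to input/output records by least squares. The synthetic 2x2 plant, parameter values, and model order are assumptions for illustration; they are not the pilot plant or any of the three approaches evaluated in the paper.

```python
import numpy as np

# Generic sketch of black-box identification for a 2x2 interacting series
# process: a first-order ARX model y[k] = A*y[k-1] + B*u[k-1] fitted by
# least squares from input/output records (synthetic data, assumed plant).
rng = np.random.default_rng(7)
A_true = np.array([[0.8, 0.1], [0.3, 0.7]])          # interaction via off-diagonal terms
B_true = np.array([[0.5, 0.0], [0.2, 0.4]])

n = 2000
u = rng.standard_normal((n, 2))                      # excitation inputs
y = np.zeros((n, 2))
for k in range(1, n):
    y[k] = A_true @ y[k - 1] + B_true @ u[k - 1] + 0.01 * rng.standard_normal(2)

# regression: y[k] = [y[k-1], u[k-1]] @ theta, with theta stacking A^T and B^T
X = np.hstack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
A_hat, B_hat = theta[:2].T, theta[2:].T
print("A estimate:\n", np.round(A_hat, 3))
print("B estimate:\n", np.round(B_hat, 3))
```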
Rodgers, Cheryl; Bertini, Vanessa; Conway, Mary Ashe; Crosty, Ashley; Filice, Angela; Herring, Ruth Anne; Isbell, Julie; Lown DrPH, E Anne; Miller, Kristina; Perry, Margaret; Sanborn, Paula; Spreen, Nicole; Tena, Nancy; Winkle, Cindi; Darling, Joan; Slaven, Abigail; Sullivan, Jeneane; Tomlinson, Kathryn M; Windt, Kate; Hockenberry, Marilyn; Landier, Wendy
2018-03-01
Parents of children newly diagnosed with cancer must acquire new knowledge and skills in order to safely care for their child at home. Institutional variation exists in the methods and content used by nurses in providing the initial education. The goal of this project was to develop a checklist, standardized across institutions, to guide nursing education provided to parents of children newly diagnosed with cancer. A team of 21 members (19 nurses and 2 parent advocates) used current hospital educational checklists, expert consensus recommendations, and a series of iterative activities and discussions to develop one standardized checklist. The final checklist specifies primary topics that are essential to teach prior to the initial hospital discharge, secondary topics that should be discussed within the first month after the cancer diagnosis, and tertiary topics that should be discussed prior to completion of therapy. This checklist is designed to guide education and will set the stage for future studies to identify effective teaching strategies that optimize the educational process for parents of children newly diagnosed with cancer.
NASA Astrophysics Data System (ADS)
Wang, J.; Song, J.; Gao, M.; Zhu, L.
2014-02-01
The trans-boundary area between northern China, Mongolia, and eastern Siberia of Russia is a continuous geographical region in northeastern Asia. Many common issues in this region need to be addressed based on a uniform resources and environmental data warehouse. Based on the practice of a joint scientific expedition, this paper presents a data integration solution comprising three steps: drawing up data collection standards and specifications, data reorganization and processing, and data warehouse design and development. A series of data collection standards and specifications covering more than 10 domains was drawn up first. According to this uniform standard, 20 regional-scale resources and environmental survey databases and 11 in-situ observation databases were reorganized and integrated. The North East Asia Resources and Environmental Data Warehouse was designed with four layers: a resources layer, a core business logic layer, an internet interoperation layer, and a web portal layer. A prototype of the data warehouse has been developed and initially deployed, and all the integrated data for this area can be accessed online.
NASA Astrophysics Data System (ADS)
Maling, George C.
2005-09-01
Bill Lang joined IBM in the late 1950s with a mandate from Thomas Watson Jr. himself to establish an acoustics program at IBM. Bill created the facilities in Poughkeepsie, developed the local program, and was the leader in having other IBM locations with development and manufacturing responsibilities construct facilities and hire staff under the Interdivisional Liaison Program. He also directed IBM's acoustics technology program. In the mid-1960s, he led an IEEE standards group in Audio and Electroacoustics and, with the help of James Cooley, Peter Welch, and others, introduced the fast Fourier transform to the acoustics community. He was the convenor of ISO TC 43 SC1 WG6, which began writing the 3740 series of standards in the 1970s. He suggested promoting professionalism in noise control engineering and, through meetings with Leo Beranek and others, led the founding of INCE/USA in 1971. He was also a leader of the team that founded International INCE in 1974, and he served as president from 1988 until 1999.
Pitot tube calculations with a TI-59
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, K.
Industrial plant and stack analysis dictates that flow measurements in ducts be accurate. This is usually accomplished by running a traverse with a pitot tube across the duct or flue. A traverse is a series of measurements taken at predetermined points across the duct. The values of these measurements are calculated into point flow rates and averaged. A program for the Texas Instruments TI-59 programmable calculator follows. The program will perform calculations for an infinite number of test points, both with the standard (combined impact type) pitot tube and the S-type (combined reverse type). The type of tube is selected by inputting an indicating value that triggers a flag in the program. To use the standard pitot tube, a 1 is input into key E. When the S-type is used, a zero is input into key E. The program output will note if the S-type has been used. Since most process systems are not at standard conditions (32 °F, 1 atm), the program will take this into account.
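The underlying traverse calculation is the same regardless of the calculator used. The Python sketch below computes point velocities from velocity-head readings and gas density and averages them over the traverse; the readings, density, duct area, and pitot coefficients (about 0.99 for the standard impact type, about 0.84 for the S-type) are typical textbook values, not those of the TI-59 program.

```python
import math

# Minimal sketch of the traverse calculation (assumed values): point velocities
# from pitot differential pressures, corrected for gas density, then averaged.
def point_velocity(dp_in_h2o, rho_lb_ft3, cp):
    """Velocity (ft/min) from velocity head in inches of water and density in lb/ft^3."""
    dp_lb_ft2 = dp_in_h2o * 5.2023            # 1 in. H2O = 5.2023 lbf/ft^2
    v_ft_s = cp * math.sqrt(2.0 * 32.174 * dp_lb_ft2 / rho_lb_ft3)
    return v_ft_s * 60.0

readings = [0.42, 0.55, 0.61, 0.58, 0.47]     # in. H2O at the traverse points (assumed)
rho = 0.0662                                   # lb/ft^3, flue gas at process conditions (assumed)
cp = 0.84                                      # S-type tube coefficient (typical value)

velocities = [point_velocity(dp, rho, cp) for dp in readings]
v_avg = sum(velocities) / len(velocities)
duct_area_ft2 = 12.0                           # assumed duct cross-section
print(f"average velocity {v_avg:,.0f} ft/min, volumetric flow {v_avg * duct_area_ft2:,.0f} ft^3/min")
```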
A Lifecycle Approach to Brokered Data Management for Hydrologic Modeling Data Using Open Standards.
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Booth, N.; Kunicki, T.; Walker, J.
2012-12-01
The U.S. Geological Survey Center for Integrated Data Analytics has formalized an information management-architecture to facilitate hydrologic modeling and subsequent decision support throughout a project's lifecycle. The architecture is based on open standards and open source software to decrease the adoption barrier and to build on existing, community supported software. The components of this system have been developed and evaluated to support data management activities of the interagency Great Lakes Restoration Initiative, Department of Interior's Climate Science Centers and WaterSmart National Water Census. Much of the research and development of this system has been in cooperation with international interoperability experiments conducted within the Open Geospatial Consortium. Community-developed standards and software, implemented to meet the unique requirements of specific disciplines, are used as a system of interoperable, discipline specific, data types and interfaces. This approach has allowed adoption of existing software that satisfies the majority of system requirements. Four major features of the system include: 1) assistance in model parameter and forcing creation from large enterprise data sources; 2) conversion of model results and calibrated parameters to standard formats, making them available via standard web services; 3) tracking a model's processes, inputs, and outputs as a cohesive metadata record, allowing provenance tracking via reference to web services; and 4) generalized decision support tools which rely on a suite of standard data types and interfaces, rather than particular manually curated model-derived datasets. Recent progress made in data and web service standards related to sensor and/or model derived station time series, dynamic web processing, and metadata management are central to this system's function and will be presented briefly along with a functional overview of the applications that make up the system. As the separate pieces of this system progress, they will be combined and generalized to form a sort of social network for nationally consistent hydrologic modeling.
ERIC Educational Resources Information Center
Mansfield, Bob, Ed.
This is the third publication in the European Training Foundation's (ETF's) series of manuals designed to support development of vocational education and training (VET) standards. This volume looks at ways in which VET standards are linked to labor market demands and how relevant VET standards are to the needs of employment in a market economy.…
ERIC Educational Resources Information Center
Ji, Chang; Boisvert, Susanne M.; Arida, Ann-Marie C.; Day, Shannon E.
2008-01-01
An internal standard method applicable to undergraduate instrumental analysis or environmental chemistry laboratory has been designed and tested to determine the Henry's law constants for a series of alkyl nitriles. In this method, a mixture of the analytes and an internal standard is prepared and used to make a standard solution (organic solvent)…
Tests on thirteen navy type model propellers
NASA Technical Reports Server (NTRS)
Durand, W F
1927-01-01
The tests on these model propellers were undertaken for the purpose of determining the performance coefficients and characteristics for certain selected series of propellers of form and type as commonly used in recent navy designs. The first series includes seven propellers of pitch ratio varying by 0.10 to 1.10, the area, form of blade, thickness, etc., representing an arbitrary standard propeller which had shown good results. The second series covers changes in thickness of blade section, other things equal, and the third series, changes in blade area, other things equal. These models are all of 36-inch diameter. Propellers A to G form the series on pitch ratio; C, N, I, and J the series on thickness of section; and K, M, C, and L the series on area. (author)
Bai, Ru-feng; Ma, Shu-hua; Zhang, Hai-dong; Chang, Lin; Zhang, Zhong; Liu, Li; Zhang, Feng-qin; Guo, Zhao-ming; Shi, Mei-sen
2014-03-01
A block of an injury instrument is sometimes left in a wound, and the suspect instrument can be discriminated by comparing it with the retained block through elemental analysis. In this study, three brands (Shibazi, Zhangxiaoquan, Qiaoxifu) of kitchen knives, each with forged, chop, and slice application series, were analyzed by inductively coupled plasma atomic emission spectroscopy (ICP-AES) and infrared absorption to investigate the type and number of elements, and the reference range, used for comparison. The results show that when one or more elements are taken as the discriminative criterion, together with a 5% relative standard deviation (RSD) as the reference range, all the samples could be distinguished among different series. Furthermore, within the same series, the discriminative capability reached up to 88.57% for all samples. In addition, elements with high content, such as Cr, Mn, and C, were useful for discriminating among different series, and trace elements, such as Ni, Si, and Cu, were useful within the same series. However, in practice, it is necessary to evaluate the accuracy of the method with a Standard Reference Material (SRM) before an examination is performed.
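The comparison rule described above reduces to a simple range check. In the hedged sketch below, a recovered block is matched to a candidate knife only if every measured element falls within the knife's mean ± 5% RSD reference range; the element list and all numeric values are invented for illustration.

```python
import numpy as np

# Minimal sketch of the comparison rule (illustrative numbers only): a recovered
# block is consistent with a knife only if every measured element falls within
# the knife's mean +/- 5% relative standard deviation reference range.
elements = ["Cr", "Mn", "Ni", "Si", "Cu"]
knives = {                                       # mean elemental content (wt%), assumed values
    "Shibazi-chop": np.array([14.2, 0.45, 0.12, 0.38, 0.05]),
    "Zhangxiaoquan-slice": np.array([13.1, 0.60, 0.20, 0.30, 0.08]),
}
block = np.array([14.0, 0.46, 0.12, 0.39, 0.05])  # fragment recovered from the wound (assumed)

rsd = 0.05                                        # 5% RSD reference range
for name, mean in knives.items():
    lower, upper = mean * (1 - rsd), mean * (1 + rsd)
    match = np.all((block >= lower) & (block <= upper))
    print(f"{name:22s} consistent with block: {bool(match)}")
```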
Sweet, W.C.; Ethington, Raymond L.; Harris, A.G.
2005-01-01
Ranges of conodonts in stratigraphic sections at five localities in the Monitor and Antelope ranges of central Nevada are used graphically to assemble a standard reference section for the lower Middle Ordovician Whiterockian Series. The base of the series is officially 0.3 m above the base of the Antelope Valley Limestone in the stratotype in Whiterock Canyon (Monitor Range). The top is the level at which Baltoniodus gerdae makes a brief appearance in an exposure of the Copenhagen Formation on the flanks of Hill 8308 in the western Antelope Range. Graphic compilation of the sections considered in this report also indicates that a level correlative with the base of the Whiterockian Series in the stratotype section is 66 m above the base of the Antelope Valley Limestone in its de facto type section on Martin Ridge in the eastern part of the Monitor Range. Ranges, diversity, and the composition of the conodont faunas differ markedly in lithofacies adjacent to the basal boundary of the series; hence we are unable to identify a single conodont species, in a credible developmental sequence, to serve as biological marker of that boundary.
Influence of skin peeling procedure in allergic contact dermatitis.
Kim, Jung Eun; Park, Hyun Jeong; Cho, Baik Kee; Lee, Jun Young
2008-03-01
The prevalence of allergic contact dermatitis in patients who have previously undergone skin peeling has rarely been studied. We compared the frequency of positive patch test (PT) reactions in a patient group with a history of peeling to that of a control group with no history of peeling. The Korean standard series and a cosmetic series were performed on a total of 262 patients: 62 patients had previously undergone peeling and 200 patients had not. The frequency of positive PT reactions to the Korean standard series was significantly higher in the peeling group than in the control group (P < 0.05, chi-square test). However, the most commonly identified allergens were mostly cosmetic-unrelated allergens. The frequency of positive PT reactions to the cosmetic series in the peeling group was higher than that of the control group, but lacked statistical significance. The frequency (%) of positive PT reactions to the cosmetic series in the high-frequency peel group was higher than that of the low-frequency group, but also lacked statistical significance. It appears that peeling may not generally affect the development of contact sensitization. Further work is required, focusing on large-scale prospective studies that perform PTs before and after peeling.
Interaction of solid organic acids with carbon nanotube field effect transistors
NASA Astrophysics Data System (ADS)
Klinke, Christian; Afzali, Ali; Avouris, Phaedon
2006-10-01
A series of solid organic acids were used to p-dope carbon nanotubes. The extent of doping is shown to depend on the pKa value of the acids. Highly fluorinated carboxylic acids and sulfonic acids are very effective in shifting the threshold voltage and making carbon nanotube field-effect transistors more strongly p-type devices. Weaker acids, such as phosphonic or hydroxamic acids, had less effect. The doping of the devices was accompanied by a reduction of the hysteresis in the transfer characteristics. In-solution doping survives standard fabrication processes and yields p-doped carbon nanotube field-effect transistors with good transport characteristics.
Long-term noise statistics from the Gulf of Mexico
NASA Astrophysics Data System (ADS)
Eller, Anthony I.; Ioup, George E.; Ioup, Juliette W.; Larue, James P.
2003-04-01
Long-term, omnidirectional acoustic noise measurements were conducted in the northeastern Gulf of Mexico during the summer of 2001. These efforts were a part of the Littoral Acoustic Demonstration Center project, Phase I. Initial looks at the noise time series, processed in standard one-third-octave bands from 10 to 5000 Hz, show noise levels that differ substantially from customary deep-water noise spectra. Contributing factors to this highly dynamic noise environment are an abundance of marine mammal emissions and various industrial noises. Results presented here address long-term temporal variability, temporal coherence times, the fluctuation spectrum, and coherence of fluctuations across the frequency spectrum. [Research supported by ONR.]
A strawman SLR program plan for the 1990s
NASA Technical Reports Server (NTRS)
Degnan, John J.
1994-01-01
A series of programmatic and technical goals for the satellite laser ranging (SLR) network are presented. They are: (1) standardize the performance of the global SLR network; (2) improve the geographic distribution of stations; (3) reduce costs of field operations and data processing; (4) expand the 24 hour temporal coverage to better serve the growing constellation of satellites; (5) improve absolute range accuracy to 2 mm at key stations; (6) improve satellite force, radiative propagation, and station motion models and investigate alternative geodetic analysis techniques; (7) support technical intercomparison and the Terrestrial Reference Frame through global collocations; (8) investigate potential synergisms between GPS and SLR.
Encryption key distribution via chaos synchronization
Keuninckx, Lars; Soriano, Miguel C.; Fischer, Ingo; Mirasso, Claudio R.; Nguimdo, Romain M.; Van der Sande, Guy
2017-01-01
We present a novel encryption scheme, wherein an encryption key is generated by two distant complex nonlinear units, forced into synchronization by a chaotic driver. The concept is sufficiently generic to be implemented on either photonic, optoelectronic or electronic platforms. The method for generating the key bitstream from the chaotic signals is reconfigurable. Although derived from a deterministic process, the obtained bit series fulfill the randomness conditions as defined by the National Institute of Standards and Technology test suite. We demonstrate the feasibility of our concept on an electronic delay oscillator circuit and test the robustness against attacks using a state-of-the-art system identification method. PMID:28233876
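One of the simplest checks in that test suite is the monobit (frequency) test, sketched below in Python. The bitstream here comes from the standard library's secure generator as a stand-in for the chaos-synchronization key generator; only the test itself is implemented.

```python
import math
import secrets

# Minimal sketch: the NIST SP 800-22 "monobit frequency" test applied to a key
# bitstream (generated here with the standard library as a stand-in source).
def monobit_p_value(bits):
    """p-value of the frequency (monobit) test; p >= 0.01 passes at the usual level."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)               # map 0/1 -> -1/+1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2.0))

bits = [int(b) for byte in secrets.token_bytes(4096) for b in format(byte, "08b")]
p = monobit_p_value(bits)
print(f"n = {len(bits)} bits, p-value = {p:.3f}, pass = {p >= 0.01}")
```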
Acoustic Doppler Current Profiler Data Processing System manual [ADCP
Cote, Jessica M.; Hotchkiss, Frances S.; Martini, Marinna A.; Denham, Charles R.; revisions by Ramsey, Andree L.; Ruane, Stephen
2000-01-01
This open-file report describes the data processing software currently in use by the U.S. Geological Survey (USGS), Woods Hole Coastal and Marine Science Center (WHCMSC), to process time series of acoustic Doppler current data obtained by Teledyne RD Instruments Workhorse model ADCPs. The Sediment Transport Instrumentation Group (STG) at the WHCMSC has a long-standing commitment to providing scientists high quality oceanographic data published in a timely manner. To meet this commitment, STG has created this software to aid personnel in processing and reviewing data as well as evaluating hardware for signs of instrument malfunction. The output data format for the data is network Common Data Form (netCDF), which meets USGS publication standards. Typically, ADCP data are recorded in beam coordinates. This conforms to the USGS philosophy to post-process rather than internally process data. By preserving the original data quality indicators as well as the initial data set, data can be evaluated and reprocessed for different types of analyses. Beam coordinate data are desirable for internal and surface wave experiments, for example. All the code in this software package is intended to run using the MATLAB program available from The Mathworks, Inc. As such, it is platform independent and can be adapted by the USGS and others for specialized experiments with non-standard requirements. The software is continuously being updated and revised as improvements are required. The most recent revision may be downloaded from: http://woodshole.er.usgs.gov/operations/stg/Pubs/ADCPtools/adcp_index.htm The USGS makes this software available at the user's discretion and responsibility.
DOT National Transportation Integrated Search
2015-11-01
This report captures the production process and the programs and steps used to produce the finance tables and charts published in the Federal Highway Administration's (FHWA's) Highway Statistics Series publication site made available to Congress and ...
NASA Astrophysics Data System (ADS)
Magnuson, Brian
A proof-of-concept software-in-the-loop study is performed to assess the accuracy of predicted net and charge-gaining energy consumption for potential effective use in optimizing powertrain management of hybrid vehicles. With promising results of improving fuel efficiency of a thermostatic control strategy for a series plug-in hybrid-electric vehicle by 8.24%, the route and speed prediction machine learning algorithms are redesigned and implemented for real-world testing in a stand-alone C++ code-base to ingest map data, learn and predict driver habits, and store driver data for fast startup and shutdown of the controller or computer used to execute the compiled algorithm. Speed prediction is performed using a multi-layer, multi-input, multi-output neural network using feed-forward prediction and gradient descent through back-propagation training. Route prediction utilizes a Hidden Markov Model with a recurrent forward algorithm for prediction and multi-dimensional hash maps to store state and state distribution constraining associations between atomic road segments and end destinations. Predicted energy is calculated using the predicted time-series speed and elevation profile over the predicted route and the road-load equation. Testing of the code-base is performed over a known road network spanning 24x35 blocks on the south hill of Spokane, Washington. A large set of training routes is traversed once to add randomness to the route prediction algorithm, and a subset of the training routes (the testing routes) is traversed to assess the accuracy of the net and charge-gaining predicted energy consumption. Each test route is traveled a random number of times with varying speed conditions from traffic and pedestrians to add randomness to speed prediction. Prediction data are stored and analyzed in a post-processing MATLAB script. The aggregated results and analysis of all traversals of all test routes reflect the performance of the Driver Prediction algorithm. The error of average energy gained through charge-gaining events is 31.3% and the error of average net energy consumed is 27.3%. The average delta and average standard deviation of the delta of predicted energy gained through charge-gaining events is 0.639 and 0.601 Wh respectively for individual time-series calculations. Similarly, the average delta and average standard deviation of the delta of the predicted net energy consumed is 0.567 and 0.580 Wh respectively for individual time-series calculations. The average delta and standard deviation of the delta of the predicted speed is 1.60 and 1.15 respectively, also for the individual time-series measurements. Route prediction accuracy is 91%. Overall, test routes are traversed 151 times for a total test distance of 276.4 km.
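The road-load calculation referred to above can be sketched compactly. The Python example below integrates tractive power over a predicted speed and elevation time series to obtain net and potentially charge-gaining energy; the vehicle parameters and the synthetic profiles are assumptions for illustration, not those of the study's series plug-in hybrid.

```python
import numpy as np

# Minimal sketch of the road-load energy calculation over a predicted
# time-series speed and elevation profile (vehicle parameters are assumed).
rho_air, g = 1.2, 9.81
m, cd, area, crr = 1600.0, 0.30, 2.3, 0.010          # kg, -, m^2, - (assumed vehicle)
dt = 1.0                                             # s, time-series step

t = np.arange(0, 300, dt)
v = np.clip(15 + 5 * np.sin(2 * np.pi * t / 120), 0, None)  # predicted speed, m/s
elev = 520 + 0.02 * t                                # predicted elevation, m

a = np.gradient(v, dt)
grade = np.gradient(elev, dt) / np.maximum(v, 0.1)   # sin(theta) ~ climb rate / speed

force = 0.5 * rho_air * cd * area * v**2 + crr * m * g + m * a + m * g * grade
power_w = force * v                                  # tractive power (negative = regen potential)
energy_wh = np.trapz(power_w, dx=dt) / 3600.0
regen_wh = -np.trapz(np.minimum(power_w, 0.0), dx=dt) / 3600.0
print(f"net tractive energy {energy_wh:.1f} Wh, charge-gaining potential {regen_wh:.1f} Wh")
```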
Time Series Model Identification by Estimating Information, Memory, and Quantiles.
1983-07-01
Standards, Sect. D, 68D, 937-951. Parzen, Emanuel (1969) "Multiple time series modeling," Multivariate Analysis - II, edited by P. Krishnaiah, Academic... Krishnaiah, North Holland: Amsterdam, 283-295. Parzen, Emanuel (1979) "Forecasting and Whitening Filter Estimation," TIMS Studies in the Management... principle. Applications of Statistics, P. R. Krishnaiah, ed. North Holland: Amsterdam, 27-41. Box, G. E. P. and Jenkins, G. M. (1970) Time Series Analysis
ERIC Educational Resources Information Center
Reason, Paul L., Comp.; White, Alpheus L., Comp.
1957-01-01
This handbook is the basic guide to financial accounting for local and State school systems in the United States. It is the second in a series of four handbooks in the State Educational Records and Reports Series undertaken at the request of a number of national organizations. Handbook I, "The Common Core of State Educational Information," was…
2016-09-01
Method Scientific Operating Procedure Series: SOP-C (BET). Jonathon Brame and Chris Griggs, Environmental Laboratory, U.S. Army Engineer Research and Development Center, September 2016.
Record, M Thomas; Guinn, Emily; Pegram, Laurel; Capp, Michael
2013-01-01
Understanding how Hofmeister salt ions and other solutes interact with proteins, nucleic acids, other biopolymers and water and thereby affect protein and nucleic acid processes as well as model processes (e.g. solubility of model compounds) in aqueous solution is a longstanding goal of biophysical research. Empirical Hofmeister salt and solute "m-values" (derivatives of the observed standard free energy change for a model or biopolymer process with respect to solute or salt concentration m3) are equal to differences in chemical potential derivatives: m-value = delta(dmu2/dm3) = delta mu23, which quantify the preferential interactions of the solute or salt with the surface of the biopolymer or model system (component 2) exposed or buried in the process. Using the solute partitioning model (SPM), we dissect mu23 values for interactions of a solute or Hofmeister salt with a set of model compounds displaying the key functional groups of biopolymers to obtain interaction potentials (called alpha-values) that quantify the interaction of the solute or salt per unit area of each functional group or type of surface. Interpreted using the SPM, these alpha-values provide quantitative information about both the hydration of functional groups and the competitive interaction of water and the solute or salt with functional groups. The analysis corroborates and quantifies previous proposals that the Hofmeister anion and cation series for biopolymer processes are determined by ion-specific, mostly unfavorable interactions with hydrocarbon surfaces; the balance between these unfavorable nonpolar interactions and often-favorable interactions of ions with polar functional groups determine the series null points. The placement of urea and glycine betaine (GB) at opposite ends of the corresponding series of nonelectrolytes results from the favorable interactions of urea, and unfavorable interactions of GB, with many (but not all) biopolymer functional groups. Interaction potentials and local-bulk partition coefficients quantifying the distribution of solutes (e.g. urea, glycine betaine) and Hofmeister salt ions in the vicinity of each functional group make good chemical sense when interpreted in terms of competitive noncovalent interactions. These interaction potentials allow solute and Hofmeister (noncoulombic) salt effects on protein and nucleic acid processes to be interpreted or predicted, and allow the use of solutes and salts as probes of
River Basin Standards Interoperability Pilot
NASA Astrophysics Data System (ADS)
Pesquer, Lluís; Masó, Joan; Stasch, Christoph
2016-04-01
There are many water information resources and tools in Europe that can be applied to river basin management, but fragmentation and a lack of coordination between countries still exist. The European Commission and the member states have financed several research and innovation projects in support of the Water Framework Directive. Only a few of them use the recently emerging hydrological standards, such as OGC WaterML 2.0. WaterInnEU is a Horizon 2020 project focused on creating a marketplace to enhance the exploitation of EU-funded ICT models, tools, protocols and policy briefs related to water and to establish suitable conditions for new market opportunities based on these offerings. One of WaterInnEU's main goals is to assess the level of standardization and interoperability of these outcomes as a mechanism to integrate ICT-based tools, incorporate open data platforms and generate a palette of interchangeable components that are able to use the water data emerging from the recently proposed open data sharing processes and data models stimulated by initiatives such as the INSPIRE directive. As part of the standardization and interoperability activities in the project, the authors are designing an experiment (RIBASE, the present work) to demonstrate how current ICT-based tools and water data can work in combination with geospatial web services in the Scheldt river basin. The main structure of this experiment, which is the core of the present work, comprises the following steps: extraction of information from river gauge data in OGC WaterML 2.0 format using SOS services (preferably compliant with the OGC SOS 2.0 Hydrology Profile Best Practice); flood modelling using a WPS 2.0, with WaterML 2.0 data and weather forecast models as input; evaluation of the applicability of Sensor Notification Services in water emergencies; and open distribution of the input and output data as OGC web services (WaterML / WCS / WFS), with visualization utilities (WMS). The architecture tests the combination of gauge data in a WPS that is triggered by a meteorological alert. The data is translated into the OGC WaterML 2.0 time series data format and ingested into a SOS 2.0 service. SOS data is visualized in a SOS client that is able to handle time series. The meteorological forecast data (under the supervision of an operator manipulating the WPS user interface), together with the WaterML 2.0 time series and terrain data, is input to a flood modelling algorithm. The WPS is able to produce flood datasets in the form of coverages that are offered to clients via a WCS 2.0 service or a WMS 1.3 service, and downloaded and visualized by the respective clients. The WPS triggers a notification or an alert that will be monitored from an emergency control response service. Acronyms: AS: Alert Service; ES: Event Service; ICT: Information and Communication Technology; NS: Notification Service; OGC: Open Geospatial Consortium; RIBASE: River Basin Standards Interoperability Pilot; SOS: Sensor Observation Service; WaterML: Water Markup Language; WCS: Web Coverage Service; WMS: Web Map Service; WPS: Web Processing Service
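As an illustration of the first step of the experiment, a WaterML 2.0 time series can be requested from an SOS 2.0 endpoint with a simple key-value-pair GetObservation call; the endpoint URL and identifiers in this sketch are hypothetical:

import requests

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "scheldt_gauge_offering",              # hypothetical offering identifier
    "observedProperty": "water_level",                 # hypothetical observed property
    "responseFormat": "http://www.opengis.net/waterml/2.0",
}
resp = requests.get("https://example.org/sos", params=params, timeout=30)   # hypothetical endpoint
resp.raise_for_status()
waterml_xml = resp.text    # WaterML 2.0 time series document, ready for parsing or SOS ingestion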
ERIC Educational Resources Information Center
Miller, Leslie
2007-01-01
Wondering how to make the study of the immune system and infectious agents more relevant to your students' lives? The online adventure series, Medical Mysteries, can provide the context and motivation. The series combines the drama of television's "CSI" episodes with science to address several of the National Science Education Content Standards.…
78 FR 65489 - Standard Claims and Appeals Forms
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... electronically through eBenefits, he or she is guided through a series of interview-style questions that are... of the interview in eBenefits prompts claimants to answer all pertinent questions in order to obtain... online application through the series of interview questions, and electronically saving the application...
IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING
NASA Technical Reports Server (NTRS)
Roth, D. J.
1994-01-01
IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are other routines, also selected via keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of cursor; display of grey level histogram of image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor, which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1 Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.
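IMAGEP itself is VAX FORTRAN tied to a Grinnell display processor; as a loose modern analogue of two of the listed functions (grey-level histogram and contrast expansion), a NumPy sketch might look like this:

import numpy as np

def grey_histogram(img, levels=256):
    # counts of each grey level in an integer-valued image
    return np.bincount(img.ravel(), minlength=levels)

def contrast_expand(img, low, high, levels=256):
    # linearly stretch grey levels in [low, high] onto the full [0, levels-1] range
    stretched = (img.astype(float) - low) * (levels - 1) / (high - low)
    return np.clip(stretched, 0, levels - 1).astype(np.uint8)

img = np.random.randint(60, 180, size=(64, 64), dtype=np.uint8)   # placeholder image
hist = grey_histogram(img)
enhanced = contrast_expand(img, img.min(), img.max())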
Jing, Helen G; Madore, Kevin P; Schacter, Daniel L
2017-12-01
A critical adaptive feature of future thinking involves the ability to generate alternative versions of possible future events. However, little is known about the nature of the processes that support this ability. Here we examined whether an episodic specificity induction - brief training in recollecting details of a recent experience that selectively impacts tasks that draw on episodic retrieval - (1) boosts alternative event generation and (2) changes one's initial perceptions of negative future events. In Experiment 1, an episodic specificity induction significantly increased the number of alternative positive outcomes that participants generated to a series of standardized negative events, compared with a control induction not focused on episodic specificity. We also observed larger decreases in the perceived plausibility and negativity of the original events in the specificity condition, where participants generated more alternative outcomes, relative to the control condition. In Experiment 2, we replicated and extended these findings using a series of personalized negative events. Our findings support the idea that episodic memory processes are involved in generating alternative outcomes to anticipated future events, and that boosting the number of alternative outcomes is related to subsequent changes in the perceived plausibility and valence of the original events, which may have implications for psychological well-being. Published by Elsevier B.V.
Biofiltration of methanol vapor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shareefdeen, Z.; Baltzis, B.C.; Oh, Youngsook
1993-03-05
Biofiltration of solvent and fuel vapors may offer a cost-effective way to comply with increasingly strict air emission standards. An important step in the development of this technology is to derive and validate mathematical models of the biofiltration process for predictive and scaleup calculations. For the study of methanol vapor biofiltration, an 8-membered bacterial consortium was obtained from methanol-exposed soil. The bacteria were immobilized on solid support and packed into a 5-cm diameter, 60-cm-high column provided with appropriate flowmeters and sampling ports. The solid support was prepared by mixing two volumes of peat with three volumes of perlite particles. Two series of experiments were performed. In the first, the inlet methanol concentration was kept constant while the superficial air velocity was varied from run to run. In the second series, the air flow rate (velocity) was kept constant while the inlet methanol concentration was varied. The unit proved effective in removing methanol at rates up to 112.8 g h⁻¹ m⁻³ packing. A mathematical model has been derived and validated. The model described and predicted experimental results closely. Both experimental data and model predictions suggest that the methanol biofiltration process was limited by oxygen diffusion and methanol degradation kinetics.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1990-01-01
Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
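A minimal sketch of the kinds of fits the standard describes (linear, quadratic, and exponential trend models estimated by least squares, on placeholder data) could look like this:

import numpy as np

t = np.arange(24, dtype=float)                                   # e.g. months
y = 5.0 * np.exp(0.08 * t) + np.random.normal(0, 0.5, t.size)    # placeholder positive time series

lin = np.polyfit(t, y, 1)                    # linear model:      y ~ a*t + b
quad = np.polyfit(t, y, 2)                   # quadratic model:   y ~ a*t^2 + b*t + c
a_exp, b_exp = np.polyfit(t, np.log(y), 1)   # exponential model via log(y) ~ a*t + b

print("linear slope:", lin[0])
print("estimated exponential growth rate:", a_exp)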
Salient features of dependence in daily US stock market indices
NASA Astrophysics Data System (ADS)
Gil-Alana, Luis A.; Cunado, Juncal; de Gracia, Fernando Perez
2013-08-01
This paper deals with the analysis of long range dependence in the US stock market. We focus first on the log-values of the Dow Jones Industrial Average, Standard and Poor's 500 and Nasdaq indices, daily from February 1971 to February 2007. The volatility processes are examined based on the squared and the absolute values of the returns series, and the stability of the parameters across time is also investigated in both the level and the volatility processes. A method that permits us to estimate fractional differencing parameters in the context of structural breaks is applied in this paper. Finally, the “day of the week” effect is examined by looking at the order of integration for each day of the week, providing also a new modeling approach to describe the dependence in this context.
A coarse-grained Monte Carlo approach to diffusion processes in metallic nanoparticles
NASA Astrophysics Data System (ADS)
Hauser, Andreas W.; Schnedlitz, Martin; Ernst, Wolfgang E.
2017-06-01
A kinetic Monte Carlo approach on a coarse-grained lattice is developed for the simulation of surface diffusion processes of Ni, Pd and Au structures with diameters in the range of a few nanometers. Intensity information obtained via standard two-dimensional transmission electron microscopy imaging techniques is used to create three-dimensional structure models as input for a cellular automaton. A series of update rules based on reaction kinetics is defined to allow for a stepwise evolution in time with the aim to simulate surface diffusion phenomena such as Rayleigh breakup and surface wetting. The material flow, in our case represented by the hopping of discrete portions of metal on a given grid, is driven by the attempt to minimize the surface energy, which can be achieved by maximizing the number of filled neighbor cells.
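A much-simplified sketch of a single update in this kind of coarse-grained kinetic Monte Carlo scheme (toy occupancy lattice and an illustrative acceptance rule favouring more filled neighbours, i.e. lower surface energy; nothing here is taken from the paper's actual update rules):

import numpy as np

rng = np.random.default_rng(0)
lattice = (rng.random((20, 20, 20)) < 0.2).astype(int)   # toy occupancy grid, not a TEM-derived model
NEIGH = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def filled_neighbours(lat, idx):
    # number of occupied nearest-neighbour cells (periodic boundaries for simplicity)
    return sum(lat[tuple((np.array(idx) + d) % lat.shape[0])] for d in NEIGH)

def kmc_step(lat, beta_eps=1.0):
    filled = np.argwhere(lat == 1)
    src = tuple(filled[rng.integers(len(filled))])
    dst = tuple((np.array(src) + np.array(NEIGH[rng.integers(6)])) % lat.shape[0])
    if lat[dst] == 1:
        return                                    # target cell occupied: no move
    lat[src] = 0                                  # tentatively remove the hopping unit
    gain = filled_neighbours(lat, dst) - filled_neighbours(lat, src)
    if gain >= 0 or rng.random() < np.exp(beta_eps * gain):
        lat[dst] = 1                              # accept: more bonds, lower surface energy
    else:
        lat[src] = 1                              # reject: restore original configuration

for _ in range(20000):                            # evolve the toy structure
    kmc_step(lattice)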
Three-dimensional printing fiber reinforced hydrogel composites.
Bakarich, Shannon E; Gorkin, Robert; in het Panhuis, Marc; Spinks, Geoffrey M
2014-09-24
An additive manufacturing process that combines digital modeling and 3D printing was used to prepare fiber-reinforced hydrogels in a single-step process. The composite materials were fabricated by selectively patterning a combination of alginate/acrylamide gel precursor solution and an epoxy-based UV-curable adhesive (Emax 904 Gel-SC) with an extrusion printer. UV irradiation was used to cure the two inks into a single composite material. Spatial control of fiber distribution within the digital models allowed for the fabrication of a series of materials with a spectrum of swelling behavior and mechanical properties, with physical characteristics ranging from soft and wet to hard and dry. A comparison with the "rule of mixtures" was used to show that the swollen composite materials adhere to standard composite theory. A prototype meniscus cartilage was prepared to illustrate the potential application in bioengineering.
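The "rule of mixtures" comparison mentioned above amounts to a volume-fraction-weighted average of the two phases; a one-function sketch with placeholder moduli (not values from the paper):

def rule_of_mixtures(vf_fiber, prop_fiber, prop_matrix):
    # upper-bound (Voigt) estimate: volume-fraction-weighted average of the two phases
    return vf_fiber * prop_fiber + (1.0 - vf_fiber) * prop_matrix

# e.g. 30% stiff UV-cured "fiber" phase in a soft hydrogel matrix (moduli are placeholders)
print(rule_of_mixtures(0.30, prop_fiber=1.0e9, prop_matrix=1.0e5), "Pa")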
Pei, Yu-Cheng; Chen, Jean-Lon; Wong, Alice M K; Tseng, Kevin C
2017-01-01
Case series. IV (case series). Robot-assisted therapy for upper limb rehabilitation is an emerging research topic and its design process must integrate engineering, neurological pathophysiology, and clinical needs. This study developed and evaluated the usefulness of a novel rehabilitation device, the MirrorPath, designed for the upper limb rehabilitation of patients with hemiplegic stroke. The process follows Tseng's methodology for innovative product design and development, namely two stages, device development and usability assessment. During the development process, the design was guided by patients' rehabilitation needs as defined by patients and their therapists. The design applied synchronic movement of the bilateral upper limbs, an approach that is compatible with the bilateral movement therapy and proprioceptive neuromuscular facilitation theories. MirrorPath consists of a robotic device that guides upper limb movement linked to a control module containing software controlling the robotic movement. Five healthy subjects were recruited in the pretest, and 4 patients, 4 caregivers, and 4 therapists were recruited in the formal test for usability. All recruited subjects were allocated to the test group and completed the evaluation, and their data were all analyzed. The total system usability scale score obtained from the patients, caregivers, and therapists was 71.8 ± 11.9, indicating a high level of usability and product acceptance. Following a standard development process, we were able to produce a design that meets clinical needs. This low-cost device provides a feasible platform for carrying out robot-assisted bilateral movement therapy of patients with hemiplegic stroke. Identifier: NCT02698605.
NASA Astrophysics Data System (ADS)
Deng, Liansheng; Jiang, Weiping; Li, Zhao; Chen, Hua; Wang, Kaihua; Ma, Yifang
2017-02-01
Higher-order ionospheric (HOI) delays are one of the principal technique-specific error sources in precise global positioning system analysis and have been proposed to become a standard part of precise GPS data processing. In this research, we apply HOI delay corrections to the Crustal Movement Observation Network of China's (CMONOC) data processing (from January 2000 to December 2013) and furnish quantitative results for the effects of HOI on CMONOC coordinate time series. The results for both a regional reference frame and a global reference frame are analyzed and compared to clarify the HOI effects on the CMONOC network. We find that HOI corrections can effectively reduce the semi-annual signals in the northern and vertical components. For sites with lower semi-annual amplitudes, the average decrease in magnitude can reach 30 and 10 % for the northern and vertical components, respectively. The noise amplitudes with HOI corrections and those without HOI corrections are not significantly different. Generally, the HOI effects on CMONOC networks in a global reference frame are less obvious than the results in the regional reference frame, probably because the HOI-induced errors are smaller in comparison to the higher noise levels seen when using a global reference frame. Furthermore, we investigate the combined contributions of environmental loading and HOI effects on the CMONOC stations. The largest loading effects on the vertical displacement are found in the mid- to high-latitude areas. The weighted root mean square differences between the corrected and original weekly GPS height time series of the loading model indicate that the mass loading adequately reduced the scatter on the CMONOC height time series, whereas the results in the global reference frame showed better agreement between the GPS coordinate time series and the environmental loading. When combining the effects of environmental loading and HOI corrections, the results with the HOI corrections reduced the scatter on the observed GPS height coordinates better than when estimated without HOI corrections, and the combined solutions in the regional reference frame indicate greater improvements. Therefore, regional reference frames are recommended to investigate the HOI effects on regional networks.
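A sketch of the weighted root-mean-square (WRMS) reduction metric used above to judge whether a correction reduces scatter in a height series (all inputs below are placeholders):

import numpy as np

def wrms(residuals, sigmas):
    w = 1.0 / np.asarray(sigmas) ** 2
    r = np.asarray(residuals)
    return np.sqrt(np.sum(w * r ** 2) / np.sum(w))

heights = np.random.normal(0, 5.0, 500)       # placeholder weekly height residuals (mm)
correction = 0.6 * heights                    # placeholder loading/HOI correction series
sigmas = np.full(500, 2.0)                    # placeholder formal errors (mm)

reduction = 1.0 - wrms(heights - correction, sigmas) / wrms(heights, sigmas)
print(f"WRMS reduced by {100 * reduction:.1f}%")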
Thermodynamic Mixing Behavior Of F-OH Apatite Crystalline Solutions
NASA Astrophysics Data System (ADS)
Hovis, G. L.
2011-12-01
It is important to establish a thermodynamic data base for accessory minerals and mineral series that are useful in determining fluid composition during petrologic processes. As a starting point for apatite-system thermodynamics, Hovis and Harlov (2010, American Mineralogist 95, 946-952) reported enthalpies of mixing for a F-Cl apatite series. Harlov synthesized all such crystalline solutions at the GFZ-Potsdam using a slow-cooled molten-flux method. In order to expand thermodynamic characterization of the F-Cl-OH apatite system, a new study has been initiated along the F-OH apatite binary. Synthesis of this new series made use of National Institute of Standards and Technology (NIST) 2910a hydroxylapatite, a standard reference material made at NIST "by solution reaction of calcium hydroxide with phosphoric acid." Synthesis efforts at Lafayette College have been successful in producing fluorapatite through ion exchange between hydroxylapatite 2910a and fluorite. In these experiments, a thin layer of hydroxylapatite powder was placed on a polished CaF2 disc (obtained from a supplier of high-purity crystals for spectroscopy), pressed firmly against the disc, then annealed at 750 °C (1 bar) for three days. Longer annealing times did not produce further change in unit-cell dimensions of the resulting fluorapatite, but it is uncertain at this time whether this procedure produces a pure-F end member (chemical analyses to be performed in the near future). It is clear from the unit-cell dimensions, however, that the newly synthesized apatite contains a high percentage of fluorine, probably greater than 90 mol % F. Intermediate compositions for a F-OH apatite series were made by combining 2910a hydroxylapatite powder with the newly synthesized fluorapatite in various proportions, then conducting chemical homogenization experiments at 750 °C on each mixture. X-ray powder diffraction data indicated that these experiments were successful in producing chemically homogeneous intermediate series members, as doubled peaks merged into single diffraction maxima, the latter changing position systematically with bulk composition. All of the resulting F-OH apatite series members have hexagonal symmetry. The "a" unit-cell dimension behaves linearly with composition, and "c" is nearly constant across the series. Unit-cell volume also is linear with F:OH ratio, thus behaving in a thermodynamically ideal manner. Solution calorimetric experiments have been conducted in 20.0 wt % HCl at 50 °C on all series members. Enthalpies of F-OH mixing are nonexistent at F-rich compositions but have small negative values toward the hydroxylapatite end member. There is no enthalpy barrier, therefore, to complete F-OH mixing across the series, indicated as well by the ease of chemical homogenization for intermediate F:OH series members. In addition to the synthetic specimens described above, natural samples of hydroxylapatite, fluorapatite, and chlorapatite have been obtained for study from the U.S. National Museum of Natural History, as well as the American Museum of Natural History (our sincere appreciation to both museums for providing samples). Solution calorimetric results for these samples will be compared with data for the synthetic OH, F, and Cl apatite analogs noted above.
NASA Astrophysics Data System (ADS)
Nardi, F.; Grimaldi, S.; Petroselli, A.
2012-12-01
Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built into Geographic Information Systems (GIS) provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins, paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic model optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series that are routed along the channel using a two-dimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during hydrologic extreme events of channel runoff generation, propagation, and overland flow within the floodplain domain. This physically based model removes the need for synthetic design hyetograph and hydrograph estimation, which constitutes the main source of subjective analysis and uncertainty in standard methods for flood mapping. Selected case studies show the results and performance of the proposed procedure with respect to standard event-based approaches.
Ingram, Jackie; Gaines, Peggy; Kite, Roberta; Morgan, Marcia; Spurling, Sheila; Winsett, Rebecca P
2013-01-01
The purpose of this study was to examine bacterial growth in colonoscopes in a series of graduated shelf times. There is no conclusive evidence on the length of time colonoscopes can be safely stored before requiring redisinfection. Standards for processing scopes after use are described and supported by the professional organizations of gastroenterology and infection control; however, shelf life varies from 3 to 5 days and most recommendations are based on clinical consensus. In this study, four colonoscopes were used in a clinical procedure, underwent automated high-level disinfection with 2.6% buffered glutaraldehyde, and cultured after 3, 5, 7, 14, 21, 28, 42, and 56 days of shelf time. Two investigators collected all the cultures after interrater reliability was established. Cultures were processed in the microbiology laboratory. No medically significant growth was detected at any of the culture points. At Day 14 and Day 42, one of four scopes grew fewer than two colony-forming units of a medically insignificant bacterium. Using professional standards for high-level disinfection growth was suppressed for up to 8 weeks. Further evidence to assess fungal or viral growth is needed to be able to make suggestions for colonoscope shelf life.
ERIC Educational Resources Information Center
Sleeter, Christine E.; Carmona, Judith Flores
2016-01-01
In this second edition of her bestseller, Christine Sleeter and new coauthor Judith Flores Carmona show how educators can learn to teach rich, academically rigorous, multicultural curricula within a standards-based environment. The authors have meticulously updated each chapter to address current changes in education policy and practice. New…
ERIC Educational Resources Information Center
Clements, Douglas H., Ed.; DiBiase, Ann-Marie, Ed.; Sarama, Julie, Ed.
2004-01-01
This book brings together the combined wisdom of a diverse group of experts involved with early childhood mathematics. The book originates from the landmark 2000 Conference on Standards for Pre-kindergarten and Kindergarten Mathematics Education, attended by representatives from almost every state developing standards for young children's…
ERIC Educational Resources Information Center
Burrill, Gail; And Others
The 1989 document, "Curriculum and Evaluation Standards for School Mathematics" (the "Standards"), provides a vision and a framework for revising and strengthening the K-12 mathematics curriculum in North American schools and for evaluating both the mathematics curriculum and students' progress. When completed, it is expected…
Cosmographic analysis with Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando
2018-05-01
The limits of standard cosmography are here revised, addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To figure this out, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which convergence of Chebyshev rational functions is better than standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques, based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.
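As a generic, hedged illustration of the underlying point (a stand-in function, not the paper's cosmographic expansion): a truncated Taylor series degrades far from its expansion point, while a Chebyshev fit over the full interval stays accurate.

import numpy as np
from numpy.polynomial import Chebyshev

f = lambda z: np.log(1.0 + z)                  # stand-in function with a finite Taylor convergence radius
z = np.linspace(0.0, 2.0, 200)

taylor = z - z**2 / 2 + z**3 / 3 - z**4 / 4    # 4th-order Taylor expansion about z = 0
cheb = Chebyshev.fit(z, f(z), deg=4)(z)        # degree-4 Chebyshev fit over the whole interval

print("max |error|, Taylor:   ", np.max(np.abs(taylor - f(z))))
print("max |error|, Chebyshev:", np.max(np.abs(cheb - f(z))))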
Screw-Thread Standards for Federal Services, 1957. Handbook H28 (1957), Part 3
1957-09-01
Topics include mounting threads, photographic equipment threads, ISO metric threads, miscellaneous threads, Class 5 interference-fit threads (trial standard), and wrench… The back matter includes a bibliography on measurement of pitch diameter by means of wires and Appendix 14 on metric screw-thread standards, covering ISO thread profiles, standard series for ISO metric threads, and designations for ISO metric threads.
ERIC Educational Resources Information Center
Simon, Elaine; Foley, Ellen; Passantino, Claire
The focus of this report is the way standards are influencing instruction in Philadelphia classrooms and how the various parts of the system are working together to support standards-driven instruction at the classroom level. The 1996-97 school year was still an early one in the implementation of standards-based instruction in Philadelphia. The…
Revealing Real-Time Emotional Responses: a Personalized Assessment based on Heartbeat Dynamics
NASA Astrophysics Data System (ADS)
Valenza, Gaetano; Citi, Luca; Lanatá, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2014-05-01
Emotion recognition through computational modeling and analysis of physiological signals has been widely investigated in the last decade. Most of the proposed emotion recognition systems require relatively long time series of multivariate records and do not provide accurate real-time characterizations using short time series. To overcome these limitations, we propose a novel personalized probabilistic framework able to characterize the emotional state of a subject through the analysis of heartbeat dynamics exclusively. The study includes thirty subjects presented with a set of standardized images gathered from the International Affective Picture System, alternating levels of arousal and valence. Due to the intrinsic nonlinearity and nonstationarity of the RR interval series, a specific point-process model was devised for instantaneous identification, considering autoregressive nonlinearities up to the third order according to the Wiener-Volterra representation, thus tracking very fast stimulus-response changes. Features from the instantaneous spectrum and bispectrum, as well as the dominant Lyapunov exponent, were extracted and considered as input features to a support vector machine for classification. Results, estimating emotions every 10 seconds, achieve an overall accuracy in recognizing four emotional states based on the circumplex model of affect of 79.29%, with 79.15% on the valence axis and 83.55% on the arousal axis.
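A schematic of the final classification stage only (random placeholder features standing in for the spectral, bispectral, and Lyapunov quantities; this is an illustration, not the authors' pipeline):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))                     # placeholder spectral/bispectral/Lyapunov features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)      # placeholder high/low arousal labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))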
Time Series Spectroscopic and Photometric Observations of the Massive DAV BPM 37093
NASA Astrophysics Data System (ADS)
Nitta, Atsuko; Kepler, S. O.; Chene, Andre–Nicolas; Koester, D.; Provencal, J. L.; Sullivan, D. J.; Chote, Paul; Safeko, Ramotholo; Kanaan, Antonio; Romero, Alejandra; Corti, Mariela; Kilic, Mukremin; Winget, D. E.
2015-06-01
BPM 37093 was the first of only a handful of massive (1.05 +/- 0.05 M⊙; Bergeron 2004; Koester & Allard 2000) white dwarf pulsators discovered (Kanaan et al. 1992). These stars are particularly interesting because the crystallized mass-fraction as a function of mass and temperature is poorly constrained by observation, yet this process adds 1-2 Gyr of uncertainty to the ages of the oldest white dwarf stars observed and hence to the ages of associations that contain them (Abrikosov 1960; Kirzhnits 1960; Salpeter 1961). Last year, we discovered that ESO uses BPM 37093 as a standard star and extracted corresponding spectra from the public archive. The data suggested a large variation in the observed hydrogen line profiles that could potentially be due to pulsations, but the measurement did not reach a detection-quality threshold. To further explore this possibility, we obtained 4 hours of continuous time series spectroscopy of BPM 37093 with Gemini in the northern spring of 2014. We present our preliminary results from these data along with those from the accompanying time series photometric observations we gathered from Mt. John (New Zealand), the South African Astronomical Observatory (SAAO), the Panchromatic Robotic Optical Monitoring and Polarimetry Telescopes (PROMPT) in Chile, and Complejo Astronomico El Leoncito (Argentina) to support the Gemini observations.
A Standard System to Study Vertebrate Embryos
Werneburg, Ingmar
2009-01-01
Staged embryonic series are important as reference for different kinds of biological studies. I summarise problems that occur when using ‘staging tables’ of ‘model organisms’. Investigations of developmental processes in a broad scope of taxa are becoming commonplace. Beginning in the 1990s, methods were developed to quantify and analyse developmental events in a phylogenetic framework. The algorithms associated with these methods are still under development, mainly due to difficulties of using non-independent characters. Nevertheless, the principle of comparing clearly defined newly occurring morphological features in development (events) in quantifying analyses was a key innovation for comparative embryonic research. Up to date no standard was set for how to define such events in a comparative approach. As a case study I compared the external development of 23 land vertebrate species with a focus on turtles, mainly based on reference staging tables. I excluded all the characters that are only identical for a particular species or general features that were only analysed in a few species. Based on these comparisons I defined 104 developmental characters that are common either for all vertebrates (61 characters), gnathostomes (26), tetrapods (3), amniotes (7), or only for sauropsids (7). Characters concern the neural tube, somite, ear, eye, limb, maxillary and mandibular process, pharyngeal arch, eyelid or carapace development. I present an illustrated guide listing all the defined events. This guide can be used for describing developmental series of any vertebrate species or for documenting specimen variability of a particular species. The guide incorporates drawings and photographs as well as consideration of species identifying developmental features such as colouration. The simple character-code of the guide is extendable to further characters pertaining to external and internal morphological, physiological, genetic or molecular development, and also for other vertebrate groups not examined here, such as Chondrichthyes or Actinopterygii. An online database to type in developmental events for different stages and species could be a basis for further studies in comparative embryology. By documenting developmental events with the standard code, sequence heterochrony studies (i.e. Parsimov) and studies on variability can use this broad comparative data set. PMID:19521537
Moran, John L; Solomon, Patricia J
2013-05-24
Statistical process control (SPC), an initiative from the industrial sphere, has recently been applied to health care and public health surveillance. SPC methods assume independent observations, and process autocorrelation has been associated with an increase in false alarm frequency. Monthly mean raw mortality (at hospital discharge) time series, 1995-2009, at the individual intensive care unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for (i) series autocorrelation and seasonality was sought using (partial) autocorrelation ((P)ACF) function displays and classical series decomposition, and (ii) "in-control" status was assessed using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series methods to an exemplar complete ICU series (1995 to end-2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average (SD) raw and expected mortalities ranged from 0.012 (0.113) and 0.013 (0.045) to 0.296 (0.457) and 0.278 (0.247), respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag 40, and 35% had autocorrelation through to lag 40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues.
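A bare-bones EWMA control chart of the kind referred to above (not risk-adjusted; placeholder mortality proportions) can be sketched as:

import numpy as np

def ewma_chart(x, lam=0.2, k=3.0):
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    z = np.empty_like(x)
    z[0] = lam * x[0] + (1 - lam) * mu
    for i in range(1, x.size):
        z[i] = lam * x[i] + (1 - lam) * z[i - 1]
    n = np.arange(1, x.size + 1)
    half_width = k * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * n)))
    return z, mu - half_width, mu + half_width

rates = np.random.normal(0.14, 0.02, 120)        # placeholder monthly mortality proportions
z, lcl, ucl = ewma_chart(rates)
print("out-of-control months:", np.where((z < lcl) | (z > ucl))[0])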
Earth Surface Monitoring with COSI-Corr, Techniques and Applications
NASA Astrophysics Data System (ADS)
Leprince, S.; Ayoub, F.; Avouac, J.
2009-12-01
Co-registration of Optically Sensed Images and Correlation (COSI-Corr) is a software package developed at the California Institute of Technology (USA) for accurate geometrical processing of optical satellite and aerial imagery. Initially developed for the measurement of co-seismic ground deformation using optical imagery, COSI-Corr is now used for a wide range of applications in Earth sciences, which take advantage of the software's capability to co-register, with very high accuracy, images taken from different sensors and acquired at different times. As long as a sensor is supported in COSI-Corr, all images from the supported sensors can be accurately orthorectified and co-registered. For example, it is possible to co-register a series of SPOT images, a series of aerial photographs, as well as to register a series of aerial photographs with a series of SPOT images, etc. Currently supported sensors include the SPOT 1-5, Quickbird, Worldview 1 and Formosat 2 satellites, the ASTER instrument, and frame camera acquisitions from, e.g., aerial surveys or declassified satellite imagery. Potential applications include accurate change detection between multi-temporal and multi-spectral images, and the calibration of pushbroom cameras. In particular, COSI-Corr provides a powerful correlation tool, which allows for accurate estimation of surface displacement. The accuracy depends on many factors (e.g., cloud, snow, and vegetation cover, shadows, temporal changes in general, steadiness of the imaging platform, defects of the imaging system, etc.), but in practice the standard deviation of the measurements obtained from the correlation of multi-temporal images is typically around 1/20 to 1/10 of the pixel size. The software package also includes post-processing tools such as denoising, destriping, and stacking tools to facilitate data interpretation. Examples drawn from current research in, e.g., seismotectonics, glaciology, and geomorphology will be presented. COSI-Corr is developed in IDL (Interactive Data Language), integrated under the user-friendly interface ENVI (Environment for Visualizing Images), and is distributed free of charge for academic research purposes.
Linear and quadratic models of point process systems: contributions of patterned input to output.
Lindsay, K A; Rosenberg, J R
2012-08-01
In the 1880s Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940s, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970s, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.
A high-fidelity weather time series generator using the Markov Chain process on a piecewise level
NASA Astrophysics Data System (ADS)
Hersvik, K.; Endrerud, O.-E. V.
2017-12-01
A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
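A hedged sketch of the core idea: chain short random-length pieces of the historical series, picking each new piece so that it starts in the same discretised state the previous piece ended in. All data and parameters below are illustrative, not the paper's.

import numpy as np

rng = np.random.default_rng(42)
# placeholder "historical" hourly significant wave height series
hist = 2.0 + 1.5 * np.sin(np.linspace(0, 40 * np.pi, 5000)) + rng.normal(0, 0.3, 5000)
state = np.digitize(hist, bins=[1.5, 2.5, 3.5])        # coarse sea-state classes

def synth_series(n_out, min_len=12, max_len=72):
    out = []
    i = rng.integers(0, len(hist) - max_len)
    while len(out) < n_out:
        seg_len = int(rng.integers(min_len, max_len))
        out.extend(hist[i:i + seg_len])                 # emit one random-length piece
        last_state = state[i + seg_len - 1]
        # candidate start points whose state matches the state we just ended in
        candidates = np.where(state[:len(hist) - max_len] == last_state)[0]
        i = int(rng.choice(candidates)) if len(candidates) else int(rng.integers(0, len(hist) - max_len))
    return np.array(out[:n_out])

synthetic = synth_series(24 * 365)   # one synthetic year at hourly resolution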
The Warrior Heritage. A Study of Rhodesia.
1980-05-01
And then, seconds before impact and certain death, the patrol rises to its feet and, standing shoulder to shoulder, the men shout their last defiance...are the Tusker-Kudu-Rhino series and the Puma-Hippo series. Vehicles of the first series are made from the standard Land Rover, long-wheelbase frame...bolts. The basic vehicle thus created is then surmounted with either a Rhino, Kudu or Tusker body. Tuskers have a drum-like roll bar cage, covered with
NASA Astrophysics Data System (ADS)
Hashemi Sanatgar, Razieh; Campagne, Christine; Nierstrasz, Vincent
2017-05-01
In this paper, 3D printing as a novel printing process was considered for the deposition of polymers on synthetic fabrics, to introduce textile functionalization processes that are more flexible, resource-efficient and cost-effective than conventional printing processes such as screen and inkjet printing. The aim is to develop an integrated or tailored production process for smart and functional textiles that avoids unnecessary use of water, energy and chemicals and minimizes waste, to improve the ecological footprint and productivity. Adhesion of polymer and nanocomposite layers that were 3D printed directly onto textile fabrics using the fused deposition modeling (FDM) technique was investigated. Different variables that may affect the adhesion properties, including 3D printing process parameters, fabric type and the filler type incorporated in the polymer, were considered. A rectangular shape according to the peeling standard was designed as a 3D computer-aided design (CAD) model to determine the effect of the different variables. The polymers were printed in different series of experimental designs: nylon on polyamide 66 (PA66) fabrics, polylactic acid (PLA) on PA66 fabric, PLA on PLA fabric, and finally nanosize carbon black/PLA (CB/PLA) and multi-wall carbon nanotube/PLA (CNT/PLA) nanocomposites on PLA fabrics. The adhesion forces were quantified using an innovative sample preparation method combined with the standard peeling method. Results showed that different variables of the 3D printing process, such as extruder temperature, platform temperature and printing speed, can have a significant effect on the adhesion force of polymers to fabrics during direct 3D printing. A model was proposed specifically for the deposition of a commercial 3D printer nylon filament on PA66 fabrics. Among the printed polymers, PLA and its composites showed high adhesion force to PLA fabrics.
TIMESERIESSTREAMING.VI: LabVIEW program for reliable data streaming of large analog time series
NASA Astrophysics Data System (ADS)
Czerwinski, Fabian; Oddershede, Lene B.
2011-02-01
With modern data acquisition devices that work fast and very precisely, scientists often face the task of dealing with huge amounts of data. These need to be rapidly processed and stored onto a hard disk. We present a LabVIEW program which reliably streams analog time series of MHz sampling. Its run time has virtually no limitation. We explicitly show how to use the program to extract time series from two experiments: for a photodiode detection system that tracks the position of an optically trapped particle and for a measurement of ionic current through a glass capillary. The program is easy to use and versatile as the input can be any type of analog signal. Also, the data streaming software is simple, highly reliable, and can be easily customized to include, e.g., real-time power spectral analysis and Allan variance noise quantification. Program summary: Program title: TimeSeriesStreaming.VI; Catalogue identifier: AEHT_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHT_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 250; No. of bytes in distributed program, including test data, etc.: 63 259; Distribution format: tar.gz; Programming language: LabVIEW (http://www.ni.com/labview/); Computer: Any machine running LabVIEW 8.6 or higher; Operating system: Windows XP and Windows 7; RAM: 60-360 Mbyte; Classification: 3. Nature of problem: For numerous scientific and engineering applications, it is highly desirable to have an efficient, reliable, and flexible program to perform data streaming of time series sampled with high frequencies and possibly for long time intervals. This type of data acquisition often produces very large amounts of data not easily streamed onto a computer hard disk using standard methods. Solution method: This LabVIEW program is developed to directly stream any kind of time series onto a hard disk. Due to optimized timing and usage of computational resources, such as multicores and protocols for memory usage, this program provides extremely reliable data acquisition. In particular, the program is optimized to deal with large amounts of data, e.g., taken with high sampling frequencies and over long time intervals. The program can be easily customized for time series analyses. Restrictions: Only tested in Windows-operating LabVIEW environments, must use TDMS format, acquisition cards must be LabVIEW compatible, driver DAQmx installed. Running time: As desirable: microseconds to hours
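The program itself is a LabVIEW VI; as a language-neutral illustration of the same pattern (acquire in chunks and append each chunk to disk immediately so memory use stays bounded), a minimal Python sketch with a simulated source might read:

import numpy as np

CHUNK = 100_000          # samples per simulated read
N_CHUNKS = 50            # total chunks to stream

with open("stream_demo.bin", "wb") as f:
    for _ in range(N_CHUNKS):
        samples = np.random.normal(0, 1, CHUNK).astype("f4")   # stand-in for one DAQ read
        samples.tofile(f)                                      # append the chunk to disk immediately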
NASA Astrophysics Data System (ADS)
Kaiser, Olga; Martius, Olivia; Horenko, Illia
2017-04-01
Regression-based Generalized Pareto Distribution (GPD) models are often used to describe the dynamics of hydrological threshold excesses, relying on the explicit availability of all of the relevant covariates. In real applications, however, the complete set of relevant covariates might not be available. In this context, it has been shown that, under weak assumptions, the influence coming from systematically missing covariates can be reflected by nonstationary and nonhomogeneous dynamics. We present a data-driven, semiparametric and adaptive approach for spatio-temporal regression-based clustering of threshold excesses in the presence of systematically missing covariates. The nonstationary and nonhomogeneous behavior of threshold excesses is described by a set of local stationary GPD models, where the parameters are expressed as regression models, and a non-parametric spatio-temporal hidden switching process. Exploiting the nonparametric Finite Element time-series analysis Methodology (FEM) with Bounded Variation of the model parameters (BV) for resolving the spatio-temporal switching process, the approach goes beyond the strong a priori assumptions made in standard latent class models such as mixture models and hidden Markov models. Additionally, the presented FEM-BV-GPD provides a pragmatic description of the corresponding spatial dependence structure by grouping together all locations that exhibit similar behavior of the switching process. The performance of the framework is demonstrated on daily accumulated precipitation series over 17 different locations in Switzerland from 1981 to 2013, showing that the introduced approach allows for a better description of the historical data.
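As an illustrative baseline only (a single stationary GPD fit, not the FEM-BV switching model described above), threshold excesses of a precipitation series can be fitted with scipy; the data below are synthetic placeholders:

import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
precip = rng.gamma(shape=0.6, scale=6.0, size=12000)     # placeholder daily precipitation (mm)
threshold = np.quantile(precip, 0.95)                    # high threshold
excesses = precip[precip > threshold] - threshold

shape, loc, scale = genpareto.fit(excesses, floc=0.0)    # location fixed at 0 for excesses
print(f"GPD shape={shape:.3f}, scale={scale:.3f}, threshold={threshold:.1f} mm")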
NASA Technical Reports Server (NTRS)
Ulsig, Laura; Nichol, Caroline J.; Huemmrich, Karl F.; Landis, David R.; Middleton, Elizabeth M.; Lyapustin, Alexei I.; Mammarella, Ivan; Levula, Janne; Porcar-Castell, Albert
2017-01-01
Long-term observations of vegetation phenology can be used to monitor the response of terrestrial ecosystems to climate change. Satellite remote sensing provides the most efficient means to observe phenological events through time series analysis of vegetation indices such as the Normalized Difference Vegetation Index (NDVI). This study investigates the potential of a Photochemical Reflectance Index (PRI), which has been linked to vegetation light use efficiency, to improve the accuracy of MODIS-based estimates of phenology in an evergreen conifer forest. Timings of the start and end of the growing season (SGS and EGS) were derived from a 13-year-long time series of PRI and NDVI based on a MAIAC (multi-angle implementation of atmospheric correction) processed MODIS dataset and standard MODIS NDVI product data. The derived dates were validated with phenology estimates from ground-based flux tower measurements of ecosystem productivity. Significant correlations were found between the MAIAC time series and ground-estimated SGS (R² = 0.36-0.8), which is remarkable since previous studies have found it difficult to observe inter-annual phenological variations in evergreen vegetation from satellite data. The considerably noisier NDVI product could not accurately predict SGS, and EGS could not be derived successfully from any of the time series. While the strongest relationship overall was found between SGS derived from the ground data and PRI, MAIAC NDVI exhibited high correlations with SGS more consistently (R² > 0.6 in all cases). The results suggest that PRI can serve as an effective indicator of spring seasonal transitions; however, additional work is necessary to confirm the relationships observed and to further explore the usefulness of MODIS PRI for detecting phenology.
"The NASA Sci Files": The Case of the Biological Biosphere. [Videotape].
ERIC Educational Resources Information Center
National Aeronautics and Space Administration, Hampton, VA. Langley Research Center.
The NASA Science Files is a series of instructional programs consisting of broadcast, print, and online elements. Emphasizing standards-based instruction, problem-based learning, and science as inquiry, the series seeks to motivate students in grades 3-5 to become critical thinkers and active problem solvers. Each program supports the national…
40 CFR 1048.140 - What are the provisions for certifying Blue Sky Series engines?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Blue Sky Series engines? 1048.140 Section 1048.140 Protection of Environment ENVIRONMENTAL PROTECTION... ENGINES Emission Standards and Related Requirements § 1048.140 What are the provisions for certifying Blue... emission control for engines designated as “Blue Sky Series” engines. If you certify an engine family under...
40 CFR 1048.140 - What are the provisions for certifying Blue Sky Series engines?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Blue Sky Series engines? 1048.140 Section 1048.140 Protection of Environment ENVIRONMENTAL PROTECTION... ENGINES Emission Standards and Related Requirements § 1048.140 What are the provisions for certifying Blue... emission control for engines designated as “Blue Sky Series” engines. If you certify an engine family under...
40 CFR 1048.140 - What are the provisions for certifying Blue Sky Series engines?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Blue Sky Series engines? 1048.140 Section 1048.140 Protection of Environment ENVIRONMENTAL PROTECTION... ENGINES Emission Standards and Related Requirements § 1048.140 What are the provisions for certifying Blue... emission control for engines designated as “Blue Sky Series” engines. If you certify an engine family under...
40 CFR 1048.140 - What are the provisions for certifying Blue Sky Series engines?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Blue Sky Series engines? 1048.140 Section 1048.140 Protection of Environment ENVIRONMENTAL PROTECTION... ENGINES Emission Standards and Related Requirements § 1048.140 What are the provisions for certifying Blue... emission control for engines designated as “Blue Sky Series” engines. If you certify an engine family under...
40 CFR 1048.140 - What are the provisions for certifying Blue Sky Series engines?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Blue Sky Series engines? 1048.140 Section 1048.140 Protection of Environment ENVIRONMENTAL PROTECTION... ENGINES Emission Standards and Related Requirements § 1048.140 What are the provisions for certifying Blue... emission control for engines designated as “Blue Sky Series” engines. If you certify an engine family under...
Instructional Considerations for Implementing Student Assessments. Article #2 in a 4-part series
ERIC Educational Resources Information Center
Fisette, Jennifer L.; Placek, Judith H.; Avery, Marybell; Dyson, Ben; Fox, Connie; Franck, Marian; Graber, Kim; Rink, Judith; Zhu, Weimo
2009-01-01
The first article of the PE Metrics series, "Developing Quality Physical Education through Student Assessments" (January/February 2009 "Strategies" issue) focused on the importance of assessing student learning in relation to NASPE's content standards (NASPE, 2004). The article emphasized that unless students are appropriately assessed, it is…
Saxon Math. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2010
2010-01-01
"Saxon Math" is a textbook series covering grades K-12 based on incremental development and continual review of mathematical concepts to give students time to learn and practice concepts throughout the year. The series is aligned with standards of the National Council of Teachers of Mathematics (NCTM) and various states, and can be…
Equal Opportunity for Deeper Learning. Deeper Learning Research Series
ERIC Educational Resources Information Center
Noguera, Pedro; Darling-Hammond, Linda; Friedlaender, Diane
2015-01-01
Out of concern that the nation's schools--particularly those working with traditionally underserved populations--are not adequately preparing all students to succeed in college and careers, education policymakers have launched a series of major reform efforts in recent years. To help students meet the new standards, schools will need to provide…
Equal Opportunity for Deeper Learning. Executive Summary. Deeper Learning Research Series
ERIC Educational Resources Information Center
Noguera, Pedro; Darling-Hammond, Linda; Friedlaender, Diane
2015-01-01
Out of concern that the nation's schools--particularly those working with traditionally underserved populations--are not adequately preparing all students to succeed in college and careers, education policymakers have launched a series of major reform efforts in recent years. To help students meet the new standards, schools will need to provide…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-16
..., and deals with U.S. securities laws, regulations, sales practices and special products drawn from the standard Series 7 examination. The Series 37 version is for Canadian registrants who have successfully completed the basic core module of the CSI Global Education (``CSI'', formerly the Canadian Securities...
78 FR 1991 - Major Capital Investment Projects
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-09
...) published on June 3, 2010 (75 FR 31383), which posed a series of questions about the current regulation and... system in which well- justified projects are funded. At the same time, FTA seeks to ensure that it does...; to use a series of standard factors in a simple spreadsheet to calculate vehicle miles traveled (VMT...
Structure and Form. Elementary Science Activity Series, Volume 2.
ERIC Educational Resources Information Center
Blackwell, Frank F.
This book is number 2 of a series of elementary science books that presents a wealth of ideas for science activities for the elementary school teacher. Each activity includes a standard set of information designed to help teachers determine the activity's appropriateness for their students, plan its implementation, and help children focus on a…
The Separation of Church and State. Exploring the Constitution Series.
ERIC Educational Resources Information Center
McWhirter, Darien A.
This textbook on the separation of church and state continues the "Exploring the Constitution Series," which introduces important areas of constitutional law. Intended to serve either as a reference work, a supplement to a standard textbook, or as the textbook for a course, this volume covers the constitutional issues of prayer in public…
THE SCHOOL HEALTH AND SAFETY PROGRAM.
ERIC Educational Resources Information Center
1963
Involving individuals as well as organizations, the program aimed at the optimum health of all children and the improvement of health and safety standards within the community. Each of the children was urged to have a successful smallpox vaccination, the DPT series and booster, the polio series, and corrections of all dental defects and…
Long-term changes (1980-2003) in total ozone time series over Northern Hemisphere midlatitudes
NASA Astrophysics Data System (ADS)
Białek, Małgorzata
2006-03-01
Long-term changes in total ozone time series for the Arosa, Belsk, Boulder and Sapporo stations are examined. For each station we analyze time series of the following statistical characteristics of the distribution of daily ozone data: the seasonal mean, standard deviation, maximum and minimum of total daily ozone values, for all seasons. An iterative statistical model is proposed to estimate trends and long-term changes in the statistical distribution of the daily total ozone data. The trends are calculated for the period 1980-2003. We observe a weakening of the negative trends in the seasonal means compared with those calculated by WMO for 1980-2000. We discuss the possibility of a change in the shape of the distribution of daily ozone data using the Kolmogorov-Smirnov test and by comparing trend values in the seasonal mean, standard deviation, maximum and minimum time series for the selected stations and seasons. A shift of the distribution toward lower values without a change in shape is suggested, with the following exceptions: a spreading of the distribution toward lower values for Belsk during winter, and no decisive result for Sapporo and Boulder in summer.
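A minimal sketch of the kind of comparison described above, testing whether the shape of the daily ozone distribution differs between two periods with a two-sample Kolmogorov-Smirnov test, might look like the following; the period split, the standardization step and the synthetic data are assumptions of the example:

```python
# Illustrative sketch: compare the distribution shape of daily total ozone in two
# periods with a two-sample Kolmogorov-Smirnov test. Standardizing each period removes
# shifts in mean and scale so the test focuses on shape; the period split and synthetic
# data are assumptions for the example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
early = rng.normal(330, 25, size=2000)   # stand-in for daily ozone, 1980-1991 (DU)
late = rng.normal(318, 25, size=2000)    # stand-in for daily ozone, 1992-2003 (DU)

def standardize(x):
    return (x - x.mean()) / x.std(ddof=1)

stat, pvalue = stats.ks_2samp(standardize(early), standardize(late))
print(f"KS statistic={stat:.3f}, p-value={pvalue:.3f}")
# A large p-value is consistent with a pure shift (no change in shape);
# a small p-value suggests the shape of the distribution itself changed.
```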
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality at the Yamuna River has been characterized using the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, the normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. The predictive model is useful at the 95% confidence limits; the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. The predicted series is close to the original series, indicating a very good fit. All parameters except pH and WT exceed the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
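A hedged sketch of the general workflow described above, fitting an ARIMA model to a monthly water-quality series and producing forecasts with 95% confidence limits, could look like this; the (1,1,1) order, the synthetic series and the use of statsmodels are illustrative assumptions, not the authors' exact model:

```python
# Illustrative sketch: fit an ARIMA model to a monthly water-quality series and
# forecast 12 months ahead with 95% confidence limits. The ARIMA order, the synthetic
# data and the choice of statsmodels are assumptions of this example.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
dates = pd.date_range("2000-01-01", periods=120, freq="MS")
# Stand-in series: trend + annual cycle + noise (e.g., a monthly BOD record in mg/L).
values = 5 + 0.01 * np.arange(120) + 1.5 * np.sin(2 * np.pi * np.arange(120) / 12) \
         + rng.normal(0, 0.4, 120)
series = pd.Series(values, index=dates)

model = ARIMA(series, order=(1, 1, 1)).fit()
forecast = model.get_forecast(steps=12)
summary = forecast.summary_frame(alpha=0.05)   # mean forecast and 95% limits
print(summary[["mean", "mean_ci_lower", "mean_ci_upper"]].head())
```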
Synthesis of amino-functionalized silica nanoparticles for preparation of new laboratory standards
NASA Astrophysics Data System (ADS)
Alvarez-Toral, Aitor; Fernández, Beatriz; Malherbe, Julien; Claverie, Fanny; Pecheyran, Christophe; Pereiro, Rosario
2017-12-01
Platinum group elements (PGEs) are analytes of particular interest in several fields, both in environmental samples and in high-cost materials that contain them, such as automotive catalysts. Such solid samples can be analysed by laser ablation (LA) coupled to ICP-MS, which significantly reduces the analysis time since time-consuming sample preparation is not required. There is considerable demand for standards with high PGE concentrations for quantification purposes, which is difficult to satisfy with LA-ICP-MS because the available standards (e.g. the NIST SRM 61x series) do not contain these analytes in the relevant concentration range. In this paper, a new strategy is proposed for the synthesis of homogeneous laboratory standards with Pt, Pd and Rh concentrations that range from 77 μg/g of Pd up to 2035 μg/g of Rh. The proposed strategy is based on the synthesis of monodisperse amino-functionalized amorphous silica nanoparticles, which can retain metal ions. In addition to Pt, Pd and Rh, three lanthanides were also added to the nanoparticles (La, Ce, Nd). Sturdy pressed pellets can be made from the resulting nanopowder without the use of any binder. The elemental composition of the nanoparticle-based standards was analysed by conventional nebulization ICP-MS, and their homogeneity was successfully evaluated by LA-ICP-MS.
Evaluating the Effectiveness of the 2000-2001 NASA "Why?" Files Program
NASA Technical Reports Server (NTRS)
Pinelli, Thomas E.; Frank, Kari Lou; Ashcroft, Scott B.; Williams, Amy C.
2002-01-01
NASA 'Why?' Files, a research and standards-based, Emmy-award winning series of 60-minute instructional programs for grades 3-5, introduces students to NASA; integrates mathematics, science, and technology by using Problem-Based Learning (PBL), scientific inquiry, and the scientific method; and motivates students to become critical thinkers and active problem solvers. All four 2000-2001 NASA 'Why?' Files programs include an instructional broadcast, a lesson guide, an interactive web site, plus numerous instructional resources. In March 2001, 1,000 randomly selected program registrants participated in a survey. Of these surveys, 185 (154 usable) met the established cut-off date. Respondents reported that (1) they used the four programs in the 2000-2001 NASA 'Why?' Files series; (2) series goals and objectives were met; (3) programs met national mathematics, science, and technology standards; (4) program content was developmentally appropriate for grade level; and (5) programs enhanced/enriched the teaching of mathematics, science, and technology.
ERIC Educational Resources Information Center
Reutzel, D. Ray; Clark, Sarah K.; Jones, Cindy D.; Gillam, Sandra L.
2016-01-01
One of the most critical elements in the Common Core State Standards (CCSS) is the effective teaching of reading comprehension in the early years. This timely resource provides evidence-based practices for teachers to use as they work to meet standards associated with comprehending complex literature and informational texts. The authors offer a…
The State of State Standards--and the Common Core--in 2010
ERIC Educational Resources Information Center
Carmichael, Sheila Byrd; Martino, Gabrielle; Porter-Magee, Kathleen; Wilson, W. Stephen
2010-01-01
This review of state English language arts (ELA) and mathematics standards is the latest in a series of Fordham evaluations dating back to 1997. It comes at a critical juncture, as states across the land consider adoption of the Common Core State Standards. These are the authors' major findings: (1) Based on their criteria, the Common Core…
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
NASA Astrophysics Data System (ADS)
Langbein, John
2017-08-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
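As a minimal sketch of the time-domain approach described in the two records above, building a data covariance matrix from a white plus a temporally correlated noise component and evaluating the log-likelihood, the following uses random-walk noise (power-law index 2) as the correlated term; the noise amplitudes, the crude grid search and the synthetic data are assumptions of the example, not the paper's algorithm:

```python
# Illustrative sketch of time-domain maximum likelihood for a time series with
# temporally correlated errors: covariance = white noise + random-walk noise
# (power-law index 2). The amplitudes, grid search and synthetic data are assumptions;
# the actual paper treats general power-law indices and large data sets far more
# efficiently.
import numpy as np

rng = np.random.default_rng(4)
n = 200
t = np.arange(n, dtype=float)

# Synthetic data: linear rate + white noise + random walk.
rate_true, sw_true, srw_true = 0.5, 1.0, 0.3
data = rate_true * t + sw_true * rng.normal(size=n) + srw_true * np.cumsum(rng.normal(size=n))

def neg_log_likelihood(sigma_w, sigma_rw):
    # Random-walk covariance between epochs i and j is sigma_rw^2 * min(i+1, j+1).
    cov = sigma_w**2 * np.eye(n) + sigma_rw**2 * np.minimum.outer(t + 1, t + 1)
    cinv = np.linalg.inv(cov)
    # Weighted least squares estimate of intercept and rate given this covariance.
    A = np.column_stack([np.ones(n), t])
    coeff = np.linalg.solve(A.T @ cinv @ A, A.T @ cinv @ data)
    resid = data - A @ coeff
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (logdet + resid @ cinv @ resid + n * np.log(2 * np.pi))

# Crude grid search over the two noise amplitudes.
grid = np.linspace(0.1, 2.0, 20)
best = min(
    ((neg_log_likelihood(sw, srw), sw, srw) for sw in grid for srw in grid),
    key=lambda c: c[0],
)
print(f"MLE white noise≈{best[1]:.2f}, random-walk noise≈{best[2]:.2f}")
```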
Poisson-event-based analysis of cell proliferation.
Summers, Huw D; Wills, John W; Brown, M Rowan; Rees, Paul
2015-05-01
A protocol for the assessment of cell proliferation dynamics is presented. This is based on the measurement of cell division events and their subsequent analysis using Poisson probability statistics. Detailed analysis of proliferation dynamics in heterogeneous populations requires single-cell resolution within a time series analysis and so is technically demanding to implement. Here, we show that by focusing on the events during which cells undergo division, rather than directly on the cells themselves, a simplified image acquisition and analysis protocol can be followed, which maintains single-cell resolution and reports on the key metrics of cell proliferation. The technique is demonstrated using a microscope with 1.3 μm spatial resolution to track mitotic events within A549 and BEAS-2B cell lines over a period of up to 48 h. Automated image processing of the bright-field images using standard algorithms within the ImageJ software toolkit yielded 87% accurate recording of the manually identified temporal and spatial positions of the mitotic event series. Analysis of the statistics of the interevent times (i.e., times between observed mitoses in a field of view) showed that cell division conformed to a nonhomogeneous Poisson process in which the rate of occurrence of mitotic events, λ, increased exponentially over time, and provided values of the mean intermitotic time of 21.1 ± 1.2 h for the A549 cells and 25.0 ± 1.1 h for the BEAS-2B cells. Comparison of the mitotic event series for the BEAS-2B cell line to that predicted by random Poisson statistics indicated that temporal synchronisation of the cell division process was occurring within 70% of the population and that this could be increased to 85% through serum starvation of the cell culture. © 2015 International Society for Advancement of Cytometry.
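A hedged sketch of the event-based idea, working with the times between observed division events and checking them against a Poisson model whose rate grows exponentially, could look like this; the rates, observation window and synthetic event times are assumptions of the example:

```python
# Illustrative sketch: event-based analysis of division times. Interevent times from a
# homogeneous Poisson process are exponential; here we simulate a nonhomogeneous process
# with an exponentially increasing rate (via time rescaling) and compare its interevent
# times against the homogeneous case with a KS test. Rates, duration and sample sizes
# are assumptions of this example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
rate0, growth, t_max = 2.0, 0.05, 48.0   # events/hour at t=0, growth per hour, hours observed

# Simulate a nonhomogeneous Poisson process with rate(t) = rate0 * exp(growth * t)
# by inverting the cumulative intensity Lambda(t) = rate0/growth * (exp(growth*t) - 1).
total = rate0 / growth * (np.exp(growth * t_max) - 1.0)
n_events = rng.poisson(total)
u = np.sort(rng.uniform(0, total, n_events))
event_times = np.log(1.0 + growth * u / rate0) / growth

inter = np.diff(event_times)
# Compare against an exponential distribution with the same mean (homogeneous case).
stat, p = stats.kstest(inter, "expon", args=(0, inter.mean()))
print(f"{n_events} events, mean interevent time {inter.mean():.3f} h, "
      f"KS vs homogeneous Poisson: stat={stat:.3f}, p={p:.3g}")
```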
Revision of Primary Series Maps
,
2000-01-01
In 1992, the U.S. Geological Survey (USGS) completed a 50-year effort to provide primary series map coverage of the United States. Many of these maps now need to be updated to reflect the construction of new roads and highways and other changes that have taken place over time. The USGS has formulated a graphic revision plan to help keep the primary series maps current. Primary series maps include 1:20,000-scale quadrangles of Puerto Rico, 1:24,000- or 1:25,000-scale quadrangles of the conterminous United States, Hawaii, and U.S. Territories, and 1:63,360-scale quadrangles of Alaska. The revision of primary series maps from new collection sources is accomplished using a variety of processes. The raster revision process combines the scanned content of paper maps with raster updating technologies. The vector revision process involves the automated plotting of updated vector files. Traditional processes use analog stereoplotters and manual scribing instruments on specially coated map separates. The ability to select from or combine these processes increases the efficiency of the National Mapping Division map revision program.
Production of Biodiesel from Lipid of Phytoplankton Chaetoceros calcitrans through Ultrasonic Method
Kwangdinata, Raymond; Raya, Indah; Zakir, Muhammad
2014-01-01
Research on the production of biodiesel from lipids of the phytoplankton Chaetoceros calcitrans through an ultrasonic method has been carried out. In this research, we performed a series of phytoplankton cultures to determine the optimum growth time and then synthesized biodiesel from the phytoplankton lipids. The biodiesel synthesis process consists of two steps: isolation of the phytoplankton lipids and synthesis of biodiesel from those lipids. Lipid isolation was carried out by ultrasonic extraction with 96% ethanol, while biodiesel synthesis was carried out by transesterification with methanol and a KOH catalyst under sonication. The biodiesel yield by weight per unit biomass of Chaetoceros calcitrans was 35.35%. The biodiesel was characterized in terms of physical properties (density and viscosity) and chemical properties (free fatty acid content, saponification value, and iodine value). These values meet the American Society for Testing and Materials (ASTM D6751) standard levels, except for the viscosity value, which was 1.14 g·cm⁻³. PMID:24688372
Sensory processing issues in young children presenting to an outpatient feeding clinic.
Davis, Ann M; Bruce, Amanda S; Khasawneh, Rima; Schulz, Trina; Fox, Catherine; Dunn, Winifred
2013-02-01
The aim of the study was to describe the relation between sensory issues and medical complexity, as assessed by a standardized measure of sensory-processing abilities, in a series of patients presenting to an outpatient multidisciplinary feeding team for evaluation. A retrospective chart review was conducted of all patients seen from 2004 to 2009, focusing on 2 key variables: medical diagnostic category and Short Sensory Profile (SSP) score. On the SSP, 67.6% of children scored in the clinical ("definite difference") range. The most common diagnostic categories were developmental (n = 23), gastrointestinal (n = 16), and neurological (n = 13). The behavioral and cardiorespiratory medical diagnostic categories were significantly related to the SSP total score and the SSP definite difference score. Children who present for feeding evaluation do indeed tend to have clinically elevated scores regarding sensory processing, and these elevated scores are significantly related to certain medical diagnostic categories. Future research is needed to determine why these significant relations exist as well as their implications for the treatment of feeding-related issues.
Lubner, Meghan G.; Pickhardt, Perry J.; Kim, David H.; Tang, Jie; Munoz del Rio, Alejandro; Chen, Guang-Hong
2014-01-01
Purpose To prospectively study CT dose reduction using the “prior image constrained compressed sensing” (PICCS) reconstruction technique. Methods Immediately following routine standard dose (SD) abdominal MDCT, 50 patients (mean age, 57.7 years; mean BMI, 28.8) underwent a second reduced-dose (RD) scan (targeted dose reduction, 70-90%). DLP, CTDIvol and SSDE were compared. Several reconstruction algorithms (FBP, ASIR, and PICCS) were applied to the RD series. SD images with FBP served as reference standard. Two blinded readers evaluated each series for subjective image quality and focal lesion detection. Results Mean DLP, CTDIvol, and SSDE for RD series was 140.3 mGy*cm (median 79.4), 3.7 mGy (median 1.8), and 4.2 mGy (median 2.3) compared with 493.7 mGy*cm (median 345.8), 12.9 mGy (median 7.9 mGy) and 14.6 mGy (median 10.1) for SD series, respectively. Mean effective patient diameter was 30.1 cm (median 30), which translates to a mean SSDE reduction of 72% (p<0.001). RD-PICCS image quality score was 2.8±0.5, improved over the RD-FBP (1.7±0.7) and RD-ASIR(1.9±0.8)(p<0.001), but lower than SD (3.5±0.5)(p<0.001). Readers detected 81% (184/228) of focal lesions on RD-PICCS series, versus 67% (153/228) and 65% (149/228) for RD-FBP and RD-ASIR, respectively. Mean image noise was significantly reduced on RD-PICCS series (13.9 HU) compared with RD-FBP (57.2) and RD-ASIR (44.1) (p<0.001). Conclusion PICCS allows for marked dose reduction at abdominal CT with improved image quality and diagnostic performance over reduced-dose FBP and ASIR. Further study is needed to determine indication-specific dose reduction levels that preserve acceptable diagnostic accuracy relative to higher-dose protocols. PMID:24943136
Hodgson, Catherine; Lambon Ralph, Matthew A
2008-01-01
Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study utilised a novel method: tempo picture naming. Experiment 1 showed that, compared to standard deadline naming tasks, participants made more errors on the tempo picture naming tasks. Further, RTs were longer and more errors were produced for living items than for non-living items, a pattern seen in both semantic dementia and semantically impaired stroke aphasia patients. Experiment 2 showed that providing the initial phoneme as a cue enhanced performance, whereas providing an incorrect phonemic cue further reduced performance. These results support the contention that the tempo picture naming paradigm reduces the time allowed for controlled semantic processing, causing increased error rates. This experimental procedure would, therefore, appear to mimic the performance of aphasic patients with multi-modal semantic impairment that results from poor semantic control rather than from the degradation of semantic representations observed in semantic dementia [Jefferies, E. A., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia vs. semantic dementia: A case-series comparison. Brain, 129, 2132-2147]. Further implications for theories of semantic cognition and models of speech processing are discussed.
NASA Astrophysics Data System (ADS)
Yang, Peng; Xia, Jun; Zhan, Chesheng; Zhang, Yongyong; Hu, Sheng
2018-04-01
In this study, the temporal variations of the standardized precipitation index (SPI) were analyzed at different scales in Northwest China (NWC). The discrete wavelet transform (DWT) was used in conjunction with the Mann-Kendall (MK) test. The study also investigated the relationships between the original precipitation and different periodic components of the SPI series, using datasets spanning 55 years (1960-2014). The results showed that, with the exception of the annual and summer SPI in the Inner Mongolia Inland Rivers Basin (IMIRB), the spring SPI in the Qinghai Lake Rivers Basin (QLRB), and the spring SPI in the Central Asia Rivers Basin (CARB), the SPI showed an increasing trend in the other regions and time series. In the spring, summer, and autumn series, although the MK trend test was insignificant in most areas, precipitation showed an increasing trend. Meanwhile, the SPI series in most subbasins of NWC displayed a turning point in 1980-1990, with significant increases after 2000. Additionally, there was a significant difference between the trend of the original SPI series and that of the largest approximations. The annual and seasonal SPI series were composed of short periodicities of less than a decade. The MK value increased as multiple detail (D) components (and approximations) were added, and the MK value of the combined series was consistent with that of the original series. Additionally, the major trend of the annual SPI in NWC was related to four climate indices (the Atlantic Oscillation [AO], North Atlantic Oscillation [NAO], Pacific Decadal Oscillation [PDO], and El Niño-Southern Oscillation index [ENSO/NINO]), especially ENSO.
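Because the analysis above leans on the MK trend test, a compact self-contained version of that test (normal approximation, no tie correction) is sketched below as an illustration; the synthetic SPI-like series is an assumption of the example:

```python
# Illustrative sketch: Mann-Kendall trend test (normal approximation, no tie correction)
# applied to an SPI-like annual series. The synthetic series is an assumption; real
# analyses would also handle ties and serial correlation.
import numpy as np
from scipy import stats

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    # S statistic: sum of signs of all pairwise later-minus-earlier differences.
    s = 0.0
    for i in range(n - 1):
        s += np.sum(np.sign(x[i + 1:] - x[i]))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))   # two-sided p-value
    return z, p

rng = np.random.default_rng(6)
years = np.arange(1960, 2015)
spi_like = 0.01 * (years - 1960) + rng.normal(0, 0.8, years.size)  # weak upward trend + noise
z, p = mann_kendall(spi_like)
print(f"MK Z = {z:.2f}, p = {p:.3f}")
```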
On statistical inference in time series analysis of the evolution of road safety.
Commandeur, Jacques J F; Bijleveld, Frits D; Bergel-Hayat, Ruth; Antoniou, Constantinos; Yannis, George; Papadimitriou, Eleonora
2013-11-01
Data collected for building a road safety observatory usually include observations made sequentially through time. Examples of such data, called time series data, include the annual (or monthly) number of road traffic accidents, traffic fatalities or vehicle kilometers driven in a country, as well as the corresponding values of safety performance indicators (e.g., data on speeding, seat belt use, alcohol use, etc.). Some commonly used statistical techniques imply assumptions that are often violated by the special properties of time series data, namely serial dependency among disturbances associated with the observations. The first objective of this paper is to demonstrate the impact of such violations on the applicability of standard methods of statistical inference, which leads to under- or overestimation of the standard error and consequently may produce erroneous inferences. Moreover, having established the adverse consequences of ignoring serial dependency issues, the paper aims to describe rigorous statistical techniques used to overcome them. In particular, appropriate time series analysis techniques of varying complexity are employed to describe the development over time, relating the accident occurrences to explanatory factors such as exposure measures or safety performance indicators, and forecasting the development into the near future. Traditional regression models (whether linear, generalized linear or nonlinear) are shown not to naturally capture the inherent dependencies in time series data. Dedicated time series analysis techniques, such as the ARMA-type and DRAG approaches, are discussed next, followed by structural time series models, which are a subclass of state space methods. The paper concludes with general recommendations and practice guidelines for the use of time series models in road safety research. Copyright © 2012 Elsevier Ltd. All rights reserved.
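A small, hedged illustration of the central point, that ignoring serial dependence understates the standard error, is to compare the naive standard error of a mean with one adjusted for AR(1) autocorrelation via the effective sample size; the AR(1) coefficient and the synthetic series are assumptions of the example:

```python
# Illustrative sketch: serial dependence inflates the true uncertainty of a mean.
# For an AR(1) series with lag-1 autocorrelation rho, the effective sample size is
# roughly n * (1 - rho) / (1 + rho). The rho value and synthetic series are assumptions
# of this example.
import numpy as np

rng = np.random.default_rng(7)
n, rho = 500, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal()    # AR(1) "accident count anomaly" series

naive_se = x.std(ddof=1) / np.sqrt(n)

rho_hat = np.corrcoef(x[:-1], x[1:])[0, 1]   # estimated lag-1 autocorrelation
n_eff = n * (1 - rho_hat) / (1 + rho_hat)    # effective number of independent observations
adjusted_se = x.std(ddof=1) / np.sqrt(n_eff)

print(f"naive SE = {naive_se:.3f}, AR(1)-adjusted SE = {adjusted_se:.3f} "
      f"(rho_hat = {rho_hat:.2f}, n_eff ≈ {n_eff:.0f})")
```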
NASA Astrophysics Data System (ADS)
Tóth, B.; Lillo, F.; Farmer, J. D.
2010-11-01
We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a nonparametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch, and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange, and we observe that our method finds almost three times more patches than the original one.
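As a much-simplified, hedged illustration of segmentation by regime (not the authors' algorithm), the sketch below finds a single change point in a sequence of event counts by maximizing the gain in Poisson log-likelihood from splitting the series into two constant-rate patches:

```python
# Illustrative sketch: single change-point detection in Poisson count data by maximizing
# the log-likelihood gain of a two-patch model over a one-patch model. This is a greatly
# simplified stand-in for the recursive segmentation of compound Poisson regime-switching
# processes discussed above; the rates and series length are assumptions of the example.
import numpy as np

def poisson_loglik(counts):
    lam = counts.mean()
    if lam == 0:
        return 0.0
    # Log-likelihood up to terms not depending on lam (the log-factorials cancel when
    # comparing splits of the same data).
    return counts.sum() * np.log(lam) - lam * counts.size

def best_split(counts, min_len=5):
    n = counts.size
    base = poisson_loglik(counts)
    best_gain, best_k = -np.inf, None
    for k in range(min_len, n - min_len):
        gain = poisson_loglik(counts[:k]) + poisson_loglik(counts[k:]) - base
        if gain > best_gain:
            best_gain, best_k = gain, k
    return best_k, best_gain

rng = np.random.default_rng(8)
counts = np.concatenate([rng.poisson(3, 120), rng.poisson(8, 80)])  # two regimes
k, gain = best_split(counts)
print(f"estimated change point at index {k} (true 120), log-likelihood gain {gain:.1f}")
```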
Bulur, Isil; Saracoglu, Zeynep Nurhan; Bilgin, Muzaffer
2018-01-01
Background Rosacea is a common dermatosis characterized by erythema, telangiectasia, papules and pustules. Objective We aimed to evaluate contact sensitivity in rosacea patients. Methods We included 65 rosacea patients and 60 healthy volunteers in the study. The patient and control groups were patch tested with the European baseline series and a cosmetic series. Results A positive reaction to at least 1 allergen in the European standard series was found in 32.3% of rosacea patients and 20.0% of subjects in the control group, while the corresponding figures were 30.8% of rosacea patients and 10% of controls with the cosmetic series (p=0.08). In total, we found a positive reaction to at least 1 allergen in 38.5% of patients and 25.0% of controls (p=0.15). We did not find a statistically significant relationship between a positive reaction to 1 allergen in total and gender, skin type, rosacea type, ocular involvement, age or disease duration. There were more symptoms in patients with a positive reaction to allergens (p<0.001). Conclusion Contact sensitivity was detected more commonly in rosacea patients. Patch testing may be useful in the treatment and follow-up of rosacea patients, especially if symptoms such as itching, burning and stinging are present. PMID:29853742
Erdogan, Hilal Kaya; Bulur, Isil; Saracoglu, Zeynep Nurhan; Bilgin, Muzaffer
2018-06-01
Rosacea is a common dermatosis characterized by erythema, telangiectasia, papules and pustules. We aimed to evaluate contact sensitivity in rosacea patients. We included 65 rosacea patients and 60 healthy volunteers in the study. The patient and control groups were patch tested with the European baseline series and a cosmetic series. A positive reaction to at least 1 allergen in the European standard series was found in 32.3% of rosacea patients and 20.0% of subjects in the control group, while the corresponding figures were 30.8% of rosacea patients and 10% of controls with the cosmetic series (p=0.08). In total, we found a positive reaction to at least 1 allergen in 38.5% of patients and 25.0% of controls (p=0.15). We did not find a statistically significant relationship between a positive reaction to 1 allergen in total and gender, skin type, rosacea type, ocular involvement, age or disease duration. There were more symptoms in patients with a positive reaction to allergens (p<0.001). Contact sensitivity was detected more commonly in rosacea patients. Patch testing may be useful in the treatment and follow-up of rosacea patients, especially if symptoms such as itching, burning and stinging are present.
NASA Astrophysics Data System (ADS)
Selim, M. M.; Bezák, V.
2003-06-01
The one-dimensional version of the radiative transfer problem (i.e. the so-called rod model) is analysed with a Gaussian random extinction function ε(x). Then the optical length X = ∫_0^L ε(x) dx is a Gaussian random variable. The transmission and reflection coefficients, T(X) and R(X), are taken as infinite series. When these series (and also the series representing T^2(X), R^2(X), R(X)T(X), etc.) are averaged, term by term, according to the Gaussian statistics, the series become divergent after averaging. As was shown in a former paper by the authors (in Acta Physica Slovaca (2003)), a rectification can be managed when a 'modified' Gaussian probability density function is used, equal to zero for X < 0 and proportional to the standard Gaussian probability density for X > 0. In the present paper, the authors put forward an alternative, showing that if the r.m.s. deviation of X is sufficiently small in comparison with the mean X̄, the standard Gaussian averaging is well functional provided that the summation in the series representing the variable T^(m-j)(X) R^j(X) (m = 1,2,..., j = 1,...,m) is truncated at a well-chosen finite term. The authors exemplify their analysis by some numerical calculations.
Water Level Monitoring on Tibetan Lakes Based on Icesat and Envisat Data Series
NASA Astrophysics Data System (ADS)
Li, H. W.; Qiao, G.; Wu, Y. J.; Cao, Y. J.; Mi, H.
2017-09-01
Satellite altimetry is an effective method for monitoring lake water levels over a wide range, especially in sparsely populated areas such as the Tibetan Plateau (TP). To provide high-quality data for time-series detection of lake water level changes, an automatic and efficient algorithm for lake water footprint (LWF) detection over a wide range is used. Based on ICESat GLA14 Release 634 data and ENVISat GDR 1 Hz data, water levels of 167 lakes were obtained from the ICESat data series and water levels of 120 lakes from the ENVISat data series. Among them, 67 lakes had both data series. The mean standard deviation over all lakes is 0.088 m (ICESat) and 0.339 m (ENVISat). Combining multi-source altimetry data helps to obtain longer and denser water level time series, study lake level changes, manage water resources and better understand the impacts of climate change. In addition, the standard deviations of the LWF elevations used to calculate the water levels were analyzed by month. Based on the lake data set for the TP from the 1960s, 2005, and 2014 published in Scientific Data, it is found that water level changes in the TP have a strong spatial correlation with area changes.
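As a hedged sketch of the monthly scatter analysis mentioned above, the per-month standard deviation of footprint elevations can be tabulated as follows; the column names, lake names and synthetic values are assumptions of the example:

```python
# Illustrative sketch: monthly scatter of lake-water-footprint elevations. For each lake
# and month, the standard deviation of footprint elevations gives a simple precision
# indicator for the derived water level. The column names, lake names and synthetic
# table are assumptions of this example.
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
n = 1000
df = pd.DataFrame({
    "lake": rng.choice(["Lake A", "Lake B", "Lake C"], size=n),
    "month": rng.integers(1, 13, size=n),
    "elevation_m": 4700 + rng.normal(0, 0.15, size=n),   # footprint elevations (m)
})

monthly_std = (df.groupby(["lake", "month"])["elevation_m"]
                 .std()
                 .rename("std_m")
                 .reset_index())
print(monthly_std.head())
```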
Ohio Studies: Minimum Standards Leadership Series 1985.
ERIC Educational Resources Information Center
Ohio State Dept. of Education, Columbus. Div. of Elementary and Secondary Education.
This monograph is designed to provide materials, ideas, and strategies for school districts and teachers to broaden and expand the standards and requirements of Ohio studies. Section 1, "Introduction" provides an overview of the monograph. Section 2, "Organizing for Instruction" gives several alternative approaches to designing…
Mazzini, Virginia
2017-01-01
The importance of electrolyte solutions cannot be overstated. Beyond the ionic strength of electrolyte solutions, the specific nature of the ions present is vital in controlling a host of properties. Therefore ion specificity is fundamentally important in physical chemistry, engineering and biology. The observation that the strengths of the effects of ions often follow well-established series suggests that a single predictive and quantitative description of specific-ion effects covering a wide range of systems is possible. Such a theory would revolutionise applications of physical chemistry from polymer precipitation to drug design. Current approaches to understanding specific-ion effects involve consideration of the ions themselves, the solvent and relevant interfaces, and the interactions between them. Here we investigate the specific-ion effect trends of standard partial molar volumes and electrostrictive volumes of electrolytes in water and eleven non-aqueous solvents. We choose these measures as they relate to bulk properties at infinite dilution and are therefore the simplest electrolyte systems. This is done to test the hypothesis that the ions alone exhibit a specific-ion effect series that is independent of the solvent and unrelated to surface properties. The specific-ion effect trends of standard partial molar volumes and normalised electrostrictive volumes examined in this work show a fundamental ion-specific series that is reproduced across the solvents, which is the Hofmeister series for anions and the reverse lyotropic series for cations, supporting the hypothesis. This outcome is important in demonstrating that ion specificity is observed at infinite dilution, and it demonstrates that the complexity observed in the manifestation of specific-ion effects in a very wide range of systems is due to perturbations of solvent, surfaces and concentration on the underlying fundamental series. This knowledge will guide a general understanding of specific-ion effects and assist in the development of a quantitative predictive theory of ion specificity. PMID:29147533