Considering Time-Scale Requirements for the Future
2013-05-01
geocentric reference frame with the SI second realized on the rotating geoid as the scale unit. It is a continuous atomic time scale that was...the Barycentric and Geocentric Celestial Reference Systems, two time scales, Barycentric Coordinate Time (TCB) and Geocentric Coordinate Time (TCG)...defined in 2006 as a linear scaling of TCB having the approximate rate of TT. TCG is the time coordinate for the four-dimensional geocentric coordinate
An improved driving waveform reference grayscale of electrophoretic displays
NASA Astrophysics Data System (ADS)
Wang, Li; Yi, Zichuan; Peng, Bao; Zhou, Guofu
2015-10-01
The driving waveform is an important component for gray scale display on electrophoretic displays (EPDs). In the traditional driving waveform, a white reference gray scale is formed before a new image is written. However, the reflectance values do not agree across gray scale transformations. In this paper, a new driving waveform, which adds a short waiting time after the formation of the reference gray scale, is proposed to improve the consistency of the reference gray scale. First, the behavior of the particles in the microcapsule is analyzed and the change in EPD reflectance after formation of the white reference gray scale is studied. Second, the reflectance change curve is fitted with a polynomial and the duration of the waiting time is determined. Third, a set of new driving waveforms is designed following the rule of DC balance, and commercial E Ink EPDs are used to test the performance. Experimental results show that the new driving waveform performs better than traditional waveforms.
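The fitting step described above can be sketched as follows; the relaxation curve, the decay constant, and the 1% settling tolerance are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical sketch: fit the post-reset reflectance relaxation with a
# polynomial (as the abstract describes) and pick the waiting time as the
# first instant the fitted curve settles to within 1% of its final value.
t = np.linspace(0, 100, 201)                     # time in ms (assumed units)
reflectance = 45.0 - 5.0 * np.exp(-t / 20.0)     # synthetic settling curve

coeffs = np.polyfit(t / 100.0, reflectance, deg=5)  # scale t for conditioning
fitted = np.polyval(coeffs, t / 100.0)

final = fitted[-1]
settled = np.abs(fitted - final) < 0.01 * final
waiting_time = t[np.argmax(settled)]             # first True index
print(f"suggested waiting time: {waiting_time:.1f} ms")
```

With these assumed parameters the settling threshold is crossed around 48 ms, illustrating how the polynomial fit converts a measured curve into a single waiting-time parameter.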
IAU resolutions on reference systems and time scales in practice
NASA Astrophysics Data System (ADS)
Brumberg, V. A.; Groten, E.
2001-03-01
To be consistent with the IAU/IUGG (1991) resolutions, the ICRS and ITRS should be treated as four-dimensional reference systems with the TCB and TCG time scales, respectively, interrelated by a four-dimensional general relativistic transformation. This two-way transformation is given in a form adapted for practical application. The use of TDB and TT instead of TCB and TCG, respectively, involves scaling factors that complicate the use of this transformation in practice. The new IAU B1 (2000) resolution is commented on, bearing in mind some points of possible confusion in its practical application. The relationship of the theory of reference systems to the parameters of common relevance to astronomy, geodesy and geodynamics is briefly outlined.
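The scaling factors mentioned above are IAU defining constants, and a quick calculation shows why they matter in practice; L_G and L_B below are the published IAU values, and the drift-per-year figures follow directly:

```python
# TT differs from TCG in rate by the defining constant L_G (IAU 2000),
# and TDB from TCB by L_B (IAU 2006 Resolution B3).
L_G = 6.969290134e-10   # d(TCG - TT)/dt
L_B = 1.550519768e-8    # d(TCB - TDB)/dt

SECONDS_PER_JULIAN_YEAR = 365.25 * 86400.0

# Accumulated rate difference over one Julian year, in seconds:
drift_tt_tcg = L_G * SECONDS_PER_JULIAN_YEAR
drift_tdb_tcb = L_B * SECONDS_PER_JULIAN_YEAR
print(f"TCG - TT drift:  {drift_tt_tcg * 1e3:.1f} ms/yr")   # ~22 ms/yr
print(f"TCB - TDB drift: {drift_tdb_tcb:.3f} s/yr")         # ~0.49 s/yr
```

Roughly half a second per year between TCB and TDB is far from negligible, which is why the scaling factors cannot simply be ignored when the transformation is applied.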
Report of the panel on earth rotation and reference frames, section 7
NASA Technical Reports Server (NTRS)
Dickey, Jean O.; Dickman, Steven R.; Eubanks, Marshall T.; Feissel, Martine; Herring, Thomas A.; Mueller, Ivan I.; Rosen, Richard D.; Schutz, Robert E.; Wahr, John M.; Wilson, Charles R.
1991-01-01
Objectives and requirements for Earth rotation and reference frame studies in the 1990s are discussed. The objectives are to observe and understand interactions of air and water with the rotational dynamics of the Earth, the effects of the Earth's crust and mantle on the dynamics and excitation of Earth rotation variations over time scales of hours to centuries, and the effects of the Earth's core on the rotational dynamics and the excitation of Earth rotation variations over time scales of a year or longer. Another objective is to establish, refine and maintain terrestrial and celestial reference frames. Requirements include improvements in observations and analysis, improvements in celestial and terrestrial reference frames and reference frame connections, and improved observations of crustal motion and mass redistribution on the Earth.
NASA Astrophysics Data System (ADS)
Wang, H. H.; Shi, Y. P.; Li, X. H.; Ni, K.; Zhou, Q.; Wang, X. H.
2018-03-01
In this paper, a scheme to measure the position of precision stages with high precision is presented. The encoder is composed of a scale grating and a compact two-probe reading head, which reads a zero-position pulse signal and a continuous incremental displacement signal. The scale grating contains different codes: multiple reference codes with different spacings superimposed onto incremental grooves with an equal-spacing structure. The codes of the reference mask in the reading head are the same as the reference codes on the scale grating, and they generate a pulse signal that coarsely locates the reference position as the reading head moves along the scale grating. After the reference position is located within a section by means of the pulse signal, it can be located precisely using the amplitude of the incremental displacement signal. A set of reference codes and a scale grating were designed, and experimental results show that the primary precision achieved by the design is 1 μm. The period of the incremental signal is 1 μm, and 1000/N nm precision can be achieved by subdividing the incremental signal N-fold.
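The subdivision step can be illustrated with quadrature signals; the sin/cos model and the arctangent interpolation below are a common textbook approach and an assumption here, not necessarily the authors' exact method:

```python
import numpy as np

# Sketch of N-fold subdivision: with quadrature (sin/cos) incremental
# signals of period 1 um, the phase angle locates the position inside one
# period, so subdividing the phase N-fold yields 1000/N nm resolution.
PERIOD_UM = 1.0

def position_in_period(sin_sig, cos_sig, period_um=PERIOD_UM):
    """Interpolate position inside one grating period from quadrature signals."""
    phase = np.arctan2(sin_sig, cos_sig) % (2 * np.pi)
    return period_um * phase / (2 * np.pi)

# simulate a true position of 0.250 um inside the period
true_pos = 0.250
phase = 2 * np.pi * true_pos / PERIOD_UM
est = position_in_period(np.sin(phase), np.cos(phase))
print(f"estimated position: {est * 1000:.1f} nm")   # 250.0 nm
```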
Rime-, mixed- and glaze-ice evaluations of three scaling laws
NASA Technical Reports Server (NTRS)
Anderson, David N.
1994-01-01
This report presents the results of tests at NASA Lewis to evaluate three icing scaling relationships, or 'laws', for an unheated model: LWC × time = constant, one proposed by a Swedish-Russian group, and one used at ONERA in France. Icing tests were performed in the NASA Lewis Icing Research Tunnel (IRT) with cylinders ranging from 2.5- to 15.2-cm diameter. Reference conditions were chosen to provide rime, mixed and glaze ice. Scaled conditions were tested for several scenarios of size and velocity scaling, and the resulting ice shapes were compared. For rime-ice conditions, all three scaling laws provided scaled ice shapes that closely matched the reference ice shapes. For mixed ice and glaze ice, none of the scaling laws produced consistently good simulations of the reference ice shapes. Explanations for the observed results are proposed, and scaling issues requiring further study are identified.
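The first law quoted above, LWC × time = constant, is simple enough to state in code; the numerical values are illustrative, not from the report:

```python
# Minimal sketch of the LWC x time = constant scaling law: given reference
# liquid water content and icing time, solve for the spray time that keeps
# the product invariant at the scaled conditions.
def scaled_icing_time(lwc_ref, time_ref, lwc_scaled):
    """Scale icing time so that LWC * time is held constant."""
    return lwc_ref * time_ref / lwc_scaled

t = scaled_icing_time(lwc_ref=0.5, time_ref=600.0, lwc_scaled=1.0)  # g/m^3, s
print(f"scaled spray time: {t:.0f} s")   # 300 s
```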
NASA Astrophysics Data System (ADS)
Wang, Siyao; Li, Bofeng; Li, Xingxing; Zang, Nan
2018-01-01
Integer ambiguity fixing with uncalibrated phase delay (UPD) products can significantly shorten the initialization time and improve the accuracy of precise point positioning (PPP). Since the tracking arcs of satellites and the behavior of atmospheric biases can differ greatly among reference networks of different scales, the quality of the corresponding UPD products may also vary. The purpose of this paper is to comparatively investigate the influence of reference station networks of different scales on UPD estimation and user ambiguity resolution. Three reference station networks, of global, wide-area and local scale, are used to compute UPD products and analyze their impact on PPP-AR. The time-to-first-fix, the unfixed rate and the incorrect-fix rate of PPP-AR are analyzed. Moreover, in order to further shorten the convergence time for obtaining precise positioning, a modified partial ambiguity resolution (PAR) method and a corresponding validation strategy are presented. In this PAR method, the ambiguity subset is determined by removing ambiguities one by one in order of ascending elevation. In addition, for the static positioning mode, a coordinate validation strategy is employed to enhance the reliability of the fixed coordinates. The experimental results show that UPD products computed from a smaller station network are more accurate and lead to a better coordinate solution; the PAR method used in this paper shortens the convergence time, and the coordinate validation strategy improves the availability of high-precision positioning.
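The elevation-ordered subset search can be sketched as follows; the ratio test is replaced by a toy pass/fail function, so this shows only the removal order, not the actual PPP-AR validation:

```python
# Hedged sketch of the partial ambiguity resolution (PAR) order described
# above: candidate ambiguities are removed one by one, lowest satellite
# elevation first, until a (here: toy) validation test accepts the subset.
def partial_ambiguity_subset(ambiguities, elevations, ratio_test, min_size=4):
    """Drop ambiguities lowest-elevation-first until ratio_test accepts the subset."""
    order = sorted(range(len(ambiguities)), key=lambda i: elevations[i])
    subset = list(range(len(ambiguities)))
    for i in order:
        if ratio_test([ambiguities[j] for j in subset]):
            return [ambiguities[j] for j in subset]
        if len(subset) <= min_size:
            break
        subset.remove(i)
    return None  # no acceptable subset found

# toy test: pretend fixing succeeds once low-elevation satellites are gone
sats = ["G01", "G07", "G12", "G15", "G23", "G30"]
elev = [12.0, 48.0, 25.0, 67.0, 9.0, 34.0]
ok = lambda sub: all(elev[sats.index(s)] > 20.0 for s in sub)
subset_fixed = partial_ambiguity_subset(sats, elev, ok)
print(subset_fixed)   # ['G07', 'G12', 'G15', 'G30']
```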
Anomalous scaling of stochastic processes and the Moses effect
NASA Astrophysics Data System (ADS)
Chen, Lijian; Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.
2017-04-01
The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t1/2. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
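The baseline claim, that a process with normal, stationary, uncorrelated increments has a width scaling like t^(1/2), is easy to verify numerically; this sketch estimates the exponent from an ensemble of simulated random walks (the regression-on-log-width estimator is an illustrative choice, not the paper's method):

```python
import numpy as np

# Estimate the overall scaling exponent of an ordinary random walk by
# regressing the log of the ensemble width on log time. For uncorrelated
# normal increments the slope should be close to 1/2.
rng = np.random.default_rng(42)
n_paths, n_steps = 2000, 512
walks = np.cumsum(rng.standard_normal((n_paths, n_steps)), axis=1)

t = np.arange(1, n_steps + 1)
width = walks.std(axis=0)                        # ensemble width at each time
H, _ = np.polyfit(np.log(t), np.log(width), 1)   # slope ~ scaling exponent
print(f"estimated scaling exponent: {H:.2f}")    # close to 0.5
```

Anomalous scaling, as defined above, is precisely a statistically significant departure of this exponent from 1/2, attributable to the Joseph, Noah, or Moses effect.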
Investigations on the hierarchy of reference frames in geodesy and geodynamics
NASA Technical Reports Server (NTRS)
Grafarend, E. W.; Mueller, I. I.; Papo, H. B.; Richter, B.
1979-01-01
Problems related to reference directions were investigated. Space- and time-variant angular parameters are illustrated in hierarchic structures, or towers. Using least-squares techniques, model towers of triads are presented which allow the formation of linear observation equations. Translational and rotational degrees of freedom (origin and orientation) are discussed along with the notion of length and scale degrees of freedom. Following the notion of scale parallelism, scale factors with respect to a unit length are given. Three-dimensional geodesy was constructed from the set of three base vectors (the gravity, Earth-rotation and ecliptic normal vectors). Space and time variations are given with respect to a polar and singular value decomposition, or in terms of changes in translation, rotation and deformation (shear, dilatation, or angular and scale distortions).
On time scales and time synchronization using LORAN-C as a time reference signal
NASA Technical Reports Server (NTRS)
Chi, A. R.
1974-01-01
The long term performance of the eight LORAN-C chains is presented in terms of the Coordinated Universal Time (UTC) of the U.S. Naval Observatory (USNO); and the use of the LORAN-C navigation system for maintaining the user's clock to a UTC scale is described. The atomic time scale and the UTC of several national laboratories and observatories relative to the international atomic time are reported. Typical performance of several NASA tracking station clocks, relative to the USNO master clock, is also presented.
Sobol-Kwapinska, Malgorzata; Oles, Piotr K
2007-02-01
Referring to the 2005 article by Wittmann and Lehnhoff, the problem of using time metaphors to measure awareness of time is posed. Starting from a clarification of the meaning of the Metaphors Slowness scale, which was not homogeneous, an alternative interpretation of the result is proposed: metaphors refer to two separate aspects of time speed, the ongoing passage of time and ex post reflection on time that has already passed. The former refers to the judgment of an ongoing passage of time, and the latter to the judgment of the passage of past time from a particular point in the past until now. Time perception is multifaceted and perhaps ambiguous. This particular aspect of time perception is covered by the notion of a "dialectical time," in which opposite aspects of time are combined, e.g., pleasant and unpleasant ones.
Ten-Year Time Trends in Emotional and Behavioral Problems of Dutch Children Referred for Youth Care
ERIC Educational Resources Information Center
Veerman, Jan Willem; De Meyer, Ronald
2012-01-01
Emotional and behavioral problems assessed with the "Child Behavior Checklist" (CBCL) were analyzed from 2,739 Dutch children referred to Families First (FF) or Intensive Family Treatment (IFT) from 1999 to 2008, to examine time trends. From the year 2004 onward, six of the eight CBCL-syndrome scales yielded significant decreases from the…
JY1 time scale: a new Kalman-filter time scale designed at NIST
NASA Astrophysics Data System (ADS)
Yao, Jian; Parker, Thomas E.; Levine, Judah
2017-11-01
We report on a new Kalman-filter hydrogen-maser time scale (the JY1 time scale) designed at the National Institute of Standards and Technology (NIST). The JY1 time scale is composed of a few hydrogen masers and a commercial Cs clock. The Cs clock is used as a reference clock to ease operations with existing data. Unlike other time scales, the JY1 time scale uses three basic time-scale equations instead of only one. The time scale can also detect a clock error (i.e., a time error, frequency error, or frequency drift error) automatically. These features make the JY1 time scale robust and less likely to be affected by an abnormal clock. Tests show that the JY1 time scale deviates from UTC by less than ±5 ns for ~100 d when it is initially aligned to UTC and then left completely free running. Once the time scale is steered to a Cs fountain, it can maintain the time with little error even if the Cs fountain stops working for tens of days. This is helpful when a continuously operated fountain is unavailable, when a continuously operated fountain accidentally stops, or when optical clocks run only occasionally.
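A basic time-scale equation of the weighted-ensemble kind alluded to above can be sketched as follows; the weights, readings, and predictions are illustrative placeholders, and this is a generic AT1-style form rather than the JY1 equations themselves:

```python
# Generic weighted-average time-scale update: the ensemble time offset is a
# weighted average of each clock's reading minus its prediction. Weights,
# readings, and predictions below are illustrative placeholders.
def ensemble_time(readings, predictions, weights):
    """Weighted-average time-scale update; weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-12
    return sum(w * (x - xhat)
               for x, xhat, w in zip(readings, predictions, weights))

# three masers, nanosecond-level residuals against their predictions
readings    = [12.4e-9, -3.1e-9, 5.0e-9]
predictions = [12.0e-9, -3.0e-9, 4.0e-9]
weights     = [0.5, 0.3, 0.2]
offset = ensemble_time(readings, predictions, weights)
print(f"ensemble offset: {offset * 1e9:.2f} ns")   # 0.37 ns
```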
Fluctuations in Cerebral Hemodynamics
2003-12-01
Detrended fluctuation analysis (DFA; see (28) and references therein) is commonly used to determine scaling properties of cerebral hemodynamic signals, such as the beat-averaged arterial blood pressure time series of a healthy subject. [Figure fragments: (a) the first 1000 values of the time series; (b) detrended fluctuation analysis of the time series shown in (a); Fig. 3, side-by-side boxplots.]
The Chip-Scale Atomic Clock - Prototype Evaluation
2007-11-01
Presented at the 39th Annual Precise Time and Time Interval (PTTI) Meeting: "The Chip-Scale Atomic Clock - Prototype Evaluation," R. Lutwak, A. Rashed, et al. This work has been supported by the Defense Advanced Research Projects Agency, Contract # NBCHC020050. [Snippet fragments of references [1] Lutwak et al. (Washington, D.C., pp. 539-550) and [2] Lutwak et al., 2004, "The Chip-Scale...]
NASA Astrophysics Data System (ADS)
Lourens, L. J.; Ziegler, M.; Konijnendijk, T. Y. M.; Hilgen, F. J.; Bos, R.; Beekvelt, B.; van Loevezijn, A.; Collin, S.
2017-12-01
The astronomical theory of climate has revolutionized our understanding of past climate change and enabled the development of highly accurate geologic time scales for the entire Cenozoic. Most of this understanding has come from the construction of an astronomically tuned, global-ocean benthic foraminiferal oxygen isotope (δ18O) stacked record, derived from the international drilling operations of DSDP, ODP and IODP. The tuning includes fixed phase relationships between the obliquity and precession cycles and the inferred high-latitude climate (i.e., glacial-interglacial) response, which hark back to SPECMAP, using simple ice sheet models and a limited number of radiometric dates. This approach was largely implemented in the widely applied LR04 stack, though LR04 assumed shorter response times for the smaller ice caps during the Pliocene. In the past decades, an astronomically calibrated time scale for the Pliocene and Pleistocene of the Mediterranean has been developed, which has become the reference for the standard Geologic Time Scale. Typical of the Mediterranean marine sediments are cyclic lithological alternations, reflecting the interference between obliquity- and precession-paced low-latitude climate variability, such as the African monsoon. Here we present the first benthic foraminiferal oxygen isotope record of the Mediterranean reference scale, which strikingly mirrors the LR04. We use this record to discuss the assumed open-ocean glacial-interglacial phase relations over the past 5.3 million years.
Regulatory modes and time management: how locomotors and assessors plan and perceive time.
Amato, Clara; Pierro, Antonio; Chirumbolo, Antonio; Pica, Gennaro
2014-06-01
This research investigated the relationship between regulatory mode orientations (locomotion and assessment), time management behaviours and the perceived control of time. "Locomotion" refers to the aspect of self-regulation involving the movement from state to state, whereas "assessment" is the comparative aspect of self-regulation that refers to the critical evaluation of alternative goals and the means for achieving them. The Italian versions of the Time Management Behavior Scale and the Perceived Control of Time Scale, as well as the Locomotion and Assessment Regulatory Modes Scales were administered to 339 Italian participants (249 students and 90 employees). The results supported the notion that locomotors and assessors differ in the ways they perceive the control of time. Locomotion was found to be positively related to perceived control of time. In contrast, assessment was negatively related to perceived control of time. Furthermore, the two time management dimensions of setting goals and priorities and preference for organisation were shown to mediate the relationship between locomotion and perceived control of time, whereas assessment proved to be unrelated to all time management behaviours. These findings highlight the importance of regulatory modes for human behaviour regarding time management and perceived control of time. © 2014 International Union of Psychological Science.
Earth History databases and visualization - the TimeScale Creator system
NASA Astrophysics Data System (ADS)
Ogg, James; Lugowski, Adam; Gradstein, Felix
2010-05-01
The "TimeScale Creator" team (www.tscreator.org) and the Subcommission on Stratigraphic Information (stratigraphy.science.purdue.edu) of the International Commission on Stratigraphy (www.stratigraphy.org) have worked with numerous geoscientists and geological surveys to prepare reference datasets for global and regional stratigraphy. All events are currently calibrated to Geologic Time Scale 2004 (Gradstein et al., 2004, Cambridge Univ. Press) and the Concise Geologic Time Scale (Ogg et al., 2008, Cambridge Univ. Press), but the array of intercalibrations enables dynamic adjustment to future numerical age scales and interpolation methods. The main "global" database contains over 25,000 events/zones from paleontology, geomagnetics, sea-level and sequence stratigraphy, igneous provinces, bolide impacts, plus several stable isotope curves and image sets. Several regional datasets are provided in conjunction with geological surveys, with numerical ages interpolated using a similar flexible inter-calibration procedure. For example, a joint program with Geoscience Australia has compiled an extensive Australian regional biostratigraphy and a full array of basin lithologic columns, with each formation linked to public lexicons of all Proterozoic through Phanerozoic basins - nearly 500 columns of over 9,000 data lines plus hot-cursor links to oil-gas reference wells. Other datapacks include New Zealand biostratigraphy and basin transects (ca. 200 columns), Russian biostratigraphy, British Isles regional stratigraphy, Gulf of Mexico biostratigraphy and lithostratigraphy, high-resolution Neogene stable isotope curves and ice-core data, human cultural episodes, and Circum-Arctic stratigraphy sets. The growing library of datasets is designed for viewing and chart-making in the free "TimeScale Creator" JAVA package. This visualization system produces a screen display of the user-selected time span and the selected columns of geologic time scale information.
The user can change the vertical-scale, column widths, fonts, colors, titles, ordering, range chart options and many other features. Mouse-activated pop-ups provide additional information on columns and events; including links to external Internet sites. The graphics can be saved as SVG (scalable vector graphics) or PDF files for direct import into Adobe Illustrator or other common drafting software. Users can load additional regional datapacks, and create and upload their own datasets. The "Pro" version has additional dataset-creation tools, output options and the ability to edit and re-save merged datasets. The databases and visualization package are envisioned as a convenient reference tool, chart-production assistant, and a window into the geologic history of our planet.
A long time span relativistic precession model of the Earth
NASA Astrophysics Data System (ADS)
Tang, Kai; Soffel, Michael H.; Tao, Jin-He; Han, Wen-Biao; Tang, Zheng-Hong
2015-04-01
A numerical solution to the Earth's precession in a relativistic framework for a long time span is presented here. We obtain the motion of the solar system in the Barycentric Celestial Reference System by numerical integration with a symplectic integrator. Special Newtonian corrections accounting for tidal dissipation are included in the force model. The part representing Earth's rotation is calculated in the Geocentric Celestial Reference System by integrating the post-Newtonian equations of motion published by Klioner et al. All the main relativistic effects are included following Klioner et al. In particular, we consider several relativistic reference systems with corresponding time scales, scaled constants and parameters. Approximate expressions for Earth's precession in the interval ±1 Myr around J2000.0 are provided. In the interval ±2000 years around J2000.0, the difference compared to the P03 precession theory is only several arcseconds and the results are consistent with other long-term precession theories. Supported by the National Natural Science Foundation of China.
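The symplectic integration mentioned above can be illustrated with a minimal leapfrog (velocity Verlet) scheme for a normalized two-body problem; this is a toy sketch, not the actual solar-system force model:

```python
import math

# Leapfrog (velocity Verlet) integration of a 2D Kepler orbit in normalized
# units (GM = 1). Symplectic schemes keep the energy error bounded over
# long spans, which is why they are favored for million-year integrations.
def leapfrog(pos, vel, dt, steps):
    def acc(p):
        r3 = (p[0]**2 + p[1]**2) ** 1.5
        return (-p[0] / r3, -p[1] / r3)
    a = acc(pos)
    for _ in range(steps):
        vel = (vel[0] + 0.5 * dt * a[0], vel[1] + 0.5 * dt * a[1])  # kick
        pos = (pos[0] + dt * vel[0], pos[1] + dt * vel[1])          # drift
        a = acc(pos)
        vel = (vel[0] + 0.5 * dt * a[0], vel[1] + 0.5 * dt * a[1])  # kick
    return pos, vel

def energy(pos, vel):
    r = math.hypot(pos[0], pos[1])
    return 0.5 * (vel[0]**2 + vel[1]**2) - 1.0 / r

p0, v0 = (1.0, 0.0), (0.0, 1.0)        # circular orbit, E = -0.5
p, v = leapfrog(p0, v0, dt=0.01, steps=10000)
err = abs(energy(p, v) - energy(p0, v0))
print(f"energy drift after ~16 orbits: {err:.2e}")   # stays small
```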
Automated Coastal Engineering System: Technical Reference
1992-09-01
[Table-of-contents excerpts from the ACES Technical Reference: Wave Transmission Through Permeable Structures; Littoral Processes; Grain-Size Scales (Soil Classification); Major Tidal Constituents; Longshore Sediment Transport; Numerical Simulation of Time-Dependent Beach and Dune Erosion Processes; Calculation of Composite...]
On the Assessment of Global Terrestrial Reference Frame Temporal Variations
NASA Astrophysics Data System (ADS)
Ampatzidis, Dimitrios; Koenig, Rolf; Zhu, Shengyuan
2015-04-01
Global Terrestrial Reference Frames (GTRFs), such as the International Terrestrial Reference Frame (ITRF), provide reliable 4-D position information (3-D coordinates and their evolution through time). The given 3-D velocities play a significant role in precise position acquisition and are estimated from long-term coordinate time series from the space-geodetic techniques DORIS, GNSS, SLR and VLBI. The temporal evolution of a GTRF is directly connected with its internal stability: the more intense and inhomogeneous the velocity field, the less stable the derived TRF. The quality of a GTRF is mainly assessed by comparing it to each individual technique's reference frame. For example, comparison of a GTRF to the SLR-only TRF gives a sense of the ITRF's stability with respect to the geocenter and scale and their associated rates, while comparison of the ITRF to the VLBI-only TRF can be used for scale validation. However, until now there has been no specified methodology for the total assessment (in terms of origin, orientation and scale) of the temporal evolution and associated accuracy of GTRFs. We present a new alternative diagnostic tool for assessing the temporal evolution of GTRFs, based on the well-known time-dependent Helmert transformation formula (three shift, three rotation and one scale rate, respectively). The advantage of the new methodology is that it uses the full velocity field of the TRF, and therefore all points, not just those common to different techniques. It also examines rates of origin, orientation and scale simultaneously. The methodology is presented and implemented for the two existing GTRFs (the ITRF and the DTRF, which is computed by DGFI), and the results are discussed. The results also allow a direct comparison of each GTRF's dynamic behavior. Furthermore, the correlations of the estimated parameters can provide useful information for the proposed GTRF assessment scheme.
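The time-dependent Helmert comparison rests on the 7-parameter similarity transformation; here is a linearized (small-angle) sketch with illustrative parameter values, not actual ITRF/DTRF numbers:

```python
import numpy as np

# 7-parameter Helmert (similarity) transformation, linearized for small
# rotation angles: x' = T + (1 + s) * R @ x. Sign conventions for the
# rotation matrix vary between communities; this is one common choice.
def helmert(x, tx, ty, tz, s, rx, ry, rz):
    T = np.array([tx, ty, tz])
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    return T + (1.0 + s) * (R @ x)

x = np.array([4075580.0, 931855.0, 4801568.0])   # a point in meters (assumed)
# mm-level shifts, ppb-level scale, nrad-level rotations (illustrative)
xp = helmert(x, 0.002, -0.001, 0.003, 1.2e-9, 5e-9, -3e-9, 2e-9)
print(xp - x)   # millimetre-to-centimetre coordinate changes
```

Applied to velocities rather than positions, the same formula yields the shift, rotation and scale *rates* that the proposed diagnostic estimates from the full velocity field.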
Genotype Imputation with Millions of Reference Samples
Browning, Brian L.; Browning, Sharon R.
2016-01-01
We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle’s throughput was more than 100× greater than Impute2’s throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. PMID:26748515
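The linear-interpolation step that Beagle v.4.1 uses for ungenotyped variants can be illustrated in a few lines; the positions and dosages below are made up for the example:

```python
import numpy as np

# Sketch of the interpolation step described above: dosages are estimated
# only at markers genotyped in the target, and dosages at ungenotyped
# positions are linearly interpolated between the flanking genotyped
# markers. Positions and dosages are illustrative.
genotyped_pos  = np.array([100, 500, 900, 1300])    # bp
genotyped_dose = np.array([0.1, 1.8, 1.0, 0.2])     # estimated ALT dosage

ungenotyped_pos = np.array([300, 700, 1100])
imputed = np.interp(ungenotyped_pos, genotyped_pos, genotyped_dose)
print(imputed)   # [0.95 1.4  0.6 ]
```

Restricting the expensive probability model to the genotyped markers and filling the gaps this cheaply is what lets the method scale to millions of reference samples.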
DOT National Transportation Integrated Search
2006-12-01
Over the last several years, researchers at the University of Arizona's ATLAS Center have developed an adaptive ramp metering system referred to as MILOS (Multi-Objective, Integrated, Large-Scale, Optimized System). The goal of this project is ...
Extreme reaction times determine fluctuation scaling in human color vision
NASA Astrophysics Data System (ADS)
Medina, José M.; Díaz, José A.
2016-11-01
In modern mental chronometry, human reaction time defines the time elapsed from stimulus presentation until a response occurs and represents a reference paradigm for investigating stochastic latency mechanisms in color vision. Here we examine the statistical properties of extreme reaction times and whether they support fluctuation scaling in the skewness-kurtosis plane. Reaction times were measured for visual stimuli across the cardinal directions of the color space. For all subjects, the results show that very large reaction times deviate from the right tail of reaction time distributions suggesting the existence of dragon-kings events. The results also indicate that extreme reaction times are correlated and shape fluctuation scaling over a wide range of stimulus conditions. The scaling exponent was higher for achromatic than isoluminant stimuli, suggesting distinct generative mechanisms. Our findings open a new perspective for studying failure modes in sensory-motor communications and in complex networks.
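Locating a sample in the skewness-kurtosis plane requires only the third and fourth standardized moments; this sketch uses lognormal draws as a stand-in for reaction times (an assumption, since the paper's data are measured RTs):

```python
import numpy as np

# Compute the (skewness, kurtosis) point for reaction-time-like samples.
# Lognormal variates are a common RT surrogate; families of such points
# trace the fluctuation-scaling curves studied in the paper.
rng = np.random.default_rng(7)
rt = rng.lognormal(mean=-1.0, sigma=0.4, size=50_000)   # pseudo RTs, seconds

def skewness(x):
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**3))

def kurtosis(x):
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**4))   # non-excess kurtosis

print(f"skewness = {skewness(rt):.2f}, kurtosis = {kurtosis(rt):.2f}")
```

The right tail of the distribution dominates both moments, which is why the extreme ("dragon-king") reaction times control the position of the point in the plane.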
Ask Here PA: Large-Scale Synchronous Virtual Reference for Pennsylvania
ERIC Educational Resources Information Center
Mariner, Vince
2008-01-01
Ask Here PA is Pennsylvania's new statewide live chat reference and information service. This article discusses the key strategies utilized by Ask Here PA administrators to recruit participating libraries to contribute staff time to the service, the importance of centralized staff training, the main aspects of staff training, and activating the…
Advances in time-scale algorithms
NASA Technical Reports Server (NTRS)
Stein, S. R.
1993-01-01
The term clock is usually used to refer to a device that counts a nearly periodic signal. A group of clocks, called an ensemble, is often used for timekeeping in mission-critical applications that cannot tolerate loss of time due to the failure of a single clock. The time generated by the ensemble of clocks is called a time scale. The question arises how to combine the times of the individual clocks to form the time scale. One might naively be tempted to suggest the expedient of averaging the times of the individual clocks, but a simple thought experiment demonstrates the inadequacy of this approach. Suppose a time scale is composed of two noiseless clocks having equal and opposite frequencies. The mean time scale has zero frequency. However, if either clock fails, the time-scale frequency immediately changes to the frequency of the remaining clock. This performance is generally unacceptable, and simple mean time scales are not used. First, previous time-scale developments are reviewed, and then some new methods that result in enhanced performance are presented. The historical perspective is based upon several time scales: the AT1 and TA time scales of the National Institute of Standards and Technology (NIST), the A.1(MEAN) time scale of the U.S. Naval Observatory (USNO), the TAI time scale of the Bureau International des Poids et Mesures (BIPM), and the KAS-1 time scale of the Naval Research Laboratory (NRL). The new method was incorporated in the KAS-2 time scale recently developed by Timing Solutions Corporation. The goal is to present time-scale concepts in a nonmathematical form with as few equations as possible. Many other papers and texts discuss the details of the optimal estimation techniques that may be used to implement these concepts.
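The two-clock thought experiment above is concrete enough to run; the fractional frequency offsets are illustrative:

```python
# Two noiseless clocks with equal and opposite frequency offsets average to
# zero frequency, but the simple-mean time scale jumps to the surviving
# clock's frequency the moment one clock fails.
def mean_timescale_freq(clock_freqs):
    """Frequency of a simple-mean time scale over the live clocks."""
    return sum(clock_freqs) / len(clock_freqs)

clocks = [+1e-13, -1e-13]             # fractional frequency offsets
print(mean_timescale_freq(clocks))    # 0.0 : perfect mean scale

clocks = [+1e-13]                     # the -1e-13 clock fails
print(mean_timescale_freq(clocks))    # 1e-13 : abrupt frequency step
```

Avoiding exactly this discontinuity is what motivates the weighted, prediction-based ensemble algorithms the paper reviews.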
Measurement of optical to electrical and electrical to optical delays with ps-level uncertainty.
Peek, H Z; Pinkert, T J; Jansweijer, P P M; Koelemeij, J C J
2018-05-28
We present a new measurement principle to determine the absolute time delay of a waveform from an optical reference plane to an electrical reference plane and vice versa. We demonstrate a method based on this principle with 2 ps uncertainty. This method can be used to perform accurate time delay determinations of optical transceivers used in fiber-optic time-dissemination equipment. As a result the time scales in optical and electrical domain can be related to each other with the same uncertainty. We expect this method will be a new breakthrough in high-accuracy time transfer and absolute calibration of time-transfer equipment.
NASA Astrophysics Data System (ADS)
Mantegna, Rosario N.; Stanley, H. Eugene
2007-08-01
Preface; 1. Introduction; 2. Efficient market hypothesis; 3. Random walk; 4. Lévy stochastic processes and limit theorems; 5. Scales in financial data; 6. Stationarity and time correlation; 7. Time correlation in financial time series; 8. Stochastic models of price dynamics; 9. Scaling and its breakdown; 10. ARCH and GARCH processes; 11. Financial markets and turbulence; 12. Correlation and anti-correlation between stocks; 13. Taxonomy of a stock portfolio; 14. Options in idealized markets; 15. Options in real markets; Appendix A: notation guide; Appendix B: martingales; References; Index.
Reference results for time-like evolution up to
NASA Astrophysics Data System (ADS)
Bertone, Valerio; Carrazza, Stefano; Nocera, Emanuele R.
2015-03-01
We present high-precision numerical results for time-like Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution in the factorisation scheme, for the first time up to next-to-next-to-leading order accuracy in quantum chromodynamics. First, we scrutinise the analytical expressions of the splitting functions available in the literature, in both x and N space, and check their mutual consistency. Second, we implement time-like evolution in two publicly available, entirely independent and conceptually different numerical codes, in x and N space respectively: the already existing APFEL code, which has been updated with time-like evolution, and the new MELA code, which has been specifically developed to perform the study in this work. Third, by means of a model for fragmentation functions, we provide results for the evolution in different factorisation schemes, for different ratios between renormalisation and factorisation scales and at different final scales. Our results are collected in the format of benchmark tables, which could be used as a reference for global determinations of fragmentation functions in the future.
ERIC Educational Resources Information Center
Ebesutani, Chad; Reise, Steven P.; Chorpita, Bruce F.; Ale, Chelsea; Regan, Jennifer; Young, John; Higa-McMillan, Charmaine; Weisz, John R.
2012-01-01
Using a school-based (N = 1,060) and clinic-referred (N = 303) youth sample, the authors developed a 25-item shortened version of the Revised Child Anxiety and Depression Scale (RCADS) using Schmid-Leiman exploratory bifactor analysis to reduce client burden and administration time and thus improve the transportability characteristics of this…
What Is a Complex Innovation System?
Katz, J. Sylvan
2016-01-01
Innovation systems are sometimes referred to as complex systems, something that is intuitively understood but poorly defined. A complex system dynamically evolves in non-linear ways, giving it unique properties that distinguish it from other systems. In particular, a common signature of complex systems is scale-invariant emergent properties. A scale-invariant property can be identified because it is solely described by a power law function, f(x) = kx^α, where the exponent, α, is a measure of scale-invariance. The focus of this paper is to describe and illustrate that innovation systems have properties of a complex adaptive system, in particular scale-invariant emergent properties that are indicative of their complex nature and that can be quantified and used to inform public policy. The global research system is an example of an innovation system. Peer-reviewed publications containing knowledge are a characteristic output. Citations or references to these articles are an indirect measure of the impact the knowledge has on the research community. Peer-reviewed papers indexed in Scopus and in the Web of Science were used as data sources to produce measures of size and impact. These measures are used to illustrate how scale-invariant properties can be identified and quantified. It is demonstrated that the distribution of impact has a reasonable likelihood of being scale-invariant, with scaling exponents that tended toward a value of less than 3.0 with the passage of time and decreasing group sizes. Scale-invariant correlations are shown between the evolution of impact and size with time, and between field impact and sizes at points in time. The recursive or self-similar nature of scale-invariance suggests that any smaller innovation system within the global research system is likely to be complex with scale-invariant properties too. PMID:27258040
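As a sketch of the kind of estimate the paper describes, the exponent α of a power law f(x) = kx^α can be recovered by linear regression in log-log space. The data below are synthetic, not the Scopus/Web of Science citation measures used in the study.

```python
import math

# Fit f(x) = k * x**alpha by least squares on log y = log k + alpha * log x.

def fit_power_law(xs, ys):
    """Return (k, alpha) from a log-log least-squares fit."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    alpha = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    k = math.exp(my - alpha * mx)
    return k, alpha

xs = [1, 2, 4, 8, 16, 32]
ys = [5.0 * x ** -2.5 for x in xs]   # exact power law with alpha = -2.5
k, alpha = fit_power_law(xs, ys)
```

On noisy empirical distributions a maximum-likelihood estimator is usually preferred over this simple regression, but the regression conveys the idea of measuring the scaling exponent.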
Control of Systems With Slow Actuators Using Time Scale Separation
NASA Technical Reports Server (NTRS)
Stepanyan, Vehram; Nguyen, Nhan
2009-01-01
This paper addresses the problem of controlling a nonlinear plant with a slow actuator using singular perturbation method. For the known plant-actuator cascaded system the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result when using conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is within the order of this small parameter. For the unknown system the adaptive counterpart is developed based on the prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.
Airborne Sea-Surface Topography in an Absolute Reference Frame
NASA Astrophysics Data System (ADS)
Brozena, J. M.; Childers, V. A.; Jacobs, G.; Blaha, J.
2003-12-01
Highly dynamic coastal ocean processes occur at temporal and spatial scales that cannot be captured by the present generation of satellite altimeters. Space-borne gravity missions such as GRACE also provide time-varying gravity and a geoidal msl reference surface at resolution that is too coarse for many coastal applications. The Naval Research Laboratory and the Naval Oceanographic Office have been testing the application of airborne measurement techniques, gravity and altimetry, to determine sea-surface height and height anomaly at the short scales required for littoral regions. We have developed a precise local gravimetric geoid over a test region in the northern Gulf of Mexico from historical gravity data and recent airborne gravity surveys. The local geoid provides a msl reference surface with a resolution of about 10-15 km and provides a means to connect airborne, satellite and tide-gage observations in an absolute (WGS-84) framework. A series of altimetry reflights over the region with time scales of 1 day to 1 year reveal a highly dynamic environment with coherent and rapidly varying sea-surface height anomalies. AXBT data collected at the same time show apparent correlation with wave-like temperature anomalies propagating up the continental slope of the Desoto Canyon. We present animations of the temporal evolution of the surface topography and water column temperature structure down to the 800 m depth of the AXBT sensors.
Genotype Imputation with Millions of Reference Samples.
Browning, Brian L; Browning, Sharon R
2016-01-07
We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle's throughput was more than 100× greater than Impute2's throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
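A minimal sketch of the interpolation idea described in the abstract: values are computed only at markers genotyped in the target, then ungenotyped positions are filled by linear interpolation on genomic position. The positions and dosages below are invented, and this is not Beagle's actual implementation.

```python
# Linear interpolation of imputed allele dosages between flanking genotyped
# markers (illustrative stand-in for interpolating model probabilities).

def interpolate_dosage(genotyped_pos, dosages, query_pos):
    """Linearly interpolate dosage at each query position (like numpy.interp)."""
    out = []
    for q in query_pos:
        if q <= genotyped_pos[0]:
            out.append(dosages[0]); continue
        if q >= genotyped_pos[-1]:
            out.append(dosages[-1]); continue
        for i in range(1, len(genotyped_pos)):
            if genotyped_pos[i] >= q:   # found the flanking pair
                x0, x1 = genotyped_pos[i - 1], genotyped_pos[i]
                y0, y1 = dosages[i - 1], dosages[i]
                out.append(y0 + (y1 - y0) * (q - x0) / (x1 - x0))
                break
    return out

pos = [100, 200, 400]      # genotyped marker positions (bp), invented
dose = [0.0, 1.0, 2.0]     # posterior ALT dosage at those markers, invented
imputed = interpolate_dosage(pos, dose, [150, 300])
```

Restricting the expensive model to genotyped markers and interpolating the rest is what makes the approach scale to very large reference panels.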
Impact of new clock technologies on the stability and accuracy of the International Atomic Time TAI.
NASA Astrophysics Data System (ADS)
Thomas, C.
1997-05-01
The BIPM Time Section is in charge of the generation of the reference time scales TAI and UTC. Both time scales are obtained in deferred time by combining the data from a number of atomic clocks spread worldwide. The accuracy of TAI is estimated by the departure between the duration of the TAI scale interval and the SI second as produced on the rotating geoid by primary frequency standards. It is now possible to estimate TAI accuracy through the combination of results obtained from six different primary standards: LPTF-FO1, PTB CS1, PTB CS2, PTB CS3, NIST-7, and SU MCsR 102, all corrected for the black-body radiation shift. This led to a mean departure of the TAI scale interval of +2.0×10^-14 s over 1995, known with a relative uncertainty of 0.5×10^-14 (1σ).
Study of Fourier transform spectrometer based on Michelson interferometer wave-meter
NASA Astrophysics Data System (ADS)
Peng, Yuexiang; Wang, Liqiang; Lin, Li
2008-03-01
A wave-meter based on a Michelson interferometer consists of a reference channel and a measurement channel. A voice-coil motor under PID control moves the mirror at a stable velocity. The wavelength of the measurement laser is obtained by counting the interference fringes of the reference and measurement lasers. The frequency-stabilized reference laser produces a cosine interferogram whose frequency is proportional to the velocity of the moving motor. The reference interferogram is converted to a pulse signal and subdivided by a factor of 16. To obtain the optical spectrum, the analog signal of the measurement channel is sampled by an analog-to-digital converter (ADC) triggered by the 16-times-subdivided reference pulses, so the sampling interval depends only on the reference laser wavelength and is independent of the motor velocity. The measurement-channel signal is therefore sampled on a uniform optical-path scale, and its spectrum is computed with a Fast Fourier Transform (FFT) by a DSP and displayed on an LCD.
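The sampling scheme can be illustrated numerically: triggering the ADC on the subdivided reference fringes yields samples at equal optical-path steps of λ_ref/16, and a Fourier transform of the measurement interferogram then peaks at the measurement wavelength. The reference wavelength, sample count, and tone placement below are assumptions for illustration, not instrument parameters.

```python
import cmath
import math

lam_ref = 633e-9              # assumed HeNe reference laser wavelength
step = lam_ref / 16           # optical path difference per sample (16x subdivision)
N = 512
k_true = 50                   # place the tone exactly on a DFT bin (no leakage)
lam_meas = N * step / k_true  # measurement wavelength we will try to recover

# measurement-channel interferogram, uniform in optical path difference
signal = [math.cos(2 * math.pi * n * step / lam_meas) for n in range(N)]

def dft_mag(s):
    """Magnitude of the first half of a naive DFT (an FFT in the real DSP)."""
    n = len(s)
    return [abs(sum(s[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))) for k in range(n // 2)]

spec = dft_mag(signal)
k_peak = max(range(1, len(spec)), key=lambda k: spec[k])
lam_recovered = N * step / k_peak   # peak bin maps back to wavelength
```

Because the sample spacing is set by the reference laser rather than the clock, motor velocity fluctuations do not distort the recovered spectrum.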
NASA Technical Reports Server (NTRS)
Squires, Kyle D.; Eaton, John K.
1991-01-01
Direct numerical simulation is used to study dispersion in decaying isotropic turbulence and homogeneous shear flow. Both Lagrangian and Eulerian data are presented allowing direct comparison, but at fairly low Reynolds number. The quantities presented include properties of the dispersion tensor, isoprobability contours of particle displacement, Lagrangian and Eulerian velocity autocorrelations and time scale ratios, and the eddy diffusivity tensor. The Lagrangian time microscale is found to be consistently larger than the Eulerian microscale, presumably due to the advection of the small scales by the large scales in the Eulerian reference frame.
NASA Astrophysics Data System (ADS)
Peng, Dailiang; Zhang, Xiaoyang; Zhang, Bing; Liu, Liangyun; Liu, Xinjie; Huete, Alfredo R.; Huang, Wenjiang; Wang, Siyuan; Luo, Shezhou; Zhang, Xiao; Zhang, Helin
2017-10-01
Land surface phenology (LSP) has been widely retrieved from satellite data at multiple spatial resolutions, but the spatial scaling effects on LSP detection are poorly understood. In this study, we collected the enhanced vegetation index (EVI, 250 m) from the collection 6 MOD13Q1 product over the contiguous United States (CONUS) in 2007 and 2008, and generated a set of multiple spatial resolution EVI data by resampling 250 m to 2 × 250 m, 3 × 250 m, 4 × 250 m, …, 35 × 250 m. These EVI time series were then used to detect the start of spring season (SOS) at the various spatial resolutions. Further, the SOS variation across scales was examined at each coarse resolution grid (35 × 250 m ≈ 8 km, referred to as the reference grid) and ecoregion. Finally, the SOS scaling effects were associated with landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation within each reference grid. The results revealed the influences of satellite spatial resolutions on SOS retrievals and the related impact factors. Specifically, SOS varied significantly, either linearly or logarithmically, across scales, although the relationship could be either positive or negative. The overall SOS values averaged from spatial resolutions between 250 m and 35 × 250 m at large ecosystem regions were generally similar, with a difference of less than 5 days, while the SOS values within the reference grid could differ greatly in some local areas. Moreover, the standard deviation of SOS across scales in the reference grid was less than 5 days in more than 70% of the area over the CONUS, and was smaller in northeastern than in southern and western regions. The SOS scaling effect was significantly associated with the heterogeneity of vegetation properties characterized using landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation, with the latter being the most important impact factor.
Global daily reference evapotranspiration modeling and evaluation
Senay, G.B.; Verdin, J.P.; Lietzow, R.; Melesse, Assefa M.
2008-01-01
Accurate and reliable evapotranspiration (ET) datasets are crucial in regional water and energy balance studies. Due to the complex instrumentation requirements, actual ET values are generally estimated from reference ET values by adjustment factors using coefficients for water stress and vegetation conditions, commonly referred to as crop coefficients. Until recently, the modeling of reference ET has been solely based on important weather variables collected from weather stations that are generally located in selected agro-climatic locations. Since 2001, the National Oceanic and Atmospheric Administration’s Global Data Assimilation System (GDAS) has been producing six-hourly climate parameter datasets that are used to calculate daily reference ET for the whole globe at 1-degree spatial resolution. The U.S. Geological Survey Center for Earth Resources Observation and Science has been producing daily reference ET (ETo) since 2001, and it has been used on a variety of operational hydrological models for drought and streamflow monitoring all over the world. With the increasing availability of local station-based reference ET estimates, we evaluated the GDAS-based reference ET estimates using data from the California Irrigation Management Information System (CIMIS). Daily CIMIS reference ET estimates from 85 stations were compared with GDAS-based reference ET at different spatial and temporal scales using five-year daily data from 2002 through 2006. Despite the large difference in spatial scale (point vs. ∼100 km grid cell) between the two datasets, the correlations between station-based ET and GDAS-ET were very high, exceeding 0.97 on a daily basis to more than 0.99 on time scales of more than 10 days. Both the temporal and spatial correspondences in trend/pattern and magnitudes between the two datasets were satisfactory, suggesting the reliability of using GDAS parameter-based reference ET for regional water and energy balance studies in many parts of the world. 
While the study revealed the potential of GDAS ETo for large-scale hydrological applications, site-specific use of GDAS ETo in complex hydro-climatic regions such as coastal areas and rugged terrain may require the application of bias correction and/or disaggregation of the GDAS ETo using downscaling techniques.
NASA Astrophysics Data System (ADS)
Lourens, Lucas
2016-04-01
The astronomical theory of climate has revolutionized our understanding of past climate change and the development of highly accurate geologic time scales for the entire Cenozoic. Much of this understanding started with the construction of high-resolution stable oxygen isotope (δ18O) records from planktonic and benthic foraminifera of open-ocean deep marine sediments explored by the international drilling operations of DSDP, ODP and IODP. These efforts culminated in global ocean isotopic stacked records, which give a clear picture of the evolution of the climate state through time. Fundamental for these reconstructions are the assumptions made about the relation between the astronomical forcing and the tuned time series, and the accuracy of the astronomical solution. In the past decades, an astronomically calibrated time scale for the Pliocene and Pleistocene of the Mediterranean has been developed, which has become the reference for the standard Geologic Time Scale. Characteristic of the studied marine sediments are the cyclic lithological alternations, reflecting the interference between obliquity- and precession-paced low-latitude climate variability. These interference patterns allowed us to evaluate the accuracy of astronomical solutions and to constrain the dynamical ellipticity of the Earth and tidal dissipation by the Sun and the Moon, which in turn provided the backbone for the widely applied LR04 open-ocean benthic isotope stack of the past 5 Myr. So far, the assumed time lags between orbital forcing and the global climate response as reflected in LR04 have not been tested, while these assumptions hark back to SPECMAP, which used simple ice sheet models and a limited number of radiometric dates. In addition, LR04 adopted a shorter response time for the smaller ice caps during the Pliocene. Here I present the first benthic δ18O record of the Mediterranean reference scale, which strikingly mirrors LR04.
I will use this record to discuss the assumed phase relations and its potential to constrain global sea level changes and their cause over the past 5.3 million years.
Archetypes, Causal Description and Creativity in Natural World
NASA Astrophysics Data System (ADS)
Chiatti, Leonardo
The idea, formulated for the first time by Pauli, of a "creativity" of natural processes on the quantum scale is briefly investigated, with particular reference to the phenomena, common throughout the biological world, involved in the amplification of microscopic "creative" events to the macroscopic level. The involvement of non-locality is also discussed with reference to the synordering of events, a concept introduced for the first time by Bohm. Some convergences are proposed between the metamorphic process envisaged by Bohm and that envisaged by Goethe, and some possible applications concerning known biological phenomena are briefly discussed.
Multiple Time Series Node Synchronization Utilizing Ambient Reference
2014-12-31
assessment, is the need for fine scale synchronization among communicating nodes and across multiple domains. The severe requirements that Special...research community and it is well documented and characterized. The datasets considered from this project (listed below) were used to derive the
Annual Geocenter Motion from Space Geodesy and Models
NASA Astrophysics Data System (ADS)
Ries, J. C.
2013-12-01
Ideally, the origin of the terrestrial reference frame and the center of mass of the Earth are always coincident. By construction, the origin of the reference frame is coincident with the mean Earth center of mass, averaged over the time span of the satellite laser ranging (SLR) observations used in the reference frame solution, within some level of uncertainty. At shorter time scales, tidal and non-tidal mass variations result in an offset between the origin and the geocenter, called geocenter motion. Currently, there is a conventional model for the tidally coherent diurnal and semi-diurnal geocenter motion, but there is no model for the non-tidal annual variation. This annual motion reflects the largest-scale mass redistribution in the Earth system, so it is essential to observe it for a complete description of the total mass transport. Failing to model it can also cause false signals in geodetic products such as sea height observations from satellite altimeters. In this paper, a variety of estimates for the annual geocenter motion are presented based on several different geodetic techniques and models, and a 'consensus' model from SLR is suggested.
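An annual geocenter-motion term of the kind discussed here is commonly estimated as cosine and sine amplitudes by least squares. The sketch below fits A·cos(2πt) + B·sin(2πt), with t in years, to a synthetic coordinate series; the amplitudes and sampling are invented for illustration.

```python
import math

# Least-squares fit of an annual harmonic to a geocenter coordinate series.

def fit_annual(t, y):
    """Solve the 2x2 normal equations for the annual cosine/sine amplitudes."""
    c = [math.cos(2 * math.pi * ti) for ti in t]
    s = [math.sin(2 * math.pi * ti) for ti in t]
    scc = sum(ci * ci for ci in c)
    sss = sum(si * si for si in s)
    scs = sum(ci * si for ci, si in zip(c, s))
    scy = sum(ci * yi for ci, yi in zip(c, y))
    ssy = sum(si * yi for si, yi in zip(s, y))
    det = scc * sss - scs * scs
    A = (scy * sss - ssy * scs) / det
    B = (ssy * scc - scy * scs) / det
    return A, B

t = [i / 52.0 for i in range(520)]    # 10 years of weekly solutions
y = [3.0 * math.cos(2 * math.pi * ti) - 1.5 * math.sin(2 * math.pi * ti)
     for ti in t]                     # synthetic Z-coordinate series, mm
A, B = fit_annual(t, y)
```

Real estimates would include a bias and trend in the design matrix and propagate the solution covariance, but the harmonic part is the core of any annual model.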
Measuring pretest-posttest change with a Rasch Rating Scale Model.
Wolfe, E W; Chiu, C W
1999-01-01
When measures are taken on the same individual over time, it is difficult to determine whether observed differences are the result of changes in the person or changes in other facets of the measurement situation (e.g., interpretation of items or use of rating scale). This paper describes a method for disentangling changes in persons from changes in the interpretation of Likert-type questionnaire items and the use of rating scales (Wright, 1996a). The procedure relies on anchoring strategies to create a common frame of reference for interpreting measures that are taken at different times and provides a detailed illustration of how to implement these procedures using FACETS.
Communication: translational Brownian motion for particles of arbitrary shape.
Cichocki, Bogdan; Ekiel-Jeżewska, Maria L; Wajnryb, Eligiusz
2012-02-21
A single Brownian particle of arbitrary shape is considered. The time-dependent translational mean square displacement W(t) of a reference point at this particle is evaluated from the Smoluchowski equation. It is shown that at times larger than the characteristic time scale of the rotational Brownian relaxation, the slope of W(t) becomes independent of the choice of a reference point. Moreover, it is proved that in the long-time limit, the slope of W(t) is determined uniquely by the trace of the translational-translational mobility matrix μ(tt) evaluated with respect to the hydrodynamic center of mobility. The result is applicable to dynamic light scattering measurements, which indeed are performed in the long-time limit. © 2012 American Institute of Physics
Impact of seasonal and postglacial surface displacement on global reference frames
NASA Astrophysics Data System (ADS)
Krásná, Hana; Böhm, Johannes; King, Matt; Memin, Anthony; Shabala, Stanislav; Watson, Christopher
2014-05-01
The calculation of actual station positions requires several corrections which are partly recommended by the International Earth Rotation and Reference Systems Service (IERS) Conventions (e.g., solid Earth tides and ocean tidal loading) as well as other corrections, e.g. accounting for hydrology and atmospheric loading. To investigate the pattern of omitted non-linear seasonal motion we estimated empirical harmonic models for selected stations within a global solution of suitable Very Long Baseline Interferometry (VLBI) sessions as well as mean annual models by stacking yearly time series of station positions. To validate these models we compare them to displacement series obtained from the Gravity Recovery and Climate Experiment (GRACE) data and to hydrology corrections determined from global models. Furthermore, we assess the impact of the seasonal station motions on the celestial reference frame as well as on Earth orientation parameters derived from real and also artificial VLBI observations. In the second part of the presentation we apply vertical rates of the ICE-5G_VM2_2012 vertical land movement grid on vertical station velocities. We assess the impact of postglacial uplift on the variability in the scale given different sampling of the postglacial signal in time and hence on the uncertainty in the scale rate of the estimated terrestrial reference frame.
NASA Astrophysics Data System (ADS)
Willis, Andrew R.; Brink, Kevin M.
2016-06-01
This article describes a new 3D RGBD image feature, referred to as iGRaND, for use in real-time systems that use these sensors for tracking, motion capture, or robotic vision applications. iGRaND features use a novel local reference frame derived from the image gradient and depth normal (hence iGRaND) that is invariant to scale and viewpoint for Lambertian surfaces. Using this reference frame, Euclidean invariant feature components are computed at keypoints which fuse local geometric shape information with surface appearance information. The performance of the feature for real-time odometry is analyzed and its computational complexity and accuracy is compared with leading alternative 3D features.
Dissemination of optical-comb-based ultra-broadband frequency reference through a fiber network.
Nagano, Shigeo; Kumagai, Motohiro; Li, Ying; Ido, Tetsuya; Ishii, Shoken; Mizutani, Kohei; Aoki, Makoto; Otsuka, Ryohei; Hanado, Yuko
2016-08-22
We disseminated an ultra-broadband optical frequency reference based on a femtosecond (fs)-laser optical comb through a kilometer-scale fiber link. Its spectrum ranged from 1160 nm to 2180 nm without additional fs-laser combs at the end of the link. By employing a fiber-induced phase noise cancellation technique, the linewidth and fractional frequency instability attained for all disseminated comb modes were of order 1 Hz and 10^-18 at a 5000 s averaging time, respectively. The ultra-broad optical frequency reference, whose absolute frequency is traceable to Japan Standard Time, was applied to the frequency stabilization of an injection-seeded Q-switched 2051 nm pulse laser for a coherent light detection and ranging (LIDAR) system.
Time scales of porphyry Cu deposit formation: insights from titanium diffusion in quartz
Mercer, Celestine N.; Reed, Mark H.; Mercer, Cameron M.
2015-01-01
Porphyry dikes and hydrothermal veins from the porphyry Cu-Mo deposit at Butte, Montana, contain multiple generations of quartz that are distinct in scanning electron microscope-cathodoluminescence (SEM-CL) images and in Ti concentrations. A comparison of microprobe trace element profiles and maps to SEM-CL images shows that the concentration of Ti in quartz correlates positively with CL brightness but Al, K, and Fe do not. After calibrating CL brightness in relation to Ti concentration, we use the brightness gradient between different quartz generations as a proxy for Ti gradients that we model to determine time scales of quartz formation and cooling. Model results indicate that time scales of porphyry magma residence are ~1,000s of years and time scales from porphyry quartz phenocryst rim formation to porphyry dike injection and cooling are ~10s of years. Time scales for the formation and cooling of various generations of hydrothermal vein quartz range from 10s to 10,000s of years. These time scales are considerably shorter than the ~0.6 m.y. overall time frame for each porphyry-style mineralization pulse determined from isotopic studies at Butte, Montana. Simple heat conduction models provide a temporal reference point to compare chemical diffusion time scales, and we find that they support short dike and vein formation time scales. We interpret these relatively short time scales to indicate that the Butte porphyry deposit formed by short-lived episodes of hydrofracturing, dike injection, and vein formation, each with discrete thermal pulses, which repeated over the ~3 m.y. generation of the deposit.
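The diffusion-chronometry idea behind these time scales can be sketched as an order-of-magnitude calculation: the time to relax a Ti concentration step over a length x scales as t ~ x²/D, with D following an Arrhenius law. The pre-exponential factor and activation energy below are placeholders, not the Ti-in-quartz calibration used in the study.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def diffusivity(D0, Ea, T_kelvin):
    """Arrhenius diffusivity D = D0 * exp(-Ea / (R*T)), in m^2/s."""
    return D0 * math.exp(-Ea / (R * T_kelvin))

def diffusion_time_years(x_m, D):
    """Characteristic relaxation time t = x^2 / D, converted to years."""
    return (x_m ** 2 / D) / (365.25 * 24 * 3600)

D0 = 7e-8    # m^2/s, placeholder pre-exponential factor
Ea = 273e3   # J/mol, placeholder activation energy
D_750C = diffusivity(D0, Ea, 1023.0)          # ~750 degrees C
t_years = diffusion_time_years(5e-6, D_750C)  # 5-micron CL brightness gradient
```

The strong temperature dependence of D is why the same gradient length implies very different time scales for magmatic versus hydrothermal quartz.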
Evaluation of Methods to Select Scale Velocities in Icing Scaling Tests
NASA Technical Reports Server (NTRS)
Anderson, David N.; Ruff, Gary A.; Bond, Thomas H. (Technical Monitor)
2003-01-01
A series of tests were made in the NASA Glenn Icing Research Tunnel to determine how icing scaling results were affected by the choice of scale velocity. Reference tests were performed with a 53.3-cm-chord NACA 0012 airfoil model, while scale tests used a 27.7-cm-chord 0012 model. Tests were made with rime, mixed, and glaze ice. Reference test conditions included airspeeds of 67 and 89 m/s, an MVD of 40 microns, and LWCs of 0.5 and 0.6 g/cu m. Scale test conditions were established by the modified Ruff (AEDC) scaling method with the scale velocity determined in five ways. The resulting scale velocities ranged from 85 to 220 percent of the reference velocity. This paper presents the ice shapes that resulted from those scale tests and compares them to the reference shapes. It was concluded that for freezing fractions greater than 0.8 as well as for a freezing fraction of 0.3, the value of the scale velocity had no effect on how well the scale ice shape simulated the reference shape. For freezing fractions of 0.5 and 0.7, the simulation of the reference shape appeared to improve as the scale velocity increased.
An algorithm for the Italian atomic time scale
NASA Technical Reports Server (NTRS)
Cordara, F.; Vizio, G.; Tavella, P.; Pettiti, V.
1994-01-01
During the past twenty years, the time scale at the IEN has been realized by a commercial cesium clock, selected from an ensemble of five, whose rate has been continuously steered towards UTC to maintain a long-term agreement within 3×10^-13. A time scale algorithm, suitable for a small clock ensemble and capable of improving the medium- and long-term stability of the IEN time scale, has recently been designed, taking care to reduce the effects of the seasonal variations and the sudden frequency anomalies of the individual cesium clocks. The new time scale, TA(IEN), is obtained as a weighted average of the clock ensemble, computed once a day from the time comparisons between the local reference UTC(IEN) and the individual clocks. It is foreseen to also include in the computation ten cesium clocks maintained in other Italian laboratories to further improve its reliability and long-term stability. To implement this algorithm, a personal computer program in Quick Basic has been prepared and tested at the IEN time and frequency laboratory. Results obtained by applying this algorithm to real clock data covering a period of about two years are presented.
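The weighted-average construction described above can be sketched in a few lines: each clock's daily comparison against the local reference enters the scale with a weight, so a clock showing an anomaly can be down-weighted instead of corrupting the mean. The readings and weights below are invented for illustration, not IEN data.

```python
# Weighted mean of daily clock comparisons against the local reference.

def weighted_time_scale(deviations, weights):
    """Weighted mean of clock deviations from the reference (same units)."""
    wsum = sum(weights)
    return sum(d * w for d, w in zip(deviations, weights)) / wsum

# daily UTC(IEN) - clock_i comparisons, in nanoseconds (illustrative)
devs = [12.0, -8.0, 3.0, 40.0, -5.0]
wts  = [1.0, 1.0, 1.0, 0.2, 1.0]   # down-weight the clock with an anomaly
ta = weighted_time_scale(devs, wts)
```

Production algorithms additionally predict each clock's rate and update the weights from observed stability, but the weighted mean is the core step computed each day.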
Limiting Magnitude, τ, t_eff, and Image Quality in DES Year 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
H. Neilsen, Jr.; Bernstein, Gary; Gruendl, Robert
The Dark Energy Survey (DES) is an astronomical imaging survey being completed with the DECam imager on the Blanco telescope at CTIO. After each night of observing, the DES data management (DM) group performs an initial processing of that night's data and uses the results to determine which exposures are of acceptable quality and which need to be repeated. The primary measure by which we declare an image of acceptable quality is τ, a scaling of the exposure time. This is the scale factor that needs to be applied to the open-shutter time to reach the same photometric signal-to-noise ratio for faint point sources under a set of canonical good conditions. These conditions are defined to be seeing resulting in a PSF full width at half maximum (FWHM) of 0.9" and a pre-defined sky brightness which approximates the zenith sky brightness under fully dark conditions. Point-source limiting magnitude and signal-to-noise should therefore vary with τ in the same way they vary with exposure time. Measurements of point sources and τ in the first year of DES data confirm that they do. In the context of DES, the symbol t_eff and the expression "effective exposure time" usually refer to the scaling factor, τ, rather than the actual effective exposure time; the "effective exposure time" in this case refers to the effective duration of one second, rather than the effective duration of an exposure.
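The exposure-scaling idea can be sketched as a product of a seeing term and a sky term relative to the canonical conditions (0.9" PSF FWHM, dark sky). The functional form below is an assumption for illustration, not the exact DES pipeline definition of τ.

```python
# Hedged sketch: tau as point-source S/N scaling relative to canonical
# conditions. The precise DES formula may differ; this shows the idea only.

def tau(fwhm_arcsec, sky_brightness, fwhm_ref=0.9, sky_ref=1.0):
    """Exposure-time scale factor relative to canonical conditions."""
    seeing_term = (fwhm_ref / fwhm_arcsec) ** 2  # point-source S/N ~ 1/FWHM
    sky_term = sky_ref / sky_brightness          # brighter sky -> lower tau
    return seeing_term * sky_term

# under canonical conditions tau is 1; worse seeing or brighter sky lowers it
tau_canonical = tau(0.9, 1.0)
tau_degraded = tau(1.8, 2.0)   # 2x worse seeing, 2x brighter sky
```

With this form an exposure taken in 2x worse seeing and 2x brighter sky contributes only an eighth of its open-shutter time in effective depth, which is the sense in which τ rescales the exposure time.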
NASA Astrophysics Data System (ADS)
Nyeki, Stephan; Wacker, Stefan; Gröbner, Julian; Finsterle, Wolfgang; Wild, Martin
2017-08-01
A large number of radiometers are traceable to the World Standard Group (WSG) for shortwave radiation and the interim World Infrared Standard Group (WISG) for longwave radiation, hosted by the Physikalisch-Meteorologisches Observatorium Davos/World Radiation Centre (PMOD/WRC, Davos, Switzerland). The WSG and WISG have recently been found to over- and underestimate radiation values, respectively (Fehlmann et al., 2012; Gröbner et al., 2014), although research is still ongoing. In view of a possible revision of the reference scales of both standard groups, this study discusses the methods involved and the implications for existing archives of radiation time series, such as the Baseline Surface Radiation Network (BSRN). Based on PMOD/WRC calibration archives and BSRN data archives, downward longwave radiation (DLR) time series over the 2006-2015 period were analysed at four stations (polar and mid-latitude locations). DLR was found to increase by up to 3.5 and 5.4 W m-2 for all-sky and clear-sky conditions, respectively, after applying a WISG reference scale correction and a minor correction for the dependence of pyrgeometer sensitivity on atmospheric integrated water vapour content. Similar increases in DLR may be expected at other BSRN stations. Based on our analysis, a number of recommendations are made for future studies.
Relativity of Scales: Application to an Endo-Perspective of Temporal Structures
NASA Astrophysics Data System (ADS)
Nottale, Laurent; Timar, Pierre
The theory of scale relativity is an extension of the principle of relativity to scale transformations of the reference system, in a fractal geometry framework where coordinates become explicitly dependent on resolutions. Applied to an observer's perspective, it means that the scales of length and of time, usually attributed to the observed object as intrinsic to it, actually have no existence by themselves, since only the ratio between an external scale and an internal scale, which serves as unit, is meaningful. Oliver Sacks' observations of patients suffering from temporal and spatial distortions in Parkinson's disease and encephalitis lethargica offer a particularly relevant field of application for such a scale-relativistic view.
Investigation of scale effects in the TRF determined by VLBI
NASA Astrophysics Data System (ADS)
Wahl, Daniel; Heinkelmann, Robert; Schuh, Harald
2017-04-01
The improvement of the International Terrestrial Reference Frame (ITRF) is of great significance for Earth sciences and one of the major tasks in geodesy. The translation, rotation and scale factor, as well as their linear rates, are solved in a 14-parameter transformation between the individual frames of each space geodetic technique and the combined frame. In ITRF2008, as well as in the current release ITRF2014, the scale factor is provided by Very Long Baseline Interferometry (VLBI) and Satellite Laser Ranging (SLR) in equal shares. Since VLBI measures extremely precise group delays that are converted to baseline lengths using the speed of light, a natural constant, VLBI is the most suitable method for providing the scale. The aim of the current work is to identify possible shortcomings in the VLBI scale contribution to ITRF2008. To develop recommendations for an enhanced estimation, scale effects in the Terrestrial Reference Frame (TRF) determined with VLBI are considered in detail and compared to ITRF2008. In contrast to station coordinates, where the scale is defined by a geocentric position vector pointing from the origin of the reference frame to the station, baselines are not related to the origin; they describe the absolute scale independently of the datum. The more accurately a baseline length, and consequently the scale, is estimated by VLBI, the better the scale contribution to the ITRF. Considering time series of baseline lengths between different stations, a non-linear periodic signal can clearly be recognized, caused by seasonal effects at the observation sites. Modeling these seasonal effects and subtracting them from the original data enhances the repeatability of single baselines significantly. Other effects that strongly influence the scale are jumps in the baseline-length time series, mainly caused by major earthquakes. Co- and post-seismic effects, which likewise have a non-linear character, can be identified in the data.
Modeling the non-linear motion, or completely excluding affected stations, is another important step towards an improved scale determination. In addition to the investigation of single-baseline repeatabilities, the spatial transformation performed to determine the parameters of ITRF2008 is also considered. Since the reliability of the resulting transformation parameters increases with the number of identical points used in the transformation, an approach in which all possible stations are used as control points is understandable. Experiments examining the scale factor and its spatial behavior between control points in ITRF2008 and coordinates determined by VLBI alone showed that the network geometry also has a large influence on the outcome. With an unevenly distributed network for the datum configuration, the correlations between the translation parameters and the scale factor can become remarkably high. Only a homogeneous spatial distribution of the participating stations yields a maximally uncorrelated scale factor that can be interpreted independently of the other parameters. In the current release of the ITRF, ITRF2014, non-linear effects in the station coordinate time series are taken into account for the first time. The present work confirms the importance of this modification of the ITRF computation and identifies further improvements that lead to an enhanced scale determination.
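The seasonal-modeling step described above amounts to fitting and subtracting annual sine and cosine terms from each baseline-length series; a sketch on synthetic data (the amplitudes, noise level, and sampling are invented for illustration):

```python
import numpy as np

# Toy example: remove an annual signal from a baseline-length time series.
# Real VLBI series have gaps, jumps and more complex noise.
rng = np.random.default_rng(0)
t = np.arange(0, 6 * 365.25, 7.0)                      # weekly epochs, days
annual = 4.0 * np.sin(2 * np.pi * t / 365.25 + 0.6)    # seasonal signal, mm
series = 0.002 * t + annual + rng.normal(0, 1.0, t.size)  # trend+signal+noise

# Least-squares fit of offset, rate, and annual sine/cosine terms.
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 365.25),
                     np.cos(2 * np.pi * t / 365.25)])
coef, *_ = np.linalg.lstsq(A, series, rcond=None)
residual = series - A @ coef

# Repeatability (scatter about the model) improves once the seasonal
# signal is modeled, compared with removing only offset and rate.
detrended = series - A[:, :2] @ coef[:2]
print(detrended.std(), residual.std())
```

The drop in scatter after subtracting the fitted annual terms mirrors the improved baseline repeatability reported above.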
Smoothing of millennial scale climate variability in European Loess (and other records)
NASA Astrophysics Data System (ADS)
Zeeden, Christian; Obreht, Igor; Hambach, Ulrich; Veres, Daniel; Marković, Slobodan B.; Lehmkuhl, Frank
2017-04-01
Millennial scale climate variability is seen in various records of the northern hemisphere from the last glacial cycle, and its expression represents a correlation tool beyond the resolution of, e.g., luminescence dating. The highest (correlative) dating accuracy is a prerequisite for comparing different geoarchives, especially when related to archaeological findings. Here we attempt to constrain the timing of loess geoarchives representing the environmental context of early humans in south-eastern Europe, and discuss the challenge of dealing with smoothed records. In this contribution, we present rock magnetic and grain size data from the Rasova loess record in the Lower Danube basin (Romania), showing millennial scale climate variability. Additionally, we summarize similar data from the Lower and Middle Danube Basins. A comparison of these loess data and reference records from Greenland ice cores and the Mediterranean-Black Sea region indicates a rather unusual expression of millennial scale climate variability recorded in loess. To explain the observed patterns, we experiment with low-pass filters of reference records to simulate signal smoothing by natural processes such as bioturbation and pervasive diagenesis. Low-pass filters suppress high-frequency oscillations and retain the longer-period (lower-frequency) variability, here using cut-off periods from 1 to 15 kyr. In our opinion low-pass filters represent simple models for the expression of millennial scale climate variability in low-sedimentation environments, and in sediments where signals are smoothed by bioturbation and/or diagenesis.
Using different low-pass filter thresholds allows us to (a) explain the observed patterns and their relation to millennial scale climate variability, (b) propose these filtered/smoothed signals as correlation targets for records lacking millennial scale recording but showing smoothed climate variability on supra-millennial scales, and (c) determine which time resolution specific (loess) records can reproduce. Comparing smoothed records to reference data may be a step forward, especially for last glacial stratigraphies, where millennial scale patterns are certainly present but not directly recorded in some geoarchives. Interestingly, smoothed datasets from Greenland and the Black Sea-Mediterranean region are most similar in the last ca. 15 ka and again from ca. 30-50 ka. During the cold phase from ca. 30 to 15 ka, the records show dissimilarities, challenging robust correlative time scales in this age range. A potential explanation may be related to the expansion of the Northern European and Alpine ice sheets influencing atmospheric systems in the North Atlantic and Eurasian regions and thus leading to regionally and temporally differentiated climatic responses.
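As a minimal stand-in for the low-pass filtering described above, a running mean with a window equal to the chosen cut-off period damps millennial-scale variability while preserving longer periods (the record below is synthetic, and the study's actual filters may differ):

```python
import numpy as np

# Sketch: smooth a synthetic reference record with a running mean whose
# window corresponds to a chosen cut-off period (~5 kyr here), a crude
# stand-in for smoothing by bioturbation/diagenesis.
dt = 0.05                                    # sampling step, kyr (50 yr)
t = np.arange(0, 50, dt)                     # 0-50 ka
signal = (np.sin(2 * np.pi * t / 1.5)        # millennial-scale component
          + 0.5 * np.sin(2 * np.pi * t / 20.0))  # longer-period component

window = int(5.0 / dt)                       # ~5 kyr boxcar window
kernel = np.ones(window) / window
smoothed = np.convolve(signal, kernel, mode="same")

# The 1.5 kyr component is strongly damped; the 20 kyr one survives.
print(signal.std(), smoothed.std())
```

The same idea with sharper (e.g. Butterworth) filters and cut-off periods swept from 1 to 15 kyr reproduces the experiment described in the abstract.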
Fixism and conservation science.
Robert, Alexandre; Fontaine, Colin; Veron, Simon; Monnet, Anne-Christine; Legrand, Marine; Clavel, Joanne; Chantepie, Stéphane; Couvet, Denis; Ducarme, Frédéric; Fontaine, Benoît; Jiguet, Frédéric; le Viol, Isabelle; Rolland, Jonathan; Sarrazin, François; Teplitsky, Céline; Mouchet, Maud
2017-08-01
The field of biodiversity conservation has recently been criticized as relying on a fixist view of the living world in which existing species constitute at the same time targets of conservation efforts and static states of reference, which is in apparent disagreement with evolutionary dynamics. We reviewed the prominent role of species as conservation units and the common benchmark approach to conservation that aims to use past biodiversity as a reference to conserve current biodiversity. We found that the species approach is justified by the discrepancy between the time scales of macroevolution and human influence and that biodiversity benchmarks are based on reference processes rather than fixed reference states. Overall, we argue that the ethical and theoretical frameworks underlying conservation research are based on macroevolutionary processes, such as extinction dynamics. Current species, phylogenetic, community, and functional conservation approaches constitute short-term responses to short-term human effects on these reference processes, and these approaches are consistent with evolutionary principles. © 2016 Society for Conservation Biology.
Optical frequency standard development in support of NASA's gravity-mapping missions
NASA Technical Reports Server (NTRS)
Klipstein, W. M.; Seidel, D. J.; White, J. A.; Young, B. C.
2001-01-01
We intend to combine the exquisite short-time-scale performance of a cavity reference with the long-term stability of an atomic frequency standard, with an eye towards reliability in a spaceflight application.
NASA Astrophysics Data System (ADS)
Gómez, Breogán; Miguez-Macho, Gonzalo
2017-04-01
Nudging techniques are commonly used to constrain the evolution of numerical models to a reference dataset that is typically of lower resolution. The nudged model retains some features of the reference field while incorporating its own dynamics into the solution. These characteristics have made nudging very popular in dynamic downscaling applications, which range from short, single-case studies to multi-decadal regional climate simulations. Recently, a variation of this approach called Spectral Nudging has gained popularity for its ability to maintain the higher temporal and spatial variability of the model results while forcing the large scales in the solution with a coarser-resolution field. In this work, we focus on a little-explored aspect of this technique: the impact of selecting different cut-off wave numbers and spin-up times. We perform four-day simulations with the WRF model, initialized daily for three different one-month periods, comprising a free run and several Spectral Nudging experiments with cut-off wave numbers ranging from the smallest to the largest possible (full Grid Nudging). Results show that Spectral Nudging is very effective at imposing the selected scales onto the solution, while allowing the limited-area model to incorporate finer-scale features. The model error diminishes rapidly as the nudging expands over broader parts of the spectrum, but this decreasing trend ceases sharply at cut-off wave numbers equivalent to a length scale of about 1000 km, and the error magnitude changes minimally thereafter. This scale corresponds to the Rossby radius of deformation, separating synoptic from convective scales in the flow. When nudging is restricted to scales larger than this value, a shifting of the synoptic patterns can occur in the solution, yielding large model errors.
However, when selecting smaller scales, the fine scale contribution of the model is damped, thus making 1000 km the appropriate scale threshold to nudge in order to balance both effects. Finally, we note that longer spin-up times are needed for model errors to stabilize when using Spectral Nudging than with Grid Nudging. Our results suggest that this time is between 36 and 48 hours.
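The mechanics of spectral nudging can be sketched in one dimension: only Fourier components up to a cut-off wavenumber are relaxed toward the driving field, leaving smaller scales untouched. The fields, cut-off, and relaxation constants below are illustrative assumptions, not WRF's implementation:

```python
import numpy as np

# One-dimensional sketch of spectral nudging: relax only the
# low-wavenumber part of a model field toward the driving field.
def spectral_nudge(field, driver, k_cut, dt, tau):
    fk = np.fft.rfft(field)
    dk = np.fft.rfft(driver)
    mask = np.arange(fk.size) <= k_cut        # nudge only scales k <= k_cut
    fk[mask] += (dt / tau) * (dk[mask] - fk[mask])
    return np.fft.irfft(fk, n=field.size)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
driver = np.sin(x)                             # large-scale driving field
field = np.sin(x) + 0.3 * np.sin(20 * x) + 0.5  # model: small scales + bias
out = spectral_nudge(field, driver, k_cut=4, dt=1.0, tau=2.0)

# The small-scale (k = 20) component is untouched; the large-scale bias
# (k = 0) is pulled halfway toward the driver, so the mean drops to 0.25.
print(out.mean())
```

In a real model this update would be applied every time step inside the integration loop, and the cut-off would be chosen from the physical length scale (here, roughly 1000 km) rather than an integer wavenumber on a toy grid.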
A study of the high-precision displacement laser probe
NASA Astrophysics Data System (ADS)
Fan, Yuming; Zhang, Guoxiong
2006-06-01
Based on the measuring principle of a dynamic active optical confocal probe that uses time-difference measurement with a reference path, a dynamic active optical confocal probe based on time-difference measurement without a reference path has been developed. In this paper, the working principle of this optical confocal probe is described. A large-scale integrated measuring system is designed to simplify the structure of the probe and to enhance its stability. A single-chip microcomputer system with a high-speed ADC is used in the measurement and control system of the probe. Finally, experiments on the performance of the optical confocal probe based on time-difference measurement without a reference path are carried out. Experimental results show that the probe has a measuring resolution of 0.05 μm, a measuring range of 0.2 mm and a linearity of 0.4 μm.
Effects of Rotor Blade Scaling on High-Pressure Turbine Unsteady Loading
NASA Astrophysics Data System (ADS)
Lastiwka, Derek; Chang, Dongil; Tavoularis, Stavros
2013-03-01
The present work is a study of the effects of rotor blade scaling of a single-stage high pressure turbine on the time-averaged turbine performance and on parameters that influence vibratory stresses on the rotor blades and stator vanes. Three configurations have been considered: a reference case with 36 rotor blades and 24 stator vanes, a case with blades upscaled by 12.5%, and a case with blades downscaled by 10%. The present results demonstrate that blade scaling effects were essentially negligible on the time-averaged turbine performance, but measurable on the unsteady surface pressure fluctuations, which were intensified as blade size was increased. In contrast, blade torque fluctuations increased significantly as blade size decreased. Blade scaling effects were also measurable on the vanes.
Gyrokinetic theory for particle and energy transport in fusion plasmas
NASA Astrophysics Data System (ADS)
Falessi, Matteo Valerio; Zonca, Fulvio
2018-03-01
A set of equations is derived describing the macroscopic transport of particles and energy in a thermonuclear plasma on the energy confinement time scale. The equations thus derived allow collisional and turbulent transport to be studied self-consistently, retaining the effect of magnetic field geometry without postulating any scale separation between the reference state and fluctuations. Previously, assuming scale separation, transport equations have been derived from kinetic equations by means of multiple-scale perturbation analysis and spatio-temporal averaging. In this work, the evolution equations for the moments of the distribution function are obtained following the standard approach, while gyrokinetic theory is used to explicitly express the fluctuation-induced fluxes. In this way, equations for the transport of particles and energy up to the transport time scale can be derived using standard first-order gyrokinetics.
NASA Astrophysics Data System (ADS)
Moreaux, Guilhem; Lemoine, Frank G.; Capdeville, Hugues; Kuzin, Sergey; Otten, Michiel; Štěpánek, Petr; Willis, Pascal; Ferrage, Pascale
2016-12-01
In preparation for the 2014 realization of the International Terrestrial Reference Frame (ITRF2014), the International DORIS Service delivered to the International Earth Rotation and Reference Systems Service a set of 1140 weekly solution files including station coordinates and Earth orientation parameters, covering the time period from 1993.0 to 2015.0. The data come from eleven DORIS satellites: TOPEX/Poseidon, SPOT2, SPOT3, SPOT4, SPOT5, Envisat, Jason-1, Jason-2, Cryosat-2, Saral and HY-2A. In their processing, the six analysis centers which contributed to the DORIS combined solution used the latest time-variable gravity models and estimated DORIS ground beacon frequency variations. Furthermore, all but one of the analysis centers included phase center variations for ground antennas in their processing. The main objective of this study is to present the combination process and to analyze the impact of the new modeling on the performance of the new combined solution. Comparisons with the IDS contribution to ITRF2008 show that (i) the application of the DORIS ground phase center variations in the data processing shifts the combined scale upward by roughly 7-11 mm and (ii) thanks to the estimation of DORIS ground beacon frequency variations, the new combined solution no longer shows any scale discontinuity in early 2002 and does not present unexplained vertical discontinuities in any station position time series. However, analysis of the new series with respect to ITRF2008 exhibits a scale increase in late 2011 which is not yet explained. A new DORIS Terrestrial Reference Frame was computed to evaluate the intrinsic quality of the new combined solution. That evaluation shows that the addition of data from the new missions equipped with the latest generation of DORIS receiver (Jason-2, Cryosat-2, HY-2A, Saral) results in an internal position consistency of 10 mm or better after mid-2008.
NASA Astrophysics Data System (ADS)
Wayson, Michael B.; Bolch, Wesley E.
2018-04-01
Internal radiation dose estimates for diagnostic nuclear medicine procedures are typically calculated for a reference individual. As a result, there is uncertainty when determining the organ doses to patients who are not at the 50th percentile in either height or weight. This study aims to better personalize internal radiation dose estimates for individual patients by modifying the dose estimates calculated for reference individuals based on easily obtainable morphometric characteristics of the patient. Phantoms of different sitting heights and waist circumferences were constructed based on computational reference phantoms for the newborn, 10 year-old, and adult. Monoenergetic photons and electrons were then simulated separately at 15 energies. Photon and electron specific absorbed fractions (SAFs) were computed for the newly constructed non-reference phantoms and compared to SAFs previously generated for the age-matched reference phantoms. Differences in SAFs were correlated to changes in sitting height and waist circumference to develop scaling factors that could be applied to reference SAFs as morphometry corrections. A further set of arbitrary non-reference phantoms were then constructed and used in validation studies for the SAF scaling factors. Both photon and electron dose scaling methods were found to increase average accuracy when sitting height was used as the scaling parameter (~11%). Photon waist circumference-based scaling factors showed modest increases in average accuracy (~7%) for underweight individuals, but not for overweight individuals. Electron waist circumference-based scaling factors did not show increases in average accuracy. When sitting height and waist circumference scaling factors were combined, modest average gains in accuracy were observed for photons (~6%), but not for electrons. Both photon and electron absorbed doses are more reliably scaled using scaling factors computed in this study.
They can be effectively scaled using sitting height alone as a patient-specific morphometric parameter.
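Schematically, applying such a morphometry correction amounts to scaling a reference SAF by a factor driven by the patient's deviation from the reference parameter. The linear form, slope value, and numbers below are purely illustrative assumptions, not the factors derived in the study:

```python
# Illustrative sketch of applying a morphometry-based scaling factor to
# a reference SAF. The linear form and the slope are hypothetical; the
# study derives its factors from phantom Monte Carlo simulations.
def scaled_saf(saf_ref, patient_value, ref_value, sensitivity):
    """Scale a reference specific absorbed fraction by the relative
    deviation of a patient morphometric parameter (e.g. sitting height),
    with 'sensitivity' an assumed empirically derived slope."""
    return saf_ref * (1.0 + sensitivity * (patient_value - ref_value) / ref_value)

# Patient with 10% shorter sitting height than reference, slope -1.2
# (both numbers invented for illustration):
print(scaled_saf(2.0e-3, 0.81, 0.90, -1.2))
```

The point of the construction is that only an easily measured quantity (sitting height) is needed at the point of care; the hard work of relating it to SAF changes is done once, on the phantom library.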
Four dimensional studies in earth space
NASA Technical Reports Server (NTRS)
Mather, R. S.
1972-01-01
A system of reference which is directly related to observations is proposed for four-dimensional studies in earth space. The global control network and polar wandering are defined. The determination of variations in the earth's gravitational field with time also forms part of such a system. Techniques are outlined for the unique definition of the motion of the geocenter, and of changes in the location of the axis of rotation of an instantaneous earth model, in relation to values at some epoch of reference. The instantaneous system referred to is directly related to a fundamental equation in geodynamics. The reference system defined would provide an unambiguous frame for long-period studies in earth space, provided the scale of the space were specified.
Koho, Petteri; Borodulin, Katja; Kautiainen, Hannu; Kujala, Urho; Pohjolainen, Timo; Hurri, Heikki
2015-03-01
To create reference values for the general Finnish population using the Tampa Scale of Kinesiophobia (TSK-FIN), to study gender differences in the TSK-FIN, to assess the internal consistency of the TSK-FIN, to estimate the prevalence of high levels of kinesiophobia in Finnish men and women, and to examine the association between kinesiophobia and leisure-time physical activity and the impact of co-morbidities on kinesiophobia. The study population comprised 455 men and 579 women. Participants completed a self-administered questionnaire about their socio-demographic factors, leisure-time physical activity, co-morbidities and kinesiophobia. The mean TSK-FIN score was significantly higher for men (mean 34.2, standard deviation (SD) 6.9) compared with women (mean 32.9, SD 6.5), with an age-adjusted p = 0.004 for the difference between men and women. Cronbach's alpha was 0.72, indicating substantial internal consistency. Men over 55 years of age and women over 65 years of age had a higher (p < 0.001) TSK score compared with younger people. There was a significant (p < 0.001) inverse association between kinesiophobia and leisure-time physical activity among both sexes. The presence of cardiovascular disease, musculoskeletal disease or a mental disorder was associated with a higher TSK-FIN score compared with the absence of the aforementioned disorders. We present here the reference values for the TSK-FIN. The reference values and prevalence among the general population may help clinicians to define the level of kinesiophobia among patients. Disorders other than musculoskeletal diseases were associated with kinesiophobia, which should be noted in daily practice.
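The internal-consistency figure quoted above (Cronbach's alpha of 0.72) can be computed directly from raw item scores; a minimal sketch with a toy 4-item response matrix (the TSK itself has 17 items, and the data here are invented):

```python
import numpy as np

# Cronbach's alpha for a k-item scale:
# alpha = k/(k-1) * (1 - sum(item variances)/variance of total scores)
def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)  # rows: respondents, cols: items
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

scores = [[2, 3, 3, 2],
          [4, 4, 3, 4],
          [1, 2, 2, 1],
          [3, 3, 4, 3]]
print(cronbach_alpha(scores))
```

Values around 0.7 or above, as reported for the TSK-FIN, are conventionally read as substantial internal consistency.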
Application of Millisecond Pulsar Timing to the Long-Term Stability of Clock Ensembles
NASA Technical Reports Server (NTRS)
Foster, Roger S.; Matsakis, Demetrios N.
1996-01-01
We review the application of millisecond pulsars to define a precise long-term standard and positional reference system in a nearly inertial reference frame. We quantify the current timing precision of the best millisecond pulsars and define the precise time and time interval (PTTI) accuracy and stability required to enable time transfer via pulsars. Pulsars may prove useful as independent standards to examine decade-long timing stability and provide an independent natural system within which to calibrate any new, perhaps vastly improved, atomic time scale. Since pulsar stability appears to be related to the lifetime of the pulsar, the new millisecond pulsar J1713+0747 is projected to have a 100-day accuracy equivalent to that of a single HP5071 cesium standard. Over the last five years, dozens of new millisecond pulsars have been discovered. A few of the new millisecond pulsars may have even better timing properties.
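Clock and pulsar stability of the kind discussed here is conventionally quantified with the Allan deviation; a minimal sketch of the overlapping estimator on synthetic white frequency noise (the series, seed, and noise level are illustrative, not pulsar data):

```python
import numpy as np

# Overlapping Allan deviation of a fractional-frequency series y,
# at an averaging factor of m samples.
def allan_deviation(y, m):
    avg = np.convolve(y, np.ones(m) / m, mode="valid")  # m-sample means
    diff = avg[m:] - avg[:-m]                           # overlapping pairs
    return np.sqrt(0.5 * np.mean(diff ** 2))

rng = np.random.default_rng(1)
y = rng.normal(0, 1e-13, 10000)   # synthetic white frequency noise

# For white FM noise the Allan deviation falls as 1/sqrt(tau):
print(allan_deviation(y, 1), allan_deviation(y, 100))
```

A pulsar-based time scale would be judged by whether its Allan deviation keeps falling on multi-year averaging times where cesium ensembles flatten out.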
Scale free effects in world currency exchange network
NASA Astrophysics Data System (ADS)
Górski, A. Z.; Drożdż, S.; Kwapień, J.
2008-11-01
A large collection of daily time series for 60 world currencies' exchange rates is considered. The correlation matrices are calculated and the corresponding Minimal Spanning Tree (MST) graphs are constructed for each of those currencies used as reference for the remaining ones. It is shown that the multiplicity of the MST graphs' nodes develops, to a good approximation, a power-like, scale-free distribution with a scaling exponent similar to those found for several other complex systems studied so far. Furthermore, quantitative arguments in favor of the hierarchical organization of the world currency exchange network are provided by relating the structure of the above MST graphs and their scaling exponents to those derived from an exactly solvable hierarchical network model. A special status of the USD during the period considered can be attributed to departures of the MST features, when this currency (or another tied to it) is used as reference, from the characteristics typical of such a hierarchical clustering of nodes, towards those corresponding to random graphs. Even though the basic structure of the MST is in general robust with respect to changing the reference currency, some trace of a systematic transition from a somewhat dispersed topology (as in the USD case) towards a more compact MST topology can be observed as correlations increase.
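The MST construction used in such studies typically maps correlations to distances via d_ij = sqrt(2(1 - ρ_ij)) and then extracts the minimal spanning tree; a small sketch with toy correlations and a bare-bones Prim's algorithm (not real FX data):

```python
import numpy as np

# Build MST edges from a correlation matrix using the standard metric
# d_ij = sqrt(2(1 - rho_ij)) and Prim's algorithm. Toy data only.
def mst_edges(corr):
    d = np.sqrt(2.0 * (1.0 - np.asarray(corr, dtype=float)))
    n = d.shape[0]
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        # cheapest edge from the current tree to a new node
        best = min((d[i, j], i, j) for i in in_tree
                   for j in range(n) if j not in in_tree)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

corr = [[1.0, 0.8, 0.3, 0.2],
        [0.8, 1.0, 0.4, 0.1],
        [0.3, 0.4, 1.0, 0.7],
        [0.2, 0.1, 0.7, 1.0]]
print(mst_edges(corr))   # a chain: 0-1, 1-2, 2-3
```

On real data one would then tabulate node degrees across the tree and fit the degree distribution to look for the power-like, scale-free behavior reported above.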
NASA Astrophysics Data System (ADS)
Dimbylow, Peter
2005-09-01
Finite-difference time-domain (FDTD) calculations have been performed of the whole-body averaged specific energy absorption rate (SAR) in a female voxel model, NAOMI, under isolated and grounded conditions from 10 MHz to 3 GHz. The 2 mm resolution voxel model, NAOMI, was scaled to a height of 1.63 m and a mass of 60 kg, the dimensions of the ICRP reference adult female. Comparison was made with SAR values from a reference male voxel model, NORMAN. A broad SAR resonance in the NAOMI values was found around 900 MHz and a resulting enhancement, up to 25%, over the values for the male voxel model, NORMAN. This latter result confirmed previously reported higher values in a female model. The effect of differences in anatomy was investigated by comparing values for 10-, 5- and 1-year-old phantoms rescaled to the ICRP reference values of height and mass which are the same for both sexes. The broad resonance in the NAOMI child values around 1 GHz is still a strong feature. A comparison has been made with ICNIRP guidelines. The ICNIRP occupational reference level provides a conservative estimate of the whole-body averaged SAR restriction. The linear scaling of the adult phantom using different factors in longitudinal and transverse directions, in order to match the ICRP stature and weight, does not exactly reproduce the anatomy of children. However, for public exposure the calculations with scaled child models indicate that the ICNIRP reference level may not provide a conservative estimate of the whole-body averaged SAR restriction, above 1.2 GHz for scaled 5- and 1-year-old female models, although any underestimate is by less than 20%.
Gray, B.R.; Shi, W.; Houser, J.N.; Rogala, J.T.; Guan, Z.; Cochran-Biederman, J. L.
2011-01-01
Ecological restoration efforts in large rivers generally aim to ameliorate ecological effects associated with large-scale modification of those rivers. This study examined whether the effects of restoration efforts, specifically island construction, within a largely open-water restoration area of the Upper Mississippi River (UMR) might be seen at the spatial scale of that 3476 ha area. The cumulative effects of island construction, when observed over multiple years, were postulated to have made the restoration area increasingly similar to a positive reference area (a proximate area comprising contiguous backwater areas) and increasingly different from two negative reference areas. The negative reference areas represented the Mississippi River main channel in an area proximate to the restoration area and an open-water area in a related Mississippi River reach that has seen relatively little restoration effort. Inferences on the effects of restoration were made by comparing constrained and unconstrained models of summer chlorophyll a (CHL), summer inorganic suspended solids (ISS) and counts of benthic mayfly larvae. The constrained models forced trends in means, or in both means and sampling variances, to become increasingly similar over time to those in the positive reference area and increasingly dissimilar to those in the negative reference areas. Trends were estimated over 12-year (mayflies) or 14-year sampling periods and were evaluated using model information criteria. Based on these methods, restoration effects were observed for CHL and mayflies, while evidence in favour of restoration effects on ISS was equivocal. These findings suggest that the cumulative effects of island building at relatively large spatial scales within large rivers may be estimated using data from large-scale surveillance monitoring programs. Published in 2010 by John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Omrani, H.; Drobinski, P.; Dubos, T.
2009-09-01
In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited-area model simulation. The limited-area model is a two-layer quasi-geostrophic model on the beta-plane driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (the Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In both sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
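The trajectory-divergence measurement described above can be sketched numerically. The fragment below is a minimal illustration, not the authors' quasi-geostrophic code: it estimates the largest Lyapunov exponent by repeatedly renormalizing the separation between a reference and a perturbed trajectory (Benettin's method), using the chaotic logistic map as a stand-in dynamical system. The function name and all parameter values are hypothetical.

```python
import numpy as np

def largest_lyapunov(step, x0, eps=1e-9, n_steps=2000):
    """Estimate the largest Lyapunov exponent from the divergence of a
    perturbed trajectory relative to a reference one, renormalizing the
    separation to eps after every step so growth stays in the linear regime."""
    x_ref = float(x0)
    x_pert = x_ref + eps
    log_growth = 0.0
    for _ in range(n_steps):
        x_ref = step(x_ref)
        x_pert = step(x_pert)
        d = abs(x_pert - x_ref)
        log_growth += np.log(d / eps)
        x_pert = x_ref + (x_pert - x_ref) * (eps / d)  # renormalize separation
    return log_growth / n_steps

# Toy stand-in for the quasi-geostrophic model: the logistic map at r = 4,
# whose largest Lyapunov exponent is known analytically to be ln 2.
lam = largest_lyapunov(lambda x: 4.0 * x * (1.0 - x), 0.3)
```

A positive estimate signals chaos; its inverse sets the predictability time against which the optimal nudging time is compared in the abstract.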
Solar variability. [measurements by spaceborne instruments
NASA Technical Reports Server (NTRS)
Sofia, S.
1981-01-01
Reference is made to direct measurements carried out by space-borne detectors which have shown variations of the solar constant at the 0.2 percent level, with time scales ranging from days to tens of days. It is contended that these changes do not necessarily reflect variations in the solar luminosity and that, in general, direct measurements have not yet been able to establish (or exclude) solar luminosity changes with longer time scales. Indirect techniques, however, especially radius measurements, suggest that solar luminosity variations of up to approximately 0.7 percent have occurred within a period of tens to hundreds of years.
On the cooperativity of association and reference energy scales in thermodynamic perturbation theory
NASA Astrophysics Data System (ADS)
Marshall, Bennett D.
2016-11-01
Equations of state for hydrogen bonding fluids are typically described by two energy scales: a short-range, highly directional hydrogen bonding energy scale, and a reference energy scale which accounts for dispersion and orientationally averaged multipole attractions. These energy scales are always treated independently. In recent years, extensive first-principles quantum mechanics calculations on small water clusters have shown that both the hydrogen bond and reference energy scales depend on the number of incident hydrogen bonds of the water molecule. In this work, we propose a new methodology to couple the reference energy scale to the degree of hydrogen bonding in the fluid. We demonstrate the utility of the new approach by showing that it gives improved predictions of water-hydrocarbon mutual solubilities.
NASA Astrophysics Data System (ADS)
Chiogna, Gabriele; Bellin, Alberto
2013-05-01
The laboratory experiments of Gramling et al. (2002) showed that incomplete mixing at the pore scale exerts a significant impact on the transport of reactive solutes and that assuming complete mixing leads to overestimation of product concentration in bimolecular reactions. Subsequently, several attempts have been made to model this experiment, considering spatial segregation of the reactants, non-Fickian transport via a Continuous Time Random Walk (CTRW), or an effective upscaled time-dependent kinetic reaction term. Previous analyses of these experimental results showed that, at the Darcy scale, conservative solute transport is well described by a standard advection-dispersion equation, which assumes complete mixing at the pore scale. However, reactive transport is significantly affected by incomplete mixing at smaller scales, i.e., within a reference elementary volume (REV). We consider here the family of equilibrium reactions for which the concentrations of the reactants and the product can be expressed as functions of the mixing ratio, the concentration of a fictitious nonreactive solute. For this type of reaction we propose, in agreement with previous studies, to model the effect of incomplete mixing at scales smaller than the Darcy scale by assuming that the mixing ratio is distributed within an REV according to a Beta distribution. We compute the parameters of the Beta model by imposing that the mean concentration is equal to the value that the concentration assumes at the continuum Darcy scale, while the variance decays with time as a power law. We show that our model reproduces the concentration profiles of the reaction product measured in the Gramling et al. (2002) experiments using the transport parameters obtained from conservative experiments and instantaneous reaction kinetics. The results are obtained by applying analytical solutions for both conservative and reactive solute transport, thereby providing a method to handle the effect of incomplete mixing on multispecies reactive solute transport that is simpler than other previously developed methods.
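The Beta-distribution closure described above can be illustrated with a short numerical sketch (hypothetical code, not the authors' implementation): the Beta parameters are moment-matched to a prescribed mean and variance of the mixing ratio, and the expected product concentration of an instantaneous bimolecular reaction is averaged over that sub-REV distribution by Monte Carlo. All parameter values are illustrative only.

```python
import numpy as np

def beta_params(mean, var):
    """Moment-match a Beta(a, b) distribution to a given mean and variance
    of the mixing ratio X within a reference elementary volume."""
    s = mean * (1.0 - mean) / var - 1.0   # s = a + b
    return mean * s, (1.0 - mean) * s

def expected_product(mean, var, c0=1.0, n=200_000, seed=0):
    """Expected product concentration for an instantaneous bimolecular
    reaction A + B -> C with equal initial concentrations c0: at mixing
    ratio X the local product is c0 * min(X, 1 - X); average it over the
    sub-REV Beta distribution."""
    a, b = beta_params(mean, var)
    x = np.random.default_rng(seed).beta(a, b, size=n)
    return c0 * np.minimum(x, 1.0 - x).mean()

# Complete mixing (variance -> 0) gives min(m, 1 - m) at the reaction front;
# a finite sub-REV variance (incomplete mixing) always gives less product.
complete = min(0.5, 1.0 - 0.5)
incomplete = expected_product(0.5, 0.02)
```

The gap between `complete` and `incomplete` is precisely the overestimation that the complete-mixing assumption produces in the abstract's framing; letting `var` decay as a power law in time reproduces the approach toward complete mixing.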
Armanini, D G; Monk, W A; Carter, L; Cote, D; Baird, D J
2013-08-01
Evaluation of the ecological status of river sites in Canada is supported by building models using the reference condition approach. However, geography, data scarcity and inter-operability constraints have frustrated attempts to monitor national-scale status and trends. This problem is particularly acute in Atlantic Canada, where no ecological assessment system is currently available. Here, we present a reference condition model based on the River Invertebrate Prediction and Classification System approach with regional-scale applicability. To achieve this, we used biological monitoring data collected from wadeable streams across Atlantic Canada together with freely available, nationally consistent geographic information system (GIS) environmental data layers. For the first time, we demonstrated that it is possible to use data generated from different studies, even when collected using different sampling methods, to generate a robust predictive model. This model was successfully generated and tested using GIS-based rather than local habitat variables and showed improved performance when compared to a null model. In addition, ecological quality ratio data derived from the model responded to observed stressors in a test dataset. Implications for future large-scale implementation of river biomonitoring using a standardised approach with global application are presented.
Comparison of δ18O measurements in nitrate by different combustion techniques
Revesz, Kinga; Böhlke, John Karl
2002-01-01
Three different KNO3 salts with δ18O values ranging from about −31 to +54‰ relative to VSMOW were used to compare three off-line, sealed glass tube combustion methods (widely used for isotope studies) with a more recently developed on-line carbon combustion technique. All methods yielded roughly similar isotope ratios for KNO3 samples with δ18O values near the midpoint of the δ18O scale, close to that of the nitrate reference material IAEA-NO-3 (around +21 to +25‰). This reference material has been used previously for one-point interlaboratory and intertechnique calibrations. However, the isotope-ratio scale factors of all of the off-line combustion techniques are compressed, falling between 0.3 and 0.7 times that of the on-line combustion technique. The contraction of the δ18O scale in the off-line preparations apparently is caused by O isotope exchange between the sample and the glass combustion tubes. These results reinforce the need for nitrate reference materials with δ18O values far from that of atmospheric O2, to improve interlaboratory comparability.
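The scale compression reported above is exactly the kind of effect that two-point normalization against reference materials is designed to remove: anchoring the measured values of two references to their accepted values and applying the same linear stretch to unknowns. The sketch below uses illustrative numbers, not data from the study:

```python
def two_point_correct(d_meas, d_meas_lo, d_meas_hi, d_true_lo, d_true_hi):
    """Expand a compressed delta scale by two-point normalization: map the
    measured values of two reference materials onto their accepted values
    and apply the same linear stretch to an unknown sample."""
    f = (d_true_hi - d_true_lo) / (d_meas_hi - d_meas_lo)  # scale expansion factor
    return d_true_lo + (d_meas - d_meas_lo) * f

# Illustrative numbers only: suppose an off-line technique compresses the
# scale by a factor of 0.5 about a pivot near +22 permil, so the extreme
# KNO3 references (true -31 and +54 permil) read as -4.5 and +38 permil.
# A sample whose true value is +10 permil then reads +16; normalization
# against the two references recovers the true value.
corrected = two_point_correct(16.0, -4.5, 38.0, -31.0, 54.0)
```

This is why references far from the midpoint matter: the expansion factor `f` is only well determined when the two anchors span most of the measured range.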
Capponi, Rebecca; Loguercio, Valentina; Guerrini, Stefania; Beltrami, Giampietro; Vesprini, Andrea; Giostra, Fabrizio
2017-01-16
Pain evaluation at triage in the Emergency Department (ED) is fundamental, as it significantly influences the determination of the patient's color code. Different scales have been proposed to quantify pain, but they are not always reliable. This study aims to determine a) how important pain measurement is for triage nurses and b) the reliability of the Numeric Rating Scale (NRS), the instrument most used to evaluate pain in Italian EDs, because it frequently shows higher pain scores than other scales. End point 1: a questionnaire was administered to triage nurses in some hospitals of northern Italy. End point 2: 250 patients arriving at the ED reporting pain were evaluated using, randomly, either the NRS or a fake "30-50" scale. End point 1: triage nurses acknowledge that they frequently modify the reported pain intensity. This happens for several reasons: nurses think that patients may exaggerate to obtain a higher-priority color code; they may be influenced by specific patient categories (non-EU citizens, drug addicts, the elderly); or the pain score reported by patients does not correspond to the nurse's perception. End point 2: the data show that the mean value obtained with the NRS is significantly (p<0.05) higher than the mean obtained with the "30-50" scale. Manipulation of pain evaluation by nurses might result in a dangerous underestimation of this symptom. At the same time, the use of the NRS seems to allow patients to exaggerate pain perception, with a consequent altered attribution of the color code at triage.
A First Look at the Upcoming SISO Space Reference FOM
NASA Technical Reports Server (NTRS)
Crues, Edwin; Dexter, Dan; Madden, Michael; Garro, Alfred; Vankov, Alexander; Skuratovskiy, Anton; Moller, Bjorn
2016-01-01
Simulation is increasingly used in the space domain for several purposes. One example is analysis and engineering, from the mission level down to individual systems and subsystems. Another example is training of space crew and flight controllers. Several distributed simulations have been developed, for example for docking vehicles with the ISS and for mission training, in many cases with participants from several nations. Space based scenarios are also used in the "Simulation Exploration Experience", SISO's university outreach program. We have thus realized that there is a need for a distributed simulation interoperability standard for data exchange within the space domain. Based on these experiences, SISO is developing a Space Reference FOM. Members of the product development group come from several countries and contribute experiences from projects within NASA, ESA and other organizations. Participants represent government, academia and industry. The first version will focus on the handling of time and space. The Space Reference FOM will provide the following: (i) a flexible positioning system using reference frames for arbitrary bodies in space; (ii) naming conventions for well-known reference frames; (iii) definitions of common time scales; (iv) federation agreements for common types of time management, with a focus on time-stepped simulation; and (v) support for physical entities, such as space vehicles and astronauts. The Space Reference FOM is expected to make collaboration politically, contractually and technically easier. It is also expected to make collaboration easier to manage and extend.
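The reference-frame idea in item (i) can be illustrated with a toy sketch. The class below is hypothetical and is not the SISO FOM API; it only shows the underlying mechanism: each frame stores its pose relative to a parent, and a position expressed in a leaf frame is re-expressed in the root frame by composing rotation/translation pairs up the frame tree.

```python
import numpy as np

class ReferenceFrame:
    """Sketch of a frame tree (hypothetical class, not the SISO standard):
    each frame holds a rotation and translation relative to its parent."""

    def __init__(self, name, parent=None, rotation=None, translation=None):
        self.name = name
        self.parent = parent
        self.rotation = np.eye(3) if rotation is None else np.asarray(rotation, float)
        self.translation = np.zeros(3) if translation is None else np.asarray(translation, float)

    def to_root(self, position):
        """Express `position` (given in this frame) in the root frame by
        walking up the tree and applying each frame's pose in turn."""
        p = np.asarray(position, dtype=float)
        frame = self
        while frame is not None:
            p = frame.rotation @ p + frame.translation
            frame = frame.parent
        return p

# Toy tree with illustrative values (km): an Earth-centered inertial root,
# a Moon-centered frame offset along x, and a lander frame at the surface.
eci = ReferenceFrame("EarthCenteredInertial")
moon = ReferenceFrame("MoonCenteredFixed", parent=eci,
                      translation=[384400.0, 0.0, 0.0])
lander = ReferenceFrame("LanderLocal", parent=moon,
                        translation=[1837.4, 0.0, 0.0])

p_eci = lander.to_root([0.0, 0.0, 0.0])  # lander origin in the root frame
```

A standardized naming convention for the well-known frames (item ii) is what lets two federates agree that a position report refers to the same tree.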
Bird assemblage response to restoration of fire-suppressed longleaf pine sandhills.
Steen, David A; Conner, L M; Smith, Lora L; Provencher, Louis; Hiers, J Kevin; Pokswinski, Scott; Helms, Brian S; Guyer, Craig
2013-01-01
The ecological restoration of fire-suppressed habitats may require a multifaceted approach. Removal of hardwood trees together with reintroduction of fire has been suggested as a method of restoring fire-suppressed longleaf pine (Pinus palustris) forests; however, this strategy, although widespread, has not been evaluated on large spatial and temporal scales. We used a landscape-scale experimental design to examine how bird assemblages in fire-suppressed longleaf pine sandhills responded to fire alone or fire following mechanical removal or herbicide application to reduce hardwood levels. Individual treatments were compared to fire-suppressed controls and reference sites. After initial treatment, all sites were managed with prescribed fire, on an approximately two- to three-year interval, for over a decade. Nonmetric multidimensional scaling ordinations suggested that avian assemblages on sites that experienced any form of hardwood removal differed from assemblages on both fire-suppressed sites and reference sites 3-4 years after treatment (i.e., early posttreatment). After >10 years of prescribed burning on all sites (i.e., late posttreatment), only assemblages at sites treated with herbicide were indistinguishable from assemblages at reference sites. By the end of the study, individual species that were once indicators of reference sites no longer contributed to making reference sites unique. Occupancy modeling of these indicator species also demonstrated increasing similarity across treatments over time. Overall, although we documented long-term and variable assemblage-level change, our results indicate occupancy for birds considered longleaf pine specialists was similar at treatment and reference sites after over a decade of prescribed burning, regardless of initial method of hardwood removal. 
In other words, based on the response of species highly associated with the habitat, we found no justification for the added cost and effort of fire surrogates; fire alone was sufficient to restore these species.
Braun, Tobias; Grüneberg, Christian; Thiel, Christian
2018-04-01
Routine screening for frailty could be used to identify, in a timely manner, older people with increased vulnerability and corresponding medical needs. The aim of this study was the translation and cross-cultural adaptation of the PRISMA-7 questionnaire, the FRAIL scale and the Groningen Frailty Indicator (GFI) into the German language, as well as a preliminary analysis of the diagnostic test accuracy of these instruments when used to screen for frailty. A diagnostic cross-sectional study was performed. The instrument translation into German followed a standardized process. Prefinal versions were clinically tested on older adults who gave structured in-depth feedback on the scales in order to compile a final revision of the German-language scale versions. For the analysis of diagnostic test accuracy (criterion validity), the PRISMA-7, FRAIL scale and GFI were considered the index tests. Two reference tests were applied to assess frailty, based either on Fried's model of a Physical Frailty Phenotype or on the model of deficit accumulation, expressed in a Frailty Index. Prefinal versions of the German translations of each instrument were produced and completed by 52 older participants (mean age: 73 ± 6 years). Some minor issues concerning comprehensibility and semantics of the scales were identified and resolved. Using the Physical Frailty Phenotype (frailty prevalence: 4%) criteria as a reference standard, the accuracy of the instruments was excellent (area under the curve, AUC > 0.90). Taking the Frailty Index (frailty prevalence: 23%) as the reference standard, the accuracy was good (AUC between 0.73 and 0.88). German-language versions of the PRISMA-7, FRAIL scale and GFI have been established, and preliminary results indicate sufficient diagnostic test accuracy, which needs to be further established.
Spagnoli, A; Foresti, G; MacDonald, A; Williams, P
1987-05-01
The Organic Brain Syndrome (OBS) and the Depression (D) scales derived from the Comprehensive Assessment and Referral Evaluation (CARE) were translated into Italian and used in a survey of geriatric institutions in Milan. During the survey validity and reliability tests of the scales were conducted. Inter-rater reliability (total score weighted kappa) was highly satisfactory for both scales (0.96 for OBS and 0.83 for D scale). Reliability was assessed three times during the survey and showed good stability for both scales, with a slight but significant trend towards reduction over time for the D scale. Reliability of the D scale was significantly lower when the subjects interviewed scored highly on the OBS scale (severe cognitive impairment). Criterion validity was highly satisfactory both for the OBS scale (cut-off point 4/5: sensitivity 77%, specificity 96%, positive predictive value 91%) and the D scale (cut-off point 10/11: sensitivity 95%, specificity 92%, positive predictive value 84%). Results are discussed with special reference to longitudinal assessment of reliability, the choice of the cut-off point, and the context-dependent properties of questionnaires.
Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye
2016-01-13
A framework for establishing a standard reference scale (texture) is proposed based on multivariate statistical analysis of instrumental measurement and sensory evaluation. Multivariate statistical analysis is conducted to rapidly select typical reference samples with the characteristics of universality, representativeness, stability, substitutability, and traceability. The reasonableness of the framework method is verified by establishing a standard reference scale for the texture attribute hardness with well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (the TPA test), and the results were analyzed with clustering analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were selected to construct the hardness standard reference scale. The results indicate that the regression coefficient between the estimated sensory value and the instrumentally measured value is significant (R(2) = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and practical guide for quantitative standard reference scale establishment for food texture characteristics.
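The reported agreement with Stevens's theory rests on the fact that Stevens's power law S = k·I^n is linear in log-log coordinates, so the exponent relating instrumental hardness to sensory score can be recovered by ordinary least squares. A minimal sketch with synthetic data (not the study's measurements; the constants 1.8 and 0.4 are invented for illustration):

```python
import numpy as np

# Hypothetical instrumental hardness values (e.g., TPA peak force) for a
# handful of reference foods, and sensory scores generated to follow
# Stevens's power law S = k * I**n exactly, with k = 1.8 and n = 0.4.
instrumental = np.array([2.0, 5.0, 12.0, 30.0, 80.0, 200.0])
sensory = 1.8 * instrumental ** 0.4

# log S = log k + n * log I, so a straight-line fit in log-log space
# recovers the exponent n (slope) and log k (intercept).
n_exp, log_k = np.polyfit(np.log(instrumental), np.log(sensory), 1)
```

With real panel data the fit would of course carry residual scatter; the R² of 0.9765 quoted above measures how closely the nine selected reference foods follow this straight line.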
Operational use of the GPS to build the "Temps Atomique Français" TA(F).
NASA Astrophysics Data System (ADS)
Fréon, G.; Tourde, R.
Clock comparisons based on common-view observations of GPS satellites between several laboratories have been used by the BNM-LPTF since 1983. They have contributed to improving the stability of the national reference time scale, the "Temps Atomique Français". This time comparison method is also used by the Bureau International des Poids et Mesures and all the time and frequency laboratories that participate in the calculation of International Atomic Time (TAI).
Cross-cultural adaptation of the German version of the spinal stenosis measure.
Wertli, Maria M; Steurer, Johann; Wildi, Lukas M; Held, Ulrike
2014-06-01
To validate the German version of the spinal stenosis measure (SSM), a disease-specific questionnaire assessing symptom severity, physical function, and satisfaction with treatment in patients with lumbar spinal stenosis. After translation, cross-cultural adaptation, and pilot testing, we assessed internal consistency, test-retest reliability, construct validity, and responsiveness of the SSM subscales. Data from a large Swiss multi-center prospective cohort study were used. Reference scales for the assessment of construct validity and responsiveness were the numeric rating scale, pain thermometer, and the Roland Morris Disability Questionnaire. One hundred and eight consecutive patients were included in this validation study, recruited from five different centers. Cronbach's alpha was above 0.8 for all three subscales of the SSM. The objectivity of the SSM was assessed using a partial credit approach. The model showed a good global fit to the data. Of the 108 patients, 78 participated in the test-retest procedure. The ICC values were above 0.8 for all three subscales of the SSM. Correlations with reference scales were above 0.7 for the symptom and function subscales. For the satisfaction subscale, it was 0.66 or above. Clinically meaningful changes of the reference scales over time were associated with significantly more improvement in all three SSM subscales (p < 0.001). Conclusion: The proposed version of the SSM showed very good measurement properties and can be considered validated for use in the German language.
IGS14/igs14.atx: a new Framework for the IGS Products
NASA Astrophysics Data System (ADS)
Rebischung, P.; Schmid, R.
2016-12-01
The International GNSS Service (IGS) is about to switch to a new reference frame (IGS14), based on the latest release of the International Terrestrial Reference Frame (ITRF2014), as the basis for its products. An updated set of satellite and ground antenna calibrations (igs14.atx) will become effective at the same time. IGS14 and igs14.atx will then replace the previous IGS08/igs08.atx framework in use since GPS week 1632 (17 April 2011) and in the second IGS reprocessing campaign (repro2). Despite the negligible scale difference between ITRF2008 and ITRF2014 (0.02 ppb), the radial components of all GPS and GLONASS satellite antenna phase center offsets (z-PCOs) had to be updated in igs14.atx, because of modeling changes recently introduced within the IGS that affect the scale of the IGS products. This was achieved by deriving and averaging time series of satellite z-PCO estimates, consistent with the ITRF2014 scale, from the daily repro2 and latest operational SINEX solutions of seven IGS Analysis Centers (ACs). Compared to igs08.atx, igs14.atx includes robot calibrations for 16 additional ground antenna types, so that the percentage of stations with absolute calibrations in the IGS network will reach 90% after the switch. 19 type-mean robot calibrations were also updated thanks to the availability of calibration results for additional antenna samples. IGS14 is basically an extract of well-suited reference frame stations (i.e., with long and stable position time series) from ITRF2014. However, to make the IGS14 station coordinates consistent with the new igs14.atx ground antenna calibrations, position offsets due to the switch from igs08.atx to igs14.atx were derived for all IGS14 stations affected by ground antenna calibration updates and applied to their ITRF2014 coordinates. This presentation will first detail the different steps of the elaboration of IGS14 and igs14.atx. 
The impact of the switch on GNSS-derived geodetic parameter time series will then be assessed by re-aligning the daily repro2 and latest operational IGS combined SINEX solutions to IGS14/igs14.atx. A particular focus will finally be given to the biases and trends present in the satellite z-PCO time series derived from the daily AC SINEX solutions, and to their interpretation in terms of scale and scale rate of the terrestrial frame.
Dowling, Geraldine; Kavanagh, Pierce V; Eckhardt, Hans-Georg; Twamley, Brendan; Hessman, Gary; McLaughlin, Gavin; O'Brien, John; Brandt, Simon D
2018-03-15
Nitrazolam and clonazolam are two designer benzodiazepines available from Internet retailers. There is growing evidence suggesting that such compounds have the potential to cause severe adverse events. Information about tolerability in humans is scarce, but low doses can typically be difficult for users to administer when handling bulk material. Variability of the active ingredient in tablet formulations can also be of concern. Customs, toxicology and forensic laboratories are increasingly encountering designer benzodiazepines, both in tablet and powdered form. The unavailability of reference standards can impact the ability to identify these compounds. Therefore, the need arises for exploring in-house approaches to the preparation of new psychoactive substances (NPS) that can be carried out in a timely manner. The present study was triggered when samples of clonazolam were received in powdered and tablet form at a time when reference material for this drug was commercially unavailable. Therefore, microscale syntheses of clonazolam and its deschloro analog nitrazolam were developed utilizing polymer-supported reagents starting from 2-amino-2'-chloro-5-nitrobenzophenone (clonazolam) and 2-amino-5-nitrobenzophenone (nitrazolam). The final reaction step forming the 1,2,4-triazole ring moiety was performed within a gas chromatography-mass spectrometry (GC-MS) injector. A comparison with a preparative scale synthesis of both benzodiazepine derivatives showed that microscale synthesis might be an attractive option for a forensic laboratory in terms of time and cost savings when compared with traditional methods of synthesis and when qualitative identifications are needed to direct forensic casework. The reaction by-product profiles for both the micro and the preparative scale syntheses are also presented. Copyright © 2018 John Wiley & Sons, Ltd.
UTC(SU) and EOP(SU) - the only legal reference frames of Russian Federation
NASA Astrophysics Data System (ADS)
Koshelyaevsky, Nikolay B.; Blinov, Igor Yu; Pasynok, Sergey L.
2015-08-01
There are two legal time reference frames in the Russian Federation. UTC(SU) deals with atomic time and serves as the reference for legal timing throughout the country. The other, EOP(SU), deals with Earth orientation parameters and provides the official EOP data for scientific, technical and metrological applications in Russia. The atomic time is based on two essential hardware components: primary Cs fountain standards and an ensemble of continuously operating H-masers as a time unit/time scale keeper. Based on data from the H-maser intercomparison system, regular H-maser frequency calibrations against the Cs standards, and a time-scale algorithm, the autonomous TA(SU) time scale is maintained by the Main Metrological Center. Since 2013 the time unit of TA(SU) has been the second (SU), reproduced independently by the VNIIFTRI Cs primary standards in accordance with its definition in the SI. UTC(SU) relies on TA(SU) and is steered to UTC using TWSTFT/GNSS time link data. As a result, the TA(SU) stability level relative to TT considerably exceeds 1×10-15 for sample times of one month and more, and RMS[UTC-UTC(SU)] ≤ 3 ns for the period 2013-2015. UTC(SU) is broadcast by different national means such as specialized radio and TV stations, NTP servers and GLONASS. The signals of Russian radio stations contain DUT1 and dUT1 values at 0.1 s and 0.02 s resolution, respectively. The definitive EOP(SU) are calculated by the Main Metrological Center based on the combination of eight independent individual EOP data streams delivered by four Russian analysis centers: VNIIFTRI, the Institute of Applied Astronomy, the Information-Analytical Center of the Russian Space Agency and the Analysis Center of the Russian Space Agency.
The accuracy of the ultra-rapid EOP values for 2014 is estimated to be ≤ 0.0006" for polar motion, ≤ 70 microseconds for UT1-UTC and ≤ 0.0003" for celestial pole offsets, respectively. The other VNIIFTRI EOP activities can be grouped into three basic directions: arranging and carrying out GNSS and SLR observations at five institutes; processing GNSS, SLR and VLBI observation data for EOP evaluation; and combining GLONASS satellite orbits/clocks. The paper will deliver more detailed and particular information on the Russian legal reference frames.
Scale of reference bias and the evolution of health.
Groot, Wim
2003-09-01
The analysis of subjective measures of well-being, such as self-reports by individuals about their health status, is frequently hampered by the problem of scale of reference bias. A particular form of scale of reference bias is age norming. In this study we corrected for scale of reference bias by allowing for individual-specific effects in an equation on subjective health. A random effects ordered response model was used to analyze scale of reference bias in self-reported health measures. The results indicate that if we do not control for unobservable individual-specific effects, the response to a subjective health state measure suffers from age norming. Age norming can be controlled for with a random effects estimation technique using longitudinal data. Further, estimates are presented of the rate of depreciation of health. Finally, simulations of life expectancy indicate that the estimated model provides a reasonably good fit to the true life expectancy.
NASA Technical Reports Server (NTRS)
Bivolaru, Daniel (Inventor); Cutler, Andrew D. (Inventor); Danehy, Paul M. (Inventor)
2015-01-01
A system that simultaneously measures the translational temperature, bulk velocity, and density in gases by collecting, referencing, and analyzing nanosecond time-scale Rayleigh scattered light from molecules is described. A narrow-band pulsed laser source is used to probe two largely separated measurement locations, one of which is used for reference. The elastically scattered photons containing information from both measurement locations are collected at the same time and analyzed spectrally using a planar Fabry-Perot interferometer. A practical means of referencing the measurement of velocity using the laser frequency, and the density and temperature using the information from the reference measurement location maintained at constant properties is provided.
Dynamic structural disorder in supported nanoscale catalysts
NASA Astrophysics Data System (ADS)
Rehr, J. J.; Vila, F. D.
2014-04-01
We investigate the origin and physical effects of "dynamic structural disorder" (DSD) in supported nano-scale catalysts. DSD refers to the intrinsic fluctuating, inhomogeneous structure of such nano-scale systems. In contrast to bulk materials, nano-scale systems exhibit substantial fluctuations in structure, charge, temperature, and other quantities, as well as large surface effects. The DSD is driven largely by the stochastic librational motion of the center of mass and fluxional bonding at the nanoparticle surface due to thermal coupling with the substrate. Our approach for calculating and understanding DSD is based on a combination of real-time density functional theory/molecular dynamics simulations, transient coupled-oscillator models, and statistical mechanics. This approach treats thermal and dynamic effects over multiple time-scales, and includes bond-stretching and -bending vibrations, and transient tethering to the substrate at longer ps time-scales. Potential effects on the catalytic properties of these clusters are briefly explored. Model calculations of molecule-cluster interactions and molecular dissociation reaction paths are presented in which the reactant molecules are adsorbed on the surface of dynamically sampled clusters. This model suggests that DSD can affect both the prefactors and distribution of energy barriers in reaction rates, and thus can significantly affect catalytic activity at the nano-scale.
Timescale bias in measuring river migration rate
NASA Astrophysics Data System (ADS)
Donovan, M.; Belmont, P.; Notebaert, B.
2016-12-01
River channel migration plays an important role in sediment routing, water quality, riverine ecology, and infrastructure risk assessment. Migration rates may change in time and space due to systematic changes in hydrology, sediment supply, vegetation, and/or human land and water management actions. The ability to make detailed measurements of lateral migration over a wide range of temporal and spatial scales has been enhanced by the increased availability of historical landscape-scale aerial photography and high-resolution topography (HRT). Despite a surge in the use of historical and contemporary aerial photograph sequences, in conjunction with evolving methods to analyze such data for channel change, we found no research considering the biases that may be introduced as a function of the temporal scales of measurement. Unsteady processes (e.g., sedimentation, channel migration, width changes) exhibit extreme discontinuities over time and space, resulting in distortion when measurements are averaged over longer temporal scales, referred to as 'Sadler effects' (Sadler, 1981; Gardner et al., 1987). Using 12 sets of aerial photographs for the Root River (Minnesota), we measure lateral migration over space (110 km) and time (1937-2013) and assess whether bias arises from different measurement scales and whether rates shift systematically with increased discharge over time. Results indicate that measurement-scale biases indeed arise from the time elapsed between measurements. We parsed the study reach into three distinct reaches and examined if/how recent increases in river discharge translate into changes in migration rate.
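The timescale bias discussed above is easy to reproduce with a toy series: when migration includes direction reversals, the net displacement between two photos grows more slowly than linearly with elapsed time, so rates measured from widely spaced photos come out systematically lower. A hypothetical illustration with synthetic positions (not Root River data; all values invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic yearly bank positions (m) over 1937-2013: migration with
# frequent direction reversals, a toy stand-in for a meandering channel.
steps = rng.choice([-3.0, 3.0], size=76)
position = np.concatenate([[0.0], np.cumsum(steps)])

def apparent_rate(pos, dt):
    """Mean migration rate (m/yr) as it would be measured from pairs of
    aerial photos taken dt years apart: |net displacement| / elapsed time,
    averaged over all photo pairs."""
    disp = np.abs(pos[dt:] - pos[:-dt])
    return disp.mean() / dt

short_rate = apparent_rate(position, 2)    # dense photo coverage
long_rate = apparent_rate(position, 40)    # sparse photo coverage
```

Because reversals cancel within a long interval, `long_rate` falls well below `short_rate` even though the underlying yearly motion never changes; this is the Sadler effect in the migration-rate context.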
Magnetometry with Ensembles of Nitrogen Vacancy Centers in Bulk Diamond
2015-10-23
the ESR curve. Any frequency components of the photodetector signal which are not close to the reference frequency, are filtered out. This mitigates ...indicating that we have not yet run up against thermal or flicker noise for these time scales. 5.3 Details of frequency modulation circuit In order
77 FR 71574 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-03
... Test. OMB Control Number: None Form Number(s): The automated survey instrument has no form number. Type... have been developed and are now slated for a large-scale field test to evaluate the questions and the... reference period and timing of data collection. Qualitative research has...
On the Role of Multi-Scale Processes in CO2 Storage Security and Integrity
NASA Astrophysics Data System (ADS)
Pruess, K.; Kneafsey, T. J.
2008-12-01
Consideration of multiple scales in subsurface processes usually refers to the spatial domain, where we may attempt to relate process descriptions and parameters from pore and bench (Darcy) scale to much larger field and regional scales. However, multiple scales occur also in the time domain, and processes extending over a broad range of time scales may be very relevant to CO2 storage and containment. In some cases, such as in the convective instability induced by CO2 dissolution in saline waters, space and time scales are coupled in the sense that perturbations induced by CO2 injection will grow concurrently over many orders of magnitude in both space and time. In other cases, CO2 injection may induce processes that occur on short time scales, yet may affect large regions. Possible examples include seismicity that may be triggered by CO2 injection, or hypothetical release events such as "pneumatic eruptions" that may discharge substantial amounts of CO2 over a short time period. This paper will present recent advances in our experimental and modeling studies of multi-scale processes. Specific examples that will be discussed include (1) the process of CO2 dissolution-diffusion-convection (DDC), which can greatly accelerate the rate at which free-phase CO2 is stored as aqueous solute; (2) self-enhancing and self-limiting processes during CO2 leakage through faults, fractures, or improperly abandoned wells; and (3) porosity and permeability reduction from salt precipitation near CO2 injection wells, and mitigation of corresponding injectivity loss. This work was supported by the Office of Basic Energy Sciences and by the Zero Emission Research and Technology project (ZERT) under Contract No. DE-AC02-05CH11231 with the U.S. Department of Energy.
Zhang, Yifan; Chen, Xiaoyan; Tang, Yunbiao; Lu, Youming; Guo, Lixia; Zhong, Dafang
2017-01-01
Purpose The aim of this study was to evaluate the bioequivalence of a generic product 70 mg alendronate sodium tablets with the reference product Fosamax® 70 mg tablet. Materials and methods A single-center, open-label, randomized, three-period, three-sequence, reference-replicated crossover study was performed in 36 healthy Chinese male volunteers under fasting conditions. In each study period, the volunteers received a single oral dose of the generic or reference product (70 mg). Blood samples were collected at pre-dose and up to 8 h after administration. The bioequivalence of the generic product to the reference product was assessed using the US Food and Drug Administration (FDA) and European Medicines Agency (EMA) reference-scaled average bioequivalence (RSABE) methods. Results The average maximum concentrations (Cmax) of alendronic acid were 64.78±43.76, 56.62±31.95, and 60.15±37.12 ng/mL after the single dose of the generic product and the first and second doses of the reference product, respectively. The areas under the plasma concentration–time curves from time 0 to the last timepoint (AUC0–t) were 150.36±82.90, 148.15±85.97, and 167.11±110.87 h⋅ng/mL, respectively. Reference scaling was used because the within-subject standard deviations of the reference product (sWR) for Cmax and AUC0–t were all higher than the cutoff value of 0.294. The 95% upper confidence bounds were −0.16 and −0.17 for Cmax and AUC0–t, respectively, and the point estimates for the generic/reference product ratio were 1.08 and 1.00, which satisfied the RSABE acceptance criteria of the FDA. The 90% CIs for Cmax and AUC0–t were 90.35%–129.04% and 85.31%–117.15%, respectively, which were within the limits of the EMA for the bioequivalence of 69.84%–143.19% and 80.00%–125.00%. Conclusion The generic product was bioequivalent to the reference product in terms of the rate and extent of alendronate absorption after a single 70 mg oral dose under fasting conditions. PMID:28744102
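The FDA criterion applied here can be sketched as follows (illustrative only: the s_wr value below is hypothetical, the abstract reports only that it exceeded 0.294, and a full analysis computes the 95% upper confidence bound via Howe's approximation rather than the point value shown):

```python
import math

# FDA regulatory constant for reference scaling: (ln 1.25 / 0.25)^2
THETA = (math.log(1.25) / 0.25) ** 2  # ~0.797

def use_reference_scaling(s_wr):
    """Reference scaling applies when the within-subject SD of the
    reference product is at least 0.294 (i.e., CV_wR > ~30%)."""
    return s_wr >= 0.294

def rsabe_criterion(point_ratio, s_wr):
    """Point value of the RSABE criterion (ln(T/R))^2 - THETA * s_wr^2.
    A 95% upper bound <= 0 supports scaled bioequivalence."""
    return math.log(point_ratio) ** 2 - THETA * s_wr ** 2

# With the reported point estimate 1.08 and a hypothetical s_wr = 0.35:
print(use_reference_scaling(0.35))      # scaling applies
print(rsabe_criterion(1.08, 0.35))      # negative, consistent with RSABE
```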
Fish Scale Evidence for Rapid Post Glacial Colonization of an Atlantic Coastal Pond
NASA Technical Reports Server (NTRS)
Daniels, R. A.; Peteet, Dorothy
1996-01-01
Fish scales from the sediment of Allamuchy Pond, New Jersey, USA, indicate that fishes were present in the pond within 400 years of the time of the first deposition of organic material, at approximately 12,600 yrs BP. The earliest of the scales, from a white sucker, Catostomus commersoni, appears in sediment dated 12,260 +/- 220 yrs BP. Presence of scales in sediment deposited before 10,000 yrs BP indicates that Atlantic salmon, Salmo salar, sunfish, Lepomis sp., and yellow perch, Perca flavescens, also were early inhabitants of the pond. The timing of the arrival of each of these fishes suggests that they migrated out from Atlantic coastal refugia. A minnow scale, referred to Phoxininae, was also retrieved; it could not be matched to any cyprinid currently found in northeastern North America. The species present historically in this pond are from five families found currently in ponds throughout the Northeast and suggest that the lentic palaeo-environment was similar to present mid-elevation or high-latitude lentic systems.
Video-Game-Like Engine for Depicting Spacecraft Trajectories
NASA Technical Reports Server (NTRS)
Upchurch, Paul R.
2009-01-01
GoView is a video-game-like software engine, written in the C and C++ computing languages, that enables real-time, three-dimensional (3D)-appearing visual representation of spacecraft and trajectories (1) from any perspective; (2) at any spatial scale from spacecraft to Solar-system dimensions; (3) in user-selectable time scales; (4) in the past, present, and/or future; (5) with varying speeds; and (6) forward or backward in time. GoView constructs an interactive 3D world by use of spacecraft-mission data from pre-existing engineering software tools. GoView can also be used to produce distributable application programs for depicting NASA orbital missions on personal computers running the Windows XP, Mac OS X, and Linux operating systems. GoView enables seamless rendering of Cartesian coordinate spaces with programmable graphics hardware, whereas prior programs for depicting spacecraft trajectories variously require non-Cartesian coordinates and/or are not compatible with programmable hardware. GoView incorporates an algorithm for nonlinear interpolation between arbitrary reference frames, whereas the prior programs are restricted to special classes of inertial and non-inertial reference frames. Finally, whereas the prior programs present complex user interfaces requiring hours of training, the GoView interface provides guidance, enabling use without any training.
Flippo, K. A.; Doss, F. W.; Merritt, E. C.; ...
2018-05-30
The LANL Shear Campaign uses millimeter-scale initially solid shock tubes on the National Ignition Facility to conduct high-energy-density hydrodynamic plasma experiments, capable of reaching energy densities exceeding 100 kJ/cm3. These shock-tube experiments have for the first time reproduced spontaneously emergent coherent structures due to shear-based fluid instabilities [i.e., Kelvin-Helmholtz (KH)], demonstrating hydrodynamic scaling over 8 orders of magnitude in time and velocity. The KH vortices, referred to as "rollers," and the secondary instabilities, referred to as "ribs," are used to understand the turbulent kinetic energy contained in the system. Their evolution is used to understand the transition to turbulence and that transition's dependence on initial conditions. Experimental results from these studies are well modeled by the RAGE (Radiation Adaptive Grid Eulerian) hydro-code using the Besnard-Harlow-Rauenzahn turbulent mix model. Information inferred from both the experimental data and the mix model allows us to demonstrate that the specific Turbulent Kinetic Energy (sTKE) in the layer, as calculated from the plan-view structure data, is consistent with the mixing width growth and the RAGE simulations of sTKE.
Large-scale seismic waveform quality metric calculation using Hadoop
NASA Astrophysics Data System (ADS)
Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.
2016-09-01
In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of 0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. 
Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely require significant changes in other parts of our infrastructure. Nevertheless, we anticipate that as the technology matures and third-party tool vendors make it easier to manage and operate clusters, Hadoop (or a successor) will play a large role in our seismic data processing.
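A quick arithmetic check of the figures quoted above (this optimistically assumes the reference pipeline's 0.56 TB/h throughput would hold at larger sizes, which the authors note it does not):

```python
REF_TBH = 0.56  # measured reference-pipeline throughput, TB/hour (5 nodes)

def ref_hours(tb):
    """Wall-clock hours on the reference pipeline at constant throughput."""
    return tb / REF_TBH

print(f"5.1 TB on reference pipeline: {ref_hours(5.1):.1f} h")
print(f"5.1 TB at the reported 15x Spark/MapReduce speedup: {ref_hours(5.1) / 15:.2f} h")
print(f"full 43 TB on reference pipeline (optimistic): {ref_hours(43):.0f} h")
```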
Gonzalez-Gil, Graciela; Thomas, Ludivine; Emwas, Abdul-Hamid; Lens, Piet N. L.; Saikaly, Pascal E.
2015-01-01
Anaerobic granular sludge is composed of multispecies microbial aggregates embedded in a matrix of extracellular polymeric substances (EPS). Here we characterized the chemical fingerprint of the polysaccharide fraction of EPS in anaerobic granules obtained from full-scale reactors treating different types of wastewater. Nuclear magnetic resonance (NMR) signals of the polysaccharide region from the granules were very complex, likely as a result of the diverse microbial population in the granules. Using nonmetric multidimensional scaling (NMDS), the 1H NMR signals of reference polysaccharides (gellan, xanthan, alginate) and those of the anaerobic granules revealed that there were similarities between the polysaccharides extracted from granules and the reference polysaccharide alginate. Further analysis of the exopolysaccharides from anaerobic granules, and reference polysaccharides using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) revealed that exopolysaccharides from two of the anaerobic granular sludges studied exhibited spectra similar to that of alginate. The presence of sequences related to the synthesis of alginate was confirmed in the metagenomes of the granules. Collectively these results suggest that alginate-like exopolysaccharides are constituents of the EPS matrix in anaerobic granular sludge treating different industrial wastewater. This finding expands the engineered environments where alginate has been found as EPS constituent of microbial aggregates. PMID:26391984
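As a rough illustration of the spectrum-matching idea (synthetic peak spectra and a simple correlation distance stand in for the paper's 1H NMR data and NMDS ordination; peak positions and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(7)
axis = np.linspace(0.0, 1.0, 200)  # stand-in for a chemical-shift axis

def peak(center, width=0.02):
    """Gaussian peak used to build toy reference spectra."""
    return np.exp(-((axis - center) / width) ** 2)

# Hypothetical reference polysaccharide spectra and a noisy granule
# EPS spectrum built to resemble the alginate reference:
alginate = peak(0.30) + peak(0.55)
gellan = peak(0.20) + peak(0.70)
xanthan = peak(0.40) + peak(0.80)
granule = alginate + 0.2 * rng.normal(size=axis.size)

def corr_dist(a, b):
    """Correlation distance: 0 for identical shapes, ~1 for unrelated."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

refs = {"alginate": alginate, "gellan": gellan, "xanthan": xanthan}
closest = min(refs, key=lambda k: corr_dist(granule, refs[k]))
print(closest)  # the granule spectrum matches the alginate reference
```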
NASA Astrophysics Data System (ADS)
Gattano, C.; Lambert, S.; Bizouard, C.
2017-12-01
In the context of selecting sources defining the celestial reference frame, we compute astrometric time series of all VLBI radio sources from observations in the International VLBI Service database. The time series are then analyzed with the Allan variance in order to estimate astrometric stability. From these results, we establish a new classification that takes into account information across multiple time scales. The algorithm is flexible in its definition of a "stable source" through an adjustable threshold.
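The Allan-variance diagnostic can be sketched minimally (synthetic series, not IVS data): for a source whose position scatter is pure white measurement noise, the Allan variance falls off with averaging time, while a drifting (astrometrically unstable) source levels off and rises again at long time scales.

```python
import numpy as np

def allan_variance(series, tau):
    """Non-overlapping Allan variance of a coordinate time series at an
    averaging window of `tau` samples."""
    n = len(series) // tau
    if n < 2:
        raise ValueError("series too short for this averaging window")
    means = series[: n * tau].reshape(n, tau).mean(axis=1)
    return 0.5 * float(np.mean(np.diff(means) ** 2))

rng = np.random.default_rng(0)
white = rng.normal(0.0, 1.0, 4096)           # stable: white noise only
drift = white + 0.002 * np.arange(4096)      # unstable: slow linear drift

# White noise: Allan variance ~ 1/tau; drift dominates at long windows.
for tau in (1, 16, 256):
    print(tau, allan_variance(white, tau), allan_variance(drift, tau))
```

A stability classifier in the spirit of the abstract would then threshold the long-window Allan variance (the threshold being the adjustable parameter).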
SAIP2014, the 59th Annual Conference of the South African Institute of Physics
NASA Astrophysics Data System (ADS)
Engelbrecht, Chris; Karataglidis, Steven
2015-04-01
The International Celestial Reference Frame (ICRF) was adopted by the International Astronomical Union (IAU) in 1997. The current standard, the ICRF-2, is based on Very Long Baseline Interferometric (VLBI) radio observations of positions of 3414 extragalactic radio reference sources. The angular resolution achieved by the VLBI technique is on a scale of milliarcseconds to sub-milliarcseconds and defines the ICRF with the highest accuracy available at present. An ideal reference source used for celestial reference frame work should be unresolved or point-like on these scales. However, extragalactic radio sources, such as those that define and maintain the ICRF, can exhibit spatially extended structures on sub-milliarcsecond scales that may vary both in time and frequency. This variability can introduce a significant error in the VLBI measurements, thereby degrading the accuracy of the estimated source position. Reference source density in the Southern celestial hemisphere is also poor compared to the Northern hemisphere, mainly due to the limited number of radio telescopes in the south. In order to define the ICRF with the highest accuracy, observational efforts are required to find more compact sources and to monitor their structural evolution. In this paper we show that astrometric VLBI sessions can be used to obtain source structure information, and we present preliminary imaging results for the source J1427-4206 at 2.3 and 8.4 GHz, which show that the source is compact and suitable as a reference source.
Global retrieval of soil moisture and vegetation properties using data-driven methods
NASA Astrophysics Data System (ADS)
Rodriguez-Fernandez, Nemesio; Richaume, Philippe; Kerr, Yann
2017-04-01
Data-driven methods such as neural networks (NNs) are a powerful tool to retrieve soil moisture from multi-wavelength remote sensing observations at global scale. In this presentation we will review a number of recent results regarding the retrieval of soil moisture with the Soil Moisture and Ocean Salinity (SMOS) satellite, either using SMOS brightness temperatures as input data for the retrieval or using SMOS soil moisture retrievals as reference dataset for the training. The presentation will discuss several possibilities for both the input datasets and the datasets to be used as reference for the supervised learning phase. Regarding the input datasets, it will be shown that NNs take advantage of the synergy of SMOS data and data from other sensors such as the Advanced Scatterometer (ASCAT, active microwaves) and MODIS (visible and infrared). NNs have also been successfully used to construct long time series of soil moisture from the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) and SMOS. A NN with input data from AMSR-E observations and SMOS soil moisture as reference for the training was used to construct a dataset sharing a similar climatology and without a significant bias with respect to SMOS soil moisture. Regarding the reference data to train the data-driven retrievals, we will show different possibilities depending on the application. Using actual in situ measurements is challenging at global scale due to the scarce distribution of sensors. In contrast, in situ measurements have been successfully used to retrieve SM at continental scale in North America, where the density of in situ measurement stations is high. Using global land surface models to train the NN constitutes an interesting alternative to implement new remote sensing surface datasets. In addition, these datasets can be used to perform data assimilation into the model used as reference for the training.
This approach has recently been tested at the European Centre for Medium-Range Weather Forecasts (ECMWF). Finally, retrievals using radiative transfer models can also be used as a reference SM dataset for the training phase. This approach was used to retrieve soil moisture from AMSR-E, as mentioned above, and also to implement the official European Space Agency (ESA) SMOS soil moisture product in Near-Real-Time. We will finish with a discussion of the retrieval of vegetation parameters from SMOS observations using data-driven methods.
Fron, Eduard; Pilot, Roberto; Schweitzer, Gerd; Qu, Jianqiang; Herrmann, Andreas; Müllen, Klaus; Hofkens, Johan; Van der Auweraer, Mark; De Schryver, Frans C
2008-05-01
The excited-state dynamics of two generations of perylenediimide chromophores substituted in the bay area with dendritic branches bearing triphenylamine units, as well as those of the respective reference compounds, are investigated. Using single-photon timing and multi-pulse femtosecond transient absorption experiments, direct proof of a reversible charge transfer occurring from the peripheral triphenylamine to the electron-acceptor perylenediimide core is obtained. Femtosecond pump-dump-probe experiments provide evidence for the ground-state dynamics by populating excited vibronic levels. By means of both techniques it is found that the rotational isomerization of the dendritic branches occurs on a time scale that ranges up to 1 ns. This time scale of the isomerization depends on the size of the dendritic arms and is similar in both the ground and excited state.
The Vocabulary Knowledge Scale: A Critical Analysis
ERIC Educational Resources Information Center
Bruton, Anthony
2009-01-01
There are normally two major research reasons for assessing second and foreign language (L2) knowledge: either to gauge a participant's actual level of competence/proficiency or to assess language development over a period of time. In testing, the corresponding contrasts are typically referred to as proficiency tests on the one hand and…
ERIC Educational Resources Information Center
Cheek, Kim A.
2017-01-01
Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude…
Similarity-based modeling in large-scale prediction of drug-drug interactions.
Vilar, Santiago; Uriarte, Eugenio; Santana, Lourdes; Lorberbaum, Tal; Hripcsak, George; Friedman, Carol; Tatonetti, Nicholas P
2014-09-01
Drug-drug interactions (DDIs) are a major cause of adverse drug effects and a public health concern, as they increase hospital care expenses and reduce patients' quality of life. DDI detection is, therefore, an important objective in patient safety, one whose pursuit affects drug development and pharmacovigilance. In this article, we describe a protocol applicable on a large scale to predict novel DDIs based on similarity of drug interaction candidates to drugs involved in established DDIs. The method integrates a reference standard database of known DDIs with drug similarity information extracted from different sources, such as 2D and 3D molecular structure, interaction profile, target and side-effect similarities. The method is interpretable in that it generates drug interaction candidates that are traceable to pharmacological or clinical effects. We describe a protocol with applications in patient safety and preclinical toxicity screening. The time frame to implement this protocol is 5-7 h, with additional time potentially necessary, depending on the complexity of the reference standard DDI database and the similarity measures implemented.
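A minimal sketch of the similarity-transfer idea (a toy 4-drug similarity matrix and a one-pair reference standard; the published protocol integrates several similarity measures, 2D/3D structure, targets, and side effects, and a large DDI database):

```python
import numpy as np

# Hypothetical drug-drug similarity matrix (any measure: structural,
# target, side-effect...) and a tiny reference standard of known DDIs.
sim = np.array([[1.0, 0.9, 0.1, 0.2],
                [0.9, 1.0, 0.2, 0.1],
                [0.1, 0.2, 1.0, 0.8],
                [0.2, 0.1, 0.8, 1.0]])
known = {(0, 2)}  # drug 0 is known to interact with drug 2

def ddi_score(a, b):
    """Score a candidate pair by the maximum similarity of each drug to
    the known interaction partners of the other; higher suggests a
    likelier novel DDI, traceable to the established interaction."""
    def partners_of(x):
        return [j for (i, j) in known if i == x] + \
               [i for (i, j) in known if j == x]
    s1 = max((sim[a, p] for p in partners_of(b)), default=0.0)
    s2 = max((sim[b, p] for p in partners_of(a)), default=0.0)
    return max(s1, s2)

# Drug 1 closely resembles drug 0 (0.9), and drug 0 interacts with
# drug 2, so the candidate pair (1, 2) scores high while (1, 3) does not:
print(ddi_score(1, 2), ddi_score(1, 3))
```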
Potjans, Wiebke; Morrison, Abigail; Diesmann, Markus
2010-01-01
A major puzzle in the field of computational neuroscience is how to relate system-level learning in higher organisms to synaptic plasticity. Recently, plasticity rules depending not only on pre- and post-synaptic activity but also on a third, non-local neuromodulatory signal have emerged as key candidates to bridge the gap between the macroscopic and the microscopic level of learning. Crucial insights into this topic are expected to be gained from simulations of neural systems, as these allow the simultaneous study of the multiple spatial and temporal scales that are involved in the problem. In particular, synaptic plasticity can be studied during the whole learning process, i.e., on a time scale of minutes to hours and across multiple brain areas. Implementing neuromodulated plasticity in large-scale network simulations where the neuromodulatory signal is dynamically generated by the network itself is challenging, because the network structure is commonly defined purely by the connectivity graph without explicit reference to the embedding of the nodes in physical space. Furthermore, the simulation of networks with realistic connectivity entails the use of distributed computing. A neuromodulated synapse must therefore be informed in an efficient way about the neuromodulatory signal, which is typically generated by a population of neurons located on different machines than either the pre- or post-synaptic neuron. Here, we develop a general framework to solve the problem of implementing neuromodulated plasticity in a time-driven distributed simulation, without reference to a particular implementation language, neuromodulator, or neuromodulated plasticity mechanism. We implement our framework in the simulator NEST and demonstrate excellent scaling up to 1024 processors for simulations of a recurrent network incorporating neuromodulated spike-timing dependent plasticity. PMID:21151370
Reaching extended length-scales with accelerated dynamics
NASA Astrophysics Data System (ADS)
Hubartt, Bradley; Shim, Yunsic; Amar, Jacques
2012-02-01
While temperature-accelerated dynamics (TAD) has been quite successful in extending the time scales for non-equilibrium simulations of small systems, the computational time increases rapidly with system size. One possible solution to this problem, which we refer to as parTAD [1], is to use spatial decomposition combined with our previously developed semi-rigorous synchronous sublattice algorithm [2]. However, while such an approach leads to significantly better scaling as a function of system size, it also artificially limits the size of activated events and is not completely rigorous. Here we discuss progress we have made in developing an alternative approach in which localized saddle-point searches are combined with parallel GPU-based molecular dynamics in order to improve the scaling behavior. By using this method, along with an adaptive method to determine the optimal high temperature [3], we have been able to significantly increase the range of time and length scales over which accelerated dynamics simulations may be carried out. [1] Y. Shim et al., Phys. Rev. B 76, 205439 (2007); ibid., Phys. Rev. Lett. 101, 116101 (2008). [2] Y. Shim and J.G. Amar, Phys. Rev. B 71, 125432 (2005). [3] Y. Shim and J.G. Amar, J. Chem. Phys. 134, 054127 (2011).
A Multivariate Approach for Comparing and Classifying Streamwater Quality
NASA Astrophysics Data System (ADS)
Hooper, R. P.; McGlynn, B. L.; Hjerdt, K. N.; McDonnell, J. J.
2001-05-01
Few measures exist for objectively comparing the chemistry of streams. We develop a multivariate technique, based on an eigenvalue analysis of streamwater concentrations, to facilitate comparison of water quality among sites across basin scales. A correlation matrix is constructed to include only solutes that mix conservatively. An eigenvalue analysis of this matrix is performed at each site to determine the approximate rank of the data set. If the ranks of all sites are roughly equal, one site is chosen as the reference site. The reduced set of eigenvectors from this site is chosen as the basis for a new, lower dimensional coordinate system and the data from the other sites are projected into this coordinate system. To assess the relative orientation of data from the reference site to all of the other sites, the relative bias (RB) and relative root mean square error (RRMSE) are calculated between the original and the projected points. The new technique was applied to multiple sites within three experimental watersheds to assess the consistency of water quality across the basin scale. The three watersheds were: Panola Mountain, Georgia, USA (6 solutes, 8 sites, 3 to 1000 ha); Sleepers River, Vermont, USA (5 solutes, 7 sites, 3 to 840 ha); and Maimai, South Island, New Zealand (4 solutes, 4 sites, 3 to 300 ha). Data from all sites were roughly planar with the first two eigenvectors explaining more than 90% of the variation. The RRMSEs for the reference site were generally between 5 and 10% with <0.1% RB. At Maimai, the RRMSE was roughly equivalent between the test sites and the 17-ha reference site, 5-8%; the RB was less than 4% at all sites. At Sleepers River, Ca and Mg had larger RRMSE at smaller basins relative to the 41 ha reference site; there was no consistent pattern to the RB for these solutes. Mg, Na, and SiO2 exhibited larger RRMSE (10-20%) and had substantial bias (10%, -20%, and 10%, respectively) at the 840-ha site compared with the 41-ha site. 
At Panola, only the 17-ha site was similar to the 41-ha reference site in RRMSE and had an RB<10%. The 8-ha site had an RRMSE about twice that of the reference site; Ca exhibited a 20% RB and Mg, -16% RB. All other sites had RRMSEs and RBs greater than 50% for at least one solute. These results indicate that the same set of processes control stream chemistry across a broad range of basin scales at Maimai, but that different processes are expressed at different scales to some extent at Sleepers River and more strongly at Panola Mountain. We hypothesize that this gradient reflects the relatively short residence time in the wetter Maimai basins and the progressively longer residence time in the drier Sleepers River and Panola basins. This new multivariate approach may be a way to objectively sort and classify stream chemistry characteristics from diverse watersheds and across basin scales.
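The projection-and-error machinery described above can be sketched as follows (synthetic concentrations; the RRMSE/RB normalization here, by the overall RMS of the standardized data, is one plausible reading of the method, not the authors' exact formulas):

```python
import numpy as np

def plane_basis(conc):
    """Top-2 eigenvectors of the correlation matrix of conservative-solute
    concentrations (rows = stream samples, columns = solutes)."""
    corr = np.corrcoef(conc, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)          # ascending eigenvalues
    return vecs[:, np.argsort(vals)[::-1][:2]]

def projection_errors(conc, basis):
    """RRMSE and RB between standardized data and its projection onto
    the reference site's 2-D mixing plane."""
    z = (conc - conc.mean(axis=0)) / conc.std(axis=0)
    resid = z - z @ basis @ basis.T
    scale = np.sqrt((z ** 2).mean())
    return np.sqrt((resid ** 2).mean()) / scale, resid.mean() / scale

rng = np.random.default_rng(1)
# Hypothetical reference site: 4 solutes generated from 2 end-members,
# so the data are approximately planar (rank 2) plus small noise.
endmembers = rng.normal(size=(200, 2))
mixing = np.array([[1.0, 0.5, -0.3, 0.8],
                   [0.2, -1.0, 0.7, 0.4]])
ref = endmembers @ mixing + 0.05 * rng.normal(size=(200, 4))

basis = plane_basis(ref)
rrmse, rb = projection_errors(ref, basis)
print(f"reference site: RRMSE {rrmse:.1%}, RB {rb:.2%}")
```

Data from a test site would be standardized and projected onto the same `basis`; large RRMSE or RB there signals that different processes control stream chemistry at that scale.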
[Time perceptions and representations].
Tordjman, S
2015-09-01
Representations of time and time measurements depend on subjective constructs that vary according to changes in our concepts, beliefs, societal needs and technical advances. Similarly, the past, the future and the present are subjective representations that depend on each individual's psychic time and biological time. Therefore, there is no single, one-size-fits-all time for everyone, but rather a different, subjective time for each individual. We need to acknowledge the existence of different inter-individual times, but also intra-individual times, to which different functions and different rhythms are attached, depending on the system of reference. However, the construction of these time perceptions and representations is influenced by objective factors (physiological, physical and cognitive) related to neuroscience, which will be presented and discussed in this article. Thus, studying the representation and perception of time lies at the crossroads between neuroscience, human sciences and philosophy. Furthermore, it is possible to identify several constants among the many and various representations of time and their corresponding measures, regardless of the system of time reference. These include the notion of movements repeated in a stable rhythmic pattern involving the recurrence of the same interval of time, which enables us to define units of time of equal and invariable duration. This rhythmicity is also found at a physiological level and contributes, through circadian rhythms, in particular the melatonin rhythm, to the existence of a biological time. Alterations of temporality in mental disorders will also be discussed in this article, illustrated by certain developmental disorders such as autism spectrum disorders. In particular, the hypothesis will be developed that children with autism would need to create discontinuity out of continuity through stereotyped behaviors and/or interests.
This discontinuity repeated at regular intervals could have been fundamentally lacking in their physiological development due to possibly altered circadian rhythms, including arrhythmia and asynchrony. Time measurement, based on the repetition of discontinuity at regular intervals, also involves a spatial representation. It is our own trajectory through space-time, and thus our own motion, including the physiological process of aging, that affords us a representation of the passing of time, just as the countryside seems to be moving past us when we travel in a vehicle. Chinese and Indian societies have circular representations of time, whereas linear representations of time and its trajectory through space-time are currently a feature of Western societies. Circular time is collective time, and its metaphysical representations go beyond the life of a single individual, referring to the cyclical, or at least nonlinear, nature of time. Linear time is individual time, in that it refers to the scale of a person's lifetime, and it is physically represented by an arrow flying ineluctably from the past to the future. An intermediate concept can be proposed that acknowledges the existence of linear time involving various arrows of time corresponding to different lifespans (human, animal, plant, planet lifespans, etc.). In fact, the very notion of time would depend on the trajectory of each arrow of time, like shooting stars in the sky whose different trajectory lengths would define different time scales. The time scales of these various lifespans are very different (for example, a few decades for humans and a few days or hours for insects). It would not make sense to try to understand the passage of time experienced by an insect, which may live only a few hours, based on a human time scale. One hour in an insect's life cannot be compared to one experienced by a human. 
Yet again, it appears that there is a coexistence of different clocks, based here on different lifespans. Finally, the evolution of our society focused on the present moment, and the choice of the cesium atom as the international reference for the unit of time measurement (cesium has a transition frequency of 9,192,631,770 oscillations per second), will be questioned. We can consider that focusing on the present moment, in particular on instantaneity rather than infinity, prevents us from facing our own finitude. In conclusion, the question is raised that the current representation of time might be a means of managing our fear of death, giving us the illusion of controlling the uncontrollable, in particular the passage of time, and a means of avoiding representing what many regard as non-representable, namely our own demise. Copyright © 2015 L’Encéphale. Published by Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Guillevic, Myriam; Vollmer, Martin K.; Wyss, Simon A.; Leuenberger, Daiana; Ackermann, Andreas; Pascale, Céline; Niederhauser, Bernhard; Reimann, Stefan
2018-06-01
For many years, the comparability of measurements obtained with various instruments within a global-scale air quality monitoring network has been ensured by anchoring all results to a unique suite of reference gas mixtures, also called a primary calibration scale. Such suites of reference gas mixtures are usually prepared and then stored over decades in pressurised cylinders by a designated laboratory. For the halogenated gases which have been measured over the last 40 years, this anchoring method is highly relevant, as measurement reproducibility is currently much better (< 1 %, k = 2 or 95 % confidence interval) than the expanded uncertainty of a reference gas mixture (usually > 2 %). Meanwhile, newly emitted halogenated gases are already measured in the atmosphere at pmol mol-1 levels, while still lacking an established reference standard. For compounds prone to adsorption on material surfaces, it is difficult to evaluate mixture stability, and thus variations in the molar fractions over time, in cylinders at pmol mol-1 levels. To support atmospheric monitoring of halogenated gases, we create new primary calibration scales for SF6 (sulfur hexafluoride), HFC-125 (pentafluoroethane), HFO-1234yf (or HFC-1234yf, 2,3,3,3-tetrafluoroprop-1-ene), HCFC-132b (1,2-dichloro-1,1-difluoroethane) and CFC-13 (chlorotrifluoromethane). The preparation method, newly applied to halocarbons, is dynamic and gravimetric: it is based on the permeation principle followed by dynamic dilution and cryo-filling of the mixture in cylinders. The obtained METAS-2017 primary calibration scales are made of 11 cylinders containing these five substances at near-ambient and slightly varying molar fractions. Each prepared molar fraction is traceable to the realisation of SI units (International System of Units) and is assigned an uncertainty estimate following international guidelines (JCGM, 2008), ranging from 0.6 % (k = 2) for SF6 to 1.3 % (k = 2) for all other substances. The smallest uncertainty obtained for SF6 is mostly explained by the high substance purity level in the permeator and the low SF6 contamination of the matrix gas. The measured internal consistency of the suite ranges from 0.23 % for SF6 to 1.1 % for HFO-1234yf (k = 1). 
The expanded uncertainty after verification (i.e. measurement of the cylinders against each other) ranges from 1 to 2 % (k = 2). This work combines the advantages of SI-traceable reference gas mixture preparation with a calibration scale system for use as an anchor by a monitoring network. Such a combined system supports maximising compatibility within the network while linking all reference values to the SI and assigning carefully estimated uncertainties. For SF6, comparison of the METAS-2017 calibration scale with the scale prepared by SIO (Scripps Institution of Oceanography, SIO-05) shows excellent concordance, the ratio METAS-2017 / SIO-05 being 1.002. For HFC-125, the METAS-2017 calibration scale is measured as 7 % lower than SIO-14; for HFO-1234yf, it is 9 % lower than Empa-2013. No other scale for HCFC-132b was available for comparison. Finally, for CFC-13 the METAS-2017 primary calibration scale is 5 % higher than the interim calibration scale (Interim-98) that was in use within the Advanced Global Atmospheric Gases Experiment (AGAGE) network before adoption of the scale established in the present work.
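The expanded uncertainties quoted here follow the GUM (JCGM 100:2008) convention: for a purely multiplicative measurement model, independent relative standard uncertainties are combined in quadrature and then multiplied by a coverage factor k = 2. A minimal sketch; the budget components below are illustrative, not the actual METAS values:

```python
import math

def expanded_rel_uncertainty(rel_std_uncertainties, k=2.0):
    """Combine independent relative standard uncertainties (k = 1) in
    quadrature and expand by coverage factor k (JCGM 100:2008,
    purely multiplicative measurement model)."""
    u_c = math.sqrt(sum(u ** 2 for u in rel_std_uncertainties))
    return k * u_c

# Illustrative (not METAS) components: permeation rate, dilution flow, purity
u = expanded_rel_uncertainty([0.002, 0.0015, 0.001])
print(f"{100 * u:.2f} %")  # 0.54 %
```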
Reptile assemblage response to restoration of fire-suppressed longleaf pine sandhills.
Steen, David A; Smith, Lora L; Conner, L M; Litt, Andrea R; Provencher, Louis; Hiers, J Kevin; Pokswinski, Scott; Guyer, Craig
2013-01-01
Measuring the effects of ecological restoration on wildlife assemblages requires study on broad temporal and spatial scales. Longleaf pine (Pinus palustris) forests are imperiled due to fire suppression and subsequent invasion by hardwood trees. We employed a landscape-scale, randomized-block design to identify how reptile assemblages initially responded to restoration treatments including removal of hardwood trees via mechanical methods (felling and girdling), application of herbicides, or prescribed burning alone. Then, we examined reptile assemblages after all sites experienced more than a decade of prescribed burning at two- to three-year return intervals. Data were collected concurrently at reference sites chosen to represent target conditions for restoration. Reptile assemblages changed most rapidly in response to prescribed burning, but reptile assemblages at all sites, including reference sites, were generally indistinguishable by the end of the study. Thus, we suggest that prescribed burning in longleaf pine forests over long time periods is an effective strategy for restoring reptile assemblages to the reference condition. Application of herbicides or mechanical removal of hardwood trees provided no apparent benefit to reptiles beyond what was achieved by prescribed fire alone.
Two reference time scales for studying the dynamic cavitation of liquid films
NASA Technical Reports Server (NTRS)
Sun, D. C.; Brewe, D. E.
1992-01-01
Two formulas are derived: one for the characteristic time of filling a void with the vapor of the surrounding liquid, and one for the characteristic time of filling the void by diffusion of the dissolved gas in the liquid. By comparing these time scales with that of the dynamic operation of oil film bearings, it is concluded that the evaporation process is usually fast enough to fill the cavitation bubble with oil vapor, whereas the diffusion process is much too slow for the dissolved air to liberate itself and enter the cavitation bubble. These results imply that the formation of a two-phase fluid in dynamically loaded bearings, as often reported in the literature, is caused by air entrainment. They further indicate a way to simplify the treatment of the dynamic problem of bubble evolution.
Grid Convergence of High Order Methods for Multiscale Complex Unsteady Viscous Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
Grid convergence of several high order methods for the computation of rapidly developing complex unsteady viscous compressible flows with a wide range of physical scales is studied. The recently developed adaptive numerical dissipation control high order methods referred to as the ACM and wavelet filter schemes are compared with a fifth-order weighted ENO (WENO) scheme. The two 2-D compressible full Navier-Stokes models considered have no known analytical solutions or experimental data. Fine grid solutions from a standard second-order TVD scheme and a MUSCL scheme with limiters are used as reference solutions. The first model is a 2-D viscous analogue of a shock tube problem which involves complex shock/shear/boundary-layer interactions. The second model is a supersonic reactive flow concerning fuel breakup. The fuel mixing involves circular hydrogen bubbles in air interacting with a planar moving shock wave. Both models contain fine scale structures and are stiff in the sense that even though the flows develop rapidly, extreme grid refinement and time step restrictions are needed to resolve all the flow scales as well as the chemical reaction scales.
Ice-Accretion Scaling Using Water-Film Thickness Parameters
NASA Technical Reports Server (NTRS)
Anderson, David N.; Feo, Alejandro
2003-01-01
Studies were performed at INTA in Spain to determine water-film thickness on a stagnation-point probe inserted in a simulated cloud. The measurements were correlated with non-dimensional parameters describing the flow and the cloud conditions. Icing scaling tests in the NASA Glenn Icing Research Tunnel were then conducted using the Ruff scaling method with the scale velocity found by matching scale and reference values of either the INTA non-dimensional water-film thickness or a Weber number based on that film thickness. For comparison, tests were also performed using the constant drop-size Weber number and the average-velocity methods. The reference and scale models were both aluminum, 61-cm-span, NACA 0012 airfoil sections at 0 deg. AOA. The reference had a 53-cm-chord and the scale, 27 cm (1/2 size). Both models were mounted vertically in the center of the IRT test section. Tests covered a freezing fraction range of 0.28 to 1.0. Rime ice (n = 1.0) tests showed the consistency of the IRT calibration over a range of velocities. At a freezing fraction of 0.76, there was no significant difference in the scale ice shapes produced by the different methods. For freezing fractions of 0.40, 0.52 and 0.61, somewhat better agreement with the reference horn angles was typically achieved with the average-velocity and constant-film thickness methods than when either of the two Weber numbers was matched to the reference value. At a freezing fraction of 0.28, the four methods were judged equal in providing simulations of the reference shape.
ERIC Educational Resources Information Center
Edwards, Oliver W.; Paulin, Rachel V.
2007-01-01
This study investigates the convergent relations of the Reynolds Intellectual Assessment Scales (RIAS) and the Wechsler Intelligence Scale for Children--Fourth Edition (WISC-IV). Data from counterbalanced administrations of each instrument to 48 elementary school students referred for psychoeducational testing were examined. Analysis of the 96…
Acceptable Tolerances for Matching Icing Similarity Parameters in Scaling Applications
NASA Technical Reports Server (NTRS)
Anderson, David N.
2003-01-01
This paper reviews past work and presents new data to evaluate how changes in similarity parameters affect ice shapes and how closely scale values of the parameters should match reference values. Experimental ice shapes presented are from tests by various researchers in the NASA Glenn Icing Research Tunnel. The parameters reviewed are the modified inertia parameter (which determines the stagnation collection efficiency), accumulation parameter, freezing fraction, Reynolds number, and Weber number. It was demonstrated that a good match of scale and reference ice shapes could sometimes be achieved even when values of the modified inertia parameter did not match precisely. Consequently, there can be some flexibility in setting scale droplet size, which is the test condition determined from the modified inertia parameter. A recommended guideline is that the modified inertia parameter be chosen so that the scale stagnation collection efficiency is within 10 percent of the reference value. The scale accumulation parameter and freezing fraction should also be within 10 percent of their reference values. The Weber number based on droplet size and water properties appears to be a more important scaling parameter than one based on model size and air properties. Scale values of both the Reynolds and Weber numbers need to be in the range of 60 to 160 percent of the corresponding reference values. The effects of variations in other similarity parameters have yet to be established.
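The recommended tolerances above can be encoded as a simple acceptance check. The parameter names are illustrative shorthand of our own (beta_0 for stagnation collection efficiency, A_c for the accumulation parameter, n_0 for freezing fraction); the numeric bands come from the guidelines stated in the abstract:

```python
# Tolerance bands from the recommendations above; parameter names are illustrative.
TIGHT = {"beta_0", "A_c", "n_0"}  # within +/-10% of the reference value
LOOSE = {"Re", "We"}              # 60% to 160% of the reference value

def within_tolerance(param, scale, reference):
    """True if the scale value of `param` falls in its recommended band."""
    ratio = scale / reference
    if param in TIGHT:
        return 0.9 <= ratio <= 1.1
    if param in LOOSE:
        return 0.6 <= ratio <= 1.6
    raise ValueError(f"no guideline for {param}")

print(within_tolerance("n_0", 0.52, 0.50))  # True  (scale is 4% above reference)
print(within_tolerance("We", 1.8, 1.0))     # False (180% of reference)
```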
[Pain Intensity and Time to Death of Cancer Patients Referred to Palliative Care].
Barata, Pedro; Santos, Filipa; Mesquita, Graça; Cardoso, Alice; Custódio, Maria Paula; Alves, Marta; Papoila, Ana Luísa; Barbosa, António; Lawlor, Peter
2016-11-01
Pain is a common symptom experienced by cancer patients, especially those with advanced disease. Our aim was to describe pain intensity in advanced cancer patients referred to the palliative care unit, the factors underlying moderate to severe pain, and its prognostic value. This was a prospective observational study. All patients with metastatic solid tumors and no specific oncologic treatment were included. Pain intensity was assessed using the pain scale from the Edmonton Symptom Assessment Scale, rated from 0 to 10 on a numerical scale, where zero = no pain and 10 = worst possible pain. Between October 2012 and June 2015, a total of 301 patients participated in the study. The median age was 69 years (37 - 94); most of the patients were men (57%) and 64.8% had a performance status of 3/4. About 42% reported pain severity ≥ 4 and 74% were medicated with opioids. Multivariate analysis indicated a correlation between performance status and reported pain (OR: 1.7; 95% CI: 1.0 - 2.7; p = 0.045). Median overall survival was 37 days (95% CI: 28 - 46). Patients reporting moderate to severe pain (pain severity ≥ 4) had a median survival of 29 days (95% CI: 21 - 37), compared with a median survival of 49 days (95% CI: 35 - 63) in those with no or mild pain (p = 0.022). Worse performance status was associated with more intense pain. Performance status, hospitalization, intra-abdominal metastases and opioid analgesia were associated with shorter time to death in advanced cancer patients referred to palliative care. Cancer pain continues to be a major clinical problem in advanced cancer patients.
Sub-seasonal predictability of water scarcity at global and local scale
NASA Astrophysics Data System (ADS)
Wanders, N.; Wada, Y.; Wood, E. F.
2016-12-01
Forecasting the water demand and availability for agriculture and energy production has been neglected in previous research, partly because most large-scale hydrological models lack the skill to forecast human water demands at the sub-seasonal time scale. We study the potential of a sub-seasonal water scarcity forecasting system for improved water management decision making and improved estimates of water demand and availability. We have generated 32 years of global sub-seasonal multi-model water availability, demand and scarcity forecasts. The quality of the forecasts is compared to a reference forecast derived from resampling historic weather observations. The newly developed system has been evaluated both at the global scale and in a real-time local application in the Sacramento valley for the Trinity, Shasta and Oroville reservoirs, where the water demand for agriculture and hydropower is high. On the global scale we find that the reference forecast shows high initial forecast skill (up to 8 months) for water scarcity in the eastern US, Central Asia and Sub-Saharan Africa. Adding dynamical sub-seasonal forecasts results in a clear improvement for most regions in the world, increasing the forecasts' lead time by 2 or more months on average. The strongest improvements are found in the US, Brazil, Central Asia and Australia. For the Sacramento valley we can accurately predict anomalies in the reservoir inflow, hydropower potential and the downstream irrigation water demand 6 months in advance. This allows us to forecast potential water scarcity in the Sacramento valley and adjust the reservoir management to prevent deficits in energy or irrigation water availability. The newly developed forecast system shows that it is possible to reduce the vulnerability to upcoming water scarcity events, and it allows optimization of the distribution of the available water between the agricultural and energy sectors half a year in advance.
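The reference forecast described above, built by resampling historic weather observations, amounts to a climatological ensemble: the forecast for a given calendar window is simply the set of past observations for that window. A minimal sketch, with a hypothetical record keyed by (year, month):

```python
def climatology_ensemble(obs, target_month, years):
    """Reference forecast for target_month: historic observations for that
    calendar month across past years (a resampled climatological ensemble)."""
    return [obs[(y, target_month)] for y in years if (y, target_month) in obs]

# Hypothetical monthly water-availability record (arbitrary units)
obs = {(1984, 6): 1.0, (1985, 6): 1.4, (1986, 6): 0.7}
ens = climatology_ensemble(obs, 6, range(1984, 1987))
print(ens)                            # [1.0, 1.4, 0.7]
print(round(sum(ens) / len(ens), 2))  # 1.03 -- ensemble-mean reference forecast
```

A dynamical forecast then "adds skill" only to the extent that it beats this ensemble.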
Improving the distinguishable cluster results: spin-component scaling
NASA Astrophysics Data System (ADS)
Kats, Daniel
2018-06-01
The spin-component scaling is employed in the energy evaluation to improve the distinguishable cluster approach. SCS-DCSD reaction energies reproduce reference values with a root-mean-squared deviation well below 1 kcal/mol; the interaction energies are three to five times more accurate than with DCSD, and molecular systems with a large amount of static electron correlation are still described reasonably well. SCS-DCSD represents a pragmatic approach to achieving chemical accuracy with a simple method without triples, which can also be applied to multi-configurational molecular systems.
The value of cows in reference populations for genomic selection of new functional traits.
Buch, L H; Kargo, M; Berg, P; Lassen, J; Sørensen, A C
2012-06-01
Today, almost all reference populations consist of progeny tested bulls. However, older progeny tested bulls do not have reliable estimated breeding values (EBV) for new traits. Thus, to be able to select for these new traits, it is necessary to build a reference population. We used a deterministic prediction model to test the hypothesis that the value of cows in reference populations depends on the availability of phenotypic records. To test the hypothesis, we investigated different strategies of building a reference population for a new functional trait over a 10-year period. The trait was either recorded on a large scale (30 000 cows per year) or on a small scale (2000 cows per year). For large-scale recording, we compared four scenarios where the reference population consisted of 30 sires; 30 sires and 170 test bulls; 30 sires and 2000 cows; or 30 sires, 2000 cows and 170 test bulls in the first year with measurements of the new functional trait. In addition to varying the make-up of the reference population, we also varied the heritability of the trait (h2 = 0.05 v. 0.15). The results showed that a reference population of test bulls, cows and sires results in the highest accuracy of the direct genomic values (DGV) for a new functional trait, regardless of its heritability. For small-scale recording, we compared two scenarios where the reference population consisted of the 2000 cows with phenotypic records or the 30 sires of these cows in the first year with measurements of the new functional trait. The results showed that a reference population of cows results in the highest accuracy of the DGV whether the heritability is 0.05 or 0.15, because variation is lost when phenotypic data on cows are summarized in EBV of their sires. 
The main conclusions from this study are: (i) the fewer the phenotypic records, the larger the effect of including cows in the reference population; (ii) for small-scale recording, the accuracy of the DGV will continue to increase for several years, whereas the increases in the accuracy of the DGV quickly diminish with large-scale recording; (iii) it is possible to achieve accuracies of the DGV that enable selection for new functional traits recorded on a large scale within 3 years from commencement of recording; and (iv) a higher heritability benefits a reference population of cows more than a reference population of bulls.
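Deterministic predictions of genomic accuracy are commonly based on formulas of the Daetwyler type, in which accuracy grows with the number of phenotyped reference animals N and the heritability h². The abstract does not state which prediction model was used, so the sketch below is purely illustrative, with an assumed number of independent chromosome segments:

```python
import math

def genomic_accuracy(n_records, h2, m_e=1000.0):
    """Daetwyler-type expected accuracy of direct genomic values,
    r = sqrt(N*h2 / (N*h2 + Me)); m_e is an assumed illustrative value."""
    return math.sqrt(n_records * h2 / (n_records * h2 + m_e))

# First-year reference populations: large-scale vs small-scale recording
for n in (30000, 2000):
    for h2 in (0.05, 0.15):
        print(f"n={n}, h2={h2}: r={genomic_accuracy(n, h2):.2f}")
```

Under these assumptions the large-scale population reaches useful accuracy immediately, while the small-scale one must accumulate records over years, consistent with conclusions (ii) and (iii).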
The 3D Reference Earth Model: Status and Preliminary Results
NASA Astrophysics Data System (ADS)
Moulik, P.; Lekic, V.; Romanowicz, B. A.
2017-12-01
In the 20th century, seismologists constructed models of how average physical properties (e.g. density, rigidity, compressibility, anisotropy) vary with depth in the Earth's interior. These one-dimensional (1D) reference Earth models (e.g. PREM) have proven indispensable in earthquake location, imaging of interior structure, understanding material properties under extreme conditions, and as a reference in other fields, such as particle physics and astronomy. Over the past three decades, new datasets motivated more sophisticated efforts that yielded models of how properties vary both laterally and with depth in the Earth's interior. Though these three-dimensional (3D) models exhibit compelling similarities at large scales, differences in the methodology, representation of structure, and dataset upon which they are based have prevented the creation of 3D community reference models. As part of the REM-3D project, we are compiling and reconciling reference seismic datasets of body wave travel-time measurements, fundamental mode and overtone surface wave dispersion measurements, and normal mode frequencies and splitting functions. These reference datasets are being inverted for a long-wavelength, 3D reference Earth model that describes the robust long-wavelength features of mantle heterogeneity. As a community reference model with fully quantified uncertainties and tradeoffs and an associated publicly available dataset, REM-3D will facilitate Earth imaging studies, earthquake characterization, inferences on temperature and composition in the deep interior, and be of improved utility to emerging scientific endeavors, such as neutrino geoscience. Here, we summarize progress made in the construction of the reference long period dataset and present a preliminary version of REM-3D in the upper mantle. 
In order to determine the level of detail warranted for inclusion in REM-3D, we analyze the spectrum of discrepancies between models inverted with different subsets of the reference dataset. This procedure allows us to evaluate the extent of consistency in imaging heterogeneity at various depths and between spatial scales.
Evaluation of the Water Film Weber Number in Glaze Icing Scaling
NASA Technical Reports Server (NTRS)
Tsao, Jen-Ching; Kreeger, Richard E.; Feo, Alejandro
2010-01-01
Icing scaling tests were performed in the NASA Glenn Icing Research Tunnel to evaluate a new scaling method, developed and proposed by Feo for glaze icing, in which the scale liquid water content and velocity were found by matching reference and scale values of the nondimensional water-film thickness expression and the film Weber number. For comparison purposes, tests were also conducted using the constant We_L method for velocity scaling. The reference tests used a full-span, fiberglass, 91.4-cm-chord NACA 0012 model with velocities of 76 and 100 knots and MVD sizes of 150 and 195 microns. The scale-to-reference model size ratio was 1:2.6. All tests were made at 0 deg AOA. Results will be presented for stagnation point freezing fractions of 0.3 and 0.5.
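The Weber numbers discussed in these icing-scaling entries are commonly written We = ρ V² L / σ, where the length scale L is the droplet MVD for the droplet Weber number or the water-film thickness for the film Weber number. A sketch with illustrative values (the specific numbers below are ours, not from the tests):

```python
def weber_number(rho, velocity, length, sigma):
    """We = rho * V^2 * L / sigma. L is the droplet MVD for the droplet
    Weber number, or the water-film thickness for the film Weber number."""
    return rho * velocity ** 2 * length / sigma

# Illustrative: 100 knots (~51.4 m/s), 150-micron MVD, water near 0 C
we = weber_number(rho=1000.0, velocity=51.4, length=150e-6, sigma=0.0756)
print(round(we))  # 5242
```

Matching the scale value of this quantity to the reference value then fixes the scale velocity once the scale length is set by the model size ratio.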
NASA Astrophysics Data System (ADS)
McGranaghan, Ryan M.; Mannucci, Anthony J.; Forsyth, Colin
2017-12-01
We explore the characteristics, controlling parameters, and relationships of multiscale field-aligned currents (FACs) using a rigorous, comprehensive, and cross-platform analysis. Our unique approach combines FAC data from the Swarm satellites and the Advanced Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) to create a database of small-scale (˜10-150 km, <1° latitudinal width), mesoscale (˜150-250 km, 1-2° latitudinal width), and large-scale (>250 km) FACs. We examine these data for the repeatable behavior of FACs across scales (i.e., the characteristics), the dependence on the interplanetary magnetic field orientation, and the degree to which each scale "departs" from nominal large-scale specification. We retrieve new information by utilizing magnetic latitude and local time dependence, correlation analyses, and quantification of the departure of smaller from larger scales. We find that (1) FAC characteristics and dependence on controlling parameters do not map between scales in a straightforward manner, (2) relationships between FAC scales exhibit local time dependence, and (3) the dayside high-latitude region is characterized by remarkably distinct FAC behavior when analyzed at different scales, and the locations of distinction correspond to "anomalous" ionosphere-thermosphere behavior. Comparing with nominal large-scale FACs, we find that differences are characterized by a horseshoe shape, maximizing across dayside local times, and that difference magnitudes increase when smaller-scale observed FACs are considered. We suggest that both new physics and increased resolution of models are required to address the multiscale complexities. We include a summary table of our findings to provide a quick reference for differences between multiscale FACs.
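The scale bands defined above translate directly into a classifier; the thresholds (in km of latitudinal width) come from the definitions given in this abstract:

```python
def fac_scale(latitudinal_width_km):
    """Classify a field-aligned current by latitudinal width, using the
    study's bands: ~10-150 km small, ~150-250 km meso, >250 km large."""
    if latitudinal_width_km < 150:
        return "small-scale"
    if latitudinal_width_km <= 250:
        return "mesoscale"
    return "large-scale"

print(fac_scale(80))   # small-scale
print(fac_scale(200))  # mesoscale
print(fac_scale(400))  # large-scale
```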
Modernization of a Koesters interferometer and high accuracy calibration of gauge blocks
NASA Astrophysics Data System (ADS)
França, R. S.; Silva, I. L. M.; Couceiro, I. B.; Torres, M. A. C.; Bessa, M. S.; Costa, P. A.; Oliveira, W., Jr.; Grieneisen, H. P. H.
2016-07-01
The Optical Metrology Division (Diopt) of Inmetro is responsible for maintaining the national reference of the length unit according to International System of Units (SI) definitions. The length unit is realized by interferometric techniques and is disseminated to the dimensional community through calibrations of gauge blocks. Calibration of large gauge blocks from 100 mm to 1000 mm has been performed by Diopt with a Koesters interferometer with reference to spectral lines of a krypton discharge lamp. Replacement of this lamp by frequency stabilized lasers, traceable now to the time and frequency scale, is described and the first results are reported.
Two reference time scales for studying the dynamic cavitation of liquid films
NASA Technical Reports Server (NTRS)
Sun, D. C.; Brewe, David E.
1991-01-01
Two formulas are derived: one for the characteristic time of filling a void with the vapor of the surrounding liquid, and one for the characteristic time of filling the void by diffusion of the dissolved gas in the liquid. Based on this analysis, it is seen that in an oil film bearing operating under dynamic loads, the content of the cavitation region should be oil vapor rather than air liberated from solution, if the oil is free of entrained air.
A Causal Contiguity Effect That Persists across Time Scales
ERIC Educational Resources Information Center
Kilic, Asli; Criss, Amy H.; Howard, Marc W.
2013-01-01
The contiguity effect refers to the tendency to recall an item from nearby study positions of the just recalled item. Causal models of contiguity suggest that recalled items are used as probes, causing a change in the memory state for subsequent recall attempts. Noncausal models of the contiguity effect assume the memory state is unaffected by…
NASA Astrophysics Data System (ADS)
Tesmer, Volker; Boehm, Johannes; Heinkelmann, Robert; Schuh, Harald
2007-06-01
This paper compares estimated terrestrial reference frames (TRF) and celestial reference frames (CRF) as well as position time-series in terms of systematic differences, scale, annual signals and station position repeatabilities using four different tropospheric mapping functions (MF): The NMF (Niell Mapping Function) and the recently developed GMF (Global Mapping Function) consist of easy-to-handle stand-alone formulae, whereas the IMF (Isobaric Mapping Function) and the VMF1 (Vienna Mapping Function 1) are determined from numerical weather models. All computations were performed at the Deutsches Geodätisches Forschungsinstitut (DGFI) using the OCCAM 6.1 and DOGS-CS software packages for Very Long Baseline Interferometry (VLBI) data from 1984 until 2005. While it turned out that CRF estimates only slightly depend on the MF used, showing small systematic effects up to 0.025 mas, some station heights of the computed TRF change by up to 13 mm. The best agreement was achieved for the VMF1 and GMF results concerning the TRFs, and for the VMF1 and IMF results concerning scale variations and position time-series. The amplitudes of the annual periodic signals in the time-series of estimated heights differ by up to 5 mm. The best precision in terms of station height repeatability is found for the VMF1, which is 5-7% better than for the other MFs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, H.S.; Stone, C.M.; Krieg, R.D.
Several large scale in situ experiments in bedded salt formations are currently underway at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, USA. In these experiments, the thermal and creep responses of salt around several different underground room configurations are being measured. Data from the tests are to be compared to thermal and structural responses predicted in pretest reference calculations. The purpose of these comparisons is to evaluate computational models developed from laboratory data prior to fielding of the in situ experiments. In this paper, the computational models used in the pretest reference calculation for one of the large scale tests, the Overtest for Defense High Level Waste, are described, and the pretest computed thermal and structural responses are compared to early data from the experiment. The comparisons indicate that computed and measured temperatures for the test agree to within ten percent, but that measured deformation rates are between two and three times greater than corresponding computed rates. 10 figs., 3 tabs.
Optimal Control Modification Adaptive Law for Time-Scale Separated Systems
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations seen with standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. A model matching condition in the transformed time coordinate results in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.
Optimal Control Modification for Time-Scale Separated Systems
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations associated with standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time-scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. A model matching condition in the transformed time coordinate results in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.
ERIC Educational Resources Information Center
Salazar, LeRoy; And Others
This resource for trainers involved in irrigated agriculture training for Peace Corps volunteers consists of two parts: an irrigation training manual and an irrigation reference manual. The complete course should fully prepare volunteers serving as irrigation specialists to plan, implement, evaluate and manage small-scale irrigation projects in arid,…
Link calibration against receiver calibration: an assessment of GPS time transfer uncertainties
NASA Astrophysics Data System (ADS)
Rovera, G. D.; Torre, J.-M.; Sherwood, R.; Abgrall, M.; Courde, C.; Laas-Bourez, M.; Uhrich, P.
2014-10-01
We present a direct comparison between two different techniques for the relative calibration of time transfer between remote time scales when using the signals transmitted by the Global Positioning System (GPS). Relative calibration estimates the delay of equipment or the delay of a time transfer link with respect to reference equipment. It is based on the circulation of travelling GPS equipment between the stations in the network, against which the local equipment is measured. Two techniques can be considered: first, a station calibration through the computation of the hardware delays of the local GPS equipment; second, the computation of a global hardware delay offset for the time transfer between the reference points of two remote time scales. The latter technique is called a ‘link’ calibration, in contrast with the former, a ‘receiver’ calibration. The two techniques require different measurements on site, which changes the uncertainty budgets, and we discuss this and related issues. We report on one calibration campaign organized during Autumn 2013 between Observatoire de Paris (OP), Paris, France, Observatoire de la Côte d'Azur (OCA), Calern, France, and the NERC Space Geodesy Facility (SGF), Herstmonceux, United Kingdom. The travelling equipment comprised two GPS receivers of different types, along with the required signal generator and distribution amplifier, and one time interval counter. We show the different ways to compute uncertainty budgets, leading to improvement factors of 1.2 to 1.5 on the hardware delay uncertainties when comparing the relative link calibration to the relative receiver calibration.
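The closure arithmetic behind a relative calibration can be sketched as follows; the common-clock difference series, their values, and the purely type-A statistical treatment are invented for illustration and omit the type-B terms that dominate a real uncertainty budget:

```python
import numpy as np

# Illustrative common-clock difference series (ns) between the travelling
# receiver and the local receiver at each laboratory; values are invented.
rng = np.random.default_rng(0)
cc_lab_a = 12.4 + 0.3 * rng.standard_normal(500)   # at laboratory A
cc_lab_b = -5.1 + 0.3 * rng.standard_normal(500)   # at laboratory B

# Per-lab mean offsets and their standard uncertainties (type A only)
d_a, u_a = cc_lab_a.mean(), cc_lab_a.std(ddof=1) / np.sqrt(cc_lab_a.size)
d_b, u_b = cc_lab_b.mean(), cc_lab_b.std(ddof=1) / np.sqrt(cc_lab_b.size)

# The link-calibration correction for A->B time transfer is the difference
# of the two closure measurements; type-A uncertainties add in quadrature.
link_offset = d_a - d_b
link_u = np.hypot(u_a, u_b)
```

A receiver calibration would instead assign a separate hardware delay to each receiver, which requires additional on-site measurements and hence a different uncertainty budget.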
Positional reference system for ultraprecision machining
Arnold, Jones B.; Burleson, Robert R.; Pardue, Robert M.
1982-01-01
A stable positional reference system for use in improving the cutting tool-to-part contour position in numerical controlled-multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.
Positional reference system for ultraprecision machining
Arnold, J.B.; Burleson, R.R.; Pardue, R.M.
1980-09-12
A stable positional reference system for use in improving the cutting tool-to-part contour position in numerical controlled-multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.
Multi-scale graph-cut algorithm for efficient water-fat separation.
Berglund, Johan; Skorpil, Mikael
2017-09-01
To improve the accuracy and robustness to noise in water-fat separation by unifying the multi-scale and graph-cut based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) of the reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
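The coarse-to-fine propagation step can be sketched schematically; this toy nearest-neighbour version, with invented field-map values, stands in for the paper's actual QPBO solver and interpolation scheme:

```python
import numpy as np

def propagate_coarse_to_fine(coarse, fine, resolved):
    """Fill unresolved fine-scale field-map voxels from the coarse scale.

    coarse   : (H//2, W//2) field-map estimate at the coarser scale
    fine     : (H, W) field-map estimate at the fine scale (partial)
    resolved : (H, W) boolean mask of voxels the fine-scale solver resolved
    Returns the fine map with unresolved voxels taken from the coarse map,
    upsampled by nearest-neighbour replication.
    """
    upsampled = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    return np.where(resolved, fine, upsampled)

coarse = np.full((2, 2), 50.0)      # coarse scale: ~50 Hz everywhere
fine = np.zeros((4, 4))
fine[:2, :] = 48.0                  # solver resolved only the top half
resolved = np.zeros((4, 4), dtype=bool)
resolved[:2, :] = True
field = propagate_coarse_to_fine(coarse, fine, resolved)
```

Resolved voxels keep their fine-scale estimate (48 here), while unresolved voxels inherit the coarse-scale value (50), which is the general shape of the multi-scale hand-off the abstract describes.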
Knouse, Laura E.; Barkley, Russell A.; Murphy, Kevin R.
2012-01-01
Background Deficits in executive functioning (EF) are implicated in neurobiological and cognitive-processing theories of depression. EF deficits are also associated with Attention-deficit/hyperactivity disorder (ADHD) in adults, who are also at increased risk for depressive disorders. Given debate about the ecological validity of laboratory measures of EF, we investigated the relationship between depression diagnoses and symptoms and EF as measured by both rating scales and tests in a sample of adults referred for evaluation of adult ADHD. Method Data from two groups of adults recruited from an ADHD specialty clinic were analyzed together: adults diagnosed with ADHD (N=146) and a clinical control group of adults referred for adult ADHD assessment but not diagnosed with the disorder (N=97). EF was assessed using a rating scale of EF deficits in daily life and a battery of tests tapping various EF constructs. Depression was assessed using current and lifetime SCID diagnoses (major depression, dysthymia) and self-report symptom ratings. Results EF as assessed via rating scale predicted depression across measures even when controlling for current anxiety and impairment. Self-Management to Time and Self-Organization and Problem-Solving showed the most robust relationships. EF tests were weakly and inconsistently related to depression measures. Limitations Prospective studies are needed to rigorously evaluate EF problems as true risk factors for depressive onset. Conclusions EF problems in everyday life were important predictors of depression. Researchers and clinicians should consistently assess for the ADHD-depression comorbidity. Clinicians should consider incorporating strategies to address EF deficits when treating people with depression. PMID:22858220
Linear static structural and vibration analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.
1993-01-01
Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
Christensen, Bruce K; Girard, Todd A; Bagby, R Michael
2007-06-01
An eight-subtest short form (SF8) of the Wechsler Adult Intelligence Scale, Third Edition (WAIS-III), maintaining equal representation of each index factor, was developed for use with psychiatric populations. Data were collected from a mixed inpatient/outpatient sample (99 men and 101 women) referred for neuropsychological assessment. Psychometric analyses revealed an optimal SF8 comprising Vocabulary, Similarities, Arithmetic, Digit Span, Picture Completion, Matrix Reasoning, Digit Symbol Coding, and Symbol Search, scored by linear scaling. Expanding on previous short forms, the current SF8 maximizes the breadth of information and reduces administration time while maintaining the original WAIS-III factor structure. (c) 2007 APA, all rights reserved
Time and Space in Tzeltal: Is the Future Uphill?
Brown, Penelope
2012-01-01
Linguistic expressions of time often draw on spatial language, which raises the question of whether cultural specificity in spatial language and cognition is reflected in thinking about time. In the Mayan language Tzeltal, spatial language relies heavily on an absolute frame of reference utilizing the overall slope of the land, distinguishing an “uphill/downhill” axis oriented from south to north, and an orthogonal “crossways” axis (sunrise-set) on the basis of which objects at all scales are located. Does this absolute system for calculating spatial relations carry over into construals of temporal relations? This question was explored in a study where Tzeltal consultants produced temporal expressions and performed two different non-linguistic temporal ordering tasks. The results show that at least five distinct schemata for conceptualizing time underlie Tzeltal linguistic expressions: (i) deictic ego-centered time, (ii) time as an ordered sequence (e.g., “first”/“later”), (iii) cyclic time (times of the day, seasons), (iv) time as spatial extension or location (e.g., “entering/exiting July”), and (v) a time vector extending uphillwards into the future. The non-linguistic task results showed that the “time moves uphillwards” metaphor, based on the absolute frame of reference prevalent in Tzeltal spatial language and thinking and important as well in the linguistic expressions for time, is not strongly reflected in responses on these tasks. It is argued that systematic and consistent use of spatial language in an absolute frame of reference does not necessarily transfer to consistent absolute time conceptualization in non-linguistic tasks; time appears to be more open to alternative construals. PMID:22787451
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michaud, G.; Bergeron, P.; Wesemael, F.
The abundance anomalies generated by diffusion in the envelopes of hot, hydrogen-rich subdwarfs are studied. It is shown that unimpeded diffusion cannot lead to the large silicon underabundance observed in those stars at effective temperatures above 30,000 K. Calculations of diffusion of heavy elements in the presence of mass loss are also performed. For a mass-loss rate of 2.5 x 10 to the -15th solar masses/year, the observed abundance patterns of C, N, and Si are reproduced on a time scale of about 100,000 yr. Lower mass-loss rates would necessitate longer time scales. The pattern of abundance anomalies may eventually be used to constrain both the mass-loss rate and the stellar lifetime in the sdB evolutionary phase. 12 references.
NASA Astrophysics Data System (ADS)
Gao, H.; Zhang, S.; Nijssen, B.; Zhou, T.; Voisin, N.; Sheffield, J.; Lee, K.; Shukla, S.; Lettenmaier, D. P.
2017-12-01
Despite its errors and uncertainties, the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis real-time product (TMPA-RT) has been widely used for hydrological monitoring and forecasting due to its timely availability for real-time applications. To evaluate the utility of TMPA-RT in hydrologic predictions, many studies have compared modeled streamflows driven by TMPA-RT against gauge data. However, because of the limited availability of streamflow observations in data sparse regions, there is still a lack of comprehensive comparisons for TMPA-RT based hydrologic predictions at the global scale. Furthermore, its skill is expected to be lower at the subbasin scale than at the basin scale. In this study, we evaluate and characterize the utility of the TMPA-RT product over selected global river basins during the period of 1998 to 2015 using the TMPA research product (TMPA-RP) as a reference. The Variable Infiltration Capacity (VIC) model, which was calibrated and validated previously, is adopted to simulate streamflows driven by TMPA-RT and TMPA-RP, respectively. The objective of this study is to analyze the spatial and temporal characteristics of the hydrologic predictions by answering the following questions: (1) How do the precipitation errors associated with the TMPA-RT product transform into streamflow errors with respect to geographical and climatological characteristics? (2) How do streamflow errors vary across scales within a basin?
NASA Technical Reports Server (NTRS)
Shine, R. A.
1975-01-01
The problem of LTE and non-LTE line formation in the presence of nonthermal velocity fields with geometric scales between the microscopic and macroscopic limits is investigated in the cases of periodic sinusoidal and sawtooth waves. For a fixed source function (the LTE case), it is shown that time-averaged line profiles progress smoothly from the microscopic to the macroscopic limits as the geometric scale of the motions increases, that the sinusoidal motions produce symmetric time-averaged profiles, and that the sawtooth motions cause a redshift. In several idealized non-LTE cases, it is found that intermediate-scale velocity fields can significantly increase the surface source functions and line-core intensities. Calculations are made for a two-level atom in an isothermal atmosphere for a range of velocity scales and non-LTE coupling parameters and also for a two-level atom and a four-level representation of Na I line formation in the Harvard-Smithsonian Reference Atmosphere (1971) solar model. It is found that intermediate-scale velocity fields in the solar atmosphere could explain the central intensities of the Na I D lines and other strong absorption lines without invoking previously suggested high electron densities.
EOP and scale from continuous VLBI observing: CONT campaigns to future VGOS networks
NASA Astrophysics Data System (ADS)
MacMillan, D. S.
2017-07-01
Continuous (CONT) VLBI campaigns have been carried out about every 3 years since 2002. The basic idea of these campaigns is to acquire state-of-the-art VLBI data over a continuous time period of about 2 weeks to demonstrate the highest accuracy of which the current VLBI system is capable. In addition, these campaigns support scientific studies such as investigations of high-resolution Earth rotation, reference frame stability, and daily to sub-daily site motions. The size of the CONT networks and the observing data rate have increased steadily since 1994. Performance of these networks based on reference frame scale precision and polar motion/LOD comparison with global navigation satellite system (GNSS) earth orientation parameters (EOP) has been substantially better than the weekly operational R1 and R4 series. The precisions of CONT EOP and scale have improved by more than a factor of two since 2002. Polar motion precision based on the WRMS difference between VLBI and GNSS for the most recent CONT campaigns is at the 30 μas level, which is comparable to that of GNSS. The CONT campaigns are a natural precursor to the planned future VLBI observing networks, which are expected to observe continuously. We compare the performance of the most recent CONT campaigns in 2011 and 2014 with the expected performance of the future VLBI global observing system network using simulations. These simulations indicate that the expected future precision of scale and EOP will be at least 3 times better than the current CONT precision.
Large-scale seismic waveform quality metric calculation using Hadoop
Magana-Zook, Steven; Gaylord, Jessie M.; Knapp, Douglas R.; ...
2016-05-27
In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of ~0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. We conducted these experiments multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale.
Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely require significant changes in other parts of our infrastructure. Nevertheless, we anticipate that as the technology matures and third-party tool vendors make it easier to manage and operate clusters, Hadoop (or a successor) will play a large role in our seismic data processing.
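The model-based extrapolation described above can be sketched as a simple least-squares runtime fit; the size/time measurements and throughputs below are invented and only mirror the shape of the analysis, not the paper's measured data:

```python
import numpy as np

# Invented (dataset size in TB, wall time in hours) measurements for two
# implementations; a linear runtime model is fitted to each and then
# extrapolated, mirroring the paper's model-based speedup prediction.
sizes = np.array([1.0, 2.5, 5.1, 10.0, 43.0])
t_reference = 0.5 + sizes / 0.56   # ~0.56 TB/h plus a fixed startup cost
t_spark = 0.2 + sizes / 8.4        # hypothetical faster throughput

def fit_runtime_model(sizes, times):
    """Least-squares fit of time = intercept + slope * size."""
    slope, intercept = np.polyfit(sizes, times, 1)
    return lambda s: intercept + slope * s

ref_model = fit_runtime_model(sizes, t_reference)
spark_model = fit_runtime_model(sizes, t_spark)
speedup_350 = ref_model(350.0) / spark_model(350.0)
```

Because runtime is dominated by the per-terabyte term at large sizes, the predicted speedup at 350 TB approaches the ratio of the two throughputs; in the paper this extrapolation is what yields the ~265x figure for a 100-node Spark cluster.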
ERIC Educational Resources Information Center
Nelson, Jason M.; Canivez, Gary L.; Lindstrom, Will; Hatt, Clifford V.
2007-01-01
The factor structure of the Reynolds Intellectual Assessment Scales (RIAS; [Reynolds, C.R., & Kamphaus, R.W. (2003). "Reynolds Intellectual Assessment Scales". Lutz, FL: Psychological Assessment Resources, Inc.]) was investigated with a large (N=1163) independent sample of referred students (ages 6-18). More rigorous factor extraction criteria…
Methods for Scaling Icing Test Conditions
NASA Technical Reports Server (NTRS)
Anderson, David N.
1995-01-01
This report presents the results of tests at NASA Lewis to evaluate several methods to establish suitable alternative test conditions when the test facility limits the model size or operating conditions. The first method was proposed by Olsen. It can be applied when full-size models are tested and all the desired test conditions except liquid-water content can be obtained in the facility. The other two methods discussed are: a modification of the French scaling law and the AEDC scaling method. Icing tests were made with cylinders at both reference and scaled conditions representing mixed and glaze ice in the NASA Lewis Icing Research Tunnel. Reference and scale ice shapes were compared to evaluate each method. The Olsen method was tested with liquid-water content varying from 1.3 to 0.8 g/m^3. Over this range, ice shapes produced using the Olsen method were unchanged. The modified French and AEDC methods produced scaled ice shapes which approximated the reference shapes when model size was reduced to half the reference size for the glaze-ice cases tested.
NASA Astrophysics Data System (ADS)
He, Jiayi; Shang, Pengjian; Xiong, Hui
2018-06-01
Stocks, as the concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize the stock data to recognize their patterns through the dissimilarity matrix based on modified cross-sample entropy, then three-dimensional perceptual maps of the results are provided through multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper, that is, multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta based cross-sample entropy and permutation based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparisons. Our analysis reveals a clear clustering both in synthetic data and 18 indices from diverse stock markets. It implies that time series generated by the same model are easier to have similar irregularity than others, and the difference in the stock index, which is caused by the country or region and the different financial policies, can reflect the irregularity in the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, the ones generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions: Europe; North America; South America; the Asia-Pacific region (with the exception of mainland China); and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
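The embedding step shared by MDS-KCSE, MDS-PCSE and MDSC is classical MDS applied to a dissimilarity matrix. A minimal sketch follows, with a toy two-cluster matrix standing in for the entropy-based dissimilarities the paper actually uses:

```python
import numpy as np

def classical_mds(d, k=3):
    """Classical multidimensional scaling of a symmetric dissimilarity
    matrix d into k dimensions via double centering."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j              # double-centred Gram matrix
    w, v = np.linalg.eigh(b)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the k largest
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy dissimilarity matrix: two tight clusters {0,1} and {2,3}; in the
# paper the entries would come from cross-sample entropy, not distances.
d = np.array([[0.0, 0.1, 1.0, 1.0],
              [0.1, 0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 0.1],
              [1.0, 1.0, 0.1, 0.0]])
coords = classical_mds(d, k=2)
```

The embedding places the two low-dissimilarity pairs close together and the two clusters far apart, which is the property the perceptual maps rely on when grouping stock indices by region.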
A global meta-analysis on the ecological drivers of forest restoration success
Crouzeilles, Renato; Curran, Michael; Ferreira, Mariana S.; Lindenmayer, David B.; Grelle, Carlos E. V.; Rey Benayas, José M.
2016-01-01
Two billion ha have been identified globally for forest restoration. Our meta-analysis encompassing 221 study landscapes worldwide reveals forest restoration enhances biodiversity by 15–84% and vegetation structure by 36–77%, compared with degraded ecosystems. For the first time, we identify the main ecological drivers of forest restoration success (defined as a return to a reference condition, that is, old-growth forest) at both the local and landscape scale. These are as follows: the time elapsed since restoration began, disturbance type and landscape context. The time elapsed since restoration began strongly drives restoration success in secondary forests, but not in selectively logged forests (which are more ecologically similar to reference systems). Landscape restoration will be most successful when previous disturbance is less intensive and habitat is less fragmented in the landscape. Restoration does not result in full recovery of biodiversity and vegetation structure, but can complement old-growth forests if there is sufficient time for ecological succession. PMID:27193756
Van Gorder, Robert A
2013-04-01
We provide a formulation of the local induction approximation (LIA) for the motion of a vortex filament in the Cartesian reference frame (the extrinsic coordinate system) which allows for scaling of the reference coordinate. For general monotone scalings of the reference coordinate, we derive an equation for the planar solution to the derivative nonlinear Schrödinger equation governing the LIA. We proceed to solve this equation perturbatively in small amplitude through an application of multiple-scales analysis, which allows for accurate computation of the period of the planar vortex filament. The perturbation result is shown to agree strongly with numerical simulations, and we also relate this solution back to the solution obtained in the arclength reference frame (the intrinsic coordinate system). Finally, we discuss nonmonotone coordinate scalings and their application for finding self-intersections of vortex filaments. These self-intersecting vortex filaments are likely unstable and collapse into other structures or dissipate completely.
Detection of weak signals in memory thermal baths.
Jiménez-Aquino, J I; Velasco, R M; Romero-Bastida, M
2014-11-01
The nonlinear relaxation time and the statistics of the first passage time distribution in connection with the quasideterministic approach are used to detect weak signals in the decay process of the unstable state of a Brownian particle embedded in memory thermal baths. The study is performed in the overdamped approximation of a generalized Langevin equation characterized by an exponential decay in the friction memory kernel. A detection criterion for each time scale is studied: The first one is referred to as the receiver output, which is given as a function of the nonlinear relaxation time, and the second one is related to the statistics of the first passage time distribution.
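A Markovian caricature of the escape process can illustrate how a weak signal shortens first-passage times; the drift, noise strength, and threshold below are illustrative assumptions, and the exponential friction memory kernel of the generalized Langevin equation is deliberately omitted:

```python
import numpy as np

def first_passage_times(a=1.0, signal=0.0, d=1e-4, threshold=1.0,
                        dt=1e-3, n_traj=2000, seed=1):
    """Monte Carlo first-passage times for escape from an unstable state.

    Markovian overdamped sketch: dx = (a*x + signal) dt + sqrt(2*d*dt)*xi,
    started at the unstable point x = 0; a trajectory's first-passage time
    is recorded when |x| first reaches the threshold.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_traj)
    t, fpt = 0.0, np.full(n_traj, np.nan)
    alive = np.ones(n_traj, dtype=bool)
    while alive.any() and t < 50.0:
        x[alive] += (a * x[alive] + signal) * dt \
                    + np.sqrt(2 * d * dt) * rng.standard_normal(alive.sum())
        t += dt
        crossed = alive & (np.abs(x) >= threshold)
        fpt[crossed] = t
        alive &= ~crossed
    return fpt

fpt_no_signal = first_passage_times(signal=0.0)
fpt_signal = first_passage_times(signal=0.05)
```

Even a signal well below the threshold biases the early, noise-dominated stage of the decay, so the mean first-passage time with the signal is measurably shorter; a detection criterion can then be built on this shift, in the spirit of the statistics the abstract describes.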
Climate, orography and scale controls on flood frequency in Triveneto (Italy)
NASA Astrophysics Data System (ADS)
Persiano, Simone; Castellarin, Attilio; Salinas, Jose Luis; Domeneghetti, Alessio; Brath, Armando
2016-05-01
The growing concern about the possible effects of climate change on flood frequency regime is leading Authorities to review previously proposed reference procedures for design-flood estimation, such as national flood frequency models. Our study focuses on Triveneto, a broad geographical region in North-eastern Italy. A reference procedure for design flood estimation in Triveneto is available from the Italian CNR research project "VA.PI.", which considered Triveneto as a single homogeneous region and developed a regional model using annual maximum series (AMS) of peak discharges that were collected up to the 1980s by the former Italian Hydrometeorological Service. We consider a very detailed AMS database that we recently compiled for 76 catchments located in Triveneto. All 76 study catchments are characterized in terms of several geomorphologic and climatic descriptors. The objective of our study is threefold: (1) to inspect climatic and scale controls on flood frequency regime; (2) to verify the possible presence of changes in flood frequency regime by looking at changes in time of regional L-moments of annual maximum floods; (3) to develop an updated reference procedure for design flood estimation in Triveneto by using a focused-pooling approach (i.e. Region of Influence, RoI).
Our study leads to the following conclusions: (1) climatic and scale controls on flood frequency regime in Triveneto are similar to the controls that were recently found in Europe; (2) a single year characterized by extreme floods can have a remarkable influence on regional flood frequency models and analyses for detecting possible changes in flood frequency regime; (3) no significant change was detected in the flood frequency regime, yet an update of the existing reference procedure for design flood estimation is highly recommended and we propose the RoI approach for properly representing climate and scale controls on flood frequency in Triveneto, which cannot be regarded as a single homogeneous region.
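The regional L-moment statistics referred to in objective (2) can be computed from an annual maximum series with Hosking's probability-weighted-moment estimators; a minimal sketch follows, where the Gumbel-distributed sample is synthetic:

```python
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments (l1, l2, l3) via unbiased
    probability-weighted moments b0, b1, b2 (Hosking's estimators)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l3

# Synthetic AMS of peak discharges (Gumbel, an assumed flood-like model);
# a regional analysis would pool the dimensionless ratios t = l2/l1
# (L-CV) and t3 = l3/l2 (L-skewness) across the 76 catchments.
rng = np.random.default_rng(42)
ams = rng.gumbel(loc=100.0, scale=30.0, size=200)
l1, l2, l3 = sample_l_moments(ams)
lcv, lskew = l2 / l1, l3 / l2
```

Tracking these ratios over moving time windows is one simple way to look for the changes in flood frequency regime that objective (2) investigates; for a Gumbel sample the L-skewness should sit near its theoretical value of about 0.17.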
NASA Astrophysics Data System (ADS)
Ribera, M.; Gopal, S.
2014-12-01
Productivity hotspots are traditionally defined as concentrations of relatively high biomass compared to global reference values. These hotspots often signal atypical processes occurring in a location, and identifying them is a useful first step toward understanding the complexity inherent in the system. However, identifying local hotspots can be difficult when an overarching global pattern (i.e. spatial autocorrelation) already exists. This problem is particularly apparent in marine ecosystems because values of productivity in near-shore areas are consistently higher than those of the open ocean due to oceanographic processes such as upwelling. In such cases, if the global reference layer used to detect hotspots is too wide, hotspots may be identified only near the coast while known concentrations of organisms in offshore waters are missed. On the other hand, if the global reference layer is too small, every single location may be considered a hotspot. We applied spatial and traditional statistics to remote sensing data to determine the optimal global reference spatial scale for identifying marine productivity hotspots in the Gulf of Maine. Our iterative process measured Getis and Ord's local G* statistic at different global scales until the variance of each hotspot was maximized. We tested this process with different full-resolution MERIS chlorophyll layers (300 m spatial resolution) for the whole Gulf of Maine. We concluded that the optimal global scale depends on the time of year the remote sensing data were collected, particularly when coinciding with known seasonal phytoplankton blooms. The hotspots found through this process were also spatially heterogeneous in size, with larger hotspots offshore than inshore. These results may be instructive for both managers and fisheries researchers as they adapt their fisheries management policies and methods to an ecosystem-based management (EBM) approach.
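The Getis-Ord local G* statistic underlying this hotspot detection can be sketched in one dimension with binary weights; the chlorophyll transect below is hypothetical, and real applications use a 2-D raster and spatial weight matrices.

```python
from math import sqrt

def local_g_star(values, i, radius):
    """Getis-Ord Gi* (z-score form) for position i of a 1-D series,
    using binary weights w_ij = 1 within `radius` (self included)."""
    n = len(values)
    xbar = sum(values) / n
    s = sqrt(sum(v * v for v in values) / n - xbar ** 2)   # global SD
    nbrs = [values[j] for j in range(max(0, i - radius), min(n, i + radius + 1))]
    w = len(nbrs)                                          # sum of binary weights
    num = sum(nbrs) - xbar * w
    den = s * sqrt((n * w - w * w) / (n - 1))
    return num / den

# Hypothetical chlorophyll transect: a high-productivity patch around index 10
chl = [1.0] * 20
for j in (9, 10, 11):
    chl[j] = 5.0
g_hot = local_g_star(chl, 10, radius=1)    # centred on the patch: strongly positive
g_cold = local_g_star(chl, 3, radius=1)    # background water: slightly negative
```

Enlarging the "global" extent (the values entering `xbar` and `s`) changes which locations exceed a significance threshold, which is exactly the reference-scale sensitivity the study exploits.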
NASA Astrophysics Data System (ADS)
Lhermitte, S.; Tips, M.; Verbesselt, J.; Jonckheere, I.; Van Aardt, J.; Coppin, Pol
2005-10-01
Large-scale wild fires have direct impacts on natural ecosystems and play a major role in vegetation ecology and the carbon budget. Accurate methods for describing post-fire development of vegetation are therefore essential for the understanding and monitoring of terrestrial ecosystems. Time series analysis of satellite imagery offers the potential to quantify these parameters with spatial and temporal accuracy. Current research focuses on the potential of time series analysis of SPOT Vegetation S10 data (1999-2001) to quantify the vegetation recovery of large-scale burns detected in the framework of GBA2000. The objective of this study was to provide quantitative estimates of the spatio-temporal variation of vegetation recovery based on remote sensing indicators. Southern Africa was used as a pilot study area, given the availability of ground and satellite data. An automated technique was developed to extract consistent indicators of vegetation recovery from the SPOT-VGT time series. Reference areas were used to quantify the vegetation regrowth by means of Regeneration Indices (RI). Two kinds of recovery indicators (time- and value-based) were tested for RIs of NDVI, SR, SAVI, NDWI, and pure band information. The effects of vegetation structure and temporal fire regime features on the recovery indicators were subsequently analyzed. Statistical analyses were conducted to assess whether the recovery indicators were different for different vegetation types and dependent on timing of the burning season. Results highlighted the importance of appropriate reference areas and the importance of correct normalization of the SPOT-VGT data.
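The abstract does not give the exact RI formula; one common form normalizes the burned-pixel signal by a matched unburned reference area, so RI approaches 1 as vegetation recovers. The sketch below uses that assumed form with hypothetical NDVI composites.

```python
def regeneration_index(burn, ref):
    """RI(t): vegetation index of a burned pixel normalised by a matched
    unburned reference pixel; RI -> 1 as the vegetation recovers.
    (Assumed form; the paper's exact definition may differ.)"""
    return [b / r for b, r in zip(burn, ref)]

def recovery_time(ri, threshold=0.9):
    """A simple time-based indicator: first composite at which RI
    reaches the threshold; None if never reached."""
    for t, v in enumerate(ri):
        if v >= threshold:
            return t
    return None

# Hypothetical 10-day NDVI composites after a burn
ndvi_burn = [0.20, 0.25, 0.32, 0.40, 0.48, 0.55, 0.60]
ndvi_ref  = [0.60, 0.62, 0.61, 0.63, 0.62, 0.64, 0.63]
ri = regeneration_index(ndvi_burn, ndvi_ref)
t_rec = recovery_time(ri, threshold=0.9)
```

Normalizing by a reference area, rather than by pre-fire values alone, is what removes shared seasonal and atmospheric effects, which is why the choice of reference areas matters so much in the study's results.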
Role of the BIPM in UTC Dissemination to the Real Time User
NASA Technical Reports Server (NTRS)
Quinn, T. J.; Thomas, C.
1996-01-01
The generation and dissemination of International Atomic Time (TAI) and Coordinated Universal Time (UTC) are explicitly mentioned in the list of principal tasks of the Bureau International des Poids et Mesures (BIPM) that appears in the Comptes Rendus of the 18th Conférence Générale des Poids et Mesures (1987). These time scales are used as the ultimate reference in the most demanding scientific applications and must, therefore, be of the best metrological quality in terms of reliability, long-term stability, and conformity of the scale interval with the second, the unit of time of the International System of Units. To meet these requirements, the readings of the atomic clocks spread all over the world that are used as basic timing data for TAI and UTC generation must be combined in the most efficient way possible. In particular, taking full advantage of the quality of each contributing clock calls for observation of its performance over a sufficiently long time. At present, the computation treats data in blocks covering two months. TAI and UTC are thus deferred-time scales that cannot be immediately available to real-time users. The BIPM can, nevertheless, be of help to real-time users. The predictability of UTC is a fundamental attribute of the scale for institutions responsible for the dissemination of real-time time scales: it allows them to improve their local representations of UTC and thus implement a more thorough steering of the time scales disseminated in real time. With a view to improving the predictability of UTC, the BIPM examines in detail timing techniques and basic theories in order to propose alternative solutions for timing algorithms. This, coupled with a recent improvement of timing data, makes UTC more stable and, thus, more predictable.
At a more practical level, effort is being devoted to putting in place automatic procedures for reducing the time needed for data collection and treatment: monthly results are already available ten days earlier than before.
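The clock-combination step described above can be sketched as a weighted average in which more stable clocks receive larger weights. This is a heavily simplified, ALGOS-flavoured illustration with hypothetical readings; the real TAI computation adds frequency prediction, weight caps, and steering, none of which are shown here.

```python
def ensemble_time(readings, instabilities):
    """Weighted-average ensemble time: each clock is weighted by the
    inverse square of its observed instability, so well-behaved clocks
    dominate the scale (simplified illustration only)."""
    w = [1 / s ** 2 for s in instabilities]
    return sum(wi * r for wi, r in zip(w, readings)) / sum(w)

# Hypothetical clock readings (ns offsets from a common reference)
# and their observed instabilities over the evaluation interval
readings = [12.0, 15.0, 9.0]
sigmas = [1.0, 5.0, 2.0]
t_ens = ensemble_time(readings, sigmas)   # pulled toward the most stable clock
```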
Growth responses of mature loblolly pine to dead wood manipulations
Michael D. Ulyshen; Scott Horn; James L. Hanula
2012-01-01
Large-scale manipulations of dead wood in mature Pinus taeda L. stands in the southeastern United States included a major one-time input of logs (fivefold increase in log volume) created by felling trees onsite, annual removals of all dead wood ≥10 cm in diameter and ≥60 cm in length, and a reference in which no...
Dichotomous scoring of Trails B in patients referred for a dementia evaluation.
Schmitt, Andrew L; Livingston, Ronald B; Smernoff, Eric N; Waits, Bethany L; Harris, James B; Davis, Kent M
2010-04-01
The Trail Making Test is a popular neuropsychological test and its interpretation has traditionally used time-based scores. This study examined an alternative approach to scoring that is simply based on the examinees' ability to complete the test. If an examinee is able to complete Trails B successfully, they are coded as "completers"; if not, they are coded as "noncompleters." To assess this approach to scoring Trails B, the performance of 97 diagnostically heterogeneous individuals referred for a dementia evaluation was examined. In this sample, 55 individuals successfully completed Trails B and 42 individuals were unable to complete it. Point-biserial correlations indicated a moderate-to-strong association (r(pb)=.73) between the Trails B completion variable and the Total Scale score of the Repeatable Battery for the Assessment of Neurological Status (RBANS), which was larger than the correlation between the Trails B time-based score and the RBANS Total Scale score (r(pb)=.60). As a screen for dementia status, Trails B completion showed a sensitivity of 69% and a specificity of 100% in this sample. These results suggest that dichotomous scoring of Trails B might provide a brief and clinically useful measure of dementia status.
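The point-biserial correlation used above relates a 0/1 variable (completion) to a continuous score. A minimal implementation with hypothetical data (not the study's patients) is:

```python
from math import sqrt

def point_biserial(binary, scores):
    """Point-biserial correlation between a 0/1 variable (e.g. Trails B
    completion) and a continuous score (e.g. RBANS total):
    r_pb = (M1 - M0)/SD * sqrt(p*q), with SD over the whole sample."""
    n = len(scores)
    g1 = [s for b, s in zip(binary, scores) if b == 1]
    g0 = [s for b, s in zip(binary, scores) if b == 0]
    m1, m0 = sum(g1) / len(g1), sum(g0) / len(g0)
    mean = sum(scores) / n
    sd = sqrt(sum((s - mean) ** 2 for s in scores) / n)   # population SD
    p, q = len(g1) / n, len(g0) / n
    return (m1 - m0) / sd * sqrt(p * q)

# Hypothetical data: completers (1) tend to score higher than noncompleters (0)
completed = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
rbans =     [95, 88, 92, 85, 90, 70, 65, 72, 60, 68]
r_pb = point_biserial(completed, rbans)
```

Numerically this equals the Pearson correlation between the two variables, which is why it can be compared directly with the r = .60 obtained for the time-based score.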
Gravity in the Brain as a Reference for Space and Time Perception.
Lacquaniti, Francesco; Bosco, Gianfranco; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka
2015-01-01
Moving and interacting with the environment require a reference for orientation and a scale for calibration in space and time. There is a wide variety of environmental clues and calibrated frames at different locales, but the reference of gravity is ubiquitous on Earth. The pull of gravity on static objects provides a plummet which, together with the horizontal plane, defines a three-dimensional Cartesian frame for visual images. On the other hand, the gravitational acceleration of falling objects can provide a time-stamp on events, because the motion duration of an object accelerated by gravity over a given path is fixed. Indeed, since ancient times, man has been using plumb bobs for spatial surveying, and water clocks or pendulum clocks for time keeping. Here we review behavioral evidence in favor of the hypothesis that the brain is endowed with mechanisms that exploit the presence of gravity to estimate the spatial orientation and the passage of time. Several visual and non-visual (vestibular, haptic, visceral) cues are merged to estimate the orientation of the visual vertical. However, the relative weight of each cue is not fixed, but depends on the specific task. Next, we show that an internal model of the effects of gravity is combined with multisensory signals to time the interception of falling objects, to time the passage through spatial landmarks during virtual navigation, to assess the duration of a gravitational motion, and to judge the naturalness of periodic motion under gravity.
Rotstein, Horacio G
2014-01-01
We investigate the dynamic mechanisms of generation of subthreshold and phase resonance in two-dimensional linear and linearized biophysical (conductance-based) models, and we extend our analysis to account for the effect of simple, but not necessarily weak, types of nonlinearities. Subthreshold resonance refers to the ability of neurons to exhibit a peak in their voltage amplitude response to oscillatory input currents at a preferred non-zero (resonant) frequency. Phase resonance refers to the ability of neurons to exhibit a zero-phase (or zero-phase-shift) response to oscillatory input currents at a non-zero (phase-resonant) frequency. We adapt the classical phase-plane analysis approach to account for the dynamic effects of oscillatory inputs and develop a tool, the envelope-plane diagrams, that captures the role that conductances and time scales play in amplifying the voltage response at the resonant frequency band as compared to smaller and larger frequencies. We explain why the resonance phenomena do not necessarily arise from the presence of imaginary eigenvalues at rest, but rather emerge from the interplay of the intrinsic and input time scales. We further explain why an increase in the time-scale separation causes an amplification of the voltage response in addition to shifting the resonant and phase-resonant frequencies. This is of fundamental importance for neural models since neurons typically exhibit a strong separation of time scales. We extend this approach to explain the effects of nonlinearities on both resonance and phase-resonance. We demonstrate that nonlinearities in the voltage equation cause amplifications of the voltage response and shifts in the resonant and phase-resonant frequencies that are not predicted by the corresponding linearized model.
The differences between the nonlinear response and the linear prediction increase with increasing levels of the time scale separation between the voltage and the gating variable, and they almost disappear when both equations evolve at comparable rates. In contrast, voltage responses are almost insensitive to nonlinearities located in the gating variable equation. The method we develop provides a framework for the investigation of the preferred frequency responses in three-dimensional and nonlinear neuronal models as well as simple models of coupled neurons.
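The subthreshold resonance described above can be illustrated with the impedance of a generic two-dimensional linear model; the parameter values below are hypothetical, chosen only so that a slow resonant gating variable produces an amplitude peak at a non-zero frequency.

```python
from math import pi

def impedance(f, C=1.0, gL=0.3, g=1.5, tau=100.0):
    """|Z(f)| for the linear 2-D model
        C dV/dt = -gL*V - g*w + I(t),   tau dw/dt = V - w,
    driven by I(t) = exp(i*2*pi*f*t).  In steady state
        Z = 1 / (i*omega*C + gL + g/(1 + i*omega*tau)).
    Units: ms-based, so f is in kHz (1/ms)."""
    omega = 2 * pi * f
    z = 1 / (1j * omega * C + gL + g / (1 + 1j * omega * tau))
    return abs(z)

# Scan the amplitude profile and locate the resonant frequency
freqs = [k * 0.0005 for k in range(1, 401)]      # 0.0005-0.2 kHz
amps = [impedance(f) for f in freqs]
f_res = freqs[amps.index(max(amps))]             # peak well above f = 0
```

Note that for these parameters the eigenvalues at rest are real (no intrinsic oscillation), yet the amplitude profile still peaks at a non-zero frequency, which is the point the paper makes about resonance arising from time-scale interplay rather than imaginary eigenvalues.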
Syntactic Approach To Geometric Surface Shell Determination
NASA Astrophysics Data System (ADS)
DeGryse, Donald G.; Panton, Dale J.
1980-12-01
Autonomous terminal homing of a smart missile requires a stored reference scene of the target for which the missile is destined. The reference scene is produced from stereo source imagery by deriving a three-dimensional model containing cultural structures such as buildings, towers, bridges, and tanks. This model is obtained by the precise matching of cultural features from one image of the stereo pair to the other. In the past, this stereo matching process has relied heavily on local edge operators and a gray scale matching metric. The processing is performed line by line over the imagery, and the amount of geometric control is minimal. As a result, the gross structure of the scene is determined, but the derived three-dimensional data are noisy, oscillatory, and at times significantly inaccurate. This paper discusses new concepts that are currently being developed to stabilize this geometric reference preparation process. The new concepts involve the use of a structural syntax which will be used as a geometric constraint on automatic stereo matching. The syntax arises from the stereo configuration of the imaging platforms at the time of exposure and from knowledge of how various cultural structures are constructed. The syntax is used to parse a scene in terms of its cultural surfaces and to dictate to the matching process the allowable relative positions and orientations of surface edges in the image planes. Using the syntax, extensive searches using a gray scale matching metric are reduced.
Comparison of tablet-based strategies for incision planning in laser microsurgery
NASA Astrophysics Data System (ADS)
Schoob, Andreas; Lekon, Stefan; Kundrat, Dennis; Kahrs, Lüder A.; Mattos, Leonardo S.; Ortmaier, Tobias
2015-03-01
Recent research has revealed that incision planning in laser surgery deploying a stylus and tablet outperforms state-of-the-art micromanipulator-based laser control. To provide more detailed quantitation of that approach, a comparative study of six tablet-based strategies for laser path planning is presented. The reference strategy is defined by monoscopic visualization and continuous path drawing on a graphics tablet. Further concepts deploying stereoscopic or synthesized laser views, point-based path definition, real-time teleoperation, or a pen display are compared with the reference scenario. Volunteers were asked to redraw and ablate stamped lines on a sample. Performance is assessed by measuring planning accuracy, completion time, and ease of use. Results demonstrate that significant differences exist between the proposed concepts. The reference strategy provides more accurate incision planning than the stereo or laser view scenario. Real-time teleoperation performs best with respect to completion time without indicating any significant deviation in accuracy and usability. Point-based planning as well as the pen display provide the most accurate planning and increased ease of use compared with the reference strategy. As a result, combining the pen display approach with point-based planning has the potential to become a powerful strategy because it benefits from improved hand-eye coordination on the one hand and from a simple but accurate technique for path definition on the other. These findings, as well as the overall usability scale indicating high acceptance and consistency of the proposed strategies, motivate further advanced tablet-based planning in laser microsurgery.
ERIC Educational Resources Information Center
Waldow, Florian
2017-01-01
Researchers interested in the global flow of educational ideas and programmes have long been interested in the role of so-called "reference societies." The article investigates how top scorers in large-scale assessments are framed as positive or negative reference societies in the education policy-making debate in German mass media and…
MHD Modeling of the Solar Wind with Turbulence Transport and Heating
NASA Technical Reports Server (NTRS)
Goldstein, M. L.; Usmanov, A. V.; Matthaeus, W. H.; Breech, B.
2009-01-01
We have developed a magnetohydrodynamic model that describes the global axisymmetric steady-state structure of the solar wind near solar minimum, accounting for the transport of small-scale turbulence and the associated heating. The Reynolds-averaged mass, momentum, induction, and energy equations for the large-scale solar wind flow are solved simultaneously with the turbulence transport equations in the region from 0.3 to 100 AU. The large-scale equations include subgrid-scale terms due to turbulence, and the turbulence (small-scale) equations describe the effects of transport and (phenomenologically) dissipation of the MHD turbulence based on a few statistical parameters (turbulence energy, normalized cross-helicity, and correlation scale). The coupled set of equations is integrated numerically for a source dipole field on the Sun by a time-relaxation method in the corotating frame of reference. We present results on the plasma, magnetic field, and turbulence distributions throughout the heliosphere and on the role of the turbulence in the large-scale structure and temperature distribution of the solar wind.
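Schematically, the Reynolds-averaging approach splits each field into mean and fluctuating parts and evolves a small set of turbulence statistics. The transport equation below is a generic sketch in the style of such phenomenologies; the specific driving terms and the form of the dissipation are assumptions for illustration, not the paper's exact equations.

```latex
% Reynolds decomposition into large-scale and fluctuating parts
\mathbf{v} = \mathbf{U} + \mathbf{v}', \qquad
\mathbf{B} = \mathbf{B}_0 + \mathbf{b}, \qquad
Z^2 = \bigl\langle |\mathbf{v}'|^2 + |\mathbf{b}|^2/(4\pi\rho) \bigr\rangle

% Schematic turbulence-energy transport with correlation scale \lambda,
% normalized cross-helicity \sigma_c, and phenomenological dissipation
\frac{dZ^2}{dt} \;\sim\; \text{(transport and driving)}
\;-\; \alpha\, f(\sigma_c)\, \frac{Z^3}{\lambda}
```

The dissipated turbulence energy reappears as a heating term in the large-scale internal-energy equation, which is how the model links small-scale turbulence to the solar wind temperature distribution.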
Validation of the group nuclear safety climate questionnaire.
Navarro, M Felisa Latorre; Gracia Lerín, Francisco J; Tomás, Inés; Peiró Silla, José María
2013-09-01
Group safety climate is a leading indicator of safety performance in high reliability organizations. Zohar and Luria (2005) developed a Group Safety Climate scale (ZGSC) and found it to have a single factor. The ZGSC scale was used as a basis in this study with the researchers rewording almost half of the items on this scale, changing the referents from the leader to the group, and trying to validate a two-factor scale. The sample was composed of 566 employees in 50 groups from a Spanish nuclear power plant. Item analysis, reliability, correlations, aggregation indexes and CFA were performed. Results revealed that the construct was shared by each unit, and our reworded Group Safety Climate (GSC) scale showed a one-factor structure and correlated to organizational safety climate, formalized procedures, safety behavior, and time pressure. This validation of the one-factor structure of the Zohar and Luria (2005) scale could strengthen and spread this scale and measure group safety climate more effectively.
Consistency of near-death experience accounts over two decades: are reports embellished over time?
Greyson, Bruce
2007-06-01
"Near-death experiences," commonly reported after clinical death and resuscitation, may require intervention and, if reliable, may elucidate altered brain functioning under extreme stress. It has been speculated that accounts of near-death experiences are exaggerated over the years. The objective of this study was to test the reliability over two decades of accounts of near-death experiences. Seventy-two patients with near-death experience who had completed the NDE scale in the 1980s (63% of the original cohort still alive) completed the scale a second time, without reference to the original scale administration. The primary outcome was differences in NDE scale scores on the two administrations. The secondary outcome was the statistical association between differences in scores and years elapsed between the two administrations. Mean scores did not change significantly on the total NDE scale, its 4 factors, or its 16 items. Correlation coefficients between scores on the two administrations were significant at P<0.001 for the total NDE scale, for its 4 factors, and for its 16 items. Correlation coefficients between score changes and time elapsed between the two administrations were not significant for the total NDE scale, for its 4 factors, or for its 16 items. Contrary to expectation, accounts of near-death experiences, and particularly reports of their positive affect, were not embellished over a period of almost two decades. These data support the reliability of near-death experience accounts.
Uncertainties in scaling factors for ab initio vibrational zero-point energies
NASA Astrophysics Data System (ADS)
Irikura, Karl K.; Johnson, Russell D.; Kacker, Raghu N.; Kessel, Rüdiger
2009-03-01
Vibrational zero-point energies (ZPEs) determined from ab initio calculations are often scaled by empirical factors. An empirical scaling factor partially compensates for the effects arising from vibrational anharmonicity and incomplete treatment of electron correlation. These effects are not random but are systematic. We report scaling factors for 32 combinations of theory and basis set, intended for predicting ZPEs from computed harmonic frequencies. An empirical scaling factor carries uncertainty. We quantify and report, for the first time, the uncertainties associated with scaling factors for ZPE. The uncertainties are larger than generally acknowledged; the scaling factors have only two significant digits. For example, the scaling factor for B3LYP/6-31G(d) is 0.9757±0.0224 (standard uncertainty). The uncertainties in the scaling factors lead to corresponding uncertainties in predicted ZPEs. The proposed method for quantifying the uncertainties associated with scaling factors is based upon the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. We also present a new reference set of 60 diatomic and 15 polyatomic "experimental" ZPEs that includes estimated uncertainties.
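Applying a scaling factor and propagating its uncertainty to a predicted ZPE is a one-line calculation, sketched below with the B3LYP/6-31G(d) factor quoted above; the harmonic ZPE value is hypothetical, and the harmonic value itself is treated as exact.

```python
def scaled_zpe(zpe_harm, c=0.9757, u_c=0.0224):
    """Scale a computed harmonic ZPE by an empirical factor c and
    propagate the scaling-factor standard uncertainty u_c:
        ZPE = c * zpe_harm,   u(ZPE) = |zpe_harm| * u_c
    (neglecting any uncertainty in the harmonic value itself)."""
    return c * zpe_harm, abs(zpe_harm) * u_c

# Hypothetical harmonic ZPE of 50 kcal/mol at B3LYP/6-31G(d)
zpe, u = scaled_zpe(50.0)
```

With these numbers the predicted ZPE carries a standard uncertainty of over 1 kcal/mol, illustrating the paper's point that the factors effectively have only two significant digits.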
Defining biophysical reference conditions for dynamic river systems: an Alaskan example
NASA Astrophysics Data System (ADS)
Pess, G. R.
2008-12-01
Defining reference conditions for dynamic river ecosystems is difficult for two reasons. First, long-term, persistent anthropogenic influences such as land development, harvest of biological resources, and invasive species have resulted in degraded, reduced, and simplified ecological communities and associated habitats. Second, river systems that have not been altered through human disturbance rarely have a long-term dataset on ecological conditions. However, there are exceptions which can help us define the dynamic nature of river ecosystems. One large-scale exception is the Wood River system in Bristol Bay, Alaska, where habitat and salmon populations have not been altered by anthropogenic influences such as land development, hatchery production, and invasive species. In addition, the one major anthropogenic disturbance, salmon (Oncorhynchus spp.) harvest, has been quantified and regulated since its inception. First, we examined the variation in watershed and stream habitat characteristics across the Wood River system. We then compared these stream habitat characteristics with data collected in the 1950s. Lastly, we examined the correlations between pink (Oncorhynchus gorbuscha), chum (O. keta), Chinook (O. tshawytscha), and sockeye (O. nerka) salmon and habitat characteristics in the Wood River system using four decades of data on salmon. We found that specific habitat attributes such as stream channel wetted width, depth, cover type, and the proportion of spawnable area were similar to data collected in the 1950s. Greater stream habitat variation occurred among streams than over time. Salmon occurrence and abundance, however, were more temporally and spatially variable. The occurrence of pink and chum salmon increased from the 1970s to the present in the Wood River system, while sockeye abundance has fluctuated with changes in ocean conditions.
Pink, Chinook, and chum salmon ranged from non-existent to episodic to abundantly perennial, while sockeye dominated all streams in the Wood River system. One main trend was that the frequency of occurrence and abundance of pink, Chinook, and chum salmon increased with watershed drainage area and stream depth and, to a lesser extent, decreased with sockeye salmon density. Conversely, sockeye salmon densities decreased with watershed drainage area and stream depth. Wood River habitat was temporally stable and spatially variable; thus, identifying the suite of stream channel types that occur, and a reference state for each, is critical to capture reference conditions. Wood River biological reference states need to be established over a longer time frame than physical attributes because of the large-scale temporal variability that is forced by climatic conditions and larger-scale spatially explicit trends. Thus, biological reference states for the Wood River system need to be defined with multiple streams, similar to developing reference states for different stream channel types, in order to capture the range of biological variability.
Time-resolved production and detection of reactive atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grossman, L. W.; Hurst, G. S.
1977-09-01
Cesium iodide in the presence of a buffer gas was dissociated with a pulsed ultraviolet laser, which will be referred to as the source laser. This created a population of atoms at a well-defined time and in a compact, well-defined volume. A second pulsed laser, with a beam that completely surrounded that of the first, photoionized the cesium after a known time delay. This laser will be referred to as the detector laser. It was determined that for short time delays, all of the cesium atoms were easily ionized. When focused, the source laser generated an extremely intense fluence. By accounting for the beam intensity profile it was shown that all of the molecules in the central portion of the beam can be dissociated and detected. Besides proving the feasibility of single-molecule detection, this enabled a determination of the absolute photodissociation cross section as a function of wavelength. Initial studies of the time decay of the cesium signal at low argon pressures indicated a non-exponential decay. This was consistent with a diffusion mechanism transporting cesium atoms out of the laser beam. Therefore, it was desired to conduct further experiments using a tightly focused source beam passing along the axis of the detector beam. The theoretical behavior of this simple geometry accounting for diffusion and reaction is easily calculated. A diffusion coefficient can then be extracted by data fitting. If reactive decay is due to impurities constituting a fixed percentage of the buffer gas, then two-body reaction rates will scale linearly with pressure and three-body reaction rates will scale quadratically. Also, the diffusion coefficient will scale inversely with pressure. At low pressures it is conceivable that decay due to diffusion would be sufficiently rapid that all other processes can be neglected. Extraction of a diffusion coefficient would then be quite direct. Finally, a study of the reaction of cesium and oxygen was undertaken.
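The pressure scalings described above (diffusive escape ~1/p, two-body reactions ~p, three-body reactions ~p^2) combine into a simple decay-rate model; the coefficients below are hypothetical and the units arbitrary, purely to show the crossover.

```python
def decay_rate(p, a=2.0, k2=0.05, k3=0.001):
    """Total loss rate of atoms from the probe volume at buffer
    pressure p:  a/p (diffusion out of the beam) + k2*p (two-body
    reactions) + k3*p**2 (three-body reactions).
    Hypothetical coefficients, arbitrary units."""
    return a / p + k2 * p + k3 * p ** 2

# Diffusion dominates at low pressure, reactions at high pressure
low, high = decay_rate(1.0), decay_rate(100.0)
```

Fitting measured decay rates at several pressures to this form separates the diffusion coefficient from the reaction rate constants, which is the strategy the abstract outlines.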
Twomey, Michèle; Wallis, Lee A; Myers, Jonathan E
2014-07-01
To evaluate the construct of triage acuity as measured by the South African Triage Scale (SATS) against a set of reference vignettes, a modified Delphi method was used to develop the reference set. Delphi participants completed a 2-round consensus-building process and independently assigned triage acuity ratings to 100 written vignettes, unaware of the ratings given by others. Triage acuity ratings were summarised for all vignettes, and only those that reached 80% consensus during round 2 were included in the reference set. Triage ratings for the reference vignettes given by two independent experts using the SATS were compared with the ratings given by the international Delphi panel. Measures of sensitivity, specificity, and associated percentages of over-triage/under-triage were used to evaluate the construct of triage acuity (as measured by the SATS) by examining the association between the ratings by the two experts and the international panel. On completion of the Delphi process, 42 of the 100 vignettes reached 80% consensus on their acuity rating and made up the reference set. On average, over all acuity levels, sensitivity was 74% (CI 64% to 82%) and specificity 92% (CI 87% to 94%); under-triage occurred 14% (CI 8% to 23%) and over-triage 12% (CI 8% to 23%) of the time. The results of this study provide an alternative to evaluating triage scales against the construct of acuity as measured with the SATS. This method of using 80% consensus vignettes may, however, systematically bias the validity estimate towards better performance.
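Over-triage (rating higher than the reference) and under-triage (rating lower) follow directly from paired ordinal ratings; a sketch with hypothetical vignette ratings and made-up level names is:

```python
def triage_stats(reference, assigned, levels=("green", "yellow", "orange", "red")):
    """Compare assigned triage levels with reference (e.g. Delphi) levels.
    Returns over-/under-triage fractions and per-level sensitivity
    (fraction of reference-level cases assigned that same level)."""
    order = {lvl: i for i, lvl in enumerate(levels)}
    n = len(reference)
    over = sum(order[a] > order[r] for r, a in zip(reference, assigned)) / n
    under = sum(order[a] < order[r] for r, a in zip(reference, assigned)) / n
    sens = {}
    for lvl in levels:
        pos = [a for r, a in zip(reference, assigned) if r == lvl]
        if pos:
            sens[lvl] = sum(a == lvl for a in pos) / len(pos)
    return over, under, sens

# Hypothetical paired ratings for 8 vignettes
ref = ["red", "red", "orange", "yellow", "yellow", "green", "green", "orange"]
sat = ["red", "orange", "orange", "yellow", "orange", "green", "yellow", "orange"]
over, under, sens = triage_stats(ref, sat)
```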
NASA Technical Reports Server (NTRS)
Burley, Richard K.; Adams, James F.
1987-01-01
Lead scales for inclusion in x-radiographs as length and position references are created by repeatedly imprinting a character such as upper-case I, L, or V, or lower-case l, into lead tape with a typewriter. The character pitch of the typewriter serves as the length reference for the scale. The thinning of the tape caused by the impacts of the type shows up dark in the radiograph.
Time scales in the context of general relativity.
Guinot, Bernard
2011-10-28
By about 1967, the accuracy of caesium frequency standards had reached such a level that relativistic effects could no longer be ignored. Corrections began to be applied for the gravitational frequency shift and for distant time comparisons. However, these corrections were not referred to an explicit theoretical framework. Only in 1991 did the International Astronomical Union provide metrics (then improved in 2000) for a definition of space-time coordinates in reference systems centred at the barycentre of the Solar System and at the centre of mass of the Earth. In these systems, the temporal coordinates (coordinate times) can be realized on the basis of one of them, International Atomic Time (TAI), which is itself a realized time scale. The definition and the role of TAI in this context will be recalled. There remain controversies regarding the name to be given to the unit of coordinate times and to other quantities appearing in the theory. However, the idea that astrometry and celestial mechanics should adopt the usual metrological rules is progressing, together with the use of the International System of Units, among astronomers.
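The gravitational frequency shift mentioned above is, in the weak-field approximation, g·Δh/c² per unit of height difference near the Earth's surface; a quick numerical check:

```python
# Fractional frequency shift per metre of height near the Earth's surface,
# weak-field approximation: g*dh/c^2 (about 1.1e-16 per metre)
G_SHIFT_PER_M = 9.81 / (299792458.0 ** 2)

def gravitational_shift(dh):
    """Fractional frequency offset between two clocks separated in
    height by dh metres (higher clock runs fast)."""
    return G_SHIFT_PER_M * dh

# A clock 1 km higher runs fast by roughly 1.1e-13 fractionally,
# far above the accuracy of 1967-era caesium standards
shift = gravitational_shift(1000.0)
```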
The drainage of the Baltic Ice Lake and a new Scandinavian reference 10Be production rate
NASA Astrophysics Data System (ADS)
Stroeven, Arjen P.; Heyman, Jakob; Fabel, Derek; Björck, Svante; Caffee, Marc W.; Fredin, Ola; Harbor, Jonathan M.
2015-04-01
An important constraint on the reliability of cosmogenic nuclide exposure dating is the derivation of tightly controlled production rates. We present a new dataset for 10Be production rate calibration from Mount Billingen, southern Sweden, the site of the final drainage of the Baltic Ice Lake, an event dated to 11,620 ± 100 cal yr BP. Nine samples of flood-scoured bedrock surfaces and depositional boulders and cobbles unambiguously connected to the drainage event yield a reference 10Be production rate of 4.09 ± 0.22 atoms g-1 yr-1 for the CRONUS Lm scaling and 3.93 ± 0.21 atoms g-1 yr-1 for the LSD general spallation scaling. We also recalibrate the reference 10Be production rates for four sites in Norway and combine these with the Billingen results to derive a tightly clustered Scandinavian reference 10Be production rate of 4.12 ± 0.10 (4.12 ± 0.25 for altitude scaling) atoms g-1 yr-1 for the Lm scaling scheme and 3.96 ± 0.10 (3.96 ± 0.24 for altitude scaling) atoms g-1 yr-1 for the LSD scaling scheme.
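A reference production rate converts a measured 10Be concentration into an exposure age. The sketch below uses the simple-exposure, zero-erosion formula with the Scandinavian Lm-scaled rate from above applied directly; in a real calculation that rate would still be scaled for the sample's altitude and latitude, and the concentration here is hypothetical.

```python
from math import log

LAMBDA_BE10 = 4.99e-7   # 10Be decay constant, 1/yr (half-life ~1.39 Myr)

def exposure_age(conc, prod_rate, lam=LAMBDA_BE10):
    """Simple-exposure, zero-erosion age from a nuclide concentration
    (atoms/g) and a local production rate (atoms/g/yr):
        N = (P/lam) * (1 - exp(-lam*t))  =>  t = -ln(1 - lam*N/P) / lam"""
    return -log(1 - lam * conc / prod_rate) / lam

# Hypothetical sample: 48,000 atoms/g at a site where the scaled
# production rate happens to equal the sea-level reference value
t = exposure_age(48_000, 4.12)   # comes out near the ~11.6 ka drainage age
```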
Finite-time scaling at the Anderson transition for vibrations in solids
NASA Astrophysics Data System (ADS)
Beltukov, Y. M.; Skipetrov, S. E.
2017-11-01
A model in which a three-dimensional elastic medium is represented by a network of identical masses connected by springs of random strengths and allowed to vibrate only along a selected axis of the reference frame exhibits an Anderson localization transition. To study this transition, we assume that the dynamical matrix of the network is given by a product of a sparse random matrix with real, independent, Gaussian-distributed nonzero entries and its transpose. A finite-time scaling analysis of the system's response to an initial excitation allows us to estimate the critical parameters of the localization transition. The critical exponent is found to be ν = 1.57 ± 0.02, in agreement with previous studies of the Anderson transition belonging to the three-dimensional orthogonal universality class.
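The construction described, a dynamical matrix of the form M = A Aᵀ with A sparse and Gaussian, guarantees non-negative eigenvalues and hence real vibrational frequencies. A small pure-Python sketch of that construction (matrix dimensions and sparsity are arbitrary choices here, not the paper's parameters):

```python
import random

def random_sparse(n, m, p, rng):
    """n x m matrix: each entry Gaussian with probability p, else zero."""
    return [[rng.gauss(0.0, 1.0) if rng.random() < p else 0.0
             for _ in range(m)] for _ in range(n)]

def a_at(a):
    """Dynamical matrix M = A A^T, symmetric positive semidefinite."""
    n, m = len(a), len(a[0])
    return [[sum(a[i][k] * a[j][k] for k in range(m)) for j in range(n)]
            for i in range(n)]

rng = random.Random(42)
A = random_sparse(6, 9, 0.3, rng)
M = a_at(A)
# For any vector x, x^T M x = |A^T x|^2 >= 0, so all squared
# eigenfrequencies of the mass-spring network are non-negative.
```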
Coarse-grained modeling of polyethylene melts: Effect on dynamics
Peters, Brandon L.; Salerno, K. Michael; Agrawal, Anupriya; ...
2017-05-23
The distinctive viscoelastic behavior of polymers results from a coupled interplay of motion on multiple length and time scales. Capturing the broad time and length scales of polymer motion remains a challenge. Using polyethylene (PE) as a model macromolecule, we construct coarse-grained (CG) models of PE with three to six methyl groups per CG bead and probe two critical aspects of the technique: pressure corrections required after iterative Boltzmann inversion (IBI) to generate CG potentials that match the pressure of reference fully atomistic melt simulations, and the transferability of CG potentials across temperatures. While IBI produces nonbonded pair potentials that give excellent agreement between the atomistic and CG pair correlation functions, the resulting pressure for the CG models is large compared with the pressure of the atomistic system. We find that correcting the potential to match the reference pressure leads to nonbonded interactions with much deeper minima and a slightly smaller effective bead diameter. However, simulations with potentials generated by IBI and pressure-corrected IBI result in similar mean-square displacements (MSDs) and stress autocorrelation functions G(t) for PE melts. While the time rescaling factor required to match CG and atomistic models is the same for pressure- and non-pressure-corrected CG models, it strongly depends on temperature. Furthermore, transferability was investigated by comparing the MSDs and stress autocorrelation functions for potentials developed at different temperatures.
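The IBI procedure the abstract refers to updates a tabulated pair potential by the log-ratio of the current and target pair correlation functions, and pressure corrections are commonly implemented as a weak linear tail added afterwards. A schematic sketch, assuming a tabulated potential on a radial grid (the function names, kT convention, and linear-tail form are illustrative, not the paper's exact procedure):

```python
import math

def ibi_update(v, g_current, g_target, kT=1.0):
    """One iterative Boltzmann inversion step on a tabulated potential:
    V_{n+1}(r) = V_n(r) + kT * ln(g_n(r) / g_target(r))."""
    return [vi + kT * math.log(gc / gt)
            for vi, gc, gt in zip(v, g_current, g_target)]

def linear_pressure_tail(r_grid, r_cut, amplitude):
    """Commonly used linear correction dV(r) = A * (1 - r/r_cut),
    vanishing at the cutoff, used to nudge the CG pressure toward
    the atomistic reference value."""
    return [amplitude * (1.0 - r / r_cut) for r in r_grid]
```

When the CG pair correlation matches the target, the update vanishes and the potential is converged; the pressure tail is then tuned separately, which is why it can deepen the minima as the abstract reports.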
NASA Astrophysics Data System (ADS)
Rochat, Bertrand
2017-04-01
High-resolution (HR) MS instruments recording HR full scans allow analysts to go beyond pre-acquisition choices. Untargeted acquisition can reveal unexpected compounds or concentrations and can be performed as a preliminary diagnostic step; the revealed compounds must then be identified before interpretation. Whereas reference standards remain mandatory to confirm an identification, the diverse information collected by HRMS allows unknown compounds to be identified with a relatively high degree of confidence even without reference standards injected in the same analytical sequence. However, the degree of confidence in putative identifications must be evaluated, possibly before further targeted analyses. This is why a confidence scale and a score for the identification of (non-peptidic) known-unknowns, defined as compounds with entries in databases, are proposed for (LC-)HRMS data. The scale is based on two representative documents edited by the European Commission (2002/657/EC) and the Metabolomics Standards Initiative (MSI), in an attempt to build a bridge between the metabolomics and screening-lab communities. With this confidence scale, an identification (ID) score is determined as a number, a letter, and a number (e.g., 2D3) from the following three criteria: I, a General Identification Category (1, confirmed; 2, putatively identified; 3, annotated compounds/classes; 4, unknown); II, a Chromatography Class based on the relative retention time (from the narrowest tolerance, A, to no chromatographic reference, D); and III, an Identification Point Level (1, very high; 2, high; 3, normal) based on the number of identification points collected. Three putative identification examples of known-unknowns are presented.
NASA Astrophysics Data System (ADS)
Anjum, Muhammad Naveed; Ding, Yongjian; Shangguan, Donghui; Ahmad, Ijaz; Ijaz, Muhammad Wajid; Farid, Hafiz Umar; Yagoub, Yousif Elnour; Zaman, Muhammad; Adnan, Muhammad
2018-06-01
Recently, the Global Precipitation Measurement (GPM) mission released the Integrated Multi-satellite Retrievals for GPM (IMERG) at fine spatial (0.1° × 0.1°) and temporal (half-hourly) resolutions. A comprehensive evaluation of this newly launched precipitation product is important both for users of satellite-based precipitation data and for algorithm developers. The objective of this study was to provide a preliminary and timely performance evaluation of the IMERG product over the northern highlands of Pakistan. For reference, the real-time and post-real-time Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) products were evaluated in parallel with IMERG. All of the selected precipitation products were evaluated at annual, monthly, seasonal and daily time scales using reference gauge data from April 2014 to December 2016. The results showed that: (1) precipitation estimates from the IMERG, 3B42V7 and 3B42RT products correlated well with reference gauge observations at the monthly time scale (CC = 0.93, 0.91 and 0.88, respectively), and moderately at the daily time scale (CC = 0.67, 0.61 and 0.58, respectively); (2) compared with 3B42V7 and 3B42RT, precipitation estimates from IMERG were more reliable in all seasons, particularly in winter, with the lowest relative bias (2.61%) and highest CC (0.87); (3) IMERG showed a clear superiority over the 3B42V7 and 3B42RT products in capturing the spatial distribution of precipitation over northern Pakistan; (4) relative to 3B42V7 and 3B42RT, daily precipitation estimates from IMERG showed the lowest relative bias (9.20% vs. 21.40% and 26.10%, respectively) and RMSE (2.05 mm/day vs. 2.49 mm/day and 2.88 mm/day, respectively); and (5) light precipitation events (0-1 mm/day) were usually overestimated by all of these satellite-based precipitation products.
In contrast, moderate (1-20 mm/day) to heavy (>20 mm/day) precipitation events were underestimated by both TMPA products, whereas IMERG detected moderate to heavy precipitation events more precisely. Overall, the performance of IMERG was better than that of the TMPA products. This preliminary evaluation of a new generation of satellite-based precipitation estimates should provide useful feedback for sensor and algorithm developers as well as data users.
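The scores quoted in this evaluation (CC, relative bias, RMSE) follow standard definitions. A minimal sketch of how such a gauge-versus-satellite comparison is computed, with function names of my own choosing:

```python
import math

def corr_coeff(obs, sim):
    """Pearson correlation between gauge observations and estimates."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim))
    return cov / (so * ss)

def relative_bias_pct(obs, sim):
    """Percent bias: 100 * (total estimated - total observed) / total observed."""
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

def rmse(obs, sim):
    """Root-mean-square error in the units of the series (e.g., mm/day)."""
    return math.sqrt(sum((s - o) ** 2 for o, s in zip(obs, sim)) / len(obs))
```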
Vermeerbergen, Lander; Van Hootegem, Geert; Benders, Jos
2017-02-01
Ongoing shortages of care workers, together with an ageing population, make it of utmost importance to increase the quality of working life in nursing homes. Since the 1970s, normalised and small-scale nursing homes have been increasingly introduced to provide care in a family and homelike environment, potentially providing a richer work life for care workers as well as improved living conditions for residents. 'Normalised' refers to the opportunities given to residents to live in a manner as close as possible to the everyday life of persons not needing care. The study purpose is to provide a synthesis and overview of empirical research comparing the quality of working life - together with related work and health outcomes - of professional care workers in normalised small-scale nursing homes as compared to conventional large-scale ones. A systematic review of qualitative and quantitative studies. A systematic literature search (April 2015) was performed using the electronic databases Pubmed, Embase, PsycInfo, CINAHL and Web of Science. References and citations were tracked to identify additional, relevant studies. We identified 825 studies in the selected databases. After checking the inclusion and exclusion criteria, nine studies were selected for review. Two additional studies were selected after reference and citation tracking. Three studies were excluded after requesting more information on the research setting. The findings from the individual studies suggest that levels of job control and job demands (all but "time pressure") are higher in normalised small-scale homes than in conventional large-scale nursing homes. Additionally, some studies suggested that social support and work motivation are higher, while risks of burnout and mental strain are lower, in normalised small-scale nursing homes. Other studies found no differences or even opposing findings. 
The studies reviewed showed that these inconclusive findings can be attributed to care workers in some normalised small-scale homes experiencing isolation and excessively high job demands in their work roles. This systematic review suggests that normalised small-scale homes are a good starting point for creating a higher quality of working life in the nursing home sector. Higher job control enables care workers to manage higher job demands in normalised small-scale homes. However, some jobs would benefit from interventions to address care workers' perceptions of too little social support and of too high job demands. More research is needed to examine strategies to enhance these aspects of working life in normalised small-scale settings.
NASA Astrophysics Data System (ADS)
Rowlands, G.; Kiyani, K. H.; Chapman, S. C.; Watkins, N. W.
2009-12-01
Quantitative analyses of solar wind fluctuations are often performed in the context of intermittent turbulence and center on methods to quantify statistical scaling, such as power spectra and structure functions, which assume a stationary process. The solar wind exhibits large-scale secular changes, so the question arises as to whether the time series of fluctuations is non-stationary. One approach is to seek local stationarity by parsing the interval over which the statistical analysis is performed; natural systems such as the solar wind thus unavoidably provide observations over restricted intervals. Consequently, owing to the reduced sample size and the resulting poorer estimates, a stationary stochastic process (time series) can yield anomalous time variation in the scaling exponents, suggestive of nonstationarity. For finite-variance processes and certain statistical estimators, the variance in the estimates of scaling exponents computed from an interval of N observations is known to vary as ~1/N as N becomes large; however, the convergence to this behavior depends on the details of the process and may be slow. We study the variation in the scaling of second-order moments of the time-series increments with N for a variety of synthetic and "real world" time series, and we find that, in particular for heavy-tailed processes and realizable N, one is far from this ~1/N limiting behavior. We propose a semiempirical estimate of the minimum N needed to make a meaningful estimate of the scaling exponents for model stochastic processes and compare these with some "real world" time series from the solar wind. With fewer data points a stationary time series becomes indistinguishable from a non-stationary process, and we illustrate this with non-stationary synthetic datasets. Reference article: K. H. Kiyani, S. C. Chapman and N. W. Watkins, Phys. Rev. E 79, 036109 (2009).
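The ~1/N shrinkage of estimator variance discussed here is easy to reproduce for a light-tailed process; the paper's point is that heavy-tailed processes converge far more slowly. A toy Gaussian illustration (sample sizes, trial counts, and the use of the sample second moment as the estimator are arbitrary choices for the demonstration):

```python
import random
import statistics

def second_moment_spread(n, trials=200, seed=1):
    """Variance across `trials` of the sample second moment of n
    unit-Gaussian draws; for Gaussian data this falls off as ~2/n."""
    rng = random.Random(seed)
    estimates = [sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n)) / n
                 for _ in range(trials)]
    return statistics.variance(estimates)
```

Repeating the experiment with heavy-tailed increments (e.g., Pareto draws) shows a much slower decay of the spread with n, which is the regime the abstract describes for realizable N.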
NASA Astrophysics Data System (ADS)
Ishtiaq, K. S.; Abdul-Aziz, O. I.
2014-12-01
We developed a scaling-based, simple empirical model for spatio-temporally robust prediction of the diurnal cycles of wetland net ecosystem exchange (NEE) using an extended stochastic harmonic algorithm (ESHA). A reference-time observation from each diurnal cycle was used as the scaling parameter to normalize and collapse the hourly observed NEE of different days onto a single, dimensionless diurnal curve. The modeling concept was tested by parameterizing this unique diurnal curve and predicting hourly NEE for May to October (the summer growing and fall seasons) between 2002 and 2012 for the diverse wetland ecosystems available in the U.S. AmeriFLUX network. As an example, the Taylor Slough short-hydroperiod marsh site in the Florida Everglades had data for four consecutive growing seasons from 2009 to 2012; results showed impressive modeling efficiency (coefficient of determination, R2 = 0.66) and accuracy (ratio of root-mean-square error to the standard deviation of observations, RSR = 0.58). Model validation was performed with an independent year of NEE data, indicating equally impressive performance (R2 = 0.68, RSR = 0.57). The model included a parsimonious set of estimated parameters, which exhibited spatio-temporal robustness by collapsing onto narrow ranges. Model robustness was further investigated by analytically deriving and quantifying parameter sensitivity coefficients and a first-order uncertainty measure. The relatively robust empirical NEE model can be applied to simulate continuous (e.g., hourly) NEE time series from a single reference observation (or a set of limited observations) at different wetland sites of comparable hydro-climatology, biogeochemistry, and ecology. The method can also be used for robust gap-filling of missing data in observed time series of periodic ecohydrological variables for wetland or other ecosystems.
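The core scaling idea, collapsing each day's hourly NEE onto one dimensionless curve via a reference-time observation and inverting that normalization to predict, can be sketched as below. The choice of reference hour and the simple multiplicative inversion are assumptions about the method's structure, not its exact ESHA formulation:

```python
def to_dimensionless(nee_day, ref_hour):
    """Normalize one day's hourly NEE by its reference-time observation,
    collapsing days with different magnitudes onto a common curve."""
    ref = nee_day[ref_hour]
    return [x / ref for x in nee_day]

def predict_day(dimensionless_curve, ref_obs):
    """Reconstruct an hourly NEE cycle from the collapsed curve and a
    single reference observation for that day."""
    return [c * ref_obs for c in dimensionless_curve]
```

By construction the round trip is exact for the day the curve came from; the modeling claim is that one fitted curve transfers across days and sites.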
Reference Model 6 (RM6): Oscillating Wave Energy Converter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bull, Diana L; Smith, Chris; Jenne, Dale Scott
This report is an addendum to SAND2013-9040: Methodology for Design and Economic Analysis of Marine Energy Conversion (MEC) Technologies. This report describes an Oscillating Water Column Wave Energy Converter reference model design in a complementary manner to Reference Models 1-4 contained in the above report. In this report, a conceptual design for an Oscillating Water Column Wave Energy Converter (WEC) device appropriate for the modeled reference resource site was identified, and a detailed backward bent duct buoy (BBDB) device design was developed using a combination of numerical modeling tools and scaled physical models. Our team used the methodology in SAND2013-9040 for the economic analysis, which included costs for designing, manufacturing, deploying, and operating commercial-scale MEC arrays of up to 100 devices. The methodology was applied to identify key cost drivers and to estimate the levelized cost of energy (LCOE) for this RM6 Oscillating Water Column device in dollars per kilowatt-hour ($/kWh). Although many costs were difficult to estimate at this time due to the lack of operational experience, the main contribution of this work was to disseminate a detailed set of methodologies and models that allow for an initial cost analysis of this emerging technology. This project is sponsored by the U.S. Department of Energy's (DOE) Wind and Water Power Technologies Program Office (WWPTO), within the Office of Energy Efficiency & Renewable Energy (EERE). Sandia National Laboratories, the lead in this effort, collaborated with partners from National Laboratories, industry, and universities to design and test this reference model.
Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing
NASA Astrophysics Data System (ADS)
Williams, McKay D.
Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. 
Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce coarse scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine scale pixel. We compared simple rectangular aggregation to coarse sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean adjusted MAE (MA-MAE), and reference data confidence interval adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope = 0.85, intercept = 0.04, and R2 = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.
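The comparison metrics named above (MAE and its reference-error-adjusted variant) can be sketched as follows. The exact sign convention of the mean adjustment in MA-MAE is an assumption of this sketch, not the thesis's definition:

```python
def mae(estimates, reference):
    """Mean absolute error between unmixing abundances and AMRD,
    per class and pixel."""
    return sum(abs(e - r) for e, r in zip(estimates, reference)) / len(estimates)

def mean_adjusted_mae(estimates, reference, reference_mean_error):
    """MA-MAE sketch: shift the reference by its own known mean error
    before scoring, so bias in the reference data itself is not
    charged to the unmixing algorithm."""
    adjusted = [r - reference_mean_error for r in reference]
    return mae(estimates, adjusted)
```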
What is the effect of LiDAR-derived DEM resolution on large-scale watershed model results?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ping Yang; Daniel B. Ames; Andre Fonseca
This paper examines the effect of raster cell size on hydrographic feature extraction and hydrological modeling using LiDAR-derived DEMs. LiDAR datasets for three experimental watersheds were converted to DEMs at various cell sizes. Watershed boundaries and stream networks were delineated from each DEM and compared to reference data, and hydrological simulations were conducted and their outputs compared. Smaller cell sizes consistently resulted in less difference between DEM-delineated features and reference data. However, only minor differences were found between streamflow simulations from a lumped watershed model run at a daily time step and aggregated to annual averages. These findings indicate that while higher-resolution DEM grids may represent terrain characteristics more accurately, such gains do not necessarily improve watershed-scale simulation modeling. Hence the additional expense of generating high-resolution DEMs for watershed modeling at daily or longer time steps may not be warranted.
Some TEM observations of Al2O3 scales formed on NiCrAl alloys
NASA Technical Reports Server (NTRS)
Smialek, J.; Gibala, R.
1979-01-01
The microstructural development of Al2O3 scales on NiCrAl alloys has been examined by transmission electron microscopy. Voids were observed within grains in scales formed on a pure NiCrAl alloy. Both voids and oxide grains grew measurably with oxidation time at 1100 °C. The size and amount of porosity decreased towards the oxide-metal growth interface. The voids resulted from an excess of oxygen vacancies near the oxide-metal interface. Short-circuit diffusion paths were discussed in reference to current growth-stress models for oxide scales. Transient oxidation of pure, Y-doped, and Zr-doped NiCrAl was also examined. Oriented alpha-(Al, Cr)2O3 and Ni(Al, Cr)2O4 scales often coexisted in layered structures on all three alloys. Close-packed oxygen planes and directions in the corundum and spinel layers were parallel. The close relationship between oxide layers provided a gradual transition from the initial transient scales to steady-state Al2O3 growth.
NASA Astrophysics Data System (ADS)
Gouveia, C. M.; Trigo, R. M.; Beguería, S.; Vicente-Serrano, S. M.
2017-04-01
The present work analyzes drought impacts on vegetation over the entire Mediterranean basin, with the purpose of determining the vegetation communities, regions and seasons in which vegetation is driven by drought. Our approach is based on remote sensing data and a multi-scalar drought index. Correlation maps between fields of monthly Normalized Difference Vegetation Index (NDVI) and the Standardized Precipitation-Evapotranspiration Index (SPEI) at time scales of 1-24 months were computed for representative months of winter (February), spring (May), summer (August) and fall (November). Results for the period 1982-2006 show large areas highly controlled by drought, albeit with strong spatial and seasonal differences: drought influence is greatest in August and least in February. The highest correlation values are observed in February at the 3-month time scale and in May at the 6- and 12-month scales. The stronger control of drought on vegetation in February and May occurs mainly over the drier vegetation communities (Mediterranean Dry and Desertic) at shorter time scales (3 to 9 months). In February the impact of drought on vegetation is weaker for Temperate Oceanic and Continental vegetation types and occurs at longer time scales (18-24 months). The dependence of the drought time-scale response on water balance, obtained as a simple difference between precipitation and reference evapotranspiration, varies with vegetation community: during February and November, low water balance values correspond to shorter time scales over dry vegetation communities, whereas high water balance values imply longer time scales over Temperate Oceanic and Continental areas. The strong control of drought on vegetation observed for Mediterranean Dry and Desertic vegetation types located over areas with highly negative water balance emphasizes the need for an early-warning drought system covering the entire Mediterranean basin.
We are confident that these results will provide a useful tool for drought management plans and play a relevant role in mitigating the impact of drought episodes.
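The multi-time-scale correlation analysis described above can be sketched in miniature: aggregate the climatic water balance over several window lengths and find the window that correlates best with a vegetation index. The SPEI computation is reduced here to a plain running sum (the real index also fits and standardizes a distribution), and all names are illustrative:

```python
import math
import random

def corr(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def multi_scale_sum(balance, scale):
    """Crude multi-scalar proxy: running sum of the monthly climatic
    water balance over `scale` months."""
    return [sum(balance[i - scale + 1:i + 1])
            for i in range(scale - 1, len(balance))]

def best_time_scale(ndvi, balance, scales):
    """Time scale whose aggregated water balance correlates best with NDVI."""
    best, best_r = None, -2.0
    for s in scales:
        r = corr(ndvi[s - 1:], multi_scale_sum(balance, s))
        if r > best_r:
            best, best_r = s, r
    return best, best_r
```

Mapping the best-correlating time scale pixel by pixel is essentially how the seasonal correlation maps in the study are built.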
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.
2015-07-01
This paper considers a collection of networked nonlinear dynamical systems, and addresses the synthesis of feedback controllers that seek optimal operating points corresponding to the solution of pertinent network-wide optimization problems. Particular emphasis is placed on the solution of semidefinite programs (SDPs). The design of the feedback controller is grounded on a dual ε-subgradient approach, with the dual iterates utilized to dynamically update the dynamical-system reference signals. Global convergence is guaranteed for diminishing stepsize rules, even when the reference inputs are updated at a faster rate than the dynamical-system settling time. The application of the proposed framework to the control of power-electronic inverters in AC distribution systems is discussed. The objective is to bridge the time-scale separation between real-time inverter control and network-wide optimization. Optimization objectives assume the form of SDP relaxations of prototypical AC optimal power flow problems.
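The controller structure described, ascend on the dual variables and let the dual iterate set the reference handed to the physical system, can be sketched on a toy scalar problem. The problem, stepsize, and projection below are illustrative stand-ins, not the paper's SDP formulation:

```python
def dual_subgradient(alpha=0.1, iters=500):
    """Toy dual ascent: minimize (x - 3)^2 subject to x <= 1.
    The dual iterate lam determines the 'reference' x fed to the system;
    lam is updated by projected subgradient ascent on the constraint."""
    lam = 0.0
    x = 3.0
    for _ in range(iters):
        x = 3.0 - lam / 2.0                       # Lagrangian minimizer
        lam = max(0.0, lam + alpha * (x - 1.0))   # projected ascent step
    return x, lam
```

For this problem the iteration converges to the constrained optimum x = 1 with multiplier λ = 4; in the paper the same ascent runs concurrently with the plant dynamics rather than with an exact inner minimization.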
Dainer-Best, Justin; Lee, Hae Yeon; Shumake, Jason D; Yeager, David S; Beevers, Christopher G
2018-06-07
Although the self-referent encoding task (SRET) is commonly used to measure self-referent cognition in depression, many different SRET metrics can be obtained. The current study used best subsets regression with cross-validation and independent test samples to identify the SRET metrics most reliably associated with depression symptoms in three large samples: a college student sample (n = 572), a sample of adults from Amazon Mechanical Turk (n = 293), and an adolescent sample from a school field study (n = 408). Across all 3 samples, SRET metrics associated most strongly with depression severity included number of words endorsed as self-descriptive and rate of accumulation of information required to decide whether adjectives were self-descriptive (i.e., drift rate). These metrics had strong intratask and split-half reliability and high test-retest reliability across a 1-week period. Recall of SRET stimuli and traditional reaction time (RT) metrics were not robustly associated with depression severity.
Dainer-Best, Justin; Disner, Seth G; McGeary, John E; Hamilton, Bethany J; Beevers, Christopher G
2018-01-01
The current research examined whether carriers of the short 5-HTTLPR allele (in SLC6A4), who have been shown to selectively attend to negative information, exhibit a bias towards negative self-referent processing. The self-referent encoding task (SRET) was used to measure self-referential processing of positive and negative adjectives. Ratcliff's diffusion model isolated and extracted decision-making components from SRET responses and reaction times. Across the initial (N = 183) and replication (N = 137) studies, results indicated that short 5-HTTLPR allele carriers more easily categorized negative adjectives as self-referential (i.e., higher drift rate). Further, drift rate was associated with recall of negative self-referential stimuli. Findings across both studies provide further evidence that genetic variation may contribute to the etiology of negatively biased processing of self-referent information. Large scale studies examining the genetic contributions to negative self-referent processing may be warranted.
Long-term variability of T Tauri stars using WASP
NASA Astrophysics Data System (ADS)
Rigon, Laura; Scholz, Alexander; Anderson, David; West, Richard
2017-03-01
We present a reference study of the long-term optical variability of young stars using data from the WASP project. Our primary sample is a group of well-studied classical T Tauri stars (CTTSs), mostly in Taurus-Auriga. WASP light curves cover time-scales of up to 7 yr and typically contain 10 000-30 000 data points. We quantify the variability as a function of time-scale using the time-dependent standard deviation 'pooled sigma'. We find that the overwhelming majority of CTTSs have a low-level variability with σ < 0.3 mag dominated by time-scales of a few weeks, consistent with rotational modulation. Thus, for most young stars, monitoring over a month is sufficient to constrain the total amount of variability over time-scales of up to a decade. The fraction of stars with a strong optical variability (σ > 0.3 mag) is 21 per cent in our sample and 21 per cent in an unbiased control sample. An even smaller fraction (13 per cent in our sample, 6 per cent in the control) show evidence for an increase in variability amplitude as a function of time-scale from weeks to months or years. The presence of long-term variability correlates with the spectral slope at 3-5 μm, which is an indicator of inner disc geometry, and with the U-B band slope, which is an accretion diagnostic. This shows that the long-term variations in CTTSs are predominantly driven by processes in the inner disc and in the accretion zone. Four of the stars with long-term variations show periods of 20-60 d, significantly longer than the rotation periods and stable over months to years. One possible explanation is cyclic changes in the interaction between the disc and the stellar magnetic field.
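The 'pooled sigma' statistic used here is a time-scale-dependent standard deviation: partition the light curve into windows of a given length, take the variance within each, and pool them. A simple sketch assuming evenly sampled data (real WASP light curves are irregular, so the windowing would be done in time rather than by index):

```python
import statistics

def pooled_sigma(series, window):
    """RMS-pooled within-window standard deviation at one time-scale."""
    chunks = [series[i:i + window]
              for i in range(0, len(series) - window + 1, window)]
    variances = [statistics.pvariance(c) for c in chunks]
    return (sum(variances) / len(variances)) ** 0.5
```

Plotting pooled_sigma against window length shows whether variability keeps growing toward longer time-scales, which is the diagnostic applied to the CTTS sample.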
Vela X-1 pulse timing. II - Variations in pulse frequency
NASA Technical Reports Server (NTRS)
Deeter, J. E.; Boynton, P. E.; Lamb, F. K.; Zylstra, G.
1989-01-01
The pulsed X-ray emission of Vela X-1 during May 1978 and December-January 1978-1979 is investigated analytically on the basis of published satellite observations. The data are compiled in tables and graphs and discussed in detail, with reference to data for the entire 1975-1982 period. Variations in pulse frequency are identified on time scales from 2 to 2600 days; the lower nine octaves are characterized as white noise (or random walk in pulse frequency), while the longer-period variations are attributed to changes in neutron-star rotation rates.
Deriving video content type from HEVC bitstream semantics
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.
2014-05-01
As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models consider only metrics derived from the network; QoE models, however, also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full-reference, reduced-reference and no-reference models. Due to the need to have the original video available at the client for comparison, full-reference metrics are of limited practical value in adaptive real-time video applications. Reduced-reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC-encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and the prediction mode decision made by the encoder to estimate spatial and temporal characteristics, respectively.
Since the video content type of a sequence is determined by using high level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can be used in a timely manner to aid decision making in QoE oriented adaptive real time streaming.
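The split-depth/prediction-mode heuristic can be illustrated with a rough classical sketch. The function names, thresholds and the simple unweighted averages below are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical sketch of the HEVC-based heuristic: per-CTU quadtree split
# depths and per-PU prediction modes ("intra"/"inter") are assumed to have
# been parsed from the bitstream by some external parser.

def spatial_score(cu_depths):
    """Average CU split depth: deeper splits suggest more spatial detail."""
    return sum(cu_depths) / len(cu_depths)

def temporal_score(pu_modes):
    """Fraction of intra-coded PUs: more intra suggests more temporal change."""
    return sum(1 for m in pu_modes if m == "intra") / len(pu_modes)

def classify(cu_depths, pu_modes, s_thresh=1.5, t_thresh=0.3):
    """Map the two scores to one of four coarse content classes.
    The thresholds are made-up placeholders."""
    s, t = spatial_score(cu_depths), temporal_score(pu_modes)
    if s >= s_thresh and t >= t_thresh:
        return "high spatial, high temporal"
    if s >= s_thresh:
        return "high spatial, low temporal"
    if t >= t_thresh:
        return "low spatial, high temporal"
    return "low spatial, low temporal"
```

Because both inputs come from high-level syntax, such a classifier can run without full decoding, which is the point made above.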
Arias, Elisa Felicitas
2005-09-15
Measuring time is a continuous activity, an international and restless enterprise hidden in time laboratories spread all over the planet. The Bureau International des Poids et Mesures is charged with coordinating activities for international timekeeping, and it draws on the world's capacity to produce a remarkably stable and accurate reference time scale. Commercial atomic clocks beating the second in national laboratories can reach a stability of one part in 10^14 over a 5-day averaging time, compelling us to pursue the best-performing methods of remote clock comparison. The unit of the international time scale is the second of the International System of Units, realized with an uncertainty of the order of 10^-15 by caesium fountains. Physicists in a few time laboratories are striving to gain one order of magnitude in the uncertainty of the realization of the second, and more refined techniques of time and frequency transfer are in development to accompany this progress. Femtosecond comb technology will most probably contribute in the near future to enhancing the definition of the second with the incorporation of optical clocks. We explain the evolution of time measurement, the current state of the art and future challenges.
Tatari, K; Smets, B F; Albrechtsen, H-J
2013-10-15
A bench-scale assay was developed to obtain site-specific nitrification biokinetic information from biological rapid sand filters employed in groundwater treatment. The experimental set-up uses granular material subsampled from a full-scale filter, packed in a column, and operated with controlled and continuous hydraulic and ammonium loading. Flowrates and flow recirculation around the column are chosen to mimic full-scale hydrodynamic conditions and minimize axial gradients. A reference ammonium loading rate is calculated based on the average loading experienced in the active zone of the full-scale filter. Effluent concentrations of ammonium are analyzed when the bench-scale column is subject to reference loading, from which removal rates are calculated. Subsequently, removal rates above the reference loading are measured by imposing short-term loading variations. A critical loading rate corresponding to the maximum removal rate can be inferred. The assay was successfully applied to characterize biokinetic behavior of a test rapid sand filter; removal rates at reference loading matched full-scale observations, while a maximum removal capacity of 6.9 g NH4+-N/m3 packed sand/h could readily be determined at a loading of 7.5 g NH4+-N/m3 packed sand/h. This assay, with conditions reflecting full-scale observations and with the biological activity subject to minimal physical disturbance, provides a simple and fast, yet powerful, tool to gain insight into nitrification kinetics in rapid sand filters.
Laurent, Jeff; Joiner, Thomas E; Catanzaro, Salvatore J
2011-12-01
The Positive and Negative Affect Scale for Children (PANAS-C) and the Physiological Hyperarousal Scale for Children (PH-C) seem ideal measures for school mental health screenings, because they are theory based, psychometrically sound, and brief. This study provides descriptive information and preliminary cutoff scores in an effort to increase the practical utility of the measures. Scores on the PANAS-C Positive Affect (PA) and Negative Affect (NA) scales and the PH-C were compared for a general sample of schoolchildren (n = 226), a group of students referred for special education services (n = 83), and youths on an inpatient psychiatric unit (n = 37). Expected patterns of scores emerged for the general school and referred school samples, although only scores on the PH-C were statistically significantly different. Differences in scores between the general school and inpatient samples were significant for all 3 scales. Differences in scores between the referred school and inpatient samples were significant for the NA scale and the PH-C but not for the PA scale. In addition, we used traditional self-report measures to form groups of normal, anxious, depressed, and mixed anxious and depressed youths. Again, predicted general patterns of PA, NA and PH scores were supported, although statistical differences were not always evident. In particular, scores on the PH-C for the anxious and depressed groups were inconsistent with predictions. Possible reasons related to sample and scale issues are discussed. Finally, preliminary cutoff scores were proposed for the PANAS-C scales and the PH-C.
Strategies for Interactive Visualization of Large Scale Climate Simulations
NASA Astrophysics Data System (ADS)
Xie, J.; Chen, C.; Ma, K.; Parvis
2011-12-01
With the advances in computational methods and supercomputing technology, climate scientists are able to perform large-scale simulations at unprecedented resolutions. These simulations produce data that are time-varying, multivariate, and volumetric, and the data may contain thousands of time steps with each time step having billions of voxels and each voxel recording dozens of variables. Visualizing such time-varying 3D data to examine correlations between different variables thus becomes a daunting task. We have been developing strategies for interactive visualization and correlation analysis of multivariate data. The primary task is to find connections and correlations within the data. Given the many complex interactions among the Earth's oceans, atmosphere, land, ice and biogeochemistry, and the sheer size of observational and climate model data sets, interactive exploration helps identify which processes matter most for a particular climate phenomenon. We may consider time-varying data as a set of samples (e.g., voxels or blocks), each of which is associated with a vector of representative or collective values over time. We refer to such a vector as a temporal curve. Correlation analysis thus operates on temporal curves of data samples. A temporal curve can be treated as a two-dimensional function where the two dimensions are time and data value. It can also be treated as a point in a high-dimensional space. In this case, to facilitate effective analysis, it is often necessary to transform temporal curve data from the original space to a space of lower dimensionality. Clustering and segmentation of temporal curve data in the original or transformed space provides us with a way to categorize and visualize data of different patterns, which reveals connections or correlations among different variables or different spatial locations.
We have employed the power of the GPU to enable interactive correlation visualization for studying the variability and correlations of a single variable or a pair of variables. It is desirable to create a succinct volume classification that summarizes the connection among all correlation volumes with respect to various reference locations. Because each reference location corresponds to a voxel position, the number of correlation volumes equals the total number of voxels. A brute-force solution takes all correlation volumes as the input and classifies their corresponding voxels according to the distances between their correlation volumes. For large-scale time-varying multivariate data, calculating all these correlation volumes on the fly and analyzing the relationships among them is not feasible. We have developed a sampling-based approach to volume classification that reduces the cost of computing the correlation volumes. Users are able to employ their domain knowledge in selecting important samples. The result is a static view that captures the essence of the correlation relationships; i.e., for all voxels in the same cluster, the corresponding correlation volumes are similar. This sampling-based approach enables us to obtain an approximation of correlation relations in a cost-effective manner, thus leading to a scalable solution for investigating large-scale data sets. These techniques empower climate scientists to study large data from their simulations.
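A minimal sketch of the underlying idea: correlate each voxel's temporal curve against the curves at a few user-chosen sample locations and cluster voxels by their best match. The function names and the plain Pearson correlation on Python lists are simplifying assumptions, not the paper's GPU implementation:

```python
import math

def pearson(a, b):
    """Pearson correlation between two temporal curves of equal length."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def classify_voxels(curves, sample_ids):
    """Assign each voxel's temporal curve to the user-selected sample
    location it correlates with most strongly -- a cheap stand-in for
    comparing full correlation volumes for every voxel."""
    refs = [curves[i] for i in sample_ids]
    return [max(range(len(refs)), key=lambda k: pearson(c, refs[k]))
            for c in curves]
```

Voxels sharing a cluster label would then share similar correlation volumes, which is the summary view described above.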
VizieR Online Data Catalog: Variable stars in globular clusters (Figuera Jaimes+, 2016)
NASA Astrophysics Data System (ADS)
Figuera Jaimes, R.; Bramich, D. M.; Skottfelt, J.; Kains, N.; Jorgensen, U. G.; Horne, K.; Dominik, M.; Alsubai, K. A.; Bozza, V.; Calchi Novati, S.; Ciceri, S.; D'Ago, G.; Galianni, P.; Gu, S.-H.; Harpsoe, K. B. W.; Haugbolle, T.; Hinse, T. C.; Hundertmark, M.; Juncher, D.; Korhonen, H.; Mancini, L.; Popovas, A.; Rabus, M.; Rahvar, S.; Scarpetta, G.; Schmidt, R. W.; Snodgrass, C.; Southworth, J.; Starkey, D.; Street, R. A.; Surdej, J.; Wang, X.-B.; Wertz, O.
2016-02-01
Observations were taken during 2013 and 2014 as part of an ongoing program at the 1.54m Danish telescope at the ESO observatory at La Silla in Chile that was implemented from April to September each year. The table1.dat file contains the time-series I photometry for all the variables in the globular clusters studied in this work. We list standard and instrumental magnitudes and their uncertainties corresponding to the variable star identification, filter, and epoch of mid-exposure. For completeness, we also list the reference flux, difference flux, and photometric scale factor, along with the uncertainties on the reference and difference fluxes. (2 data files).
VizieR Online Data Catalog: Variable stars in NGC 6715 (Figuera Jaimes+, 2016)
NASA Astrophysics Data System (ADS)
Figuera Jaimes, R.; Bramich, D. M.; Kains, N.; Skottfelt, J.; Jorgensen, U. G.; Horne, K.; Dominik, M.; Alsubai, K. A.; Bozza, V.; Burgdorf, M. J.; Calchi Novati, S.; Ciceri, S.; D'Ago, G.; Evans, D. F.; Galianni, P.; Gu, S. H.; Harpsoe, K. B. W.; Haugbolle, T.; Hinse, T. C.; Hundertmark, M.; Juncher, D.; Kerins, E.; Korhonen, H.; Kuffmeier, M.; Mancini, L.; Peixinho, N.; Popovas, A.; Rabus, M.; Rahvar, S.; Scarpetta, G.; Schmidt, R. W.; Snodgrass, C.; Southworth, J.; Starkey, D.; Street, R. A.; Surdej, J.; Tronsgaard, R.; Unda-Sanzana, E.; von Essen, C.; Wang, X. B.; Wertz, O.
2016-06-01
Observations were taken during 2013, 2014, and 2015 as part of an ongoing program at the 1.54m Danish telescope at the ESO observatory at La Silla in Chile that was implemented from April to September each year. The table1.dat file contains the time-series I photometry for all the variables in NGC 6715 studied in this work. We list standard and instrumental magnitudes and their uncertainties corresponding to the variable star identification, filter, and epoch of mid-exposure. For completeness, we also list the reference flux, difference flux, and photometric scale factor, along with the uncertainties on the reference and difference fluxes. (3 data files).
A First Look at the Upcoming SISO Space Reference FOM
NASA Technical Reports Server (NTRS)
Mueller, Bjorn; Crues, Edwin Z.; Dexter, Dan; Garro, Alfredo; Skuratovskiy, Anton; Vankov, Alexander
2016-01-01
Spaceflight is difficult, dangerous and expensive; human spaceflight even more so. In order to mitigate some of the danger and expense, professionals in the space domain have relied, and continue to rely, on computer simulation. Simulation is used at every level including concept, design, analysis, construction, testing, training and ultimately flight. As space systems have grown more complex, new simulation technologies have been developed, adopted and applied. Distributed simulation is one of those technologies. Distributed simulation provides a base technology for segmenting these complex space systems into smaller, and usually simpler, component systems or subsystems. This segmentation also supports the separation of responsibilities between participating organizations. It is particularly useful for complex space systems like the International Space Station (ISS), which is composed of many elements from many nations along with visiting vehicles from many nations. This is likely to be the case for future human space exploration activities as well. Over the years, a number of distributed simulations have been built within the space domain. While many use the High Level Architecture (HLA) to provide the infrastructure for interoperability, HLA without a Federation Object Model (FOM) is insufficient by itself to ensure interoperability. As a result, the Simulation Interoperability Standards Organization (SISO) is developing a Space Reference FOM. The Space Reference FOM Product Development Group is composed of members from several countries. They contribute experiences from projects within NASA, ESA and other organizations and represent government, academia and industry.
The initial version of the Space Reference FOM focuses on time and space and will provide the following: (i) a flexible positioning system using reference frames for arbitrary bodies in space, (ii) naming conventions for well-known reference frames, (iii) definitions of common time scales, (iv) federation agreements for common types of time management with a focus on time-stepped simulation, and (v) support for physical entities, such as space vehicles and astronauts. The Space Reference FOM is expected to make collaboration politically, contractually and technically easier. It is also expected to make collaboration easier to manage and extend.
NASA Astrophysics Data System (ADS)
Hashimoto, S.; Fujita, T.; Nakayama, T.; Xu, K.
2007-12-01
There is an ongoing project in Japan to establish environmental scenarios for evaluating middle- to long-term environmental policy and technology options toward a low-carbon society. In this project, the time horizon of the scenarios is set at 2050, on the grounds that a large part of Japan's social infrastructure is likely to be renovated by that time, and cities are expected to play important roles in building a low-carbon society in Japan. This is because cities and local governments can implement various policies and programs, such as land-use planning and the promotion of new technologies with low GHG emissions, whose effects are non-uniform and which take local socio-economic conditions into account, while higher levels of government, national or prefectural, can impose environmental taxes on electricity and gas to curb ongoing GHG emissions uniformly across their jurisdictions. In order for local governments to devise and implement concrete administrative actions equipped with rational policies and technologies, referring to the environmental scenarios developed for the entire nation, we need to localize the national scenarios, in both spatial and temporal extent, so that they better reflect local socio-economic and institutional conditions. In localizing the national scenarios, the participation of stakeholders is essential because they play major roles in shaping future society. Stakeholder participation in the localization process brings both creative and realistic input on how the future unfolds at the city scale.
In this research, 1) we reviewed recent international and domestic scenario-development efforts to set a practical time horizon for a city-scale environmental scenario that would lead to concrete environmental policies and programs, 2) designed a participatory scenario development and localization process, drawing on the framework of the 'Story-and-Simulation' (SAS) approach proposed by Alcamo (2001), and 3) began implementing it in the city of Kawasaki, Kanagawa, Japan, in cooperation with municipal officials and stakeholders. The participatory process is intended to develop city-scale environmental scenarios toward a low-carbon society, referring to international and domestic environmental scenarios. Though the scenario development is still in progress, it has already yielded practical knowledge and experience on how to bridge scenarios developed for different temporal and spatial scales.
Polymer Chain Conformation and Dynamical Confinement in a Model One-Component Nanocomposite
NASA Astrophysics Data System (ADS)
Mark, C.; Holderer, O.; Allgaier, J.; Hübner, E.; Pyckhout-Hintzen, W.; Zamponi, M.; Radulescu, A.; Feoktystov, A.; Monkenbusch, M.; Jalarvo, N.; Richter, D.
2017-07-01
We report a neutron-scattering investigation on the structure and dynamics of a single-component nanocomposite based on SiO2 particles that were grafted with polyisoprene chains at the entanglement limit. By skillful labeling, we access both the monomer density in the corona and the conformation of the grafted chains. While the corona profile follows an r^-1 power law, the conformation of a grafted chain is identical to that of a chain in a reference melt, implying a high mutual penetration of the coronas from different particles. The brush crowding leads to topological confinement of the chain dynamics: (i) at local scales, the segmental dynamics is unchanged compared to the reference melt, while (ii) at the scale of the chain, the dynamics appears to be slowed down; (iii) a mode analysis in terms of end-fixed Rouse chains traces the slower dynamics to topological confinement within the cone spanned by the adjacent grafts; (iv) adding 50% matrix chains partially lifts the topological confinement sensed by the grafted chain, and the apparent chain motion is accelerated. We observe a crossover from pure Rouse motion at short times to topologically confined motion beyond the time when the segmental mean squared displacement has reached the distance to the next graft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Churchfield, M. J.; Michalakes, J.; Vanderwende, B.
Wind plant aerodynamics are directly affected by the microscale weather, which is directly influenced by the mesoscale weather. Microscale weather refers to processes that occur within the atmospheric boundary layer with the largest scales being a few hundred meters to a few kilometers depending on the atmospheric stability of the boundary layer. Mesoscale weather refers to large weather patterns, such as weather fronts, with the largest scales being hundreds of kilometers wide. Sometimes microscale simulations that capture mesoscale-driven variations (changes in wind speed and direction over time or across the spatial extent of a wind plant) are important in wind plant analysis. In this paper, we present our preliminary work in coupling a mesoscale weather model with a microscale atmospheric large-eddy simulation model. The coupling is one-way, beginning with the weather model and ending with a computational fluid dynamics solver, using the weather model in coarse large-eddy simulation mode as an intermediary. We simulate one hour of daytime moderately convective microscale development driven by the mesoscale data, which are applied as initial and boundary conditions to the microscale domain, at a site in Iowa. We analyze the time and distance necessary for the smallest resolvable microscales to develop.
A time series of urban extent in China using DMSP/OLS nighttime light data
Chen, Dongsheng; Chen, Le; Wang, Huan; Guan, Qingfeng
2018-01-01
Urban extent data play an important role in urban management and urban studies, such as monitoring the process of urbanization and changes in the spatial configuration of urban areas. Traditional methods of extracting urban-extent information are primarily based on manual investigations and classifications using remote sensing images; these methods are costly in labor and time and have low precision. This study proposes an improved, simplified and flexible method for extracting urban extents over multiple scales and constructing spatiotemporal models using DMSP/OLS nighttime light (NTL) data for practical situations. This method eliminates the regional temporal and spatial inconsistency of thresholding NTL in large-scale and multi-temporal scenes. Using this method, we extracted the urban extents and calculated the corresponding areas on the county, municipal and provincial scales in China from 2000 to 2012. Validation against reference data shows that the overall accuracy (OA), Kappa, and F1 score were 0.996, 0.793, and 0.782, respectively. We increased the spatial resolution of the urban extent to 500 m (approximately four times finer than the results of previous studies). Based on the urban extent dataset proposed above, we analyzed changes in urban extents over time and observed that urban sprawl has grown in all of the counties of China. We also identified three patterns of urban sprawl: Early Urban Growth, Constant Urban Growth and Recent Urban Growth. These trends of urban sprawl correspond to the western, eastern and central cities of China, respectively, in terms of their spatial distribution, socioeconomic characteristics and historical background. Additionally, the urban extents display the spatial configurations of urban areas intuitively.
The proposed urban extent dataset is available for download and can provide reference data and support for future studies of urbanization and urban planning. PMID:29795685
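The reported validation figures (OA, Kappa, F1) follow from a standard binary confusion matrix over urban/non-urban labels. The sketch below shows those textbook definitions; it is illustrative only, not the study's validation code:

```python
def accuracy_metrics(pred, truth):
    """Overall accuracy, Cohen's kappa and F1 score for binary
    urban (1) / non-urban (0) label sequences."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    n = tp + tn + fp + fn
    oa = (tp + tn) / n                                   # overall accuracy
    # expected chance agreement for Cohen's kappa
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (oa - pe) / (1 - pe)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return oa, kappa, f1
```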
Fuzzy Matching Based on Gray-scale Difference for Quantum Images
NASA Astrophysics Data System (ADS)
Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia
2018-05-01
Quantum image processing has recently emerged as an essential problem in practical tasks, e.g. real-time image matching. Previous studies have shown that quantum superposition and entanglement can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image that is most similar to the template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images by thresholding. If all of the obtained gray-scale differences are no greater than the threshold value, the fuzzy matching of the quantum images succeeds. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at a low cost and also enables exponentially significant speedup via quantum parallel computation.
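The gray-scale-difference test has a straightforward classical analogue: slide the template over the reference image and accept a position only if every pixel difference stays within the threshold. The sketch below is that classical analogue (hypothetical function name), not the quantum circuit itself, which gains its speedup from superposition:

```python
def fuzzy_match(reference, template, threshold):
    """Return all (row, col) positions in `reference` (a 2D list of
    gray levels) where every pixel of `template` differs by at most
    `threshold` -- the classical analogue of the thresholded
    gray-scale-difference test."""
    H, W = len(reference), len(reference[0])
    h, w = len(template), len(template[0])
    hits = []
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            if all(abs(reference[y + i][x + j] - template[i][j]) <= threshold
                   for i in range(h) for j in range(w)):
                hits.append((y, x))
    return hits
```

The classical version inspects every window in turn; the quantum scheme evaluates the differences over all positions in parallel.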
Deriving a geocentric reference frame for satellite positioning and navigation
NASA Technical Reports Server (NTRS)
Malla, R. P.; Wu, S.-C.
1988-01-01
With the advent of Earth-orbiting geodetic satellites, nongeocentric datums or reference frames have become things of the past. Accurate geocentric three-dimensional positioning is now possible and is of great importance for various geodetic and oceanographic applications. While relative positioning accuracy of a few centimeters has become a reality using very long baseline interferometry (VLBI), the uncertainty in the offset of the adopted coordinate system origin from the geocenter is still believed to be on the order of 1 meter. Satellite laser ranging (SLR), however, is capable of determining this offset to better than 10 cm, but this is possible only after years of measurements. Global Positioning System (GPS) measurements provide a powerful tool for an accurate determination of this origin offset. Two strategies are discussed. The first strategy utilizes the precise relative positions that were predetermined by VLBI to fix the frame orientation and the absolute scaling, while the offset from the geocenter is determined from GPS measurements. Three different cases are presented under this strategy. The reference frame thus adopted will be consistent with the VLBI coordinate system. The second strategy establishes a reference frame by holding only the longitude of one of the tracking sites fixed. The absolute scaling is determined by the adopted gravitational constant (GM) of the Earth; and the latitude is inferred from the time signature of the Earth rotation in the GPS measurements. The coordinate system thus defined will be a geocentric Earth-fixed coordinate system.
The visible ground surface as a reference frame for scaling binocular depth of a target in midair
Wu, Jun; Zhou, Liu; Shi, Pan; He, Zijiang J.; Ooi, Teng Leng
2014-01-01
The natural ground surface carries texture information that extends continuously from one’s feet to the horizon, providing a rich depth resource for accurately locating an object resting on it. Here, we showed that the ground surface’s role as a reference frame also aids in locating a target suspended in midair based on relative binocular disparity. Using a real-world setup in our experiments, we first found that a suspended target is more accurately localized when the ground surface is visible and the observer views the scene binocularly. In addition, the increased accuracy occurs only when the scene is viewed for 5 sec rather than 0.15 sec, suggesting that the binocular depth process takes time. Second, we found that manipulating the configurations of the texture-gradient and/or linear-perspective cues on the visible ground surface affects the perceived distance of the suspended target in midair. Third, we found that a suspended target is more accurately localized against a ground texture surface than a ceiling texture surface. This suggests that our visual system uses the ground surface as the preferred reference frame to scale the distance of a suspended target according to its relative binocular disparity. PMID:25384237
Linder, Suzanne K; Kamath, Geetanjali R; Pratt, Gregory F; Saraykar, Smita S; Volk, Robert J
2015-04-01
To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a health care decision-making instrument commonly used in clinical settings. We searched the literature using two methods: (1) keyword searching using variations of "Control Preferences Scale" and (2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, and Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Keyword searches in bibliographic databases yielded high average precision (90%) but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45-54%), but precision ranged from 35% to 75%, with Scopus being the most precise. Cited reference searches were more sensitive than keyword searches, making them a more comprehensive strategy for identifying all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time, and resources should dictate which combination of methods and databases is used.
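Precision and sensitivity as used here follow the standard retrieval definitions: precision is the share of retrieved records that are relevant, sensitivity (recall) the share of relevant records that were retrieved. A minimal sketch with a hypothetical helper name:

```python
def precision_sensitivity(retrieved, relevant):
    """Precision = |retrieved ∩ relevant| / |retrieved|;
    sensitivity (recall) = |retrieved ∩ relevant| / |relevant|."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    return len(hits) / len(retrieved), len(hits) / len(relevant)
```

For example, a keyword search that returns mostly relevant hits but misses many known CPS studies would score high on the first value and low on the second, matching the pattern reported above.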
De Kegel, A; Peersman, W; Onderbeke, K; Baetens, T; Dhooge, I; Van Waelvelde, H
2013-03-01
The Alberta Infant Motor Scale (AIMS) is a reliable and valid assessment tool for evaluating motor performance from birth to independent walking. This study aimed to determine whether the Canadian reference values for the AIMS from 1990-1992 are still useful for Flemish infants assessed in 2007-2010. Additionally, the association between motor performance and sleep and play positioning was examined. A total of 270 Flemish infants between 0 and 18 months, recruited through formal day-care services, were assessed with the AIMS by four trained physiotherapists. Information about sleep and play positioning was collected by means of a questionnaire. Flemish infants perform significantly lower on the AIMS compared with the reference values (P < 0.001). In particular, infants in the age groups of 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 and 15 months showed significantly lower scores. From the information collected by parental questionnaires, the lower motor scores appear to be related to sleep position and to the amount of play time spent prone, supine and in a sitting device. Infants who were often to frequently exposed to the prone position while awake showed significantly higher motor performance than infants who were exposed to prone less (<6 m: P = 0.002; >6 m: P = 0.013). Infants who were often to frequently placed in a sitting device in the first 6 months of life (P = 0.010), or in supine after 6 months (P = 0.001), performed significantly lower than those who were placed so less often. Flemish infants recruited through formal day-care services show significantly lower motor scores than the Canadian norm population. New reference values should be established for the AIMS for accurate identification of infants at risk. Prevention of sudden infant death syndrome by promoting the supine sleep position should go together with promoting tummy time when awake and avoiding too much time spent in sitting devices when awake.
NASA Technical Reports Server (NTRS)
Wilson, G. S.
1977-01-01
The paper describes interrelationships between synoptic-scale and convective-scale systems obtained by following individual air parcels as they traveled within the convective storm environment of AVE IV. (NASA's fourth Atmospheric Variability Experiment, AVE IV, was a 36-hour study in April 1975 of the atmospheric variability and structure in regions of convective storms.) A three-dimensional trajectory model was used to calculate parcel paths, and manually digitized radar was employed to locate convective activity of various intensities and to determine those trajectories that traversed the storm environment. Spatial and temporal interrelationships are demonstrated by reference to selected time periods of AVE IV which contain the development and movement of the squall line in which the Neosho tornado was created.
Jiménez-Aquino, J I; Romero-Bastida, M
2011-07-01
The detection of weak signals through nonlinear relaxation times is studied for a Brownian particle in an electromagnetic field during the dynamical relaxation of an unstable state, characterized by a two-dimensional bistable potential. The detection process depends on a dimensionless quantity referred to as the receiver output, calculated as a function of the nonlinear relaxation time, a characteristic time scale of our system. The latter characterizes the complete dynamical relaxation of the Brownian particle as it relaxes from the initial unstable state of the bistable potential to its corresponding steady state. The one-dimensional problem is also studied to complement the description.
[ETAP: A smoking scale for Primary Health Care].
González Romero, Pilar María; Cuevas Fernández, Francisco Javier; Marcelino Rodríguez, Itahisa; Rodríguez Pérez, María Del Cristo; Cabrera de León, Antonio; Aguirre-Jaime, Armando
2016-05-01
To obtain a scale of tobacco exposure to address smoking cessation. Follow-up of a cohort. Scale validation. Primary Care Research Unit, Tenerife. A total of 6729 participants from the "CDC de Canarias" cohort. A scale was constructed under the assumption that the time of exposure to tobacco is the key factor in expressing accumulated risk. Discriminant validity was tested on prevalent cases of acute myocardial infarction (AMI; n=171), and the best cut-off for preventive screening was obtained. Predictive validity was tested with incident cases of AMI (n=46), comparing the predictive power with markers (age, sex) and classic risk factors of AMI (hypertension, diabetes, dyslipidaemia), including the pack-years index (PYI). The scale obtained was the sum of three times the years that participants had smoked plus the years exposed to smoking at home and at work. The frequency of AMI increased with the values of the scale, with 20 years of exposure being the most appropriate cut-off for preventive action, as it provided adequate predictive values for incident AMI. The scale surpassed the PYI in predicting AMI and competed with the known markers and risk factors. The proposed scale allows a valid measurement of exposure to smoking and provides a useful and simple approach that can help promote a willingness to change, as well as prevention. Its validity still needs to be demonstrated against other problems associated with smoking.
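As described, the scale reduces to a simple linear combination of exposure years. A sketch with hypothetical function names, using the 20-year cut-off reported above:

```python
def etap_score(years_smoked, years_exposed_home, years_exposed_work):
    """ETAP exposure as described in the abstract: three times the years
    of active smoking plus the years of passive exposure at home and at
    work. Function and parameter names are illustrative."""
    return 3 * years_smoked + years_exposed_home + years_exposed_work

def screen_positive(score, cutoff=20):
    """20 exposure-years is the cut-off reported for preventive action."""
    return score >= cutoff
```

For example, someone who smoked for 5 years and was passively exposed for 3 years at home and 2 at work scores 3*5 + 3 + 2 = 20, exactly at the screening threshold.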
Reference values for anxiety questionnaires: the Leiden Routine Outcome Monitoring Study.
Schulte-van Maaren, Yvonne W M; Giltay, Erik J; van Hemert, Albert M; Zitman, Frans G; de Waal, Margot W M; Carlier, Ingrid V E
2013-09-25
The monitoring of patients with an anxiety disorder can benefit from Routine Outcome Monitoring (ROM). As anxiety disorders differ in phenomenology, several anxiety questionnaires are included in ROM: Brief Scale for Anxiety (BSA), PADUA Inventory Revised (PI-R), Panic Appraisal Inventory (PAI), Penn State Worry Questionnaire (PSWQ), Worry Domains Questionnaire (WDQ), Social Interaction Anxiety Scale (SIAS), Social Phobia Scale (SPS), and the Impact of Event Scale-Revised (IES-R). We aimed to generate reference values for both 'healthy' and 'clinically anxious' populations for these anxiety questionnaires. We included 1295 subjects from the general population (ROM reference-group) and 5066 psychiatric outpatients diagnosed with a specific anxiety disorder (ROM patient-group). The MINI was used as diagnostic instrument in both the ROM reference group and the ROM patient group. To define limits for one-sided reference intervals (95th percentile; P95), the outermost 5% of observations were used. Receiver Operating Characteristics (ROC) analyses were used to yield alternative cut-off values for the anxiety questionnaires. For the ROM reference-group the mean age was 40.3 years (SD=12.6), and for the ROM patient-group it was 36.5 years (SD=11.9). Females constituted 62.8% of the reference-group and 64.4% of the patient-group. P95 ROM reference group cut-off values for reference versus clinically anxious populations were 11 for the BSA, 43 for the PI-R, 37 for the PAI Anticipated Panic, 47 for the PAI Perceived Consequences, 65 for the PAI Perceived Self-efficacy, 66 for the PSWQ, 74 for the WDQ, 32 for the SIAS, 19 for the SPS, and 36 for the IES-R. ROC analyses yielded slightly lower reference values. The discriminative power of all eight anxiety questionnaires was very high. Limitations include substantial non-response and limited generalizability. For eight anxiety questionnaires a comprehensive set of reference values was provided. 
Reference values were generally higher in women than in men, implying the use of gender-specific cut-off values. Each instrument can be offered to every patient with MAS disorders to make responsible decisions about continuing, changing or terminating therapy. © 2013 Elsevier B.V. All rights reserved.
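As a concrete illustration (not from the paper) of the two cut-off constructions the study compares: a one-sided P95 reference limit computed from the healthy group, and a ROC-derived threshold. The Youden-index choice of the ROC optimum, and the function names, are my assumptions; the paper does not state which ROC criterion it used.

```python
import numpy as np

def p95_cutoff(reference_scores):
    # one-sided reference limit: the 95th percentile of the healthy group,
    # so the outermost 5% of healthy observations fall above the limit
    return np.percentile(reference_scores, 95)

def youden_cutoff(scores, is_patient):
    """Scan candidate thresholds over the pooled sample and return the one
    maximizing Youden's J = sensitivity + specificity - 1 (an assumption;
    other ROC optimality criteria exist)."""
    scores = np.asarray(scores, dtype=float)
    is_patient = np.asarray(is_patient, dtype=bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):
        pred = scores >= t
        sens = (pred & is_patient).sum() / is_patient.sum()
        spec = (~pred & ~is_patient).sum() / (~is_patient).sum()
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t
```

With cleanly separated groups the two approaches agree closely; with overlapping score distributions the ROC threshold is typically lower, consistent with the study's observation.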
Marshall, Paul; Schroeder, Ryan; O'Brien, Jeffrey; Fischer, Rebecca; Ries, Adam; Blesi, Brita; Barker, Jessica
2010-10-01
This study examines the effectiveness of symptom validity measures to detect suspect effort in cognitive testing and invalid completion of ADHD behavior rating scales in 268 adults referred for ADHD assessment. Patients were diagnosed with ADHD based on cognitive testing, behavior rating scales, and clinical interview. Suspect effort was diagnosed by at least two of the following: failure on embedded and free-standing SVT measures, a score > 2 SD below the ADD population average on tests, failure on an ADHD behavior rating scale validity scale, or a major discrepancy between reported and observed ADHD behaviors. A total of 22% of patients engaged in symptom exaggeration. The Word Memory Test immediate recall and consistency scores (both 64%), TOVA omission errors (63%) and reaction time variability (54%), CAT-A infrequency scale (58%), and b Test (47%) had good sensitivity as well as at least 90% specificity. Clearly, such measures should be used to help avoid making false positive diagnoses of ADHD.
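The "at least two of four indicators" rule in the abstract is a simple decision procedure; a minimal sketch (indicator and function names are mine):

```python
def suspect_effort(svt_failure, very_low_scores, rating_validity_failure,
                   behavior_discrepancy):
    """Operationalizes the study's rule: suspect effort is flagged when at
    least two of the four indicators described in the abstract are present."""
    indicators = [svt_failure, very_low_scores,
                  rating_validity_failure, behavior_discrepancy]
    return sum(bool(i) for i in indicators) >= 2
```

A single positive indicator does not flag the patient, which is one way such multi-criterion rules guard against false positives.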
Hansen, Michael M; Limborg, Morten T; Ferchaud, Anne-Laure; Pujolar, José-Martin
2014-06-05
Habitat fragmentation has accelerated within the last century, but may have been ongoing over longer time scales. We analyzed the timing and genetic consequences of fragmentation in two isolated lake-dwelling brown trout populations. They are from the same river system (the Gudenå River, Denmark) and have been isolated from downstream anadromous trout by dams established ca. 600-800 years ago. For reference, we included ten other anadromous populations and two hatchery strains. Based on analysis of 44 microsatellite loci we investigated if the lake populations have been naturally genetically differentiated from anadromous trout for thousands of years, or have diverged recently due to the establishment of dams. Divergence time estimates were based on 1) Approximate Bayesian Computation and 2) a coalescent-based isolation-with-gene-flow model. Both methods suggested divergence times ca. 600-800 years bp, providing strong evidence for establishment of dams in the Medieval as the factor causing divergence. Bayesian cluster analysis showed influence of stocked trout in several reference populations, but not in the focal lake and anadromous populations. Estimates of effective population size using a linkage disequilibrium method ranged from 244 to > 1,000 in all but one anadromous population, but were lower (153 and 252) in the lake populations. We show that genetic divergence of lake-dwelling trout in two Danish lakes reflects establishment of water mills and impassable dams ca. 600-800 years ago rather than a natural genetic population structure. Although effective population sizes of the two lake populations are not critically low they may ultimately limit response to selection and thereby future adaptation. Our results demonstrate that populations may have been affected by anthropogenic disturbance over longer time scales than normally assumed. PMID:24903056
Kerlinger's Criterial Referents Theory Revisited.
ERIC Educational Resources Information Center
Zak, Itai; Birenbaum, Menucha
1980-01-01
Kerlinger's criterial referents theory of attitudes was tested cross-culturally by administering an education attitude referents summated-rating scale to 713 individuals in Israel. The response pattern to criterial and noncriterial referents was examined. Results indicated empirical cross-cultural validity of the theory, but questioned measuring…
Automated mapping of burned areas in semi-arid ecosystems using modis time-series imagery
NASA Astrophysics Data System (ADS)
Hardtke, L. A.; Blanco, P. D.; del Valle, H. F.; Metternicht, G. I.; Sione, W. F.
2015-04-01
Understanding spatial and temporal patterns of burned areas at regional scales provides a long-term perspective of fire processes and their effects on ecosystems and vegetation recovery patterns, and it is a key factor in designing prevention and post-fire restoration plans and strategies. Standard satellite burned area and active fire products derived from the 500-m MODIS and SPOT are available to this end. However, prior research cautions on the use of these global-scale products for regional and sub-regional applications. Consequently, we propose a novel algorithm for automated identification and mapping of burned areas at regional scale in semi-arid shrublands. The algorithm uses a set of Normalized Burned Ratio Index products derived from MODIS time series; using a two-phased cycle, it first detects potentially burned pixels while keeping a low commission error (false detection of burned areas), and subsequently labels them as seed patches. Region-growing image segmentation algorithms are applied to the seed patches in the second phase, to define the perimeter of fire-affected areas while decreasing omission errors (missing real burned areas). Independently derived Landsat ETM+ burned-area reference data were used for validation purposes. The correlation between the size of burnt areas detected by the global fire products and the independently derived Landsat reference data ranged from R2 = 0.01 - 0.28, while our algorithm showed a much stronger correlation (R2 = 0.96). Our findings confirm prior research calling for caution when using the global fire products locally or regionally.
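The two-phase seed-then-grow idea can be sketched in a few lines. This is an illustration under stated assumptions, not the authors' implementation: the thresholds are hypothetical, 4-connectivity is my choice, and the paper operates on NBR-derived patches rather than a raw array.

```python
import numpy as np
from collections import deque

def two_phase_burn_map(dnbr, seed_thresh, grow_thresh):
    """Illustrative two-phase detection: phase 1 flags high-confidence
    'seed' pixels with a strict threshold (keeping commission error low);
    phase 2 grows each seed into 4-connected neighbours passing a looser
    threshold, recovering the full burned perimeter (reducing omission)."""
    seeds = dnbr >= seed_thresh
    burned = seeds.copy()
    q = deque(zip(*np.nonzero(seeds)))
    rows, cols = dnbr.shape
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols
                    and not burned[rr, cc] and dnbr[rr, cc] >= grow_thresh):
                burned[rr, cc] = True
                q.append((rr, cc))
    return burned
```

The key property is that an isolated moderate-signal pixel with no nearby seed is never flagged, which is how the two thresholds trade commission against omission error.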
Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains
NASA Technical Reports Server (NTRS)
Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang
2013-01-01
Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental-scale domain centered on North America, using two methods: triple colocation (RMSE_TC) and error propagation through the soil moisture retrieval models (RMSE_EP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting an RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSE_TC and fRMSE_EP show similar spatial patterns of relatively high/low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple colocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSE_TC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
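The classical triple collocation estimator behind RMSE_TC is short enough to sketch. This is a textbook simplification, not the paper's pipeline: it assumes bias-free data with mutually independent errors, whereas operational use first rescales the three data sets to a common climatology.

```python
import numpy as np

def tc_error_std(x, y, z):
    """Classical triple collocation: for three collocated, bias-free
    estimates of the same signal with mutually independent errors,
    the error variance of x is E[(x - y)(x - z)]."""
    x, y, z = (np.asarray(a, dtype=float) for a in (x, y, z))
    var = np.mean((x - y) * (x - z))
    return np.sqrt(max(var, 0.0))   # guard against small negative estimates

def fRMSE(x, y, z):
    # error expressed as a fraction of the time-series standard deviation,
    # sidestepping the choice of a reference climatology
    return tc_error_std(x, y, z) / np.std(x)
```

On synthetic data with a known error standard deviation the estimator recovers it to within sampling noise, which is the sense in which TC needs no reference data set.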
Construction of Polarimetric Radar-Based Reference Rain Maps for the Iowa Flood Studies Campaign
NASA Technical Reports Server (NTRS)
Petersen, Walter; Wolff, David; Krajewski, Witek; Gatlin, Patrick
2015-01-01
The Global Precipitation Measurement (GPM) Mission Iowa Flood Studies (IFloodS) campaign was conducted in central and northeastern Iowa during the months of April-June, 2013. Specific science objectives for IFloodS included quantification of uncertainties in satellite and ground-based estimates of precipitation, 4-D characterization of precipitation physical processes and associated parameters (e.g., size distributions, water contents, types, structure etc.), assessment of the impact of precipitation estimation uncertainty and physical processes on hydrologic predictive skill, and refinement of field observations and data analysis approaches as they pertain to future GPM integrated hydrologic validation and related field studies. In addition to field campaign archival of raw and processed satellite data (including precipitation products), key ground-based platforms such as the NASA NPOL S-band and D3R Ka/Ku-band dual-polarimetric radars, University of Iowa X-band dual-polarimetric radars, a large network of paired rain gauge platforms, and a large network of 2D Video and Parsivel disdrometers were deployed. In something of a canonical approach, the radar (NPOL in particular), gauge and disdrometer observational assets were deployed to create a consistent high-quality distributed (time and space sampling) radar-based ground "reference" rainfall dataset, with known uncertainties, that could be used for assessing the satellite-based precipitation products at a range of space/time scales. Subsequently, the impact of uncertainties in the satellite products could be evaluated relative to the ground-benchmark in coupled weather, land-surface and distributed hydrologic modeling frameworks as related to flood prediction. 
Relative to establishing the ground-based "benchmark", numerous avenues were pursued in the making and verification of IFloodS "reference" dual-polarimetric radar-based rain maps, and this study documents the process and results as they pertain specifically to efforts using the NPOL radar dataset. The initial portions of the "process" involved dual-polarimetric quality control procedures which employed standard phase and correlation-based approaches to removal of clutter and non-meteorological echo. Calculation of a scale-adaptive KDP was accomplished using the method of Wang and Chandrasekar (2009; J. Atmos. Oceanic Tech.). A dual-polarimetric blockage algorithm based on Lang et al. (2009; J. Atmos. Oceanic Tech.) was then implemented to correct radar reflectivity and differential reflectivity at low elevation angles. Next, hydrometeor identification algorithms were run to identify liquid and ice hydrometeors. After the quality control and data preparation steps were completed several different dual-polarimetric rain estimation algorithms were employed to estimate rainfall rates using rainfall scans collected approximately every two to three minutes throughout the campaign. These algorithms included a polarimetrically-tuned Z-R algorithm that adjusts for drop oscillations (via Bringi et al., 2004, J. Atmos. Oceanic Tech.), and several different hybrid polarimetric variable approaches, including one that made use of parameters tuned to IFloodS 2D Video Disdrometer measurements. Finally, a hybrid scan algorithm was designed to merge the rain rate estimates from multiple low level elevation angle scans (where blockages could not be appropriately corrected) in order to create individual low-level rain maps. Individual rain maps at each time step were subsequently accumulated over multiple time scales for comparison to gauge network data. 
The comparison results and overall error character depended strongly on rain event type, polarimetric estimator applied, and range from the radar. We will present the outcome of these comparisons and their impact on constructing composited "reference" rainfall maps at select time and space scales.
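None of the estimator coefficients are given in the abstract above; as an illustration of the two families it mentions (a tuned Z-R power law and a hybrid that switches to a Kdp-based relation in heavy rain), with common textbook coefficients standing in for the campaign-tuned ones:

```python
def rain_rate_zr(z_dbz, a=300.0, b=1.4):
    """Invert a Z-R power law Z = a * R**b, with Z in linear units
    (mm^6 m^-3) and R in mm/h. a=300, b=1.4 are the widely used
    convective coefficients, placeholders for the paper's tuned values."""
    z_linear = 10.0 ** (z_dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

def rain_rate_hybrid(z_dbz, kdp, kdp_min=0.3):
    # hybrid estimator sketch: use R(Kdp) in heavier rain, where specific
    # differential phase is more robust to calibration error and partial
    # beam blockage; the coefficients are typical S-band values, not the
    # disdrometer-tuned ones used in the study
    if kdp is not None and kdp > kdp_min:
        return 44.0 * kdp ** 0.822
    return rain_rate_zr(z_dbz)
```

The switching threshold (here 0.3 deg/km) is hypothetical; operational hybrid algorithms also gate on hydrometeor type, as the quality-control steps described above make possible.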
Stability and accuracy of International Atomic Time TAI.
NASA Astrophysics Data System (ADS)
Thomas, C.
Since the end of 1992, the quality of the timing data received at the BIPM has rapidly evolved due to the extensive replacement of older designs of commercial Cs clocks. Consequently, the stability of the reference time scales has improved significantly. This was tested by running modified algorithms over the real clock data collected at the BIPM. Results of different studies are shown here; in particular, the implementation of an upper relative contribution, chosen equal to 1.37% for any contributing clock, leads to σy(τ = 40 d) = 1.8×10^-15. The accuracy of TAI is estimated by the difference between the duration of the TAI scale interval and the SI second as produced on the rotating geoid by primary frequency standards. In this paper, TAI accuracy is evaluated from six primary frequency standards, LPTF-FO1, PTB CS1, PTB CS2, PTB CS3, NIST-7 and SU MCsR 102, all corrected in a consistent manner for the gravitational shift and the black-body radiation shift. This led to a mean departure of the TAI scale interval of 1.8×10^-14 s over 1995, known with a relative uncertainty of 0.5×10^-14 (1σ).
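The capped relative weight mentioned above is the interesting algorithmic detail: no single clock may dominate the ensemble. The BIPM's actual algorithm (ALGOS) also predicts clock frequencies and iterates weights over time; this sketch only illustrates the weight cap, with inverse-variance weighting as an assumed base rule.

```python
def capped_weights(variances, cap=0.0137):
    """Relative clock weights proportional to 1/variance, with no clock
    allowed to exceed the cap (1.37% in the study). Excess weight from
    capped clocks is redistributed over the remaining clocks; feasible
    only when len(variances) >= 1/cap."""
    n = len(variances)
    inv = [1.0 / v for v in variances]
    capped = set()
    w = [0.0] * n
    for _ in range(n):
        free = [i for i in range(n) if i not in capped]
        mass = 1.0 - cap * len(capped)          # weight left for uncapped clocks
        s = sum(inv[i] for i in free)
        for i in free:
            w[i] = mass * inv[i] / s
        newly = [i for i in free if w[i] > cap]
        if not newly:
            break
        for i in newly:                          # clamp and redistribute again
            capped.add(i)
            w[i] = cap
    return w
```

With the cap in place, an exceptionally good clock improves the ensemble's stability without the time scale becoming hostage to that one device.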
Role of optometry school in single day large scale school vision testing
Anuradha, N; Ramani, Krishnakumar
2015-01-01
Background: School vision testing aims at identification and management of refractive errors. Large-scale school vision testing using conventional methods is time-consuming and demands a lot of chair time from the eye care professionals. A new strategy involving a school of optometry in single day large scale school vision testing is discussed. Aim: The aim was to describe a new approach of performing vision testing of school children on a large scale in a single day. Materials and Methods: A single day vision testing strategy was implemented wherein 123 members (20 teams comprising optometry students and headed by optometrists) conducted vision testing for children in 51 schools. School vision testing included basic vision screening, refraction, frame measurements, frame choice and referrals for other ocular problems. Results: A total of 12448 children were screened, among whom 420 (3.37%) were identified to have refractive errors. Of these, 28 (1.26%) children belonged to the primary, 163 (9.80%) to the middle, 129 (4.67%) to the secondary and 100 (1.73%) to the higher secondary levels of education. 265 (2.12%) children were referred for further evaluation. Conclusion: Single day large scale school vision testing can be adopted by schools of optometry to reach a higher number of children within a short span. PMID:25709271
Analyzing large scale genomic data on the cloud with Sparkhit
Huang, Liren; Krüger, Jan
2018-01-01
Motivation: The increasing amount of next-generation sequencing data poses a fundamental challenge on large scale genomic analytics. Existing tools use different distributed computational platforms to scale-out bioinformatics workloads. However, the scalability of these tools is not efficient. Moreover, they have heavy run time overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results: Sparkhit integrates a variety of analytical methods. It is implemented in the Spark extended MapReduce model. It runs 92–157 times faster than MetaSpark on metagenomic fragment recruitment and 18–32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, which includes the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 h, presenting an approach to easily associate large amounts of public datasets with reference data. Availability and implementation: Sparkhit is freely available at: https://rhinempi.github.io/sparkhit/. Contact: asczyrba@cebitec.uni-bielefeld.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29253074
NASA Astrophysics Data System (ADS)
Egbers, Christoph; Futterer, Birgit; Zaussinger, Florian; Harlander, Uwe
2014-05-01
Baroclinic waves are responsible for the transport of heat and momentum in the oceans, in the Earth's atmosphere as well as in other planetary atmospheres. The talk will give an overview on possibilities to simulate such large scale as well as co-existing small scale structures with the help of well defined laboratory experiments like the baroclinic wave tank (annulus experiment). The analogy between the Earth's atmosphere and the rotating cylindrical annulus experiment, driven only by rotation and differential heating between polar and equatorial regions, is obvious. Single vortices separate from the Gulf Stream from time to time. The same dynamics and the co-existence of small and large scale structures and their separation can also be observed in laboratory experiments such as the rotating cylindrical annulus experiment. This experiment represents the mid-latitude dynamics quite well and serves as a central reference experiment in the Germany-wide DFG priority research programme ("METSTRÖM", SPP 1276), acting as a benchmark for many different numerical methods. On the other hand, such laboratory experiments in cylindrical geometry are limited by the fact that the surface and the real interaction between polar and equatorial regions and their different dynamics cannot really be studied. Therefore, I demonstrate how to use the very successful Geoflow I and Geoflow II space experiment hardware on the ISS, with future modifications, for simulations of small and large scale planetary atmospheric motion in spherical geometry with differential heating between inner and outer spheres as well as between the polar and equatorial regions. References: Harlander, U., Wenzel, J., Wang, Y., Alexandrov, K. & Egbers, Ch., 2012, Simultaneous PIV- and thermography measurements of partially blocked flow in a heated rotating annulus, Exp. in Fluids, 52 (4), 1077-1087 Futterer, B., Krebs, A., Plesa, A.-C., Zaussinger, F., Hollerbach, R., Breuer, D. 
& Egbers, Ch., 2013, Sheet-like and plume-like thermal flow in a spherical convection experiment performed under microgravity, J. Fluid Mech., vol. 75, p 647-683
The Development, Test, and Evaluation of Three Pilot Performance Reference Scales.
ERIC Educational Resources Information Center
Horner, Walter R.; And Others
A set of pilot performance reference scales was developed based upon airborne Audio-Video Recording (AVR) of student performance in T-37 undergraduate Pilot Training. After selection of the training maneuvers to be studied, video tape recordings of the maneuvers were selected from video tape recordings already available from a previous research…
ERIC Educational Resources Information Center
Williams, Keith L.; Wahler, Robert G.
2010-01-01
Forty clinic-referred mothers completed questionnaires describing their children's problems, the mothers' parenting styles, and their everyday mindfulness. Psychometric analyses of the questionnaires showed mother reports to be internally consistent, except for one of the parenting style scales (i.e., permissive style). We dropped the scale and…
Generating Ground Reference Data for a Global Impervious Surface Survey
NASA Technical Reports Server (NTRS)
Tilton, James C.; deColstoun, Eric Brown; Wolfe, Robert E.; Tan, Bin; Huang, Chengquan
2012-01-01
We are engaged in a project to produce a 30m impervious cover data set of the entire Earth for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. The GLS data from Landsat provide an unprecedented opportunity to map global urbanization at this resolution for the first time, with a level of detail and accuracy not previously possible. Moreover, the spatial resolution of Landsat is absolutely essential to accurately resolve urban targets such as buildings, roads and parking lots. Finally, with GLS data available for the 1975, 1990, 2000, and 2005 time periods, and soon for the 2010 period, the land cover/use changes due to urbanization can now be quantified at this spatial scale as well. Our approach works across spatial scales, using very high spatial resolution commercial satellite data to both produce and evaluate continental scale products at the 30m spatial resolution of Landsat data. We are developing continental scale training data at roughly 1m resolution and aggregating these to 30m for training a regression tree algorithm. Because the quality of the input training data is critical, we have developed an interactive software tool, called HSegLearn, to facilitate the photo-interpretation of high resolution imagery data, such as Quickbird or Ikonos data, into an impervious versus non-impervious map. Previous work has shown that photo-interpretation of high resolution data at 1 meter resolution generates an accurate 30m resolution ground reference when coarsened to that resolution. Since this process can be very time consuming when using standard clustering classification algorithms, we are looking at image segmentation as a potential avenue to not only improve the training process but also provide a semi-automated approach for generating the ground reference data. HSegLearn takes as its input a hierarchical set of image segmentations produced by the HSeg image segmentation program [1, 2]. 
HSegLearn lets an analyst specify pixel locations as being either positive or negative examples, and displays a classification of the study area based on these examples. For our study, the positive examples are examples of impervious surfaces and negative examples are examples of non-impervious surfaces. HSegLearn searches the hierarchical segmentation from HSeg for the coarsest level of segmentation at which selected positive example locations do not conflict with negative example locations and labels the image accordingly. The negative example regions are always defined at the finest level of segmentation detail. The resulting classification map can then be further edited at a region object level using the previously developed HSegViewer tool [3]. After providing an overview of the HSeg image segmentation program, we provide a detailed description of the HSegLearn software tool. We then give examples of using HSegLearn to generate ground reference data and conclude with comments on the effectiveness of the HSegLearn tool.
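The level-selection rule described above can be sketched as follows. This is a simplification under stated assumptions: the hierarchy is represented as flat per-pixel labelings ordered fine to coarse, and the function name is mine; HSeg's actual region hierarchy and conflict handling are richer.

```python
def coarsest_consistent_level(hierarchy, positives, negatives):
    """hierarchy: list of labelings ordered fine -> coarse, each mapping
    pixel index -> region id. Return the coarsest level at which no region
    contains both a positive and a negative example pixel (a sketch of the
    HSegLearn selection rule); falls back to the finest level."""
    for level in range(len(hierarchy) - 1, -1, -1):   # coarse to fine
        labels = hierarchy[level]
        pos_regions = {labels[p] for p in positives}
        neg_regions = {labels[p] for p in negatives}
        if not (pos_regions & neg_regions):
            return level
    return 0
```

Searching coarse-to-fine means a handful of analyst clicks can label large homogeneous regions at once, while conflicts automatically push the labeling toward finer, safer segmentation levels.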
Additional Results of Glaze Icing Scaling in SLD Conditions
NASA Technical Reports Server (NTRS)
Tsao, Jen-Ching
2016-01-01
New guidance of acceptable means of compliance with the super-cooled large drops (SLD) conditions has been issued by the U.S. Department of Transportation's Federal Aviation Administration (FAA) in its Advisory Circular AC 25-28 in November 2014. The Part 25, Appendix O is developed to define a representative icing environment for super-cooled large drops. Super-cooled large drops, which include freezing drizzle and freezing rain conditions, are not included in Appendix C. This paper reports results from recent glaze icing scaling tests conducted in NASA Glenn Icing Research Tunnel (IRT) to evaluate how well the scaling methods recommended for Appendix C conditions might apply to SLD conditions. The models were straight NACA 0012 wing sections. The reference model had a chord of 72 inches and the scale model had a chord of 21 inches. Reference tests were run with airspeeds of 100 and 130.3 knots and with MVDs of 85 and 170 microns. Two scaling methods were considered. One was based on the modified Ruff method with scale velocity found by matching the Weber number We_L. The other was proposed and developed by Feo specifically for strong glaze icing conditions, in which the scale liquid water content and velocity were found by matching reference and scale values of the non-dimensional water-film thickness expression and the film Weber number We_f. All tests were conducted at 0 degrees angle of attack (AOA). Results will be presented for stagnation freezing fractions of 0.2 and 0.3. For non-dimensional reference and scale ice shape comparison, a new post-scanning ice shape digitization procedure was developed for extracting 2-dimensional ice shape profiles at any selected span-wise location from the high fidelity 3-dimensional scanned ice shapes obtained in the IRT.
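To illustrate what "scale velocity found by matching the Weber number" means in practice, here is a minimal sketch under an explicit assumption: that the matched Weber number takes the form We = ρ V² c / σ with the model chord c as the length scale and identical water properties in both tests. The modified Ruff method's actual We_L definition may differ, so treat the formula as illustrative only.

```python
import math

def scale_velocity_weber(v_ref, chord_ref, chord_scale):
    """If We = rho * V**2 * c / sigma is matched between reference and
    scale tests with the same water properties (an assumption), then
    We_ref = We_scale gives V_scale = V_ref * sqrt(c_ref / c_scale)."""
    return v_ref * math.sqrt(chord_ref / chord_scale)
```

Under this assumption, a 100-knot reference test on the 72-inch chord would map to roughly 185 knots on the 21-inch scale model, i.e. the smaller model must run faster to preserve the splashing/film physics the Weber number encodes.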
Karsten, Schober; Stephanie, Savino; Vedat, Yildiz
2017-11-10
The objective of the study was to evaluate the effects of body weight (BW), breed, and sex on two-dimensional (2D) echocardiographic measures, reference ranges, and prediction intervals using allometrically-scaled data of left atrial (LA) and left ventricular (LV) size and LV wall thickness in healthy cats. Study type was retrospective, observational, and clinical cohort. 150 healthy cats were enrolled and 2D echocardiograms analyzed. LA diameter, LV wall thickness, and LV dimension were quantified using three different imaging views. The effect of BW, breed, sex, age, and interaction (BW*sex) on echocardiographic variables was assessed using univariate and multivariate regression and linear mixed model analysis. Standard (using raw data) and allometrically scaled (Y = a × M^b) reference intervals and prediction intervals were determined. BW had a significant (P<0.05) independent effect on 2D variables whereas breed, sex, and age did not. There were clinically relevant differences between reference intervals using mean ± 2SD of raw data and mean and 95% prediction interval of allometrically-scaled variables, most prominent in larger (>6 kg) and smaller (<3 kg) cats. A clinically relevant difference between thickness of the interventricular septum (IVS) and dimension of the LV posterior wall (LVPW) was identified. In conclusion, allometric scaling and BW-based 95% prediction intervals should be preferred over conventional 2D echocardiographic reference intervals in cats, in particular in small and large cats. These results are particularly relevant to screening examinations for feline hypertrophic cardiomyopathy.
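Allometric scaling Y = a × M^b is usually fitted as a straight line in log-log space. The sketch below illustrates that idea with approximate prediction bounds built from the residual SD on the log scale; it is a simplification (the study's intervals use the full regression prediction formula, and the coefficients here are synthetic, not feline reference values).

```python
import numpy as np

def fit_allometric(bw, y):
    """Fit Y = a * M**b by linear regression in log-log space:
    log Y = log a + b * log M. Returns (a, b) and the residual SD
    on the log scale."""
    lx, ly = np.log(bw), np.log(y)
    b, la = np.polyfit(lx, ly, 1)
    resid = ly - (la + b * lx)
    s = resid.std(ddof=2)          # two parameters estimated
    return np.exp(la), b, s

def predict_interval(a, b, s, bw, z=1.96):
    # approximate 95% prediction bounds: symmetric on the log scale,
    # hence multiplicative on the original scale
    mid = a * bw ** b
    return mid * np.exp(-z * s), mid * np.exp(z * s)
```

Because the bounds are multiplicative, they widen in absolute terms for heavier cats, which is exactly why fixed mean ± 2SD intervals misbehave at the small and large ends of the BW range.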
Fairness in the coronary angiography queue.
Alter, D A; Basinski, A S; Cohen, E A; Naylor, C D
1999-10-05
Since waiting lists for coronary angiography are generally managed without explicit queuing criteria, patients may not receive priority on the basis of clinical acuity. The objective of this study was to examine clinical and nonclinical determinants of the length of time patients wait for coronary angiography. In this single-centre prospective cohort study conducted in the autumn of 1997, 357 consecutive patients were followed from initial triage until a coronary angiography was performed or an adverse cardiac event occurred. The referring physicians' hospital affiliation (physicians at Sunnybrook & Women's College Health Sciences Centre, those who practice at another centre but perform angiography at Sunnybrook and those with no previous association with Sunnybrook) was used to compare processes of care. A clinical urgency rating scale was used to assign a recommended maximum waiting time (RMWT) to each patient retrospectively, but this was not used in the queuing process. RMWTs and actual waiting times for patients in the 3 referral groups were compared; the influence of clinical and nonclinical variables on the actual length of time patients waited for coronary angiography was assessed; and possible predictors of adverse events were examined. Of 357 patients referred to Sunnybrook, 22 (6.2%) experienced adverse events while in the queue. Among those who remained, 308 (91.9%) were in need of coronary angiography; 201 (60.0%) of those patients received one within the RMWT. The length of time to angiography was influenced by clinical characteristics similar to those specified on the urgency rating scale, leading to a moderate agreement between actual waiting times and RMWTs (kappa = 0.53). However, physician affiliation was a highly significant (p < 0.001) and independent predictor of waiting time. Whereas 45.6% of the variation in waiting time was explained by all clinical factors combined, 9.3% of the variation was explained by physician affiliation alone. 
Informal queuing practices for coronary angiography do reflect clinical acuity, but they are also influenced by nonclinical factors, such as the nature of the physicians' association with the catheterization facility.
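The agreement statistic quoted above (kappa = 0.53) can be computed from a cross-tabulation of urgency-based recommended waiting-time bands against observed waiting-time bands. A minimal sketch with a hypothetical 3x3 table (the study's raw counts are not reproduced here):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa for a square confusion matrix of category counts."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    po = np.trace(confusion) / n                                        # observed agreement
    pe = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical counts: rows = urgency-based RMWT band, cols = actual waiting-time band
table = [[40, 10,  2],
         [ 8, 60, 12],
         [ 3, 15, 50]]
print(round(cohens_kappa(table), 2))
```

Values near 0.5-0.6 are conventionally read as moderate agreement, matching the study's characterization.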
Battlespace Awareness: Heterogeneous Sensor Maps of Large Scale, Complex Environments
2017-06-13
reference frames enable a system designer to describe the position of any sensor or platform at any point of time. This section introduces the...analysis to evaluate the quality of reconstructions created by our algorithms. CloudCompare is an open-source tool designed for this purpose [65]. In...structure of the data. The data term seeks to keep the proposed solution (u) similar to the originally observed values ( f ). A systems designer must
Time Scale for Adiabaticity Breakdown in Driven Many-Body Systems and Orthogonality Catastrophe
NASA Astrophysics Data System (ADS)
Lychkovskiy, Oleg; Gamayun, Oleksandr; Cheianov, Vadim
2017-11-01
The adiabatic theorem is a fundamental result in quantum mechanics, which states that a system can be kept arbitrarily close to the instantaneous ground state of its Hamiltonian if the latter varies in time slowly enough. The theorem has an impressive record of applications ranging from foundations of quantum field theory to computational molecular dynamics. In light of this success it is remarkable that a practicable quantitative understanding of what "slowly enough" means is limited to a modest set of systems mostly having a small Hilbert space. Here we show how this gap can be bridged for a broad natural class of physical systems, namely, many-body systems where a small move in the parameter space induces an orthogonality catastrophe. In this class, the conditions for adiabaticity are derived from the scaling properties of the parameter-dependent ground state without a reference to the excitation spectrum. This finding constitutes a major simplification of a complex problem, which otherwise requires solving nonautonomous time evolution in a large Hilbert space.
Estimation of Atmospheric Methane Surface Fluxes Using a Global 3-D Chemical Transport Model
NASA Astrophysics Data System (ADS)
Chen, Y.; Prinn, R.
2003-12-01
Accurate determination of atmospheric methane surface fluxes is an important and challenging problem in global biogeochemical cycles. We use inverse modeling to estimate annual, seasonal, and interannual CH4 fluxes between 1996 and 2001. The fluxes include 7 time-varying seasonal (3 wetland, rice, and 3 biomass burning) and 3 steady aseasonal (animals/waste, coal, and gas) global processes. To simulate atmospheric methane, we use the 3-D chemical transport model MATCH driven by NCEP reanalyzed observed winds at a resolution of T42 ( ˜2.8° x 2.8° ) in the horizontal and 28 levels (1000 - 3 mb) in the vertical. By combining existing datasets of individual processes, we construct a reference emissions field that represents our prior guess of the total CH4 surface flux. For the methane sink, we use a prescribed, annually-repeating OH field scaled to fit methyl chloroform observations. MATCH is used to produce both the reference run from the reference emissions, and the time-dependent sensitivities that relate individual emission processes to observations. The observational data include CH4 time-series from ˜15 high-frequency (in-situ) and ˜50 low-frequency (flask) observing sites. Most of the high-frequency data, at a time resolution of 40-60 minutes, have not previously been used in global scale inversions. In the inversion, the high-frequency data generally have greater weight than the weekly flask data because they better define the observational monthly means. The Kalman Filter is used as the optimal inversion technique to solve for emissions between 1996-2001. At each step in the inversion, new monthly observations are utilized and new emissions estimates are produced. The optimized emissions represent deviations from the reference emissions that lead to a better fit to the observations. The seasonal processes are optimized for each month, and contain the methane seasonality and interannual variability. 
The aseasonal processes, which are less variable, are solved as constant emissions over the entire time period. The Kalman Filter also produces emission uncertainties which quantify the ability of the observing network to constrain different processes. The sensitivity of the inversion to different observing sites and model sampling strategies is also tested. In general, the inversion reduces coal and gas emissions, and increases rice and biomass burning emissions relative to the reference case. Increases in both tropical and northern wetland emissions are found to have dominated the strong atmospheric methane increase in 1998. Northern wetlands are the best constrained processes, while tropical regions are poorly constrained and will require additional observations in the future for significant uncertainty reduction. The results of this study also suggest that interannually varying transport (such as NCEP) and high-frequency measurements should be used when solving for methane emissions at monthly time resolution. Better estimates of global OH fluctuations are also necessary to fully describe the interannual behavior of methane observations.
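The sequential estimation described above, in which each month's observations update the emission deviations from the reference case and shrink their uncertainties, can be sketched with a textbook Kalman filter measurement update. All numbers and the random sensitivity matrices below are illustrative stand-ins, not the MATCH model's actual sensitivities:

```python
import numpy as np

rng = np.random.default_rng(0)

# State: deviations of 3 aseasonal source strengths from the reference emissions.
x_true = np.array([-10.0, 5.0, 2.0])   # illustrative deviations (Tg/yr)
x_hat = np.zeros(3)                    # prior mean: the reference case itself
P = np.eye(3) * 100.0                  # loose prior covariance

for month in range(24):                # sequential monthly updates
    H = rng.normal(size=(5, 3))        # stand-in sensitivities of 5 sites to 3 sources
    y = H @ x_true + rng.normal(scale=1.0, size=5)   # synthetic observations
    R = np.eye(5)                      # observation-error covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x_hat = x_hat + K @ (y - H @ x_hat)
    P = (np.eye(3) - K @ H) @ P        # posterior covariance shrinks each step

print(np.round(x_hat, 1), round(float(np.trace(P)), 3))
```

The shrinking trace of P is the sketch's analogue of the emission uncertainties the filter reports for each process.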
NASA Astrophysics Data System (ADS)
Jackisch, Conrad; Angermann, Lisa; Allroggen, Niklas; Sprenger, Matthias; Blume, Theresa; Tronicke, Jens; Zehe, Erwin
2017-07-01
The study deals with the identification and characterization of rapid subsurface flow structures through pedo- and geo-physical measurements and irrigation experiments at the point, plot and hillslope scale. Our investigation of flow-relevant structures and hydrological responses refers to the general interplay of form and function, respectively. To obtain a holistic picture of the subsurface, a large set of different laboratory, exploratory and experimental methods was used at the different scales. For exploration these methods included drilled soil core profiles, in situ measurements of infiltration capacity and saturated hydraulic conductivity, and laboratory analyses of soil water retention and saturated hydraulic conductivity. The irrigation experiments at the plot scale were monitored through a combination of dye tracer, salt tracer, soil moisture dynamics, and 3-D time-lapse ground penetrating radar (GPR) methods. At the hillslope scale the subsurface was explored by a 3-D GPR survey. A natural storm event and an irrigation experiment were monitored by a dense network of soil moisture observations and a cascade of 2-D time-lapse GPR trenches. We show that the shift between the activated and non-activated states of the flow paths is needed to distinguish structures from overall heterogeneity. Pedo-physical analyses of point-scale samples are the basis for sub-scale structure inference. At the plot and hillslope scale 3-D and 2-D time-lapse GPR applications are successfully employed as non-invasive means to image subsurface response patterns and to identify flow-relevant paths. Tracer recovery and soil water responses from irrigation experiments deliver a consistent estimate of response velocities. The combined observation of form and function under active conditions provides the means to localize and characterize the structures (this study) and the hydrological processes (companion study Angermann et al., 2017, this issue).
Temporal coding of reward-guided choice in the posterior parietal cortex
Hawellek, David J.; Wong, Yan T.; Pesaran, Bijan
2016-01-01
Making a decision involves computations across distributed cortical and subcortical networks. How such distributed processing is performed remains unclear. We test how the encoding of choice in a key decision-making node, the posterior parietal cortex (PPC), depends on the temporal structure of the surrounding population activity. We recorded spiking and local field potential (LFP) activity in the PPC while two rhesus macaques performed a decision-making task. We quantified the mutual information that neurons carried about an upcoming choice and its dependence on LFP activity. The spiking of PPC neurons was correlated with LFP phases at three distinct time scales in the theta, beta, and gamma frequency bands. Importantly, activity at these time scales encoded upcoming decisions differently. Choice information contained in neural firing varied with the phase of beta and gamma activity. For gamma activity, maximum choice information occurred at the same phase as the maximum spike count. However, for beta activity, choice information and spike count were greatest at different phases. In contrast, theta activity did not modulate the encoding properties of PPC units directly but was correlated with beta and gamma activity through cross-frequency coupling. We propose that the relative timing of local spiking and choice information reveals temporal reference frames for computations in either local or large-scale decision networks. Differences between the timing of task information and activity patterns may be a general signature of distributed processing across large-scale networks. PMID:27821752
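The choice information carried by neural firing, as quantified above, is a mutual information estimated from the joint distribution of the upcoming binary choice and binned spike counts. A minimal plug-in estimator sketch with hypothetical count tables (not the recorded PPC data):

```python
import numpy as np

def mutual_information(joint):
    """Plug-in mutual information (bits) from a joint count table."""
    p = np.asarray(joint, dtype=float)
    p /= p.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal over choices
    py = p.sum(axis=0, keepdims=True)   # marginal over spike-count bins
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Hypothetical counts: rows = choice (left/right), cols = low/high spike-count bin
dependent   = [[30, 10], [10, 30]]      # firing carries choice information
independent = [[20, 20], [20, 20]]      # firing carries none
print(round(mutual_information(dependent), 3), mutual_information(independent))
```

In a phase-dependent analysis like the one described, such a table would be built separately for trials grouped by LFP phase bin, and the MI compared across phases.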
Relationship of D'' structure with the velocity variations near the inner-core boundary
NASA Astrophysics Data System (ADS)
Luo, Sheng-Nian; Ni, Sidao; Helmberger, Don
2002-06-01
Variations in regional differential times between PKiKP (i) and PKIKP (I) have been attributed to hemispheric P-velocity variations of about 1% in the upper 100 km of the inner core (referred to as HIC). The top of the inner core appears relatively fast beneath Asia, where D'' is also fast. An alternative interpretation could be lateral variation in P velocity at the lowermost outer core (HOC) producing the same differential times. To resolve this issue, we introduce the diffracted PKP phase near the B caustic (Bdiff) in the range of 139-145° epicentral distance, and the corresponding differential times between Bdiff and PKiKP and PKIKP as observed on broadband arrays. Due to the long-wavelength nature of Bdiff, we scaled the S-wave tomography model with k values (k ≡ dlnVs/dlnVp) to obtain large-scale P-wave velocity structure in the lower mantle as proposed by earlier studies. Waveform synthetics of Bdiff constructed with small k's predict complex waveforms not commonly observed, confirming the validity of a large scaling factor k. With the P velocity in the lower mantle constrained at large scale, the extra travel-time constraint imposed by Bdiff helps to resolve the HOC-HIC issue. Our preliminary results suggest k > 2 for the lowermost mantle and support the HIC hypothesis. An important implication is that there appears to be a relationship of D'' velocity structures with the structures near the inner-core boundary via core dynamics.
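The scaling step described above converts an S-wave tomography model into a P-wave model by dividing relative S-velocity perturbations by k ≡ dlnVs/dlnVp. A one-function sketch with hypothetical anomaly values (not the tomography model used in the study):

```python
import numpy as np

def scale_s_to_p(dlnVs_percent, k):
    """Map relative S-velocity perturbations to P perturbations via k = dlnVs/dlnVp."""
    return np.asarray(dlnVs_percent, dtype=float) / k

# Hypothetical lowermost-mantle S anomalies (percent) beneath a fast D'' region
dlnVs = np.array([1.5, -2.0, 0.8])
for k in (1.5, 2.5):
    print(k, np.round(scale_s_to_p(dlnVs, k), 2))
```

A larger k (> 2, as the study's results favor) maps a given S anomaly onto a weaker P anomaly.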
Waiting time distribution in public health care: empirics and theory.
Dimakou, Sofia; Dimakou, Ourania; Basso, Henrique S
2015-12-01
Excessive waiting times for elective surgery have been a long-standing concern in many national healthcare systems in the OECD. How do the hospital admission patterns that generate waiting lists affect different patients? What are the hospital characteristics that determine waiting times? By developing a model of healthcare provision and analysing empirically the entire waiting time distribution, we attempt to shed some light on those issues. We first build a theoretical model that describes the optimal waiting time distribution for capacity-constrained hospitals. Secondly, employing duration analysis, we obtain empirical representations of that distribution across hospitals in the UK from 1997 to 2005. We observe important differences in the 'scale' and the 'shape' of admission rates. Scale refers to how quickly patients are treated and shape represents trade-offs across duration-treatment profiles. By fitting the theoretical to the empirical distributions we estimate the main structural parameters of the model and are able to closely identify the main drivers of these empirical differences. We find that the level of resources allocated to elective surgery (budget and physical capacity), which determines how constrained the hospital is, explains differences in scale. Changes in the benefit and cost structures of healthcare provision, which relate, respectively, to the desire to prioritise patients by duration and the reduction in costs due to delayed treatment, determine the shape, affecting short- and long-duration patients differently. JEL Classification I11; I18; H51.
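Duration analysis of the kind used above typically starts from the empirical survival curve: the probability that a patient is still waiting beyond each duration, with patients who leave the list untreated handled as censored. A minimal Kaplan-Meier sketch with hypothetical waiting times (not the UK hospital data):

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Survival curve S(t): probability of still waiting beyond t days.
    observed=False marks censored spells (patient left the list untreated)."""
    durations = np.asarray(durations)
    observed = np.asarray(observed, dtype=bool)
    surv, s = [], 1.0
    for t in np.unique(durations[observed]):
        at_risk = (durations >= t).sum()                 # still queued just before t
        events = ((durations == t) & observed).sum()     # admitted at t
        s *= 1.0 - events / at_risk
        surv.append((int(t), s))
    return surv

# Hypothetical waiting times in days; False = censored
waits    = [5, 5, 12, 12, 20, 30, 30, 45]
admitted = [True, True, True, False, True, True, True, False]
for t, s in kaplan_meier(waits, admitted):
    print(t, round(s, 3))
```

The 'scale' differences discussed above would show up as this curve dropping faster or slower; 'shape' differences, as where along the duration axis the drops concentrate.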
NASA Astrophysics Data System (ADS)
Gulyaeva, Tamara; Poustovalova, Ljubov
The International Reference Ionosphere model extended to the plasmasphere, IRI-Plas, has recently been updated for assimilation of total electron content, TEC, derived from observations with the Global Navigation Satellite System, GNSS. The ionosonde products of the F2 layer peak density (NmF2) and height (hmF2) ensure a true electron density maximum at the F2 peak. The daily solar and magnetic indices used by the IRI-Plas code are compiled in data files including the 3-hour ap and kp magnetic indices from 1958 onward, the 12-monthly smoothed sunspot number R12 and Global Electron Content GEC12, the daily solar radio flux F10.7 and the daily sunspot number Ri. The 3-h ap-index is available in Real Time, RT, mode from GFZ, Potsdam, Germany; daily updates of F10.7 are provided by the Space Weather Canada service; and the daily estimated international sunspot number Ri is provided by the Solar Influences Data Analysis Center, SIDC, Belgium. For IRI-Plas-RT operation in the regime of daily update and prediction of the F2 layer peak parameters, the proxy kp and ap forecast for 3 to 24 hours ahead, based on data for the preceding 12 hours, is applied online at http://www.izmiran.ru/services/iweather/. The topside electron density profile of the IRI-Plas code is expressed with a complementary half-peak density anchor height above hmF2, which corresponds to the O+/H+ transition height. The present investigation is focused on reconstruction of the topside ionosphere scale height using vertical total electron content (TEC) data derived from Global Positioning System (GPS) observations and the ionosonde-derived F2 layer peak parameters from 25 observatories ingested into the IRI-Plas model. GPS-TEC and ionosonde measurements at solar maximum (September 2002 and October 2003) for quiet, positively disturbed, and negatively disturbed days of the month are used to obtain the topside scale height, Htop, representing the range of altitudes from hmF2 to the height at which NmF2 has decayed by a factor of e.
Mapping of the F2 layer peak parameters and TEC allows these parameters to be interpolated at coordinated grid sites from independent GPS receivers and ionosonde data. The exponential scale height Htop exceeds the scale height HT of the α-Chapman layer by about 3 times; the latter refers to the narrow altitude range from hmF2 to the height at which NmF2 decays by a factor of 1.2. While a typical quiet daytime value of the topside scale height is around 200 km, it can be enhanced by 2-3 times during the negative phase of an ionospheric storm, as captured by the IRI-Plas-RT model ingesting the F2 peak and TEC data. This study is supported by the joint grant of RFBR 13-02-91370-CT_a and TUBITAK 112E568.
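The topside scale height Htop defined above is the altitude range over which the electron density falls from NmF2 to NmF2/e. A minimal sketch that recovers it from a synthetic exponential profile (illustrative values, not GPS or ionosonde data):

```python
import numpy as np

def topside_scale_height(heights, density, hmF2, nmF2):
    """Altitude range above hmF2 over which Ne falls to NmF2/e
    (assumes a monotonically decreasing topside profile)."""
    heights = np.asarray(heights, dtype=float)
    density = np.asarray(density, dtype=float)
    target = nmF2 / np.e
    # np.interp needs increasing xp, so negate the decreasing density profile
    h_e = np.interp(-target, -density, heights)
    return h_e - hmF2

# Synthetic exponential topside with a known 200 km scale height
hmF2, nmF2, H = 300.0, 1e12, 200.0
heights = np.arange(300.0, 1500.0, 10.0)
density = nmF2 * np.exp(-(heights - hmF2) / H)
print(round(topside_scale_height(heights, density, hmF2, nmF2), 1))
```

On real profiles the same crossing-point search applies, with Htop then 2-3 times larger during negative storm phases, as noted above.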
End-effects-regime in full scale and lab scale rocket nozzles
NASA Astrophysics Data System (ADS)
Rojo, Raymundo; Tinney, Charles; Baars, Woutijn; Ruf, Joseph
2014-11-01
Modern rockets utilize a thrust-optimized parabolic-contour design for their nozzles for its high performance and reliability. However, the evolving internal flow structures within these high-area-ratio rocket nozzles during start-up generate powerful vibro-acoustic loads that act on the launch vehicle. Modern rockets must be designed to accommodate these heavy loads or else risk catastrophic failure. This study quantifies a particular moment referred to as the ``end-effects regime,'' the largest source of vibro-acoustic loading during start-up [Nave & Coffey, AIAA Paper 1973-1284]. Measurements from full-scale ignitions are compared with aerodynamically scaled representations in a fully anechoic chamber. Laboratory-scale data are then matched with both static and dynamic wall pressure measurements to capture the associated shock structures within the nozzle. The event generated during the ``end-effects regime'' was successfully reproduced in the lab-scale models, and was characterized in terms of its mean, variance and skewness, as well as the spectral properties of the signal obtained by way of time-frequency analyses.
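Characterizing a wall-pressure signal by its mean, variance and skewness, as done above, separates symmetric fluctuations from intermittent, burst-like loading. A minimal sketch on synthetic signals (not the measured nozzle data):

```python
import numpy as np

def moments(x):
    """Mean, variance, and skewness of a time series."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    var = x.var()
    skew = ((x - mu) ** 3).mean() / var ** 1.5
    return mu, var, skew

rng = np.random.default_rng(3)
symmetric = rng.standard_normal(100000)            # skewness near zero
bursty = np.exp(0.5 * rng.standard_normal(100000)) # positively skewed, burst-like
print(round(moments(symmetric)[2], 2), moments(bursty)[2] > 1.0)
```

Strong positive skewness in a pressure record is one signature of the intermittent loading events this kind of study tracks.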
NASA Astrophysics Data System (ADS)
Mihn, Byeong-Hee; Choi, Goeun; Lee, Yong Sam
2017-03-01
This study examines the scales of unique instruments used for astronomical observation during the Joseon dynasty. The Small Simplified Armillary Sphere (小簡儀, So-ganui) and the Sun-and-Stars Time-Determining Instrument (日星定時儀, Ilseong-jeongsi-ui) are miniaturized astronomical instruments, which can be characterized, respectively, as an observational instrument and a clock, and were influenced by the Simplified Armilla (簡儀, Jianyi) of the Yuan dynasty. These two instruments were equipped with several rings, and the rings of one were similar both in size and in scale to those of the other. Using the classic method of drawing the scale on the circumference of a ring, we analyze the scales of the Small Simplified Armillary Sphere and the Sun-and-Stars Time-Determining Instrument. Like the scale feature of the Simplified Armilla, we find that these two instruments selected the specific circumference that can be drawn with two kinds of scales. If the dual scale drawing on one circumference was applied to Joseon's astronomical instruments, we suggest that 3.14, not 3 as in China, was used as the ratio of a circle's circumference to its diameter when the ring's size was calculated at that time. From the size of the Hundred-interval disk of the extant Simplified Sundial in Korea, we conclude that the diameters of the three rings of the Sun-and-Stars Time-Determining Instrument described in the Sejong Sillok (世宗實錄, Veritable Records of King Sejong) refer to the middle circle of each ring, not the outer circle. By analyzing the degrees of the 28 lunar lodges (lunar mansions) on the equator written in Chiljeongsan-naepyeon (七政算內篇, the Inner Volume of Calculation of the Motions of the Seven Celestial Determinants), we also obtain the result that the scale of the Celestial-circumference-degree on the Small Simplified Armillary Sphere was made with a scale error of about 0.1 du in root mean square (RMS).
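The quoted scale error of about 0.1 du in root mean square is a standard RMS of graduation residuals. A minimal sketch with hypothetical deviations (the measured residuals are not reproduced here):

```python
import math

def rms(errors):
    """Root-mean-square of graduation errors (du, the traditional degree unit)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical deviations (du) of measured lodge boundaries from catalog values
errors = [0.08, -0.12, 0.05, -0.09, 0.11, -0.07, 0.10, -0.13]
print(round(rms(errors), 2))
```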
NASA Astrophysics Data System (ADS)
Lu, Qun; Yu, Li; Zhang, Dan; Zhang, Xuebo
2018-01-01
This paper presents a global adaptive controller that simultaneously solves tracking and regulation for wheeled mobile robots with unknown depth and uncalibrated camera-to-robot extrinsic parameters. The rotational angle and the scaled translation between the current camera frame and the reference camera frame, as well as those between the desired camera frame and the reference camera frame, can be calculated in real time by using pose estimation techniques. A transformed system is first obtained, for which an adaptive controller is then designed to accomplish both tracking and regulation tasks; the controller synthesis is based on Lyapunov's direct method. Finally, the effectiveness of the proposed method is illustrated by a simulation study.
Susceptibility of Goethite to Fe2+-Catalyzed Recrystallization over Time.
Joshi, Prachi; Fantle, Matthew S; Larese-Casanova, Philip; Gorski, Christopher A
2017-10-17
Recent work has shown that iron oxides, such as goethite and hematite, may recrystallize in the presence of aqueous Fe2+ under anoxic conditions. This process, referred to as Fe2+-catalyzed recrystallization, can influence water quality by causing the incorporation/release of environmental contaminants and biological nutrients. Accounting for the effects of Fe2+-catalyzed recrystallization on water quality requires knowing the time scale over which recrystallization occurs. Here, we tested the hypothesis that nanoparticulate goethite becomes less susceptible to Fe2+-catalyzed recrystallization over time. We set up two batches of reactors in which 55Fe2+ tracer was added at two different time points and tracked the 55Fe partitioning in the aqueous and goethite phases over 60 days. Less 55Fe uptake occurred between 30 and 60 days than between 0 and 30 days, suggesting goethite recrystallization slowed with time. Fitting the data with a box model indicated that 17% of the goethite recrystallized after 30 days of reaction, and an additional 2% recrystallized between 30 and 60 days. The decreasing susceptibility of goethite to recrystallization as it reacted with aqueous Fe2+ suggested that recrystallization is likely an important process only over short time scales.
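A box model of the kind fitted above can be reduced, in its simplest equilibration form, to the assumption that the 55Fe tracer mixes uniformly over the aqueous pool plus the portion of goethite that has recrystallized; the recrystallized fraction then follows directly from the measured aqueous tracer fraction. The pool sizes and tracer fractions below are hypothetical, chosen so the output mirrors the reported 17% and ~19% values:

```python
def recrystallized_fraction(f_aq, n_aq, n_goe):
    """Invert f_aq = n_aq / (n_aq + R) for R, the moles of goethite Fe that
    have recrystallized, assuming uniform tracer mixing over the aqueous pool
    and the recrystallized solid (a one-box equilibration sketch)."""
    R = n_aq * (1.0 - f_aq) / f_aq
    return R / n_goe

# Hypothetical pools: 1 mmol aqueous Fe(II), 10 mmol goethite Fe
n_aq, n_goe = 1.0, 10.0
for day, f_aq in [(30, 0.370), (60, 0.345)]:
    print(day, round(recrystallized_fraction(f_aq, n_aq, n_goe) * 100, 1))
```

A slowing of recrystallization shows up as the inferred fraction growing much less between 30 and 60 days than between 0 and 30 days, as reported.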
Jacobson, R.B.
2013-01-01
The physical habitat template is a fundamental influence on riverine ecosystem structure and function. Habitat dynamics refers to the variation in habitat through space and time as the result of varying discharge and varying geomorphology. Habitat dynamics can be assessed at spatial scales ranging from the grain (the smallest resolution at which an organism relates to its environment) to the extent (the broadest resolution inclusive of all space occupied during its life cycle). In addition to a potentially broad range of spatial scales, assessments of habitat dynamics may include dynamics of both occupied and nonoccupied habitat patches because of process interactions among patches. Temporal aspects of riverine habitat dynamics can be categorized into hydrodynamics and morphodynamics. Hydrodynamics refers to habitat variation that results from changes in discharge in the absence of significant change of channel morphology and at generally low sediment-transport rates. Hydrodynamic assessments are useful in cases of relatively high flow exceedance (percent of time a flow is equaled or exceeded) or high critical shear stress, conditions that are applicable in many studies of instream flows. Morphodynamics refers to habitat variation resulting from changes to substrate conditions or channel/floodplain morphology. Morphodynamic assessments are necessary when channel and floodplain boundary conditions have been significantly changed, generally by relatively rare flood events or in rivers with low critical shear stress. Morphodynamic habitat variation can be particularly important as disturbance mechanisms that mediate population growth or for providing conditions needed for reproduction, such as channel-migration events that erode cutbanks and provide new pointbar surfaces for germination of riparian trees. Understanding of habitat dynamics is increasing in importance as societal goals shift toward restoration of riverine ecosystems. 
Effective investment in restoration strategies requires that the role of physical habitat is correctly diagnosed and that restoration activities address true habitat limitations, including the role of dynamic habitats.
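Flow exceedance, defined above as the percent of time a flow is equaled or exceeded, is computed directly from a discharge record. A minimal sketch with a synthetic daily flow series (not data from any particular river):

```python
import numpy as np

def exceedance_percent(flows, q):
    """Percent of time discharge equals or exceeds q (a flow-duration statistic)."""
    flows = np.asarray(flows, dtype=float)
    return 100.0 * (flows >= q).mean()

# Synthetic lognormal-like daily discharges (m^3/s) for one year
rng = np.random.default_rng(4)
flows = np.exp(rng.normal(3.0, 1.0, size=365))
q50 = np.median(flows)
print(round(exceedance_percent(flows, q50), 1))   # near 50% by construction
```

Hydrodynamic assessments apply at high exceedance (frequent flows); morphodynamic change is driven by the low-exceedance tail of the same curve.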
Ip, Ryan H L; Li, W K; Leung, Kenneth M Y
2013-09-15
Large-scale environmental remediation projects applied to sea water always involve large amounts of capital investment. Rigorous effectiveness evaluations of such projects are, therefore, necessary and essential for policy review and future planning. This study aims at investigating the effectiveness of environmental remediation using three different Seemingly Unrelated Regression (SUR) time series models with intervention effects, including Model (1) assuming no correlation within and across variables, Model (2) assuming no correlation across variables but allowing correlations within a variable across different sites, and Model (3) allowing all possible correlations among variables (i.e., an unrestricted model). The results suggested that the unrestricted SUR model is the most reliable one, consistently having the smallest variations of the estimated model parameters. We discussed our results with reference to marine water quality management in Hong Kong while bringing managerial issues into consideration. Copyright © 2013 Elsevier Ltd. All rights reserved.
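A SUR model with intervention effects can be estimated by two-stage feasible GLS: equation-by-equation OLS first estimates the cross-equation error covariance, which then weights the stacked system. The sketch below uses synthetic series with a step intervention, not the Hong Kong water-quality data; the unrestricted Model (3) corresponds to using the full estimated covariance S, while the restricted models would zero out some of its entries:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
step = (np.arange(T) >= 250).astype(float)        # intervention indicator
trend = np.arange(T) / T

X1 = np.column_stack([np.ones(T), step])          # regressors, equation 1
X2 = np.column_stack([np.ones(T), step, trend])   # regressors, equation 2
cov = [[1.0, 0.8], [0.8, 1.0]]                    # correlated errors across equations
e = rng.multivariate_normal([0.0, 0.0], cov, size=T)
y1 = X1 @ np.array([5.0, -1.0]) + e[:, 0]
y2 = X2 @ np.array([3.0, -0.5, 2.0]) + e[:, 1]

# Stage 1: per-equation OLS residuals estimate the error covariance S
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
res = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
S = res.T @ res / T

# Stage 2: feasible GLS on the stacked system, Cov = S ⊗ I_T
Xs = np.zeros((2 * T, 5))
Xs[:T, :2] = X1
Xs[T:, 2:] = X2
ys = np.concatenate([y1, y2])
W = np.kron(np.linalg.inv(S), np.eye(T))          # inverse error covariance
b = np.linalg.solve(Xs.T @ W @ Xs, Xs.T @ W @ ys)
print(np.round(b, 2))   # [const1, step1, const2, step2, trend2]
```

The step coefficients are the estimated intervention effects; exploiting the cross-equation correlation is what distinguishes SUR from separate OLS fits.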
French Norms for the Harvard Group Scale of Hypnotic Susceptibility, Form A.
Anlló, Hernán; Becchio, Jean; Sackur, Jérôme
2017-01-01
The authors present French norms for the Harvard Group Scale of Hypnotic Susceptibility, Form A (HGSHS:A). They administered an adapted translation of Shor and Orne's original text (1962) to a group of 126 paid volunteers. Participants also rated their own responses following our translation of Kihlstrom's Scale of Involuntariness (2006). Item pass rates, score distributions, and reliability were calculated and compared with several other reference samples. Analyses show that the present French norms are congruous with the reference samples. Interestingly, the passing rate for some items drops significantly if "entirely voluntary" responses (as identified by Kihlstrom's scale) are scored as "fail." Copies of the translated scales and response booklet are available online.
Kanaka Maoli and Kamaʻāina Seascapes - Knowing Our Ocean Through Times of Change
NASA Astrophysics Data System (ADS)
Puniwai, N.
2017-12-01
In Hawaiʻi our oceans define us; we come from the ocean. Our oceans change, and we change with them, as we always have. By learning from people who are dependent on their environment, we learn how to observe and how to adapt. Through the lens of climate change, we interviewed respected ocean observers and surfers to learn about changes they have witnessed over time and the spatial scales and ocean conditions important to them. We looked at our ancient and historical texts to see what processes they recorded and the language they used to ascribe their observations, interactions and relationships to these places. Yet we also integrate what our mechanical data sensors have recorded over recent time. By expanding our time scales of reference, knowledge sources, and collaborators, these methods teach us how our ancestors adapted and how climate change may impact our subsistence, recreation, and interactions with the environment. Managing complex seascapes requires the integration of multiple ways of knowing, strengthening our understanding of seascapes and their resiliency in this changing environment.
Mertz, D F; Swisher, C C; Franzen, J L; Neuffer, F O; Lutz, H
2000-06-01
Sediments of the Eckfeld maar (Eifel, Germany) bear a well-preserved Eocene fauna and flora. Biostratigraphically, Eckfeld corresponds to the Middle Eocene mammal reference level MP (Mammals Paleogene) 13 of the ELMA (European Land Mammal Age) Geiseltalian. In the maar crater, basalt fragments were drilled, representing explosion crater eruption products. By 40Ar/39Ar dating of the basalt, for the first time a direct numerical calibration mark for an Eocene European mammal locality has been established. The Eckfeld basalt inverse isochron date of 44.3 +/- 0.4 Ma suggests an age for the Geiseltalian/Robiacian boundary at 44 Ma and, together with the 1995 time scale of Berggren et al., a time span ranging from 49 to 44 Ma for the Geiseltalian and from 44 to 37 Ma for the Robiacian, respectively. Additional 40Ar/39Ar dating on a genetically related basalt occurrence close to the maar confirms a period of volcanism of ca. 0.6 m.y. in the Eckfeld area, matching the oldest Eocene volcanic activity of the Hocheifel volcanic field.
Separation of components from a scale mixture of Gaussian white noises
NASA Astrophysics Data System (ADS)
Vamoş, Călin; Crăciun, Maria
2010-05-01
The time evolution of a physical quantity associated with a thermodynamic system whose equilibrium fluctuations are modulated in amplitude by a slowly varying phenomenon can be modeled as the product of a Gaussian white noise {Zt} and a stochastic process with strictly positive values {Vt} referred to as volatility. The probability density function (pdf) of the process Xt=VtZt is a scale mixture of Gaussian white noises expressed as a time average of Gaussian distributions weighted by the pdf of the volatility. The separation of the two components of {Xt} can be achieved by imposing the condition that the absolute values of the estimated white noise be uncorrelated. We apply this method to the time series of the returns of the daily S&P500 index, which has also been analyzed by means of the superstatistics method, which imposes the condition that the estimated white noise be Gaussian. The advantage of our method is that this financial time series is processed without partitioning or removal of the extreme events, and the estimated white noise becomes almost Gaussian only as a result of the uncorrelation condition.
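The decomposition Xt = Vt·Zt can be illustrated by estimating the volatility as a smoothed absolute value of the series and checking that the absolute values of the residual noise lose their serial correlation. This is only a demonstration on a synthetic series with an ad hoc smoothing window; the paper's estimator chooses the smoothing so that the uncorrelation condition holds exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# Synthetic X_t = V_t * Z_t: Gaussian white noise modulated by slow volatility
z = rng.standard_normal(n)
v = np.exp(np.sin(2 * np.pi * np.arange(n) / 2000))   # slowly varying, positive
x = v * z

def lag1_autocorr(a):
    a = a - a.mean()
    return float(np.dot(a[:-1], a[1:]) / np.dot(a, a))

# Estimate volatility as a centered moving average of |X_t|;
# sqrt(pi/2) corrects for E|Z| = sqrt(2/pi) of a standard Gaussian.
w = 201
v_hat = np.convolve(np.abs(x), np.ones(w) / w, mode="same") * np.sqrt(np.pi / 2)
z_hat = x / v_hat

print(round(lag1_autocorr(np.abs(x)), 3), round(lag1_autocorr(np.abs(z_hat)), 3))
```

The slow modulation makes |Xt| strongly autocorrelated, while |Ẑt| is nearly uncorrelated once the volatility is divided out.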
Barry, Bridgette A; Cooper, Ian B; De Riso, Antonio; Brewer, Scott H; Vu, Dung M; Dyer, R Brian
2006-05-09
Photosynthetic oxygen production by photosystem II (PSII) is responsible for the maintenance of aerobic life on earth. The production of oxygen occurs at the PSII oxygen-evolving complex (OEC), which contains a tetranuclear manganese (Mn) cluster. Photo-induced electron transfer events in the reaction center lead to the accumulation of oxidizing equivalents on the OEC. Four sequential photooxidation reactions are required for oxygen production. The oxidizing complex cycles among five oxidation states, called the S(n) states, where n refers to the number of oxidizing equivalents stored. Oxygen release occurs during the S(3)-to-S(0) transition from an unstable intermediate, known as the S(4) state. In this report, we present data providing evidence for the production of an intermediate during each S state transition. These protein-derived intermediates are produced on the microsecond to millisecond time scale and are detected by time-resolved vibrational spectroscopy on the microsecond time scale. Our results suggest that a protein-derived conformational change or proton transfer reaction precedes Mn redox reactions during the S(2)-to-S(3) and S(3)-to-S(0) transitions.
Weak wide-band signal detection method based on small-scale periodic state of Duffing oscillator
NASA Astrophysics Data System (ADS)
Hou, Jian; Yan, Xiao-peng; Li, Ping; Hao, Xin-hong
2018-03-01
The conventional Duffing oscillator weak signal detection method, which is based on a strong reference signal, has inherent deficiencies. To address these issues, the characteristics of the Duffing oscillatorʼs phase trajectory in a small-scale periodic state are analyzed by introducing the theory of stopping oscillation system. Based on this approach, a novel Duffing oscillator weak wide-band signal detection method is proposed. In this novel method, the reference signal is discarded, and the to-be-detected signal is directly used as a driving force. By calculating the cosine function of a phase space angle, a single Duffing oscillator can be used for weak wide-band signal detection instead of an array of uncoupled Duffing oscillators. Simulation results indicate that, compared with the conventional Duffing oscillator detection method, this approach performs better in frequency detection intervals, and reduces the signal-to-noise ratio detection threshold, while improving the real-time performance of the system. Project supported by the National Natural Science Foundation of China (Grant No. 61673066).
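The small-scale periodic state exploited above refers to a Holmes-type Duffing oscillator, x'' + δx' − x + x³ = γ cos(ωt), confined to one potential well for weak driving, in contrast to large-scale orbits spanning both wells under strong driving. A minimal RK4 integration sketch with standard illustrative parameters (δ = 0.5, ω = 1), not the paper's detection system:

```python
import numpy as np

def duffing_step(state, t, dt, delta, gamma, omega):
    """One RK4 step of x'' + delta*x' - x + x^3 = gamma*cos(omega*t)."""
    def f(s, tt):
        x, v = s
        return np.array([v, -delta * v + x - x**3 + gamma * np.cos(omega * tt)])
    k1 = f(state, t)
    k2 = f(state + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(state + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(state + dt * k3, t + dt)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def trajectory(gamma, delta=0.5, omega=1.0, dt=0.01, steps=30000):
    s, t, xs = np.array([1.0, 0.0]), 0.0, []
    for _ in range(steps):
        s = duffing_step(s, t, dt, delta, gamma, omega)
        t += dt
        xs.append(s[0])
    return np.array(xs)

# Weak drive: motion stays in the right-hand well (x > 0, small-scale periodic).
# Strong drive: large orbits cross between wells (x changes sign).
weak, strong = trajectory(0.1), trajectory(1.0)
print(weak[15000:].min() > 0, strong[15000:].min() < 0)
```

Detection schemes like the one above monitor when an added weak signal pushes the phase trajectory across such a qualitative boundary.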
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakharov, Leonid E.; Li, Xujing
This paper formulates the Tokamak Magneto-Hydrodynamics (TMHD), initially outlined by X. Li and L. E. Zakharov [Plasma Science and Technology 17(2), 97–104 (2015)] for proper simulations of macroscopic plasma dynamics. The simplest set of magneto-hydrodynamics equations, sufficient for disruption modeling and extendable to more refined physics, is explained in detail. First, the TMHD introduces to 3-D simulations the Reference Magnetic Coordinates (RMC), which are aligned with the magnetic field in the best possible way. The numerical implementation of RMC is adaptive grids. Being consistent with the high anisotropy of the tokamak plasma, RMC allow simulations at realistic, very high plasma electric conductivity. Second, the TMHD splits the equation of motion into an equilibrium equation and the plasma advancing equation. This resolves the 4-decade-old problem of Courant limitations of the time step in existing, plasma-inertia-driven numerical codes. The splitting allows disruption simulations on a relatively slow time scale in comparison with the fast time of ideal MHD instabilities. A new, efficient numerical scheme is proposed for TMHD.
Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.
2002-01-01
Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.
Mosmuller, David G M; Mennes, Lisette M; Prahl, Charlotte; Kramer, Gem J C; Disse, Melissa A; van Couwelaar, Gijs M; Niessen, Frank B; Griot, J P W Don
2017-09-01
This study describes the development of the Cleft Aesthetic Rating Scale, a simple and reliable photographic reference scale for the assessment of nasolabial appearance in complete unilateral cleft lip and palate patients. A blind retrospective analysis of photographs of cleft lip and palate patients was performed with this new rating scale. The study was conducted at the VU Medical Center Amsterdam and the Academic Center for Dentistry of Amsterdam, and included complete unilateral cleft lip and palate patients at the age of 6 years. Photographs that showed the highest interobserver agreement in earlier assessments were selected for the photographic reference scale. Rules were attached to the rating scale to provide a guideline for the assessment and to improve interobserver reliability. Cropped photographs revealing only the nasolabial area were assessed by six observers using this new Cleft Aesthetic Rating Scale in two different sessions. Photographs of 62 children (6 years of age; 44 boys and 18 girls) were assessed. The interobserver reliability for the nose and lip together, obtained with the intraclass correlation coefficient, was 0.62. To measure internal consistency, a Cronbach alpha of 0.91 was calculated. The estimated reliability for three observers, obtained with the Spearman-Brown formula, was 0.84. A new, easy-to-use, and reliable scoring system with a photographic reference scale is presented in this study.
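The Spearman-Brown step above is a standard one-line formula; a minimal sketch, using the reported single-rating ICC of 0.62 as input:

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Spearman-Brown prophecy formula: predicted reliability of a score
    averaged over k observers, given single-observer reliability r_single."""
    return k * r_single / (1 + (k - 1) * r_single)

# With the reported single-rating ICC of 0.62 and three observers the
# prediction is about 0.83, consistent with the reported 0.84.
predicted = spearman_brown(0.62, 3)
```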
Added Value of Selected Images Embedded Into Radiology Reports to Referring Clinicians
Iyer, Veena R.; Hahn, Peter F.; Blaszkowsky, Lawrence S.; Thayer, Sarah P.; Halpern, Elkan F.; Harisinghani, Mukesh G.
2011-01-01
Purpose The aim of this study was to evaluate the added utility of embedding images for findings described in radiology text reports to referring clinicians. Methods Thirty-five cases referred for abdominal CT scans in 2007 and 2008 were included. Referring physicians were asked to view text-only reports, followed by the same reports with pertinent images embedded. For each pair of reports, a questionnaire was administered. A 5-point, Likert-type scale was used to assess if the clinical query was satisfactorily answered by the text-only report. A “yes-or-no” question was used to assess whether the report with images answered the clinical query better; a positive answer to this question generated “yes-or-no” queries to examine whether the report with images helped in making a more confident decision on management, whether it reduced time spent in forming the plan, and whether it altered management. The questionnaire asked whether a radiologist would be contacted with queries on reading the text-only report and the report with images. Results In 32 of 35 cases, the text-only reports satisfactorily answered the clinical queries. In these 32 cases, the reports with attached images helped in making more confident management decisions and reduced time in planning management. Attached images altered management in 2 cases. Radiologists would have been consulted for clarifications in 21 and 10 cases on reading the text-only reports and the reports with embedded images, respectively. Conclusions Providing relevant images with reports saves time, increases physicians' confidence in deciding treatment plans, and can alter management. PMID:20193926
Global Gridded Crop Model Evaluation: Benchmarking, Skills, Deficiencies and Implications.
NASA Technical Reports Server (NTRS)
Muller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Folberth, Christian; Glotter, Michael; Hoek, Steven;
2017-01-01
Crop models are increasingly used to simulate crop yields at the global scale, but so far there is no general framework on how to assess model performance. Here we evaluate the simulation results of 14 global gridded crop modeling groups that have contributed historic crop yield simulations for maize, wheat, rice and soybean to the Global Gridded Crop Model Intercomparison (GGCMI) of the Agricultural Model Intercomparison and Improvement Project (AgMIP). Simulation results are compared to reference data at global, national and grid cell scales and we evaluate model performance with respect to time series correlation, spatial correlation and mean bias. We find that global gridded crop models (GGCMs) show mixed skill in reproducing time series correlations or spatial patterns at the different spatial scales. Generally, maize, wheat and soybean simulations of many GGCMs are capable of reproducing larger parts of observed temporal variability (time series correlation coefficients (r) of up to 0.888 for maize, 0.673 for wheat and 0.643 for soybean at the global scale) but rice yield variability cannot be well reproduced by most models. Yield variability can be well reproduced for most major producing countries by many GGCMs and for all countries by at least some. A comparison with gridded yield data and a statistical analysis of the effects of weather variability on yield variability shows that the ensemble of GGCMs can explain more of the yield variability than an ensemble of regression models for maize and soybean, but not for wheat and rice. We identify future research needs in global gridded crop modeling and for all individual crop modeling groups. In the absence of a purely observation-based benchmark for model evaluation, we propose that the best performing crop model per crop and region establishes the benchmark for all others, and modelers are encouraged to investigate how crop model performance can be increased. 
We make our evaluation system accessible to all crop modelers so that other modeling groups can also test their model performance against the reference data and the GGCMI benchmark.
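Two of the evaluation measures named above, time series correlation and mean bias, can be sketched for a single country or grid cell as follows; the yield numbers are hypothetical.

```python
import math

def pearson_r(a, b):
    """Time series correlation between simulated and reference yields."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def mean_bias(sim, ref):
    """Mean bias: average of (simulated - reference) over the series."""
    return sum(s - r for s, r in zip(sim, ref)) / len(sim)

ref = [4.1, 4.5, 3.9, 5.0, 4.7]  # hypothetical reference yields (t/ha)
sim = [4.3, 4.6, 4.1, 5.3, 4.9]  # hypothetical model yields (t/ha)
r = pearson_r(sim, ref)
bias = mean_bias(sim, ref)       # positive bias: model overestimates
```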
Weber, Ralph; Reimann, Gernot; Weimar, Christian; Winkler, Angela; Berger, Klaus; Nordmeyer, Hannes; Hadisurya, Jeffrie; Brassel, Friedhelm; Kitzrow, Martin; Krogias, Christos; Weber, Werner; Busch, Elmar W; Eyding, Jens
2016-03-01
After thrombectomy was shown to be effective in acute stroke patients with large vessel occlusion, the potential benefit of secondary referral for such an intervention needs to be validated. We aimed to compare consecutive stroke patients directly admitted and treated with thrombectomy at a neurointerventional centre with patients secondarily referred for such a procedure from hospitals with a stroke unit. Periprocedural times and mortality in 300 patients primarily treated in eight neurointerventional centres were compared with those in 343 patients referred from nine other hospitals in a prospective multicentre study of a German neurovascular network. Data on functional outcome at 3 months were available for 430 (76.4%) patients. In-hospital mortality (14.8% versus 11.7%, p = 0.26) and 3-month mortality (21.9% versus 24.1%, p = 0.53) did not differ significantly between the two patient groups, despite a significantly shorter symptom-to-groin-puncture time in directly admitted patients, a difference mainly caused by the longer interfacility transfer time. We found a nonsignificant trend toward better functional outcome at 3 months in directly admitted patients (modified Rankin Scale 0-2, 44.0% versus 35.7%, p = 0.08). Our results show that a drip-and-ship thrombectomy concept can be effectively organized in a metropolitan stroke network. Every effort should be made to speed up the emergency interfacility transfer to a neurointerventional centre for stroke patients eligible for thrombectomy after initial brain imaging.
Biogeochemical Response to Mesoscale Physical Forcing in the California Current System
NASA Technical Reports Server (NTRS)
Niiler, Pearn P.; Letelier, Ricardo; Moisan, John R.; Marra, John A. (Technical Monitor)
2001-01-01
In the first part of the project, we investigated the local response of coastal ocean ecosystems (changes in chlorophyll concentration and chlorophyll fluorescence quantum yield) to physical forcing by developing and deploying Autonomous Drifting Ocean Stations (ADOS) within several mesoscale features along the U.S. west coast. We also compared the temporal and spatial variability registered by sensors mounted on the drifters to that registered by sensors mounted on the satellites, in order to assess the scales of variability that are not resolved by the ocean color satellite. The second part of the project used the existing WOCE SVP Surface Lagrangian drifters to track individual water parcels through time. The individual drifter tracks were used to generate multivariate time series by interpolating/extracting the biological and physical data fields retrieved by remote sensors (ocean color, SST, wind speed and direction, wind stress curl, and sea level topography). The individual time series of the physical data (AVHRR, TOPEX, NCEP) were analyzed against the ocean color (SeaWiFS) time series to determine the time scale of biological response to the physical forcing. The results from this part of the research are being used to compare the decorrelation scales of chlorophyll in Lagrangian and Eulerian frameworks. The results from both parts of this research augmented the time series data needed to investigate the interactions between ocean mesoscale features, wind, and biogeochemical processes. Using the historical Lagrangian data sets, we have completed a comparison of the decorrelation scales in both the Eulerian and Lagrangian reference frames for the SeaWiFS data set. We are continuing to investigate how these results might be used in objective mapping efforts.
The global reference atmospheric model, mod 2 (with two scale perturbation model)
NASA Technical Reports Server (NTRS)
Justus, C. G.; Hargraves, W. R.
1976-01-01
The Global Reference Atmospheric Model was improved to produce more realistic simulations of vertical profiles of atmospheric parameters. A revised two scale random perturbation model using perturbation magnitudes which are adjusted to conform to constraints imposed by the perfect gas law and the hydrostatic condition is described. The two scale perturbation model produces appropriately correlated (horizontally and vertically) small scale and large scale perturbations. These stochastically simulated perturbations are representative of the magnitudes and wavelengths of perturbations produced by tides and planetary scale waves (large scale) and turbulence and gravity waves (small scale). Other new features of the model are: (1) a second order geostrophic wind relation for use at low latitudes which does not "blow up" at low latitudes as the ordinary geostrophic relation does; and (2) revised quasi-biennial amplitudes and phases and revised stationary perturbations, based on data through 1972.
Cronn, Richard; Dolan, Peter C; Jogdeo, Sanjuro; Wegrzyn, Jill L; Neale, David B; St Clair, J Bradley; Denver, Dee R
2017-07-24
Perennial growth in plants is the product of interdependent cycles of daily and annual stimuli that induce cycles of growth and dormancy. In conifers, needles are the key perennial organ that integrates daily and seasonal signals from light, temperature, and water availability. To understand the relationship between seasonal cycles and seasonal gene expression responses in conifers, we examined diurnal and circannual needle mRNA accumulation in Douglas-fir (Pseudotsuga menziesii) needles at diurnal and circannual scales. Using mRNA sequencing, we sampled 6.1 × 10^9 reads from 19 trees and constructed a de novo pan-transcriptome reference that includes 173,882 tree-derived transcripts. Using this reference, we mapped RNA-Seq reads from 179 samples that capture daily and annual variation. We identified 12,042 diurnally-cyclic transcripts, 9299 of which showed homology to annotated genes from other plant genomes, including angiosperm core clock genes. Annual analysis revealed 21,225 circannual transcripts, 17,335 of which showed homology to annotated genes from other plant genomes. The timing of maximum gene expression is associated with light intensity at diurnal scales and photoperiod at annual scales, with approximately half of transcripts reaching maximum expression within ±2 h of sunrise and sunset, and within ±20 days of the winter and summer solstices. Comparisons with published studies from other conifers show congruent behavior in clock genes with Japanese cedar (Cryptomeria), and a significant preservation of gene expression patterns for 2278 putative orthologs from Douglas-fir during the summer growing season, and 760 putative orthologs from spruce (Picea) during the transition from fall to winter. Our study highlights the extensive diurnal and circannual transcriptome variability demonstrated in conifer needles. At these temporal scales, 29% of expressed transcripts show a significant diurnal cycle, and 58.7% show a significant circannual cycle.
Remarkably, thousands of genes reach their annual peak activity during winter dormancy. Our study establishes the fine-scale timing of daily and annual maximum gene expression for diverse needle genes in Douglas-fir, and it highlights the potential for using this information for evaluating hypotheses concerning the daily or seasonal timing of gene activity in temperate-zone conifers, and for identifying cyclic transcriptome components in other conifer species.
Lind, Bo B; Norrman, Jenny; Larsson, Lennart B; Ohlsson, Sten-Ake; Bristav, Henrik
2008-01-01
A study was performed between June 2001 and December 2004 with the primary objective of assessing long-term leaching from municipal solid waste incineration bottom ash in a test road construction, in relation to a reference road built of conventional materials and to the natural geochemical conditions in the surroundings. The metal leaching from the test road and the reference road was compared with natural weathering in the regional surroundings on three time scales: 16, 80 and 1000 years. The results show that Cu and Zn cause a geochemical anomaly from the test road compared with the surroundings. The leaching of Cu from the test road is initially high but will decline with time and will in the long term be exceeded by natural weathering. Zn, on the other hand, shows low initial leaching, which will increase with time and in the long term exceed natural weathering in the surroundings by a factor of 100-300. For the other metals studied, Al, Na, K and Mg, there is only very limited leaching over time, and the potential accumulation will not exceed background values over 1000 years.
Nontrivial Quantum Effects in Biology: A Skeptical Physicists' View
NASA Astrophysics Data System (ADS)
Wiseman, Howard; Eisert, Jens
The following sections are included: * Introduction * A Quantum Life Principle * A quantum chemistry principle? * The anthropic principle * Quantum Computing in the Brain * Nature did everything first? * Decoherence as the make or break issue * Quantum error correction * Uselessness of quantum algorithms for organisms * Quantum Computing in Genetics * Quantum search * Teleological aspects and the fast-track to life * Quantum Consciousness * Computability and free will * Time scales * Quantum Free Will * Predictability and free will * Determinism and free will * Acknowledgements * References
The Next Frontier in Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarrao, John
2016-11-16
Exascale computing refers to computing systems capable of at least one exaflop, or a billion billion (10^18) calculations per second. That is 50 times faster than the most powerful supercomputers in use today and represents a thousand-fold increase over the first petascale computer, which came into operation in 2008. How we use these large-scale simulation resources is the key to solving some of today's most pressing problems, including clean energy production, nuclear reactor lifetime extension and nuclear stockpile aging.
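The quoted thousand-fold figure follows directly from the definitions; a two-line check:

```python
exaflop = 10 ** 18   # floating-point operations per second at exascale
petaflop = 10 ** 15  # scale of the first petascale machine (2008)

# The thousand-fold increase over petascale cited above:
ratio = exaflop // petaflop
```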
The long-range correlation and evolution law of centennial-scale temperatures in Northeast China.
Zheng, Xiaohui; Lian, Yi; Wang, Qiguang
2018-01-01
This paper applies the detrended fluctuation analysis (DFA) method to investigate the long-range correlation of monthly mean temperatures from three typical measurement stations at Harbin, Changchun, and Shenyang in Northeast China from 1909 to 2014. The results reveal the memory characteristics of the climate system in this region. By comparing temperatures from different time periods and investigating the variations of the scaling exponents at the three stations, we found that the monthly mean temperature has long-range correlation, which indicates that temperature in Northeast China has long-term memory and good predictability. The monthly temperature time series over the past 106 years also shows good long-range correlation characteristics, and these characteristics are likewise observed in the annual mean temperature time series. Finally, we separated the centennial-length temperature time series into two time periods. The results reveal that the long-range correlations at the Harbin station differ considerably between the two periods, whereas no obvious variations are observed at the other two stations. This indicates that warming affects the regional climate system's predictability differently in different time periods. These results can provide a quantitative reference point for regional climate predictability assessment and future climate model evaluation.
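The DFA procedure referred to above can be sketched in a few lines: integrate the anomaly series, detrend it in windows of increasing size, and read the scaling exponent off a log-log fit. The window sizes and first-order (linear) detrending below are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: fluctuation F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))  # integrated anomaly profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        ms_resid = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)  # linear detrend in each window
            ms_resid.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms_resid)))
    return np.array(F)

# The scaling exponent alpha is the slope of log F(s) vs log s;
# alpha > 0.5 indicates long-range correlation (long-term memory),
# while uncorrelated white noise gives alpha near 0.5.
rng = np.random.default_rng(0)
x = rng.normal(size=4096)  # white noise for demonstration
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
```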
Improving left spatial neglect through music scale playing.
Bernardi, Nicolò Francesco; Cioffi, Maria Cristina; Ronchi, Roberta; Maravita, Angelo; Bricolo, Emanuela; Zigiotto, Luca; Perucca, Laura; Vallar, Giuseppe
2017-03-01
The study assessed whether the auditory reference provided by a music scale could improve spatial exploration of a standard musical instrument keyboard in right-brain-damaged patients with left spatial neglect. As performing music scales involves the production of predictable successive pitches, the expectation of the subsequent note may help patients explore a larger extension of space on the left, affected side during the production of music scales from right to left. Eleven right-brain-damaged stroke patients with left spatial neglect, 12 patients without neglect, and 12 age-matched healthy participants played descending scales on a music keyboard. In a counterbalanced design, the participants' exploratory performance was assessed while producing scales in three feedback conditions: with congruent sound, no sound, or random sound feedback provided by the keyboard. The number of keys played and the timing of key presses were recorded. Spatial exploration by patients with left neglect was superior with congruent sound feedback compared with both the silence and random sound conditions. Both the congruent and incongruent sound conditions were associated with a greater deceleration in all groups. The frame provided by the music scale improves exploration of the left side of space, contralateral to the right hemisphere damaged in patients with left neglect. Performing a scale with congruent sounds may trigger, to some extent, preserved auditory and spatial multisensory representations of successive sounds, thus influencing the time course of space scanning and ultimately resulting in a more extensive spatial exploration. These findings also offer new perspectives for the rehabilitation of the disorder. © 2015 The British Psychological Society.
Diode Laser Clinical Efficacy and Mini-Invasivity in Surgical Exposure of Impacted Teeth.
Migliario, Mario; Rizzi, Manuela; Lucchina, Alberta Greco; Renò, Filippo
2016-11-01
The gold standard for arranging impacted teeth in the dental arch is a surgical approach followed by the application of an orthodontic traction force. Many surgical approaches have been proposed in the literature to reach this goal. The aim of the present study is to demonstrate how a laser technique can positively assist surgical approaches. The study population was composed of 16 patients undergoing orthodontic treatment of 20 impacted teeth. In 10 patients (population A), surgical exposure of the impacted teeth was performed using a 980 nm diode laser, while in the other 10 patients (population B), the surgical incision was performed using a traditional lancet. Only 3 patients of population A needed local anesthesia for the surgical procedure, while the remaining 7 patients reported only faint pain during surgery. Two patients reported postsurgical pain (numerical rating scale average value = 2) and needed to take analgesics. None of the patients showed other postsurgical side effects (bleeding, edema). All population B patients needed infiltrative anesthesia and reported postsurgical pain (numerical rating scale average value >4) treated with analgesics. Moreover, in this population, 4 patients reported lip edema, 4 showed bleeding, and 6 needed surgical sutures of the soft tissues. The lack of side effects of the laser surgical approach for exposing impacted teeth should persuade dental practitioners to prefer this clinical approach over the closed surgical approach whenever possible.
Snedden, Gregg A.; Swenson, Erick M.
2012-01-01
Hourly time-series salinity and water-level data are collected at all stations within the Coastwide Reference Monitoring System (CRMS) network across coastal Louisiana. These data, in addition to vegetation and soils data collected as part of CRMS, are used to develop a suite of metrics and indices to assess wetland condition in coastal Louisiana. This document addresses the primary objectives of the CRMS hydrologic analytical team, which were to (1) adopt standard time-series analytical techniques that could effectively assess spatial and temporal variability in hydrologic characteristics across the Louisiana coastal zone on site, project, basin, and coastwide scales and (2) develop and apply an index based on wetland hydrology that can describe the suitability of local hydrology in the context of maximizing the productivity of wetland plant communities. Approaches to quantifying tidal variability (least squares harmonic analysis) and partitioning variability of time-series data to various time scales (spectral analysis) are presented. The relation between marsh elevation and the tidal frame of a given hydrograph is described. A hydrologic index that integrates water-level and salinity data, which are collected hourly, with vegetation data that are collected annually is developed. To demonstrate its utility, the hydrologic index is applied to 173 CRMS sites across the coast, and variability in index scores across marsh vegetation types (fresh, intermediate, brackish, and saline) is assessed. The index is also applied to 11 sites located in three Coastal Wetlands Planning, Protection and Restoration Act projects, and the ability of the index to convey temporal hydrologic variability in response to climatic stressors and restoration measures, as well as the effect that this community may have on wetland plant productivity, is illustrated.
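Least squares harmonic analysis, the first technique mentioned above, amounts to an ordinary linear regression on cosine/sine pairs at known tidal periods. The sketch below fits a synthetic hourly water-level series containing a single M2-like constituent; the period value and series are illustrative, not CRMS data.

```python
import numpy as np

def harmonic_fit(t, h, periods):
    """Least squares harmonic analysis: fit a mean level plus a cos/sin pair
    for each given tidal period (hours) to an hourly water-level series h(t)."""
    cols = [np.ones_like(t)]
    for p in periods:
        w = 2 * np.pi / p
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    # Amplitude of each constituent from its cos/sin coefficients
    amps = [np.hypot(coef[1 + 2 * i], coef[2 + 2 * i]) for i in range(len(periods))]
    return coef[0], amps  # mean water level, amplitude per constituent

t = np.arange(0.0, 24 * 30)                    # 30 days of hourly samples
h = 0.5 + 0.3 * np.cos(2 * np.pi * t / 12.42)  # synthetic M2-like tide
mean_level, amps = harmonic_fit(t, h, periods=[12.42])
```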
Association Between Onset-to-Door Time and Clinical Outcomes After Ischemic Stroke.
Matsuo, Ryu; Yamaguchi, Yuko; Matsushita, Tomonaga; Hata, Jun; Kiyuna, Fumi; Fukuda, Kenji; Wakisaka, Yoshinobu; Kuroda, Junya; Ago, Tetsuro; Kitazono, Takanari; Kamouchi, Masahiro
2017-11-01
The role of early hospital arrival in improving poststroke clinical outcomes in patients without reperfusion treatment remains unclear. This study aimed to determine whether early hospital arrival was associated with favorable outcomes in patients without reperfusion treatment or with minor stroke. This multicenter, hospital-based study included 6780 consecutive patients (aged 69.9±12.2 years; 63.9% men) with ischemic stroke who were prospectively registered in Fukuoka, Japan, between July 2007 and December 2014. Onset-to-door time was categorized as T0-1, ≤1 hour; T1-2, >1 and ≤2 hours; T2-3, >2 and ≤3 hours; T3-6, >3 and ≤6 hours; T6-12, >6 and ≤12 hours; T12-24, >12 and ≤24 hours; and T24-, >24 hours. The main outcomes were neurological improvement (decrease in National Institutes of Health Stroke Scale score of ≥4 during hospitalization or 0 at discharge) and good functional outcome (3-month modified Rankin Scale score of 0-1). Associations between onset-to-door time and the main outcomes were evaluated after adjusting for potential confounders using logistic regression analysis. Odds ratios (95% confidence intervals) increased significantly with shorter onset-to-door times within 6 hours, for both neurological improvement (T0-1, 2.79 [2.28-3.42]; T1-2, 2.49 [2.02-3.07]; T2-3, 1.52 [1.21-1.92]; T3-6, 1.72 [1.44-2.05], with reference to T24-) and good functional outcome (T0-1, 2.68 [2.05-3.49]; T1-2, 2.10 [1.60-2.77]; T2-3, 1.53 [1.15-2.03]; T3-6, 1.31 [1.05-1.64], with reference to T24-), even after adjusting for potential confounding factors including reperfusion treatment and baseline National Institutes of Health Stroke Scale score. These associations were maintained in 6216 patients without reperfusion treatment and in 4793 patients with minor stroke (National Institutes of Health Stroke Scale ≤4 on hospital arrival).
Early hospital arrival within 6 hours after stroke onset is associated with favorable outcomes after ischemic stroke, regardless of reperfusion treatment or stroke severity. © 2017 American Heart Association, Inc.
ERIC Educational Resources Information Center
Watkins, Marley W.
2010-01-01
The structure of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV; D. Wechsler, 2003a) was analyzed via confirmatory factor analysis among a national sample of 355 students referred for psychoeducational evaluation by 93 school psychologists from 35 states. The structure of the WISC-IV core battery was best represented by four…
Brock, A Paige; Grunkemeyer, Vanessa L; Fry, Michael M; Hall, James S; Bartges, Joseph W
2013-12-01
To evaluate the relationship between osmolality and specific gravity of urine samples from clinically normal adult parrots, and to determine a formula to convert urine specific gravity (USG) measured on a reference scale to a more accurate USG value for an avian species, urine samples were collected opportunistically from a colony of Hispaniolan Amazon parrots (Amazona ventralis). Samples were analyzed by using a veterinary refractometer, and specific gravity was measured on both canine and feline scales. Osmolality was measured by vapor pressure osmometry. Specific gravity and osmolality measurements were highly correlated (r = 0.96). The linear relationship between refractivity measurements on a reference scale and osmolality was determined. An equation was calculated to allow specific gravity results from a medical refractometer to be converted to specific gravity values for Hispaniolan Amazon parrots: USG_HAP = 0.201 + 0.798 × USG_ref. Use of the reference-canine scale to approximate the osmolality of parrot urine leads to an overestimation of the true osmolality of the sample. In addition, this error increases as the concentration of urine increases. Compared with the human-canine scale, the feline scale provides a closer approximation to the urine osmolality of Hispaniolan Amazon parrots but still results in overestimation of osmolality.
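The study's conversion equation is straightforward to apply; the helper name below is invented for illustration.

```python
def usg_parrot(usg_ref: float) -> float:
    """Convert a refractometer USG reading on the reference scale to the
    Hispaniolan Amazon parrot scale, per the study's linear relationship:
    USG_HAP = 0.201 + 0.798 * USG_ref."""
    return 0.201 + 0.798 * usg_ref

# e.g. a reference-scale reading of 1.030 maps to about 1.023 on the
# parrot scale; more concentrated urine is corrected downward more.
converted = usg_parrot(1.030)
```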
A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics
NASA Astrophysics Data System (ADS)
Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.
2016-02-01
The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.
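Smooth initial conditions of the general kind used in well-posed Kelvin-Helmholtz tests can be sketched as follows: a tanh shear layer in the horizontal velocity plus a small, smooth vertical-velocity perturbation to seed the instability. All parameter values here are illustrative assumptions, not the exact setup of the paper's reference problems.

```python
import numpy as np

def kh_initial_conditions(nx=128, nz=128, a=0.05, sigma=0.2, amp=0.01):
    """Smooth Kelvin-Helmholtz initial conditions on a periodic-x domain:
    u is a tanh shear layer of width a; w is a small localized sinusoidal
    perturbation that seeds a single well-resolved mode (rather than
    relying on grid noise to seed small-scale structure)."""
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    z = np.linspace(-0.5, 0.5, nz, endpoint=False)
    X, Z = np.meshgrid(x, z, indexing="ij")
    u = np.tanh(Z / a)                                           # shear layer
    w = amp * np.sin(2 * np.pi * X) * np.exp(-Z ** 2 / sigma ** 2)  # seed mode
    return u, w

u, w = kh_initial_conditions()
```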
Further evaluation of traditional icing scaling methods
NASA Technical Reports Server (NTRS)
Anderson, David N.
1996-01-01
This report provides additional evaluations of two methods to scale icing test conditions; it also describes a hybrid technique for use when scaled conditions are outside the operating envelope of the test facility. The first evaluation is of the Olsen method, which can be used to scale the liquid-water content in icing tests, and the second is of the AEDC (Ruff) method, which is used when the test model is less than full size. Equations for both scaling methods are presented in the paper, and the methods were evaluated by performing icing tests in the NASA Lewis Icing Research Tunnel (IRT). The Olsen method was tested using NACA 0012 airfoils with a 53 cm chord. Tests covered liquid-water contents which varied by as much as a factor of 1.8. The Olsen method was generally effective in giving scale ice shapes which matched the reference shapes for these tests. The AEDC method was tested with NACA 0012 airfoils with chords from 18 cm to 53 cm. The 53 cm chord airfoils were used in reference tests, and 1/2 and 1/3 scale tests were made at conditions determined by applying the AEDC scaling method. The scale and reference airspeeds were matched in these tests. The AEDC method was found to provide fairly effective scaling for 1/2 size tests, but for 1/3 size models scaling was generally less effective. In addition to these two scaling methods, a hybrid approach was also tested in which the Olsen method was used to adjust the liquid-water content after size was scaled using the constant-Weber-number method. This approach was found to be an effective way to test when scaled conditions would otherwise be outside the capability of the test facility.
Efficient coarse simulation of a growing avascular tumor
Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.
2013-01-01
The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. 
Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
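The projective step the authors describe (short microscopic bursts, then extrapolation of the slowly evolving coarse variables) can be sketched generically. The micro_step/restrict/lift interface and all parameter values below are illustrative assumptions, not the paper's tumor-specific code.

```python
import numpy as np

def coarse_projective_integration(micro_step, restrict, lift, u0,
                                  n_inner=10, dt=0.01, projection=5.0,
                                  n_outer=20):
    """Generic coarse projective integration loop in the equation-free
    spirit: micro_step advances the fine-scale state by dt, restrict
    maps fine state -> coarse variables, lift maps coarse variables
    back to a consistent fine state."""
    U = [restrict(u0)]
    state = u0
    for _ in range(n_outer):
        # Short burst of microscopic simulation.
        coarse_prev = restrict(state)
        for _ in range(n_inner):
            state = micro_step(state, dt)
        coarse_now = restrict(state)
        # Estimate the slow time derivative and project forward,
        # skipping (projection * dt) of microscopic simulation.
        dUdt = (coarse_now - coarse_prev) / (n_inner * dt)
        coarse_proj = coarse_now + projection * dt * dUdt
        state = lift(coarse_proj)
        U.append(coarse_proj)
    return np.array(U)
```

The computational savings grow with the ratio of projection time to microscopic burst time, exactly as the abstract notes.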
Reference Gauging System for a Small-Scale Liquid Hydrogen Tank
NASA Technical Reports Server (NTRS)
VanDresar, Neil T.; Siegwarth, James D.
2003-01-01
A system to accurately weigh the fluid contents of a small-scale liquid hydrogen test tank has been experimentally verified. It is intended for use as a reference or benchmark system when testing low-gravity liquid-quantity gauging concepts in the terrestrial environment. The reference gauging system has shown a repeatable measurement accuracy of better than 0.5 percent of the full-tank liquid weight. With further refinement, the system accuracy can be improved to within 0.10 percent of full scale. This report describes the weighing system design, calibration, and operational results. Suggestions are given for further refinement of the system. An example is given to illustrate additional sources of uncertainty when mass measurements are converted to volume equivalents. Specifications of the companion test tank and its multi-layer insulation system are provided.
Lynch, Tim P; Morello, Elisabetta B; Evans, Karen; Richardson, Anthony J; Rochester, Wayne; Steinberg, Craig R; Roughan, Moninya; Thompson, Peter; Middleton, John F; Feng, Ming; Sherrington, Robert; Brando, Vittorio; Tilbrook, Bronte; Ridgway, Ken; Allen, Simon; Doherty, Peter; Hill, Katherine; Moltmann, Tim C
2014-01-01
Sustained observations allow for the tracking of change in oceanography and ecosystems; however, such observations are rare, particularly for the Southern Hemisphere. To address this in part, the Australian Integrated Marine Observing System (IMOS) implemented a network of nine National Reference Stations (NRS). The network builds on one long-term location, where monthly water sampling has been sustained since the 1940s, and two others that commenced in the 1950s. In-situ continuously moored sensors and an enhanced monthly water sampling regime now collect more than 50 data streams. Building on sampling for temperature, salinity and nutrients, the network now observes dissolved oxygen, carbon, turbidity, currents, chlorophyll a and both phytoplankton and zooplankton. Additional parameters for studies of ocean acidification and bio-optics are collected at a subset of sites, and all data are made freely and publicly available. Our preliminary results demonstrate increased utility to observe extreme events, such as marine heat waves and coastal flooding, and rare events, such as plankton blooms, and have, for the first time, allowed for consistent continental-scale sampling and analysis of coastal zooplankton and phytoplankton communities. Independent water sampling allows for cross-validation of the deployed sensors for quality control of data that now continuously track daily, seasonal and annual variation. The NRS will provide multi-decadal time series against which more spatially replicated short-term studies can be referenced, models and remote sensing products validated, and improvements made to our understanding of how large-scale, long-term change and variability in the global ocean are affecting Australia's coastal seas and ecosystems. The NRS network provides an example of how a continental-scale observing system can be developed to collect observations that integrate across physics, chemistry and biology.
Klein, Brennan J; Li, Zhi; Durgin, Frank H
2016-04-01
What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: Angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here participants made angular and spatial judgments when upright or on their sides to dissociate egocentric from allocentric reference frames. In Experiment 1, it was found that body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI) that is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with differential angular exaggerations previously measured for elevation and azimuth in locomotor space.
Generation of scale invariant magnetic fields in bouncing universes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sriramkumar, L.; Atmjeet, Kumar; Jain, Rajeev Kumar, E-mail: sriram@physics.iitm.ac.in, E-mail: katmjeet@physics.du.ac.in, E-mail: jain@cp3.dias.sdu.dk
2015-09-01
We consider the generation of primordial magnetic fields in a class of bouncing models when the electromagnetic action is coupled non-minimally to a scalar field that, say, drives the background evolution. For scale factors that have the power law form at very early times and non-minimal couplings which are simple powers of the scale factor, one can easily show that scale invariant spectra for the magnetic field can arise before the bounce for certain values of the indices involved. It will be interesting to examine if these power spectra retain their shape after the bounce. However, analytical solutions for the Fourier modes of the electromagnetic vector potential across the bounce are difficult to obtain. In this work, with the help of a new time variable that we introduce, which we refer to as the e-N-fold, we investigate these scenarios numerically. Imposing the initial conditions on the modes in the contracting phase, we numerically evolve the modes across the bounce and evaluate the spectra of the electric and magnetic fields at a suitable time after the bounce. As one could have intuitively expected, though the complete spectra depend on the details of the bounce, we find that, under the original conditions, scale invariant spectra of the magnetic fields do arise for wavenumbers much smaller than the scale associated with the bounce. We also show that magnetic fields which correspond to observed strengths today can be generated for specific values of the parameters. But, we find that, at the bounce, the backreaction due to the electromagnetic modes that have been generated can be significantly large, calling into question the viability of the model. We briefly discuss the implications of our results.
Threshold values of physical performance tests for locomotive syndrome.
Muramoto, Akio; Imagama, Shiro; Ito, Zenya; Hirano, Kenichi; Tauchi, Ryoji; Ishiguro, Naoki; Hasegawa, Yukiharu
2013-07-01
Our previous study determined which physical performance tests were the most useful for evaluating locomotive syndrome. Our current study establishes reference values for these major physical performance tests with regard to diagnosing and assessing the risk of locomotive syndrome (LS). We measured the timed-up-and-go (TUG) test, one-leg standing time, back muscle strength, grip strength, 10-m gait time and maximum stride in 406 individuals (167 men, 239 women) between the ages of 60 and 88 years (mean 68.8 ± 6.7 years) during the Yakumo Study 2011-12. LS was defined as having a score of >16 points on the 25-Question Geriatric Locomotive Function Scale (GLFS-25). The reference value of each physical test was determined using receiver operating characteristic analysis. Women had a significantly higher prevalence of LS than men and also scored significantly higher on the GLFS-25: women, 9.2 ± 10.3 pts; men, 6.7 ± 8.0 pts. Both genders in the non-LS group performed significantly better than those in the LS group in all physical performance tests, even after adjusting for age, except for back muscle strength in men and grip strength in both genders. The results of all the physical performance tests except grip strength correlated significantly with the GLFS-25 scores of both genders even after adjusting for age. Reference values for TUG, one-leg standing time, back muscle strength, 10-m gait time, maximum stride and grip strength in men were 6.7 s, 21 s, 78 kg, 5.5 s, 119 cm and 34 kg, respectively, and those for women were 7.5 s, 15 s, 40 kg, 6.2 s, 104 cm and 22 kg, respectively. We established reference values for the major physical performance tests used when assessing locomotive syndrome as defined by the GLFS-25. Our results can be used to characterize physical function and to help tailor an anti-LS training program for each individual.
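A hedged sketch of how such a cutoff can be chosen by ROC analysis, here via the Youden index J = sensitivity + specificity - 1. The data, function name and direction conventions are illustrative, not the study's code.

```python
import numpy as np

def youden_threshold(values, has_ls, lower_is_better=True):
    """Pick a cutoff for a performance test by maximizing the Youden
    index J = sensitivity + specificity - 1 over candidate thresholds.
    Illustrative stand-in for the ROC analysis in the abstract."""
    values = np.asarray(values, dtype=float)
    has_ls = np.asarray(has_ls, dtype=bool)
    best_j, best_t = -1.0, None
    for t in np.unique(values):
        if lower_is_better:
            predicted_ls = values > t   # e.g. slower timed-up-and-go
        else:
            predicted_ls = values < t   # e.g. weaker grip strength
        tp = np.sum(predicted_ls & has_ls)
        fn = np.sum(~predicted_ls & has_ls)
        tn = np.sum(~predicted_ls & ~has_ls)
        fp = np.sum(predicted_ls & ~has_ls)
        sens = tp / max(tp + fn, 1)
        spec = tn / max(tn + fp, 1)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

The returned threshold plays the role of a "reference value": test results on the worse side of it flag elevated LS risk.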
Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions
NASA Astrophysics Data System (ADS)
Soltani, S. S.; Cvetkovic, V.; Destouni, G.
2017-12-01
The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion at two levels: morphological dispersion and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as the "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models.
This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.
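As a toy reduction of the kinematic pathway idea, advective travel times follow directly from a pathway length distribution and an effective velocity. This sketch omits the Peclet-number-dependent longitudinal dispersion that the actual KPA includes; the names are illustrative.

```python
import numpy as np

def travel_time_cdf(path_lengths, velocity, times):
    """Empirical CDF of purely advective travel times tau = L / v for
    a (e.g. topographically derived) distribution of pathway lengths.
    Toy sketch: the real KPA convolves this with a dispersion kernel."""
    tau = np.sort(np.asarray(path_lengths, float) / velocity)
    # Fraction of pathways whose travel time does not exceed each query time.
    return np.searchsorted(tau, times, side="right") / tau.size
```

Longer, deeper regional pathways shift this CDF to later times, which is the partitioning between hillslope and regional scales the abstract describes.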
NASA Astrophysics Data System (ADS)
Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.
2018-02-01
River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. 
Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of prescribed β values and gap distributions. The aliasing method, however, does not itself account for sampling irregularity, and this introduces some bias in the result. Nonetheless, the wavelet method is recommended for estimating β in irregular time series until improved methods are developed. Finally, all methods' performances depend strongly on the sampling irregularity, highlighting that the accuracy and precision of each method are data specific. Accurately quantifying the strength of fractal scaling in irregular water-quality time series remains an unresolved challenge for the hydrologic community and for other disciplines that must grapple with irregular sampling.
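A minimal sketch of spectral-slope estimation for irregular samples via SciPy's Lomb-Scargle periodogram. The frequency grid and fitting range are illustrative choices, and, as the study notes, this estimator tends to be biased for strongly autocorrelated series.

```python
import numpy as np
from scipy.signal import lombscargle

def spectral_slope(t, y, n_freq=200):
    """Estimate the spectral slope beta (P(f) ~ f**-beta) of an
    irregularly sampled series from a log-log fit to the Lomb-Scargle
    periodogram.  Illustrative sketch; grid choices are assumptions."""
    t = np.asarray(t, float)
    y = np.asarray(y, float) - np.mean(y)   # remove the mean first
    duration = t.max() - t.min()
    # Ordinary frequencies from ~1/record length to a pseudo-Nyquist rate.
    f = np.linspace(1.0 / duration, 0.5 * len(t) / duration, n_freq)
    power = lombscargle(t, y, 2 * np.pi * f)  # expects angular frequencies
    slope, _ = np.polyfit(np.log(f), np.log(power), 1)
    return -slope  # beta: ~0 for white noise, ~2 for Brown noise
```

Running this on synthetic gappy series with known beta is exactly the kind of benchmark the abstract describes.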
Hawkins, S J; Evans, A J; Mieszkowska, N; Adams, L C; Bray, S; Burrows, M T; Firth, L B; Genner, M J; Leung, K M Y; Moore, P J; Pack, K; Schuster, H; Sims, D W; Whittington, M; Southward, E C
2017-11-30
Marine ecosystems are subject to anthropogenic change at global, regional and local scales. Global drivers interact with regional- and local-scale impacts of both a chronic and acute nature. Natural fluctuations and those driven by climate change need to be understood to diagnose local- and regional-scale impacts, and to inform assessments of recovery. Three case studies are used to illustrate the need for long-term studies: (i) separation of the influence of fishing pressure from climate change on bottom fish in the English Channel; (ii) recovery of rocky shore assemblages from the Torrey Canyon oil spill in the southwest of England; (iii) interaction of climate change and chronic tributyltin pollution affecting recovery of rocky shore populations following the Torrey Canyon oil spill. We emphasize that "baselines" or "reference states" are better viewed as envelopes that are dependent on the time window of observation. Recommendations are made for adaptive management in a rapidly changing world.
NASA Astrophysics Data System (ADS)
Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; Pask, John E.
2018-03-01
We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, which are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near-perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.
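For concreteness, the Clenshaw-Curtis rule itself can be generated in a few lines. This standalone sketch returns nodes and weights on [-1, 1]; SQDFT applies such quadratures to bilinear forms of spatially localized operators, which is not reproduced here.

```python
import numpy as np

def clenshaw_curtis(n):
    """Nodes and weights of the (n+1)-point Clenshaw-Curtis quadrature
    rule on [-1, 1], using the standard trigonometric weight formula.
    Nodes are the Chebyshev points x_k = cos(pi*k/n)."""
    k = np.arange(n + 1)
    theta = np.pi * k / n
    x = np.cos(theta)
    w = np.ones(n + 1)
    for j in range(1, n // 2 + 1):
        b = 1.0 if 2 * j == n else 2.0
        w -= b * np.cos(2 * j * theta) / (4 * j * j - 1)
    w *= 2.0 / n
    w[0] *= 0.5   # endpoint weights are halved
    w[-1] *= 0.5
    return x, w
```

The rule integrates polynomials up to degree n exactly and converges rapidly for smooth integrands, which is what makes short localized expansions effective.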
Everaers, Ralf; Rosa, Angelo
2012-01-07
The quantitative description of polymeric systems requires hierarchical modeling schemes, which bridge the gap between the atomic scale, relevant to chemical or biomolecular reactions, and the macromolecular scale, where the longest relaxation modes occur. Here, we use the formalism for diffusion-controlled reactions in polymers developed by Wilemski, Fixman, and Doi to discuss the renormalisation of the reactivity parameters in polymer models with varying spatial resolution. In particular, we show that the adjustments are independent of chain length. As a consequence, it is possible to match reaction times between descriptions with different resolution for relatively short reference chains and to use the coarse-grained model to make quantitative predictions for longer chains. We illustrate our results by a detailed discussion of the classical problem of chain cyclization in the Rouse model, which offers the simplest example of a multi-scale description, if we consider differently discretized Rouse models for the same physical system. Moreover, we are able to explore different combinations of compact and non-compact diffusion in the local and large-scale dynamics by varying the embedding dimension.
Off-axis digital holographic microscopy with LED illumination based on polarization filtering.
Guo, Rongli; Yao, Baoli; Gao, Peng; Min, Junwei; Zhou, Meiling; Han, Jun; Yu, Xun; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan; Ye, Tong
2013-12-01
A reflection-mode digital holographic microscope with light-emitting diode (LED) illumination and off-axis interferometry is proposed. The setup is comprised of a Linnik interferometer and a grating-based 4f imaging unit. Both object and reference waves travel coaxially and are split into multiple diffraction orders in the Fourier plane by the grating. The zeroth and first orders are filtered by a polarizing array to select orthogonally polarized object and reference waves. Subsequently, the object and reference waves are combined again in the output plane of the 4f system, and the hologram, with uniform contrast over the entire field of view, can then be acquired with the aid of a polarizer. The one-shot nature of the off-axis configuration enables an interferometric recording time on a millisecond scale. The validity of the proposed setup is illustrated by imaging nanostructured substrates, and the experimental results demonstrate that the phase noise is reduced drastically, by 68%, compared to a He-Ne laser-based result.
The effect of the subprime crisis on the credit risk in global scale
NASA Astrophysics Data System (ADS)
Lee, Sangwook; Kim, Min Jae; Lee, Sun Young; Kim, Soo Yong; Ban, Joon Hwa
2013-05-01
Credit default swaps (CDS) have become one of the most actively traded credit derivatives, and their importance in financial markets has increased since the subprime crisis. In this study, we analyzed the correlation structure of credit risks embedded in CDS and the influence of the subprime crisis on this topological space. We found that the correlation was stronger in clusters constructed according to the location of the CDS reference companies than in those constructed according to their industries. The correlation both within a given cluster and between different clusters became significantly stronger after the subprime crisis. The causality test shows that the lead-lag effect between the portfolios (into which reference companies are grouped by the continent where each is located) is reversed in direction because the proportions of non-investable and investable reference companies in each portfolio have changed since then. The effect of a single impulse has increased and the relaxation time of the response has become prolonged after the crisis as well.
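One standard way to turn a correlation structure into a topological space, plausibly similar in spirit to the analysis here, is the correlation distance d_ij = sqrt(2 * (1 - rho_ij)), on which clustering trees can be built. This sketch is illustrative and does not reproduce the paper's clustering procedure.

```python
import numpy as np

def correlation_distance(series):
    """Map pairwise correlations between rows of `series` (e.g. CDS
    spread changes per reference company) to the metric
    d_ij = sqrt(2 * (1 - rho_ij)): 0 for perfect correlation,
    2 for perfect anti-correlation.  Illustrative sketch only."""
    rho = np.corrcoef(series)
    # Clip guards against tiny negative values from floating-point error.
    return np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))
```

A minimal spanning tree or hierarchical clustering on this distance matrix then exposes geographic versus industry grouping.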
Large scale mass redistribution and surface displacement from GRACE and SLR
NASA Astrophysics Data System (ADS)
Cheng, M.; Ries, J. C.; Tapley, B. D.
2012-12-01
Mass transport between the atmosphere, ocean and solid Earth results in temporal variations in the Earth's gravity field and in loading-induced deformation of the Earth. Recent space-borne observations, such as those from the GRACE mission, are providing extremely high-precision measurements of temporal variations of the gravity field. Results from 10 years of GRACE data have shown significant annual variations of large-scale vertical and horizontal displacements over the Amazon, the Himalayan region, South Asia, Africa, and Russia, with amplitudes of a few mm. Improved understanding from monitoring and modeling of large-scale mass redistribution and the Earth's response is critical for all studies in the geosciences, in particular for the determination of the Terrestrial Reference System (TRS), including geocenter motion. This paper will report results for the observed seasonal variations in the three-dimensional surface displacements of SLR and GPS tracking stations and compare them with predictions from the time series of GRACE monthly gravity solutions.
Mobile robot motion estimation using Hough transform
NASA Astrophysics Data System (ADS)
Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu
2018-05-01
This paper proposes an algorithm for estimating mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of the space geometry from any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from the measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. A specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.
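The underlying Hough mapping from points to a straight-line parameter space can be sketched as follows. The bin counts and voting scheme are illustrative, not the authors' full rotation/scale/translation estimator.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=None):
    """Accumulate 2-D points into a (theta, rho) Hough space using the
    normal form rho = x*cos(theta) + y*sin(theta).  Collinear points
    vote into the same cell, so peaks in `acc` mark straight lines."""
    pts = np.asarray(points, float)
    if rho_max is None:
        rho_max = np.hypot(pts[:, 0], pts[:, 1]).max() + 1e-9
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        # One sinusoidal curve of votes per point.
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max)
                       * (n_rho - 1)).astype(int)
        np.add.at(acc, (np.arange(n_theta), idx), 1)
    return acc, thetas
```

Rotating the scan shifts peaks along the theta axis and isotropic scaling stretches rho only, which is why the two estimation problems decouple in this space.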
An algebraic method for constructing stable and consistent autoregressive filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu
2015-02-15
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
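The two notions invoked here, classical AR stability and order-two Adams-Bashforth consistency, can be illustrated with a small sketch. The AB2 discretization of du/dt = lam*u below is a standard textbook construction used only for illustration; it is not the paper's algebraic parameter-selection method.

```python
import numpy as np

def ar2_is_stable(a1, a2):
    """Classical stability for u_{n+1} = a1*u_n + a2*u_{n-1}: both
    roots of z**2 - a1*z - a2 must lie strictly inside the unit circle."""
    roots = np.roots([1.0, -a1, -a2])
    return bool(np.all(np.abs(roots) < 1.0))

def ab2_coefficients(lam, dt):
    """AR(2) coefficients from discretizing du/dt = lam*u with the
    order-two Adams-Bashforth scheme (illustrative of the consistency
    notion):  u_{n+1} = u_n + dt * (1.5*lam*u_n - 0.5*lam*u_{n-1})."""
    return 1.0 + 1.5 * lam * dt, -0.5 * lam * dt
```

For a decaying signal (lam < 0) and a small enough dt, the resulting AR(2) model is stable; choosing the time step interval so that both properties hold simultaneously is the crux of the paper's construction.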
Cloud tracing: Visualization of the mixing of fluid elements in convection-diffusion systems
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Philip J.
1993-01-01
This paper describes a highly interactive method for computer visualization of the basic physical process of dispersion and mixing of fluid elements in convection-diffusion systems. It is based on transforming the vector field from a traditionally Eulerian reference frame into a Lagrangian reference frame. Fluid elements are traced through the vector field for the mean path as well as the statistical dispersion of the fluid elements about the mean position by using added scalar information about the root mean square value of the vector field and its Lagrangian time scale. In this way, clouds of fluid elements are traced and are not just mean paths. We have used this method to visualize the simulation of an industrial incinerator to help identify mechanisms for poor mixing.
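The core of the method, a Lagrangian mean path plus a statistically growing cloud of fluid elements around it, can be sketched as follows. This uses Taylor's classical dispersion result for the cloud radius; the interface and parameter names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def trace_cloud(velocity, x0, u_rms, t_lagrangian, dt=0.01, n_steps=100):
    """Trace a 'cloud' of fluid elements: integrate the mean path
    through the mean velocity field, and carry the RMS dispersion of
    the cloud about it using Taylor's result for an exponential
    Lagrangian correlation:
        sigma^2(t) = 2 * u_rms^2 * T_L * (t - T_L * (1 - exp(-t/T_L)))."""
    x = np.array(x0, float)
    path, radius = [x.copy()], [0.0]
    for n in range(1, n_steps + 1):
        x = x + dt * np.asarray(velocity(x))   # mean (Lagrangian) path
        t = n * dt
        sigma2 = 2.0 * u_rms**2 * t_lagrangian * (
            t - t_lagrangian * (1.0 - np.exp(-t / t_lagrangian)))
        path.append(x.copy())
        radius.append(np.sqrt(sigma2))
    return np.array(path), np.array(radius)
```

Rendering a sphere of radius sigma at each path point gives the "cloud" rather than a bare streakline, which is what exposes poorly mixed regions.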
NASA Astrophysics Data System (ADS)
Kuzmina, Natalia
2016-12-01
Analytical solutions are found for the problem of instability of a weak geostrophic flow with linear velocity shear, accounting for vertical diffusion of buoyancy. The analysis is based on the potential-vorticity equation in a long-wave approximation, where the horizontal scale of disturbances is considered much larger than the local baroclinic Rossby radius. It is hypothesized that the solutions found can be applied to describe stable and unstable disturbances of planetary scale, in particular in the Arctic Ocean, where weak baroclinic fronts with typical temporal variability periods on the order of several years or more have been observed and the β effect is negligible. Stable (decaying with time) solutions describe disturbances that, in contrast to Rossby waves, can propagate both westward and eastward, depending on the sign of the linear shear of the geostrophic velocity. The unstable (growing with time) solutions are applied to explain the formation of large-scale intrusions at baroclinic fronts under the stable-stable thermohaline stratification observed in the upper layer of the Polar Deep Water in the Eurasian Basin. The suggested mechanism of intrusion formation can be considered a possible alternative to the mechanism of interleaving at baroclinic fronts due to differential mixing.
Towards the 1 mm/y stability of the radial orbit error at regional scales
NASA Astrophysics Data System (ADS)
Couhert, Alexandre; Cerri, Luca; Legeais, Jean-François; Ablain, Michael; Zelensky, Nikita P.; Haines, Bruce J.; Lemoine, Frank G.; Bertiger, William I.; Desai, Shailen D.; Otten, Michiel
2015-01-01
An estimated orbit error budget for the Jason-1 and Jason-2 GDR-D solutions is constructed, using several measures of orbit error. The focus is on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends; in particular, this study reviews orbit errors dependent on the tracking technique, with the aim of monitoring the long-term stability of all available tracking systems operating on Jason-1 and Jason-2 (GPS, DORIS, SLR). The reference frame accuracy and its effect on the Jason orbits are assessed. We also examine the impact of the analysis method on the inference of Geographically Correlated Errors, as well as the significance of estimated radial orbit error trends versus the time span of the analysis. A long-term error budget of the 10-year Jason-1 and Envisat GDR-D orbit time series is thus provided for two time scales: interannual and decadal. As the temporal variations of the geopotential remain one of the primary limitations in Precision Orbit Determination modeling, the overall accuracy of the Jason-1 and Jason-2 GDR-D solutions is evaluated through comparison with external orbits based on different time-variable gravity models. This contribution is limited to an East-West “order-1” pattern at the 2 mm/y level (secular) and 4 mm level (seasonal) over the Jason-2 lifetime. The possibility of achieving sub-mm/y radial orbit stability over interannual and decadal periods at regional scales, and the challenge of evaluating such an improvement using independent in situ data, are discussed.
The Role of Time-Scales in Socio-hydrology
NASA Astrophysics Data System (ADS)
Blöschl, Günter; Sivapalan, Murugesu
2016-04-01
Much of the interest in hydrological modeling in the past decades revolved around resolving spatial variability. With the rapid changes brought about by human impacts on the hydrologic cycle, there is now an increasing need to refocus on time dependency. We present a co-evolutionary view of hydrologic systems, in which every part of the system including human systems, co-evolve, albeit at different rates. The resulting coupled human-nature system is framed as a dynamical system, characterized by interactions of fast and slow time scales and feedbacks between environmental and social processes. This gives rise to emergent phenomena such as the levee effect, adaptation to change and system collapse due to resource depletion. Changing human values play a key role in the emergence of these phenomena and should therefore be considered as internal to the system in a dynamic way. The co-evolutionary approach differs from the traditional view of water resource systems analysis as it allows for path dependence, multiple equilibria, lock-in situations and emergent phenomena. The approach may assist strategic water management for long time scales through facilitating stakeholder participation, exploring the possibility space of alternative futures, and helping to synthesise the observed dynamics of different case studies. Future research opportunities include the study of how changes in human values are connected to human-water interactions, historical analyses of trajectories of system co-evolution in individual places and comparative analyses of contrasting human-water systems in different climate and socio-economic settings. Reference Sivapalan, M. and G. Blöschl (2015) Time scale interactions and the coevolution of humans and water. Water Resour. Res., 51, 6988-7022, doi:10.1002/2015WR017896.
Multiscale Modeling of Human-Water Interactions: The Role of Time-Scales
NASA Astrophysics Data System (ADS)
Bloeschl, G.; Sivapalan, M.
2015-12-01
Much of the interest in hydrological modeling in the past decades revolved around resolving spatial variability. With the rapid changes brought about by human impacts on the hydrologic cycle, there is now an increasing need to refocus on time dependency. We present a co-evolutionary view of hydrologic systems, in which every part of the system including human systems, co-evolve, albeit at different rates. The resulting coupled human-nature system is framed as a dynamical system, characterized by interactions of fast and slow time scales and feedbacks between environmental and social processes. This gives rise to emergent phenomena such as the levee effect, adaptation to change and system collapse due to resource depletion. Changing human values play a key role in the emergence of these phenomena and should therefore be considered as internal to the system in a dynamic way. The co-evolutionary approach differs from the traditional view of water resource systems analysis as it allows for path dependence, multiple equilibria, lock-in situations and emergent phenomena. The approach may assist strategic water management for long time scales through facilitating stakeholder participation, exploring the possibility space of alternative futures, and helping to synthesise the observed dynamics of different case studies. Future research opportunities include the study of how changes in human values are connected to human-water interactions, historical analyses of trajectories of system co-evolution in individual places and comparative analyses of contrasting human-water systems in different climate and socio-economic settings. Reference Sivapalan, M. and G. Blöschl (2015) Time Scale Interactions and the Co-evolution of Humans and Water. Water Resour. Res., 51, in press.
Towards the 1 mm/y Stability of the Radial Orbit Error at Regional Scales
NASA Technical Reports Server (NTRS)
Couhert, Alexandre; Cerri, Luca; Legeais, Jean-Francois; Ablain, Michael; Zelensky, Nikita P.; Haines, Bruce J.; Lemoine, Frank G.; Bertiger, William I.; Desai, Shailen D.; Otten, Michiel
2015-01-01
An estimated orbit error budget for the Jason-1 and Jason-2 GDR-D solutions is constructed, using several measures of orbit error. The focus is on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends; in particular, this study reviews orbit errors dependent on the tracking technique, with the aim of monitoring the long-term stability of all available tracking systems operating on Jason-1 and Jason-2 (GPS, DORIS, SLR). The reference frame accuracy and its effect on the Jason orbits are assessed. We also examine the impact of the analysis method on the inference of Geographically Correlated Errors, as well as the significance of estimated radial orbit error trends versus the time span of the analysis. A long-term error budget of the 10-year Jason-1 and Envisat GDR-D orbit time series is thus provided for two time scales: interannual and decadal. As the temporal variations of the geopotential remain one of the primary limitations in Precision Orbit Determination modeling, the overall accuracy of the Jason-1 and Jason-2 GDR-D solutions is evaluated through comparison with external orbits based on different time-variable gravity models. This contribution is limited to an East-West "order-1" pattern at the 2 mm/y level (secular) and 4 mm level (seasonal) over the Jason-2 lifetime. The possibility of achieving sub-mm/y radial orbit stability over interannual and decadal periods at regional scales, and the challenge of evaluating such an improvement using independent in situ data, are discussed.
Towards the 1 mm/y Stability of the Radial Orbit Error at Regional Scales
NASA Technical Reports Server (NTRS)
Couhert, Alexandre; Cerri, Luca; Legeais, Jean-Francois; Ablain, Michael; Zelensky, Nikita P.; Haines, Bruce J.; Lemoine, Frank G.; Bertiger, William I.; Desai, Shailen D.; Otten, Michiel
2014-01-01
An estimated orbit error budget for the Jason-1 and Jason-2 GDR-D solutions is constructed, using several measures of orbit error. The focus is on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends; in particular, this study reviews orbit errors dependent on the tracking technique, with the aim of monitoring the long-term stability of all available tracking systems operating on Jason-1 and Jason-2 (GPS, DORIS, SLR). The reference frame accuracy and its effect on the Jason orbits are assessed. We also examine the impact of the analysis method on the inference of Geographically Correlated Errors, as well as the significance of estimated radial orbit error trends versus the time span of the analysis. A long-term error budget of the 10-year Jason-1 and Envisat GDR-D orbit time series is thus provided for two time scales: interannual and decadal. As the temporal variations of the geopotential remain one of the primary limitations in Precision Orbit Determination modeling, the overall accuracy of the Jason-1 and Jason-2 GDR-D solutions is evaluated through comparison with external orbits based on different time-variable gravity models. This contribution is limited to an East-West "order-1" pattern at the 2 mm/y level (secular) and 4 mm level (seasonal) over the Jason-2 lifetime. The possibility of achieving sub-mm/y radial orbit stability over interannual and decadal periods at regional scales, and the challenge of evaluating such an improvement using independent in situ data, are discussed.
NASA Technical Reports Server (NTRS)
Lockwood, V. E.
1972-01-01
The investigation was made on a 1/18-scale model of a twin-engine light airplane. Static longitudinal, lateral, and directional characteristics were obtained at 0 deg and plus or minus 5 deg of sideslip at a Mach number of about 0.2. The angle of attack varied from about 20 deg at a Reynolds number of 0.39 × 10^6 to 13 deg at a Reynolds number of 3.7 × 10^6, based on the reference chord. The effect of fixed transition, vertical and horizontal tails, and nacelle fillets was studied.
NASA Astrophysics Data System (ADS)
Couhert, Alexandre
The reference Ocean Surface Topography Mission/Jason-2 satellite (CNES/NASA) has been in orbit for six years (since June 2008). It extends the continuous record of highly accurate sea surface height measurements begun in 1992 by the Topex/Poseidon mission and continued in 2001 by the Jason-1 mission. The complementary missions CryoSat-2 (ESA), HY-2A (CNSA) and SARAL/AltiKa (CNES/ISRO), with lower altitudes and higher inclinations, were launched in April 2010, August 2011 and February 2013, respectively. Although the last three satellites fly in different orbits, they contribute to the altimeter constellation while enhancing the global coverage. The CNES Precision Orbit Determination (POD) Group delivers precise and homogeneous orbit solutions for these independent altimeter missions. The focus of this talk will be on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends; in particular, orbit errors dependent on the tracking technique, the reference frame accuracy and stability, and the modeling of the temporal variations of the geopotential. Strategies are then explored to meet a 1 mm/y radial orbit stability over decadal periods at regional scales, and the challenge of evaluating such an improvement is discussed.
NASA Astrophysics Data System (ADS)
Sivapalan, M.; Jothityangkoon, C.; Menabde, M.
2002-02-01
Two uses of the terms "linearity" and "nonlinearity" appear in recent literature. The first definition of nonlinearity is with respect to a dynamical property, such as the rainfall-runoff response of a catchment; nonlinearity in this sense refers to a nonlinear dependence of the storm response on the magnitude of the rainfall inputs [Minshall, 1960; Wang et al., 1981]. The second definition of nonlinearity [Huang and Willgoose, 1993; Goodrich et al., 1997] is with respect to the dependence of a catchment statistical property, such as the mean annual flood, on the area of the catchment. Both are linked to important and interconnected hydrologic concepts, and, furthermore, the change of nonlinearity with area (scale) has been an important motivation for hydrologic research. While both definitions are correct mathematically, they refer to hydrologically different concepts. In this paper we show that nonlinearity in the dynamical sense and nonlinearity in the statistical sense can exist independently of each other (i.e., can be unrelated). If not carefully distinguished, the existence of these two definitions can lead to a catchment's response being described as both linear and nonlinear at the same time. We therefore recommend separating these definitions by reserving the term "nonlinearity" for the classical, dynamical definition with respect to rainfall inputs, while adopting the term "scaling relationship" for the dependence of a catchment hydrological property on catchment area.
NASA Astrophysics Data System (ADS)
Li, Wei; Gu, Jiao; Cai, Xu
2008-06-01
We study message spreading on a scale-free network by introducing a novel forget-remember mechanism. A message, a general term that can refer to email, news, a rumor, a disease, etc., can be forgotten and remembered by its holder. The way the message is forgotten and remembered is governed by the forget function F and the remember function R, respectively. Both F and R are functions of the history time t of an individual's previous states, namely being active (with the message) or inactive (without the message). Our systematic simulations show that, at low transmission rates, whether the spreading can be efficient is primarily determined by the corresponding parameters of F and R.
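A toy version of this dynamics can be written in a few lines. The preferential-attachment graph builder, the particular exponential forms chosen for F and R, and all rates below are assumptions made for illustration; the paper studies the mechanism for general forget and remember functions:

```python
import random
random.seed(1)

def ba_graph(n, m=2):
    """Small Barabási–Albert-style scale-free graph (degrees may dip below m
    when sampled targets coincide; good enough for a sketch)."""
    targets, repeated = list(range(m)), []
    adj = {i: set() for i in range(n)}
    for v in range(m, n):
        for t in set(targets):
            adj[v].add(t); adj[t].add(v)
        repeated.extend(targets); repeated.extend([v] * m)
        targets = random.sample(repeated, m)   # preferential attachment by degree
    return adj

# assumed exponential forms for the forget/remember functions of state-age t
def F(t): return 1 - 0.8 ** t      # chance an active holder has forgotten by age t
def R(t): return 0.3 * 0.9 ** t    # chance an inactive ex-holder remembers

adj = ba_graph(200)
state = {v: "never" for v in adj}  # never held / active / inactive
age = {v: 0 for v in adj}          # time spent in the current state
state[0] = "active"
beta = 0.1                         # transmission probability per contact

for _ in range(50):
    new = {}
    for v, s in state.items():
        if s == "active":
            for u in adj[v]:       # pass the message to naive neighbors
                if state[u] == "never" and random.random() < beta:
                    new[u] = "active"
            if random.random() < F(age[v]):
                new[v] = "inactive"
        elif s == "inactive" and random.random() < R(age[v]):
            new[v] = "active"
    for v, s in new.items():
        state[v], age[v] = s, 0
    for v in adj:
        if v not in new:
            age[v] += 1

print(sum(s != "never" for s in state.values()), "nodes ever held the message")
```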
Polymer Dispersed Liquid Crystal Displays
NASA Astrophysics Data System (ADS)
Doane, J. William
The following sections are included: * INTRODUCTION AND HISTORICAL DEVELOPMENT * PDLC MATERIALS PREPARATION * Polymerization induced phase separation (PIPS) * Thermally induced phase separation (TIPS) * Solvent induced phase separation (SIPS) * Encapsulation (NCAP) * RESPONSE VOLTAGE * Dielectric and resistive effects * Radial configuration * Bipolar configuration * Other director configurations * RESPONSE TIME * DISPLAY CONTRAST * Light scattering and index matching * Incorporation of dyes * Contrast measurements * PDLC DISPLAY DEVICES AND INNOVATIONS * Reflective direct view displays * Large-scale, flexible displays * Switchable windows * Projection displays * High definition spatial light modulator * Haze-free PDLC shutters: wide angle view displays * ENVIRONMENTAL STABILITY * ACKNOWLEDGEMENTS * REFERENCES
The Next Frontier in Computing
Sarrao, John
2018-06-13
Exascale computing refers to computing systems capable of at least one exaflop, or a billion billion calculations per second (10^18). That is 50 times faster than the most powerful supercomputers in use today and represents a thousand-fold increase over the first petascale computer, which came into operation in 2008. How we use these large-scale simulation resources is the key to solving some of today's most pressing problems, including clean energy production, nuclear reactor lifetime extension and nuclear stockpile aging.
Bellido-Zanin, Gloria; Perona-Garcelán, Salvador; Senín-Calderón, Cristina; López-Jiménez, Ana María; Ruiz-Veguilla, Miguel; Rodríguez-Testal, Juan Francisco
2018-05-29
Recent studies have emphasized the importance of childhood memories of threatening experiences and submissiveness in a diversity of psychological disorders. The purpose of this work was to study their specific relationship with hallucination proneness and ideas of reference in healthy subjects. The ELES scale for measuring memory of adverse childhood experiences, the DES-II scale for measuring dissociation, the LSHS-R scale for measuring hallucination proneness, and the REF for ideas of reference were applied to a sample of 472 subjects. A positive association was found between childhood memories of adverse experiences and hallucination proneness and ideas of reference, on one hand, and dissociation on the other. A mediation analysis showed that dissociation was a mediator between the memory of adverse childhood experiences and hallucination proneness on one hand, and ideas of reference on the other. When the role of mediator of the types of dissociative experiences was studied, it was found that absorption and depersonalization mediated between adverse experiences and hallucination proneness. However, this mediating effect was not found between adverse experiences and ideas of reference. The relationship between these last two variables was direct. The results suggest that childhood memories of adverse experiences are a relevant factor in understanding hallucination proneness and ideas of reference. Similarly, dissociation is a specific mediator between adverse childhood experiences and hallucination proneness. © 2018 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Upper Limit of Weights in TAI Computation
NASA Technical Reports Server (NTRS)
Thomas, Claudine; Azoubib, Jacques
1996-01-01
The international reference time scale, International Atomic Time (TAI), computed by the Bureau International des Poids et Mesures (BIPM), relies on a weighted average of data from a large number of atomic clocks, in which the weight attributed to a given clock depends on its long-term stability. In this paper the TAI algorithm is used as the basis for a discussion of how to implement an upper limit of weight for clocks contributing to the ensemble time. This problem is approached through the comparison of two different techniques. In one case, a maximum relative weight is fixed: no individual clock can contribute more than a given fraction to the resulting time scale. The weight of each clock is then adjusted according to the qualities of the whole set of contributing elements. In the other case, a parameter characteristic of frequency stability is chosen: no individual clock can appear more stable than the stated limit. This is equivalent to choosing an absolute limit of weight and attributing it to the most stable clocks independently of the other elements of the ensemble. The first technique is more robust than the second and automatically optimizes the stability of the resulting time scale, but leads to a more complicated computation. The second technique has been used in the TAI algorithm since the very beginning. Careful analysis of tests on real clock data shows that improving the stability of the time scale requires revising, from time to time, the fixed value chosen for the upper limit of absolute weight. In particular, we present results which confirm the decision of the CCDS Working Group on TAI to increase the absolute upper limit by a factor of 2.5. We also show that the use of an upper relative contribution further helps to improve the stability and may be a useful step towards better use of the massive ensemble of HP 5071A clocks now contributing to TAI.
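The first technique (a maximum relative weight) can be sketched as iterative capping of normalized weights: pin every clock that exceeds the relative cap, then re-share the remaining weight among the free clocks. The inverse-variance weighting and the cap value here are illustrative assumptions, not the BIPM algorithm's actual stability estimator:

```python
import numpy as np

def tai_weights(instabilities, max_rel=0.05):
    """Weights proportional to 1/sigma^2, with no clock allowed to
    contribute more than max_rel of the total (assumes max_rel * N >= 1)."""
    w = 1.0 / np.asarray(instabilities, dtype=float) ** 2
    w /= w.sum()
    capped = np.zeros(len(w), dtype=bool)
    while True:
        over = (w > max_rel) & ~capped
        if not over.any():
            return w
        capped |= over
        w[capped] = max_rel                      # pin offenders at the cap
        free = ~capped
        budget = 1.0 - capped.sum() * max_rel    # weight left for the others
        w[free] *= budget / w[free].sum()        # re-share proportionally

# one very stable clock among 100: it is pinned at the cap, not at ~50%
w = tai_weights([0.1] + [1.0] * 99)
print(round(w[0], 3), round(w.max(), 3))
```

Note that re-sharing can push a previously free clock over the cap, which is why the capping loops until no violator remains; the loop terminates because the capped set only grows.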
Bohrer, Stefanie L; Limb, Ryan F; Daigh, Aaron L; Volk, Jay M; Wick, Abbey F
2017-03-01
Rangelands are described as heterogeneous, due to patterning in species assemblages and productivity that arise from species dispersal and interactions with environmental gradients and disturbances across multiple scales. The objectives of rangeland reclamation are typically vegetation establishment, plant community productivity, and soil stability. However, while fine-scale diversity is often promoted through species-rich seed mixes, landscape heterogeneity and coarse-scale diversity are largely overlooked. Our objectives were to evaluate fine and coarse-scale vegetation patterns across a 40-year reclamation chronosequence on reclaimed surface coalmine lands. We hypothesized that both α-diversity and β-diversity would increase and community patch size and species dissimilarity to reference sites would decrease on independent sites over 40 years. Plant communities were surveyed on 19 post-coalmine reclaimed sites and four intact native reference sites in central North Dakota mixed-grass prairie. Our results showed no differences in α-diversity or β-diversity and plant community patch size over the 40-year chronosequence. However, both α-diversity and β-diversity on reclaimed sites were similar to reference sites. Native species establishment was limited due to the presence of non-native species such as Kentucky bluegrass (Poa pratensis) on both the reclaimed and reference sites. Species composition was different between reclaimed and reference sites, and community dissimilarity increased on reclaimed sites over the 40-year chronosequence. Plant communities resulting from reclamation followed non-equilibrium succession, even with consistent seed mixes established across all reclamation years. This suggests post-reclamation management strategies influence species composition outcomes, and land management strategies applied uniformly may not increase landscape-level diversity.
NASA Astrophysics Data System (ADS)
Rice, Joshua S.; Emanuel, Ryan E.; Vose, James M.
2016-09-01
As human activity and climate variability alter the movement of water through the environment, the need to better understand hydrologic cycle responses to these changes has grown. A reasonable starting point for gaining such insight is studying changes in streamflow, given the importance of streamflow as a source of renewable freshwater. Using a wavelet-assisted method, we analyzed trends in the magnitude of annual-scale streamflow variability from 967 watersheds in the continental U.S. (CONUS) over a 70-year period (1940-2009). Decreased annual variability was the dominant pattern at the CONUS scale. Ecoregion-scale results agreed with the CONUS pattern, with the exception of two ecoregions closely divided between increases and decreases and one where increases dominated. A comparison of trends in reference and non-reference watersheds indicated that trend magnitudes in non-reference watersheds were significantly larger than those in reference watersheds. Boosted regression tree (BRT) models were used to study the relationship between watershed characteristics and the magnitude of trends in streamflow. At the CONUS scale, the balance between precipitation and evaporative demand, and measures of geographic location, were of high relative importance. Relationships between the magnitude of trends and watershed characteristics at the ecoregion scale differed from the CONUS results, and substantial variability was observed among ecoregions. Additionally, the methodology used here has the potential to serve as a robust framework for top-down, data-driven analyses of the relationships between changes in the hydrologic cycle and the spatial context within which those changes occur.
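The trend-in-variability analysis can be illustrated on synthetic data. Here the wavelet-derived magnitude of annual-scale variability is replaced by a much cruder proxy (the per-year standard deviation of flow), and an ordinary least-squares slope stands in for the paper's trend estimate; the flow series below is entirely made up:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1940, 2010)
# synthetic daily flow: 70 years x 365 days, annual cycle whose amplitude decays
amp = np.linspace(1.0, 0.6, len(years))
flow = np.array([a * np.sin(2 * np.pi * np.arange(365) / 365)
                 + 0.1 * rng.standard_normal(365) for a in amp])

# crude proxy for "annual-scale variability": per-year standard deviation
var_annual = flow.std(axis=1)

# trend in that variability: least-squares slope per year
slope = np.polyfit(years, var_annual, 1)[0]
print(f"trend in annual-scale variability: {slope:.5f} per year")
```

A negative slope corresponds to the "decreased annual variability" pattern the abstract reports as dominant at the CONUS scale.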
A new resource for developing and strengthening large-scale community health worker programs.
Perry, Henry; Crigler, Lauren; Lewin, Simon; Glenton, Claire; LeBan, Karen; Hodgins, Steve
2017-01-12
Large-scale community health worker programs are now growing in importance around the world in response to the resurgence of interest in, and growing evidence of the importance of, community-based primary health care for improving the health of populations in resource-constrained, high-mortality settings. These programs, because of their scale and operational challenges, merit special consideration by the global health community, national policy-makers, and program implementers. A new online resource is now available to assist in that effort: Developing and Strengthening Community Health Worker Programs at Scale: A Reference Guide and Case Studies for Program Managers and Policymakers (http://www.mchip.net/CHWReferenceGuide). This CHW Reference Guide is the product of 27 different collaborators who, collectively, have a formidable breadth and depth of experience and knowledge about CHW programming around the world. It provides a thoughtful discussion of the many operational issues that large-scale CHW programs need to address as they undergo the process of development, expansion or strengthening. Detailed case studies of 12 national CHW programs are included in the Appendix, the most current and complete set of case studies currently available. Future articles in this journal will highlight many of the themes in the CHW Reference Guide and provide an update of recent advances and experiences. These articles will serve, we hope, to (1) increase awareness of the CHW Reference Guide and its usefulness and (2) connect a broader audience to the critical importance of strengthening large-scale CHW programs for the health benefits that they can bring to underserved populations around the world.
Decision support for the selection of reference sites using 137Cs as a soil erosion tracer
NASA Astrophysics Data System (ADS)
Arata, Laura; Meusburger, Katrin; Bürge, Alexandra; Zehringer, Markus; Ketterer, Michael E.; Mabit, Lionel; Alewell, Christine
2017-08-01
The classical approach of using 137Cs as a soil erosion tracer is based on the comparison between stable reference sites and sites affected by soil redistribution processes; it enables the derivation of soil erosion and deposition rates. The method is associated with potentially large sources of uncertainty, a major part of which is associated with the selection of the reference sites. We propose a decision support tool to Check the Suitability of reference Sites (CheSS). Commonly, the variation among 137Cs inventories of spatially replicated reference samples is taken as the sole criterion for deciding on the suitability of a reference inventory. Here we propose an extension of this procedure using a repeated sampling approach, in which the reference sites are resampled after a certain time period. Suitable reference sites are expected to show no significant temporal variation in their decay-corrected 137Cs depth profiles. Possible causes of variation are assessed by a decision tree. More specifically, the decision tree tests for (i) uncertainty connected to small-scale variability in 137Cs due to its heterogeneous initial fallout (such as in areas affected by the Chernobyl fallout), (ii) signs of erosion or deposition processes and (iii) artefacts due to the collection, preparation and measurement of the samples; (iv) finally, if none of the above can be assigned, the variation might be attributed to turbation processes (e.g. bioturbation, cryoturbation and mechanical turbation, such as avalanches or rockfalls). CheSS was applied, by way of example, in one Swiss alpine valley, where the apparent temporal variability called into question the suitability of the selected reference sites. In general, we suggest the application of CheSS as a first step towards a comprehensible approach to testing the suitability of reference sites.
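The four tests of the decision tree can be paraphrased as a simple rule chain; the function name, argument names and return strings below are this sketch's own wording, not CheSS terminology:

```python
def chess_decision(fallout_heterogeneity, erosion_or_deposition, sampling_artefact):
    """Paraphrase of the CheSS decision tree for a reference site whose
    resampled, decay-corrected 137Cs profile shows significant variation."""
    if fallout_heterogeneity:      # (i) heterogeneous initial fallout (e.g. Chernobyl)
        return "unsuitable: small-scale fallout variability"
    if erosion_or_deposition:      # (ii) the site is not actually stable
        return "unsuitable: erosion or deposition signs"
    if sampling_artefact:          # (iii) collection/preparation/measurement artefact
        return "repeat sampling or measurement"
    return "attribute variation to turbation processes"   # (iv) fallback

print(chess_decision(False, True, False))
```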
Validating Large Scale Networks Using Temporary Local Scale Networks
USDA-ARS?s Scientific Manuscript database
The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...
Schneider, Valerie A.; Graves-Lindsay, Tina; Howe, Kerstin; Bouk, Nathan; Chen, Hsiu-Chuan; Kitts, Paul A.; Murphy, Terence D.; Pruitt, Kim D.; Thibaud-Nissen, Françoise; Albracht, Derek; Fulton, Robert S.; Kremitzki, Milinn; Magrini, Vincent; Markovic, Chris; McGrath, Sean; Steinberg, Karyn Meltz; Auger, Kate; Chow, William; Collins, Joanna; Harden, Glenn; Hubbard, Timothy; Pelan, Sarah; Simpson, Jared T.; Threadgold, Glen; Torrance, James; Wood, Jonathan M.; Clarke, Laura; Koren, Sergey; Boitano, Matthew; Peluso, Paul; Li, Heng; Chin, Chen-Shan; Phillippy, Adam M.; Durbin, Richard; Wilson, Richard K.; Flicek, Paul; Eichler, Evan E.; Church, Deanna M.
2017-01-01
The human reference genome assembly plays a central role in nearly all aspects of today's basic and clinical research. GRCh38 is the first coordinate-changing assembly update since 2009; it reflects the resolution of roughly 1000 issues and encompasses modifications ranging from thousands of single base changes to megabase-scale path reorganizations, gap closures, and localization of previously orphaned sequences. We developed a new approach to sequence generation for targeted base updates and used data from new genome mapping technologies and single haplotype resources to identify and resolve larger assembly issues. For the first time, the reference assembly contains sequence-based representations for the centromeres. We also expanded the number of alternate loci to create a reference that provides a more robust representation of human population variation. We demonstrate that the updates render the reference an improved annotation substrate, alter read alignments in unchanged regions, and impact variant interpretation at clinically relevant loci. We additionally evaluated a collection of new de novo long-read haploid assemblies and conclude that although the new assemblies compare favorably to the reference with respect to continuity, error rate, and gene completeness, the reference still provides the best representation for complex genomic regions and coding sequences. We assert that the collected updates in GRCh38 make the newer assembly a more robust substrate for comprehensive analyses that will promote our understanding of human biology and advance our efforts to improve health. PMID:28396521
An examination of the MASC Social Anxiety Scale in a non-referred sample of adolescents.
Anderson, Emily R; Jordan, Judith A; Smith, Ashley J; Inderbitzen-Nolan, Heidi M
2009-12-01
Social phobia is prevalent during adolescence and is associated with negative outcomes. Two self-report instruments have been empirically validated to specifically assess social phobia symptomatology in youth: the Social Phobia and Anxiety Inventory for Children and the Social Anxiety Scale for Adolescents. The Multidimensional Anxiety Scale for Children (MASC) is a broad-band measure of anxiety containing a scale assessing the social phobia construct. The present study investigated the MASC Social Anxiety Scale in relation to other well-established measures of social phobia and depression in a non-referred sample of adolescents. Results support the convergent validity of the MASC Social Anxiety Scale and provide some support for its discriminant validity, suggesting its utility in the initial assessment of social phobia. Receiver operating characteristic (ROC) analyses were used to calculate the sensitivity and specificity of the MASC Social Anxiety Scale, and binary logistic regression analyses determined its predictive utility. Implications for assessment are discussed.
The East Asian Jet Stream and Asian-Pacific-American Climate
NASA Technical Reports Server (NTRS)
Yang, Song; Lau, K.-M.; Kim, K.-M.
2000-01-01
The upper-tropospheric westerly jet stream over subtropical East Asia and the western Pacific, often referred to as the East Asian Jet (EAJ), is an important atmospheric circulation system in the Asian-Pacific-American (APA) region during winter. It is characterized by variabilities on a wide range of time scales and exerts a strong impact on the weather and climate of the region. On the synoptic scale, the jet stream is closely linked to many phenomena such as cyclogenesis, frontogenesis, blocking, storm track activity, and the development of other atmospheric disturbances. On the seasonal time scale, the variation of the EAJ determines many characteristics of the seasonal transition of the atmospheric circulation, especially over East Asia. The variabilities of the EAJ on these time scales have been relatively well documented. It has also been understood for decades that the interannual variability of the EAJ is associated with many climate signals in the APA region. These signals include the persistent anomalies of the East Asian winter monsoon and the changes in diabatic heating and in the Hadley circulation. However, many questions remain about the year-to-year variabilities of the EAJ and their relation to the APA climate. For example, what is the relationship between the EAJ and El Nino/Southern Oscillation (ENSO)? Will the EAJ and ENSO play different roles in modulating the APA climate? How is the jet stream linked to non-ENSO-related sea surface temperature (SST) anomalies and to the Pacific/North American (PNA) teleconnection pattern?
Development of a scale of executive functioning for the RBANS.
Spencer, Robert J; Kitchen Andren, Katherine A; Tolle, Kathryn A
2018-01-01
The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) is a cognitive battery that contains scales of several cognitive abilities, but no scale in the instrument is exclusively dedicated to executive functioning. Although the subtests allow for observation of executive-type errors, each error is of fairly low base rate, and healthy and clinical normative data are lacking on the frequency of these types of errors, making their significance difficult to interpret in isolation. The aim of this project was to create an RBANS executive errors scale (RBANS EE) with items comprised of qualitatively dysexecutive errors committed throughout the test. Participants included Veterans referred for outpatient neuropsychological testing. Items were initially selected based on theoretical literature and were retained based on item-total correlations. The RBANS EE (a percentage calculated by dividing the number of dysexecutive errors by the total number of responses) was moderately related to each of seven established measures of executive functioning and was strongly predictive of dichotomous classification of executive impairment. Thus, the scale had solid concurrent validity, justifying its use as a supplementary scale. The RBANS EE requires no additional administration time and can provide a quantified measure of otherwise unmeasured aspects of executive functioning.
Isotalo, Aarno E.; Wieselquist, William A.
2015-05-15
A method for including external feed with polynomial time dependence in depletion calculations with the Chebyshev Rational Approximation Method (CRAM) is presented, and the implementation of CRAM in the ORIGEN module of the SCALE suite is described. In addition to handling time-dependent feed rates, the new solver adds the capability to perform adjoint calculations. Results obtained with the new CRAM solver and the original depletion solver of ORIGEN are compared to high-precision reference calculations, which shows the new solver to be orders of magnitude more accurate. Lastly, in most cases, the new solver is up to several times faster because it does not require the substepping that the original solver does.
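Without external feed, depletion reduces to the Bateman equations dN/dt = AN with solution N(t) = exp(At) N0; CRAM is a rational approximation to that matrix exponential. A hedged sketch of the underlying problem for a two-nuclide decay chain with hypothetical decay constants, evaluating the exponential by eigendecomposition rather than an actual CRAM expansion, and checked against the analytic Bateman solution:

```python
import numpy as np

# Depletion without feed: dN/dt = A N, so N(t) = expm(A t) N0.
# Hypothetical two-nuclide chain: nuclide 1 decays into nuclide 2.
l1, l2 = 0.3, 0.05            # hypothetical decay constants (1/s)
A = np.array([[-l1, 0.0],
              [ l1, -l2]])
N0 = np.array([1.0, 0.0])     # start with pure nuclide 1
t = 10.0

# Evaluate expm(A t) exactly via eigendecomposition (stand-in for CRAM)
w, V = np.linalg.eig(A)
Nt = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)) @ N0

# Analytic Bateman solution for comparison
N1 = np.exp(-l1 * t)
N2 = l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))
```

For realistic burnup matrices with thousands of nuclides and eigenvalues spanning tens of orders of magnitude, a rational approximation such as CRAM is used precisely because direct approaches like this become inaccurate or intractable.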
Saccani, Raquel; Valentini, Nadia C
2012-01-01
To compare Alberta Infant Motor Scale scores for Brazilian infants with the Canadian norm and to construct sex-specific reference curves and percentiles for motor development for a Brazilian population. This study recruited 795 children aged 0 to 18 months from a number of different towns in Brazil. Infants were assessed by an experienced researcher in a silent room using the Alberta Infant Motor Scale. Sex-specific percentiles (P5, P10, P25, P50, P75 and P90) were calculated and analyzed for each age in months from 0 to 18 months. No significant differences (p > 0.05) between boys and girls were observed for the majority of ages. The exception was 14 months, where the girls scored higher for overall motor performance (p = 0.015) and had a higher developmental percentile (p = 0.021). The development curves demonstrated a tendency toward nonlinear development in both sexes and for both typical and atypical children. Variation in motor acquisition was minimal at the extremes of the age range: during the first two months of life and from 15 months onwards. Although the Alberta Infant Motor Scale is widely used in both research and clinical practice, it has certain limitations in terms of behavioral differentiation before 2 months and after 15 months. This reduced sensitivity at the extremes of the age range may be related to the number of motor items assessed at these ages and their difficulty. It is suggested that other screening instruments be employed for children over the age of 15 months.
Al-Saleh, Khaled; Al-Awadi, Ahmad; Soliman, Najla A; Mostafa, Sobhy; Mostafa, Mohammad; Mostafa, Wafaa; Alsirafy, Samy A
2017-05-01
Compared to other regions of the world, palliative care (PC) in the Eastern Mediterranean region is at an earlier stage of development. The Palliative Care Center of Kuwait (PCC-K) was established a few years ago as the first stand-alone PC center in the region. This study was conducted to investigate the timing of referral to the PCC-K and its outcome, through a retrospective review of referrals to the PCC-K during its first 3 years of operation. Late referral was defined as referral during the last 30 days of life. During the 3-year period, 498 patients with cancer were referred to the PCC-K, of whom 467 were eligible for analysis. Referral was considered late in 58% of patients. Nononcology facilities were more likely than oncology facilities to refer patients late (P = .033). The palliative performance scale (PPS) was ≤30 in 59% of late referrals and 21% of earlier referrals (P < .001). Among the 467 referred patients, 342 (73%) were eligible for transfer to the PCC-K, 102 (22%) were ineligible, and 23 (5%) died before assessment by the PCC-K consultation team. Of the 342 eligible patients, the family caregivers refused the transfer of 64 (19%) to the PCC-K. Patients are frequently referred late to the PCC-K. Further research to identify barriers to PC and its early integration in Kuwait is required. The PPS may be useful in identifying late referrals.
Two Point Space-Time Correlation of Density Fluctuations Measured in High Velocity Free Jets
NASA Technical Reports Server (NTRS)
Panda, Jayanta
2006-01-01
Two-point space-time correlations of air density fluctuations in unheated, fully-expanded free jets at Mach numbers M(sub j) = 0.95, 1.4, and 1.8 were measured using a Rayleigh scattering based diagnostic technique. The molecular scattered light from two small probe volumes of 1.03 mm length was measured for a completely non-intrusive means of determining the turbulent density fluctuations. The time series of density fluctuations were analyzed to estimate the integral length scale L in a moving frame of reference and the convective Mach number M(sub c) at different narrow Strouhal frequency (St) bands. It was observed that M(sub c) and the normalized moving frame length scale L*St/D, where D is the jet diameter, increased with Strouhal frequency before leveling off at the highest resolved frequency. Significant differences were observed between data obtained from the lip shear layer and the centerline of the jet. The wave number frequency transform of the correlation data demonstrated progressive increase in the radiative part of turbulence fluctuations with increasing jet Mach number.
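The convective velocity in such two-point measurements comes from the time delay at which the cross-correlation of the two probe signals peaks. A minimal synthetic sketch, assuming a frozen-turbulence signal and hypothetical sampling rate, probe spacing, and convection speed:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0      # hypothetical sampling rate (Hz)
dx = 0.01        # hypothetical probe separation (m)
u_c = 5.0        # true convection speed to recover (m/s)
lag_true = int(round(dx / u_c * fs))   # delay in samples between probes

# Frozen-turbulence model: the downstream probe sees a delayed copy
s = rng.standard_normal(4096)
probe1 = s
probe2 = np.roll(s, lag_true)

# Full cross-correlation; the peak lag gives the convective time delay
corr = np.correlate(probe2 - probe2.mean(), probe1 - probe1.mean(), mode="full")
lag = np.argmax(corr) - (len(s) - 1)
u_estimated = dx / (lag / fs)
```

In the actual measurements the peak is broadened by turbulent decorrelation, and band-passing the signals into narrow Strouhal-frequency bands before correlating yields the frequency-dependent convective Mach number reported above.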
Constant- q data representation in Neutron Compton scattering on the VESUVIO spectrometer
NASA Astrophysics Data System (ADS)
Senesi, R.; Pietropaolo, A.; Andreani, C.
2008-09-01
Standard data analysis on the VESUVIO spectrometer at ISIS is carried out within the Impulse Approximation framework, making use of the West scaling variable y. The experiments are performed using the time-of-flight technique with the detectors positioned at constant scattering angles. Line shape analysis is routinely performed in the y-scaling framework, using two different (and equivalent) approaches: (1) fitting the parameters of the recoil peaks directly to fixed-angle time-of-flight spectra; (2) transforming the time-of-flight spectra into fixed-angle y spectra, referred to as the Neutron Compton Profiles, and then fitting the line shape parameters. The present work shows that scattering signals from different fixed-angle detectors can be collected and rebinned to obtain Neutron Compton Profiles at constant wave vector transfer, q, allowing for a suitable interpretation of data in terms of the dynamical structure factor, S(q,ω). The current limits of applicability of such a procedure are discussed in terms of the available q-range and relative uncertainties for the VESUVIO experimental set up and of the main approximations involved.
On-chip dual-comb source for spectroscopy.
Dutt, Avik; Joshi, Chaitanya; Ji, Xingchen; Cardenas, Jaime; Okawachi, Yoshitomo; Luke, Kevin; Gaeta, Alexander L; Lipson, Michal
2018-03-01
Dual-comb spectroscopy is a powerful technique for real-time, broadband optical sampling of molecular spectra, which requires no moving components. Recent developments with microresonator-based platforms have enabled frequency combs at the chip scale. However, the need to precisely match the resonance wavelengths of distinct high quality-factor microcavities has hindered the development of on-chip dual combs. We report the simultaneous generation of two microresonator combs on the same chip from a single laser, drastically reducing experimental complexity. We demonstrate broadband optical spectra spanning 51 THz and low-noise operation of both combs by deterministically tuning into soliton mode-locked states using integrated microheaters, resulting in narrow (<10 kHz) microwave beat notes. We further use one comb as a reference to probe the formation dynamics of the other comb, thus introducing a technique to investigate comb evolution without auxiliary lasers or microwave oscillators. We demonstrate high signal-to-noise ratio absorption spectroscopy spanning 170 nm using the dual-comb source over a 20-μs acquisition time. Our device paves the way for compact and robust spectrometers at nanosecond time scales enabled by large beat-note spacings (>1 GHz).
NASA Astrophysics Data System (ADS)
Gupta, R. K.; Vijayan, D.
Gir wildlife sanctuary, located between 20°57′ to 21°20′ N and 70°28′ to 71°13′ E, is the last home of Asiatic lions. Its biodiversity comprises 450 recorded flowering plant species, 32 species of mammals, 26 species of reptiles, about 300 species of birds, and more than 2000 species of insects. As per the 1995 census it has 304 lions and 268 leopards. The movement of wildlife to thermally comfortable zones to reduce stress conditions forces changes in the management plan with reference to change in localized water demand. This necessitates the use of space-based thermal data available from AVHRR, MODIS, etc., to monitor the temperature of the Gir ecosystem for meso-scale operational utility. As the time scale of the variability of the NDVI parameter is much higher than that of the lower boundary temperature (LBT), the dense patch in riverine forest having the highest NDVI value would not experience a change in its vigour with the change in season. The NDVI value of such a patch would be nearly invariant over the year, and the temperature of this pixel could serve as a reference temperature for developing the concept of a relative thermal stress index (RTSI), defined as RTSI = (T_p - T_r)/(T_max - T_r), wherein T_r, T_max and T_p refer to the LBT over the maximum-NDVI reference point, the maximum LBT observed in the Gir ecosystem, and the temperature of the pixel in the image, respectively. RTSI images were computed from AVHRR images for the post-monsoon, leaf-shedded and summer seasons. Scatter plot between RTSI and NDVI for summer seasons
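The RTSI definition above applies per pixel once the reference pixel is identified; a minimal sketch with hypothetical LBT and NDVI values on a tiny 2x2 image:

```python
import numpy as np

# Hypothetical lower-boundary-temperature (LBT) and NDVI images
lbt = np.array([[305.0, 312.0],
                [318.0, 309.0]])    # LBT (K)
ndvi = np.array([[0.82, 0.41],
                 [0.15, 0.55]])     # NDVI, highest at the densest vegetation

# T_r: LBT at the maximum-NDVI reference pixel; T_max: scene maximum LBT
t_r = lbt.flat[np.argmax(ndvi)]
t_max = lbt.max()

# RTSI = (T_p - T_r) / (T_max - T_r), per pixel
rtsi = (lbt - t_r) / (t_max - t_r)
```

By construction the reference pixel gets RTSI = 0 and the hottest pixel RTSI = 1, so the index expresses each pixel's thermal stress relative to the seasonally invariant vegetation reference.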
RANS Simulation (Rotating Reference Frame Model [RRF]) of Single Lab-Scaled DOE RM1 MHK Turbine
Javaherchi, Teymour; Stelzenmuller, Nick; Aliseda, Alberto; Seydel, Joseph
2014-04-15
Attached are the .cas and .dat files for the Reynolds Averaged Navier-Stokes (RANS) simulation of a single lab-scaled DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a redesigned geometry, based on the full-scale DOE RM1 design, producing the same power output as the full-scale model while operating at matched Tip Speed Ratio values at reachable laboratory Reynolds numbers (see attached paper). In this case study, taking advantage of the symmetry of the lab-scaled DOE RM1 geometry, only half of the geometry is modeled using the (single) Rotating Reference Frame [RRF] model. In this model the RANS equations, coupled with the k-ω turbulence closure model, are solved in the rotating reference frame. The actual geometry of the turbine blade is included, and the turbulent boundary layer along the blade span is simulated using a wall-function approach. The rotation of the blade is modeled by applying periodic boundary conditions to the planes of symmetry. This case study simulates the performance and flow field in the near and far wake of the device at the desired operating conditions. The results of these simulations were validated against in-house experimental data. Please see the attached paper.
2010-01-01
Background Psychological distress (PD) includes symptoms of depression and anxiety and is associated with considerable emotional suffering, social dysfunction and, often, with problematic alcohol use. The rate of current PD among American Indian women is approximately 2.5 times higher than that of U.S. women in general. Our study aims to fill the current knowledge gap about the prevalence and characteristics of PD and its association with self-reported current drinking problems among American Indian mothers whose children were referred to screening for fetal alcohol spectrum disorders (FASD). Methods Secondary analysis of cross-sectional data was conducted from maternal interviews of referred American Indian mothers (n = 152) and a comparison group of mothers (n = 33) from the same Plains culture tribes who participated in an NIAAA-funded epidemiology study of FASD. Referred women were from one of six Plains Indian reservation communities and one urban area who bore children suspected of having an FASD. A 6-item PD scale (PD-6, Cronbach's alpha = .86) was constructed with a summed score range of 0-12 and a cut-point of 7 indicating serious PD. Multiple statistical tests were used to examine the characteristics of PD and its association with self-reported current drinking problems. Results Referred and comparison mothers had an average age of 31.3 years but differed (respectively) on: education (
Modelling Greenland Outlet Glaciers
NASA Technical Reports Server (NTRS)
vanderVeen, Cornelis; Abdalati, Waleed (Technical Monitor)
2001-01-01
The objective of this project was to develop simple yet realistic models of Greenland outlet glaciers to better understand ongoing changes and to identify possible causes for these changes. Several approaches can be taken to evaluate the interaction between climate forcing and ice dynamics, and the consequent ice-sheet response, which may involve changes in flow style. To evaluate the icesheet response to mass-balance forcing, Van der Veen (Journal of Geophysical Research, in press) makes the assumption that this response can be considered a perturbation on the reference state and may be evaluated separately from how this reference state evolves over time. Mass-balance forcing has an immediate effect on the ice sheet. Initially, the rate of thickness change as compared to the reference state equals the perturbation in snowfall or ablation. If the forcing persists, the ice sheet responds dynamically, adjusting the rate at which ice is evacuated from the interior to the margins, to achieve a new equilibrium. For large ice sheets, this dynamic adjustment may last for thousands of years, with the magnitude of change decreasing steadily over time as a new equilibrium is approached. This response can be described using kinematic wave theory. This theory, modified to pertain to Greenland drainage basins, was used to evaluate possible ice-sheet responses to perturbations in surface mass balance. The reference state is defined based on measurements along the central flowline of Petermann Glacier in north-west Greenland, and perturbations on this state considered. The advantage of this approach is that the particulars of the dynamical flow regime need not be explicitly known but are incorporated through the parameterization of the reference ice flux or longitudinal velocity profile. 
The results of the kinematic wave model indicate that significant rates of thickness change can occur immediately after the prescribed change in surface mass balance but adjustments in flow rapidly diminish these rates to a few cm/yr at most. The time scale for adjustment is of the order of a thousand years or so.
Krasny-Pacini, A; Pauly, F; Hiebel, J; Godon, S; Isner-Horobeti, M-E; Chevignard, M
2017-07-01
Goal Attainment Scaling (GAS) is a method for writing personalized evaluation scales to quantify progress toward defined goals. It is useful in rehabilitation but is hampered by the experience required to adequately "predict" the possible outcomes relating to a particular goal before treatment and the time needed to describe all 5 levels of the scale. Here we aimed to investigate the feasibility of using GAS in a clinical setting of a pediatric spasticity clinic with a shorter method, the "3-milestones" GAS (goal setting with 3 levels and goal rating with the classical 5 levels). Secondary aims were to (1) analyze the types of goals children's therapists set for botulinum toxin treatment and (2) compare the score distribution (and therefore the ability to predict outcome) by goal type. Therapists were trained in GAS writing and prepared GAS scales in the regional spasticity-management clinic they attended with their patients and families. The study included all GAS scales written during a 2-year period. GAS score distribution across the 5 GAS levels was examined to assess whether the therapist could reliably predict outcome and whether the 3-milestones GAS yielded similar distributions as the original GAS method. In total, 541 GAS scales were written and showed the expected score distribution. Most scales (55%) referred to movement quality goals and fewer (29%) to family goals and activity domains. The 3-milestones GAS method was feasible within the time constraints of the spasticity clinic and could be used by local therapists in cooperation with the hospital team. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Tepavcevic, Darija Kisic; Pekmezovic, Tatjana; Stojsavljevic, Nebojsa; Kostic, Jelena; Basuroski, Irena Dujmovic; Mesaros, Sarlota; Drulovic, Jelena
2014-04-01
The aim of this study was to determine the changes in the health-related quality of life (HRQoL) and predictors of change among patients with multiple sclerosis (MS) at 3 and 6 years during the follow-up period. A group of 109 consecutive MS patients (McDonald's criteria) referred to the Clinic of Neurology, Belgrade, were enrolled in the study. At three time points during the study (baseline, and at 3 and 6 years during the follow-up period), the HRQoL (measured by MSQoL-54), Expanded Disability Status Scale, and Hamilton Rating Scale for Depression and Fatigue Severity Scale were assessed. During the study period, 93 patients provided both follow-up assessments. Statistically significant deterioration in the HRQoL at each subsequent time point was detected for all scales of the MSQoL-54 except for the pain and change in health scales. A higher level of education was a significant prognostic factor for a better HRQoL on the cognitive function scale throughout the entire period of observation, while marital status (single, including divorced and widowed) and increased age at the onset of MS had significant predictive values of poorer quality-of-life scores on the overall quality-of-life scale at 6-year follow-up. Higher levels of physical disability and depression at baseline were statistically significant prognostic markers for deterioration in HRQoL for the majority of MSQoL-54 scales during the entire follow-up period. Our study suggests that baseline demographic and clinical characteristics could be applied as prognostic markers of the HRQOL for patients diagnosed with MS.
Optimal Reference Genes for Gene Expression Normalization in Trichomonas vaginalis.
dos Santos, Odelta; de Vargas Rigo, Graziela; Frasson, Amanda Piccoli; Macedo, Alexandre José; Tasca, Tiana
2015-01-01
Trichomonas vaginalis is the etiologic agent of trichomonosis, the most common non-viral sexually transmitted disease worldwide. This infection is associated with several health consequences, including cervical and prostate cancers and HIV acquisition. Gene expression analysis has been facilitated because of available genome sequences and large-scale transcriptomes in T. vaginalis, particularly using quantitative real-time polymerase chain reaction (qRT-PCR), one of the most used methods for molecular studies. Reference genes for normalization are crucial to ensure the accuracy of this method. However, to the best of our knowledge, a systematic validation of reference genes has not been performed for T. vaginalis. In this study, the transcripts of nine candidate reference genes were quantified using qRT-PCR under different cultivation conditions, and the stability of these genes was compared using the geNorm and NormFinder algorithms. The most stable reference genes were α-tubulin, actin and DNATopII, and, conversely, the widely used T. vaginalis reference genes GAPDH and β-tubulin were less stable. The PFOR gene was used to validate the reliability of the use of these candidate reference genes. As expected, the PFOR gene was upregulated when the trophozoites were cultivated with ferrous ammonium sulfate when the DNATopII, α-tubulin and actin genes were used as normalizing gene. By contrast, the PFOR gene was downregulated when the GAPDH gene was used as an internal control, leading to misinterpretation of the data. These results provide an important starting point for reference gene selection and gene expression analysis with qRT-PCR studies of T. vaginalis.
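The geNorm algorithm used above ranks candidate reference genes by a stability measure M: for each gene, the mean standard deviation of its pairwise log2 expression ratios against every other candidate, with the lowest M indicating the most stable gene. A minimal sketch with hypothetical relative expression values across four cultivation conditions (gene names echo the study; the numbers are invented):

```python
import numpy as np

# Hypothetical relative expression across four conditions
expr = {
    "alpha_tubulin": np.array([1.0, 1.1, 0.95, 1.05]),
    "actin":         np.array([2.0, 2.2, 1.9,  2.1]),
    "GAPDH":         np.array([1.0, 3.0, 0.5,  2.0]),  # made unstable on purpose
}

def genorm_m(expr):
    """M for each gene: mean std-dev of log2 ratios against every other gene."""
    genes = list(expr)
    m = {}
    for g in genes:
        vs = [np.std(np.log2(expr[g] / expr[h])) for h in genes if h != g]
        m[g] = float(np.mean(vs))
    return m

m_values = genorm_m(expr)
most_stable = min(m_values, key=m_values.get)   # lowest M = most stable
```

A stable pair keeps a near-constant ratio across conditions (low M), whereas a regulated gene such as the deliberately noisy GAPDH here drifts against everything and receives the highest M.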
Long-term coastal measurements for large-scale climate trends characterization
NASA Astrophysics Data System (ADS)
Pomaro, Angela; Cavaleri, Luigi; Lionello, Piero
2017-04-01
Multi-decadal time-series of observational wave data beginning in the late 1970s are relatively rare. The present study refers to the analysis of the 37-year directional wave time-series recorded between 1979 and 2015 at the CNR-ISMAR (Institute of Marine Sciences of the Italian National Research Council) "Acqua Alta" oceanographic research tower, located in the Northern Adriatic Sea, 15 km offshore of the Venice lagoon, in 16 m of water. The extent of the time series allows us to exploit its content not only for modelling purposes or short-term statistical analyses, but also at the climatological scale, thanks to the peculiar meteorological and oceanographic aspects of the coastal area where this infrastructure is installed. We explore the dataset both to characterize the local average climate and its variability, and to detect possible long-term trends that might be suggestive of, or emphasize, large-scale circulation patterns and trends. Measured data are essential for the assessment, and often the calibration, of model data and, if long enough, also serve as the reference for climate studies. By applying this analysis to an area well characterized from the meteorological point of view, we first assess the changes in time based on measured data, and then compare them to those derived from the ERA-Interim regional simulation over the same area, thus showing the strong improvement that is still needed to obtain reliable climate-model projections for coastal areas and the Mediterranean region as a whole. Moreover, long-term hindcasts aimed at climatic studies are well known for (1) underestimating the actual wave heights if their resolution is not high enough, and for (2) being strongly affected by varying conditions over time that are likely to introduce spurious trends of variable magnitude.
In particular, the amount of data assimilated by the hindcast models varies over time and directly and indirectly affects the results, making it difficult, if not impossible, to distinguish the imposed effects from the climate signal itself, as demonstrated by Aarnes et al. (2015). From this point of view the problem is that long-term measured datasets are relatively rare, owing to the cost and technical difficulty of maintaining fixed instrumental equipment over time, as well as of ensuring the homogeneity and availability of the entire dataset. For this reason we are also working on the publication of the quality-controlled dataset to make it widely available for open-access research purposes. The analysis and homogenization of the original dataset required a substantial part of the time spent on the study, because of the strong impact that data quality may have on the final result. We consider this particularly relevant when referring to coastal areas, where the lack of reliable satellite data makes it difficult to improve the models' capability to resolve the local, peculiar oceanographic processes. We describe in detail every step and procedure used in producing the data, including full descriptions of the experimental design, data acquisition assays, and any computational processing needed to support the technical quality of the dataset.
Validity of flowmeter data in heterogeneous alluvial aquifers
NASA Astrophysics Data System (ADS)
Bianchi, Marco
2017-04-01
Numerical simulations are performed to evaluate the impact of medium-scale sedimentary architecture and small-scale heterogeneity on the validity of the borehole flowmeter test, a widely used method for measuring hydraulic conductivity (K) at the scale required for detailed groundwater flow and solute transport simulations. Reference data from synthetic K fields representing the range of structures and small-scale heterogeneity typically observed in alluvial systems are compared with estimated values from numerical simulations of flowmeter tests. Systematic errors inherent in the flowmeter K estimates are significant when the reference K field structure deviates from the hypothetical, perfectly stratified conceptual model underlying the interpretation method of flowmeter tests. Because of these errors, the true variability of the K field is underestimated, and the distributions of the reference K data and log-transformed spatial increments are also misrepresented. The presented numerical analysis shows that the validity of flowmeter-based K data depends on measurable parameters defining the architecture of the hydrofacies, the conductivity contrasts between the hydrofacies, and the sub-facies-scale K variability. A preliminary geological characterization is therefore essential for evaluating the optimal approach for accurate K field characterization.
Use of a Walk Through Time to Facilitate Student Understandings of the Geological Time Scale
NASA Astrophysics Data System (ADS)
Shipman, H. L.
2004-12-01
Students often have difficulties in appreciating just how old the earth and the universe are. While they can simply memorize a number, they really do not understand just how big that number really is, in comparison with other, more familiar student referents like the length of a human lifetime or how long it takes to eat a pizza. (See, e.g., R.D. Trend 2001, J. Research in Science Teaching 38(2): 191-221) Students, and members of the general public, also display such well-known misconceptions as the "Flintstone chronology" of believing that human beings and dinosaurs walked the earth at the same time. (In the classic American cartoon "The Flintstones," human beings used dinosaurs as draft animals. As scientists we know this is fiction, but not all members of the public understand that.) In an interdisciplinary undergraduate college class that dealt with astronomy, cosmology, and biological evolution, I used a familiar activity to try to improve student understanding of the concept of time's vastness. Students walked through a pre-determined 600-step path which provided a spatial analogy to the geological time scale. They stopped at various points and engaged in some pre-determined discussions and debates. This activity is as old as the hills, but reports of its effectiveness or lack thereof are quite scarce. This paper demonstrates that this activity was effective for a general-audience, college student population in the U.S. The growth of student understandings of the geological time scale was significant as a result of this activity. Students did develop an understanding of time's vastness and were able to articulate this understanding in various ways. This growth was monitored through keeping track of several exam questions and through pre- and post- analysis of student writings. In the pre-writings, students often stated that they had "no idea" about how to illustrate the size of the geological time scale to someone else. 
While some post-time walk responses simply restated what was done in the walk through time, some students were able to develop their own ways of conceptualizing the vastness of the geological time scale. A variety of findings from student understandings will be presented. This work has been supported in part by the Distinguished Scholars Program of the National Science Foundation (DUE-0308557).
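The arithmetic behind such a walk is simple; a sketch assuming a 4.6-billion-year Earth history mapped onto the 600 steps (the event ages below are commonly cited approximations, not taken from the activity itself):

```python
EARTH_AGE = 4.6e9          # years, approximate age of the Earth
STEPS = 600                # length of the walk
years_per_step = EARTH_AGE / STEPS   # ~7.7 million years per step

def steps_ago(years_before_present):
    """How many steps before the walk's end an event falls."""
    return years_before_present / years_per_step

dinosaur_extinction = steps_ago(66e6)   # end-Cretaceous extinction
modern_humans = steps_ago(3e5)          # approximate age of Homo sapiens
```

The numbers make the "Flintstone chronology" misconception concrete: the non-avian dinosaurs vanish roughly eight and a half steps before the end of the walk, while all of modern human history fits within a small fraction of the final step.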
NASA Astrophysics Data System (ADS)
Khodayar, S.; Sehlinger, A.; Feldmann, H.; Kottmeier, C.
2015-12-01
The impact of soil initialization is investigated through perturbation simulations with the regional climate model COSMO-CLM. The focus of the investigation is to assess the sensitivity of simulated extreme periods, dry and wet, to soil moisture initialization in different climatic regions over Europe and to establish the necessary spin-up time within the framework of decadal predictions for these regions. Sensitivity experiments consisted of a reference simulation from 1968 to 1999 and 5 simulations from 1972 to 1983. The Effective Drought Index (EDI) is used to select and quantify drought status in the reference run to establish the simulation time period for the sensitivity experiments. Different soil initialization procedures are investigated. The sensitivity of the decadal predictions to soil moisture initial conditions is investigated through the analysis of the variability of the water cycle components (WCC). On an episodic time scale, the local effects of soil moisture on the boundary layer and the propagated effects on the large-scale dynamics are analysed. The results show: (a) COSMO-CLM reproduces the observed features of the drought index. (b) Soil moisture initialization exerts a relevant impact on the WCC, e.g., precipitation distribution and intensity. (c) Regional characteristics strongly affect the response of the WCC. Precipitation and evapotranspiration deviations are larger for humid regions. (d) The initial soil conditions (wet/dry), the regional characteristics (humid/dry) and the annual period (wet/dry) play a key role in the time that the soil needs to restore quasi-equilibrium and in the impact on atmospheric conditions. Humid areas and, for all regions, a humid initialization exhibit shorter spin-up times, and the soil reacts more sensitively when initialized during dry periods. 
(e) The initial soil perturbation may markedly modify atmospheric pressure field, wind circulation systems and atmospheric water vapour distribution affecting atmospheric stability conditions, thus modifying precipitation intensity and distribution even several years after the initialization.
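The Effective Drought Index used in the study above is not defined in the abstract; a minimal sketch of its usual formulation (Byun and Wilhite's effective-precipitation accumulation, an assumption about the exact variant used here) is:

```python
# Effective precipitation (EP) after Byun & Wilhite (1999): recent rain is
# weighted more heavily than older rain via harmonic-series accumulation.
# EDI then standardizes EP against its climatological mean and deviation.

def effective_precipitation(daily_precip, window=365):
    """daily_precip[-1] is today; use up to `window` preceding days."""
    p = daily_precip[-window:][::-1]      # p[0] = today, p[1] = yesterday, ...
    ep = 0.0
    running = 0.0
    for n, rain in enumerate(p, start=1):
        running += rain                   # sum of the n most recent days
        ep += running / n                 # harmonic down-weighting with age
    return ep

def edi(ep_today, ep_mean, ep_std):
    """Standardized anomaly; EDI <= -1 is commonly read as drought onset."""
    return (ep_today - ep_mean) / ep_std
```

As a sanity check, a constant record of 1 mm/day over a 365-day window gives EP = 365, since every accumulation term contributes exactly 1.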
The WCA reference system for four- and five-dimensional Lennard-Jones fluids
NASA Astrophysics Data System (ADS)
Bishop, Marvin
1988-05-01
The WCA reference system is investigated for four- and five-dimensional Lennard-Jones fluids by molecular dynamics simulations. It is found that the WCA prescription for the scaling of the reference system to a hard hypersphere one is a very good approximation in the fluid region.
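The WCA construction referred to above splits the Lennard-Jones potential at its minimum and keeps only the purely repulsive branch; a minimal sketch (the functional form is independent of spatial dimension, which is why it carries over unchanged to four- and five-dimensional fluids):

```python
# WCA reference potential: the Lennard-Jones potential truncated at its
# minimum r_min = 2^(1/6) * sigma and shifted up by epsilon, so the
# repulsive branch goes smoothly to zero at the cutoff.

def lj(r, epsilon=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def wca(r, epsilon=1.0, sigma=1.0):
    r_min = 2.0 ** (1.0 / 6.0) * sigma    # location of the LJ minimum
    if r >= r_min:
        return 0.0                        # attractive branch discarded
    return lj(r, epsilon, sigma) + epsilon  # shift so wca(r_min) == 0
```

The shift by epsilon makes the potential continuous at the cutoff, which is what allows the reference fluid to be mapped onto hard hyperspheres.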
Accuracy assessment of NOAA's daily reference evapotranspiration maps for the Texas High Plains
USDA-ARS?s Scientific Manuscript database
The National Oceanic and Atmospheric Administration (NOAA) provides daily reference ET for the continental U.S. using climatic data from the North American Land Data Assimilation System (NLDAS). These data provide a large-scale spatial representation of reference ET, which is essential for regional scal...
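Daily reference ET of the kind mapped above is conventionally computed with a standardized Penman-Monteith formulation; the sketch below uses the FAO-56 daily form for a grass reference, which is an assumption about NOAA's exact implementation (the ASCE standardized form differs slightly in its coefficients):

```python
import math

def fao56_reference_et(t_mean_c, rn_mj, g_mj, u2_ms, es_kpa, ea_kpa, p_kpa=101.3):
    """Daily grass-reference ET (mm/day), FAO-56 Penman-Monteith form.

    t_mean_c: mean air temperature (deg C); rn_mj, g_mj: net radiation and
    soil heat flux (MJ m-2 day-1); u2_ms: wind speed at 2 m (m/s); es/ea:
    saturation and actual vapour pressure (kPa); p_kpa: pressure (kPa).
    """
    gamma = 0.000665 * p_kpa              # psychrometric constant, kPa/degC
    # slope of the saturation vapour pressure curve, kPa/degC
    delta = (4098.0 * 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))
             / (t_mean_c + 237.3) ** 2)
    num = (0.408 * delta * (rn_mj - g_mj)
           + gamma * (900.0 / (t_mean_c + 273.0)) * u2_ms * (es_kpa - ea_kpa))
    return num / (delta + gamma * (1.0 + 0.34 * u2_ms))
```

For a warm, dry, windy day of the kind common on the Texas High Plains (25 degC, strong radiation, 4 m/s wind, large vapour pressure deficit) this yields several mm/day, and ET rises further with wind speed when the air is dry.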
The gallium melting-point standard: its role in our temperature measurement system.
Mangum, B W
1977-01-01
The latest internationally-adopted temperature scale, the International Practical Temperature Scale of 1968 (amended edition of 1975), is discussed in some detail and a brief description is given of its evolution. The melting point of high-purity gallium (stated to be at least 99.99999% pure) as a secondary temperature reference point is evaluated. I believe that this melting-point temperature of gallium should be adopted by the various medical professional societies and voluntary standards groups as the reaction temperature for enzyme reference methods in clinical enzymology. Gallium melting-point cells are available at the National Bureau of Standards as Standard Reference Material No. 1968.
Large fluctuations in anti-coordination games on scale-free graphs
NASA Astrophysics Data System (ADS)
Sabsovich, Daniel; Mobilia, Mauro; Assaf, Michael
2017-05-01
We study the influence of the complex topology of scale-free graphs on the dynamics of anti-coordination games (e.g. snowdrift games). These reference models are characterized by the coexistence (evolutionary stable mixed strategy) of two competing species, say ‘cooperators’ and ‘defectors’, and, in finite systems, by metastability and large-fluctuation-driven fixation. In this work, we use extensive computer simulations and an effective diffusion approximation (in the weak selection limit) to determine under which circumstances, depending on the individual-based update rules, the topology drastically affects the long-time behavior of anti-coordination games. In particular, we compute the variance of the number of cooperators in the metastable state and the mean fixation time when the dynamics is implemented according to the voter model (death-first/birth-second process) and the link dynamics (birth/death or death/birth at random). For the voter update rule, we show that the scale-free topology effectively renormalizes the population size and as a result the statistics of observables depend on the network’s degree distribution. In contrast, such a renormalization does not occur with the link dynamics update rule and we recover the same behavior as on complete graphs.
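The metastable coexistence and fluctuation-driven fixation described above can be illustrated with a mean-field stand-in: a frequency-dependent Moran process for a snowdrift game on a complete graph. This is a sketch only; the payoff entries, selection strength, and update rule are generic textbook choices, not the paper's scale-free-network dynamics.

```python
import random

# Snowdrift payoffs with benefit b > cost c > 0: C vs C gets b - c/2,
# C vs D gets b - c, D vs C gets b, D vs D gets 0. The mixed equilibrium
# makes coexistence metastable; finite-size noise eventually fixes one type.

def payoffs(n_coop, n_total, b=1.0, c=0.6):
    """Average payoff of a cooperator and a defector (self-play excluded)."""
    nc, nd = n_coop, n_total - n_coop
    pi_c = ((nc - 1) * (b - c / 2.0) + nd * (b - c)) / (n_total - 1)
    pi_d = (nc * b + (nd - 1) * 0.0) / (n_total - 1)
    return pi_c, pi_d

def moran_step(n_coop, n_total, w=0.1):
    """Birth proportional to fitness f = 1 + w*payoff; death uniform."""
    pi_c, pi_d = payoffs(n_coop, n_total)
    f_c = n_coop * (1.0 + w * pi_c)
    f_d = (n_total - n_coop) * (1.0 + w * pi_d)
    born_c = random.random() < f_c / (f_c + f_d)   # who reproduces
    dies_c = random.random() < n_coop / n_total    # who is replaced
    return n_coop + (born_c - dies_c)

def run_until_fixation(n_total=50, n_coop=25, max_steps=2_000_000):
    steps = 0
    while 0 < n_coop < n_total and steps < max_steps:
        n_coop = moran_step(n_coop, n_total)
        steps += 1
    return n_coop, steps
```

Below the mixed equilibrium cooperators out-earn defectors and vice versa above it, which is the restoring force that makes fixation slow.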
From monoscale to multiscale modeling of fatigue crack growth: Stress and energy density factor
NASA Astrophysics Data System (ADS)
Sih, G. C.
2014-01-01
The formalism of the earlier fatigue crack growth models is retained to account for multiscaling of the fatigue process, in which macrocracks are created by the accumulation of micro damage. The effects of at least two scales, say micro to macro, must be accounted for. The same data can thus be reinterpreted by the invariance of the transitional stress intensity factors such that the microcracking and macrocracking data lie on a straight line. The threshold associated with the sigmoid curve disappears. Scale segmentation is shown to be a necessity for addressing multiscale energy dissipative processes such as fatigue and creep. Path independency and energy release rate are monoscale criteria that can lead to unphysical results, violating the first principles. Applying monoscale failure or fracture criteria to nanomaterials takes a toll on the manufacture of super-strength, lightweight materials and structural components. This brief view is offered in the spirit of much-needed additional research on the reinforcement of materials by creating nanoscale interfaces with sustainable time in service. The step-by-step consideration of the different scales may offer a better understanding of the test data and their limitations with reference to space and time.
Time Effects, Displacement, and Leadership Roles on a Lunar Space Station Analogue.
Wang, Ya; Wu, Ruilin
2015-09-01
A space mission's crewmembers are the most important group of people involved and, thus, their emotions and interpersonal interactions have gained significant attention. Because crewmembers are confined in an isolated environment, the aim of this study was to identify possible changes in the emotional states, group dynamics, displacement, and leadership of crewmembers during an 80-d isolation period. The experiment was conducted in an analogue space station referred to as Lunar Palace 1 at Beihang University. In our experiment, all of the crewmembers completed a Profile of Mood States (POMS) questionnaire every week and two group climate scales questionnaires every 2 wk; specifically, a group environment scale and a work environment scale. There was no third-quarter phenomenon observed in Lunar Palace 1. However, fluctuations in the fatigue and autonomy subscales were observed. Significant displacement effects were observed when Group 3 was in the analogue. Leader support was positively correlated with the cohesion, expressiveness, and involvement of Group 3. However, leader control was not. The results suggest that time effects, displacement, and leadership roles can influence mood states and cohesion in an isolated crew. These findings from Lunar Palace 1 are in agreement with those obtained from Mir and the International Space Station (ISS).
NASA Astrophysics Data System (ADS)
Cao, Bochao; Xu, Hongyi
2018-05-01
Based on direct numerical simulation (DNS) data for straight ducts, namely square and rectangular annular ducts, detailed analyses were conducted for the mean streamwise velocity, relevant velocity scales, and turbulence statistics. It is concluded that turbulent boundary layers (TBL) should be broadly classified into three types (Type-A, -B, and -C) in terms of their distribution patterns of the time-averaged local wall-shear stress (τ_w) or the mean local frictional velocity (u_τ). With reference to the Type-A TBL analysis by von Kármán in developing the law-of-the-wall using the time-averaged local frictional velocity (u_τ) as scale, the current study extended the approach to the Type-B TBL and obtained the analytical expressions for streamwise velocity in the inner layer using the ensemble-averaged frictional velocity (ū_τ) as scale. These analytical formulae were formed by introducing general damping and enhancing functions. Further, the research applied a near-wall DNS-guided integration to the governing equations of the Type-B TBL and quantitatively proved the correctness and accuracy of the inner-layer analytical expressions for this type.
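The classical law-of-the-wall referenced above can be sketched in inner units; κ ≈ 0.41 and B ≈ 5.0 are the conventional smooth-wall values, not the paper's fitted constants, and the simple buffer-layer blend below is a crude stand-in for the damping and enhancing functions the paper introduces.

```python
import math

# Two-layer law of the wall in inner (viscous) units: u+ = u/u_tau and
# y+ = y*u_tau/nu. The viscous sublayer is linear in y+; the log layer
# follows von Karman's logarithmic profile.

KAPPA = 0.41   # von Karman constant (conventional value)
B = 5.0        # log-law intercept for smooth walls (conventional value)

def u_plus(y_plus):
    if y_plus < 5.0:                      # viscous sublayer: u+ = y+
        return y_plus
    if y_plus > 30.0:                     # fully turbulent log layer
        return math.log(y_plus) / KAPPA + B
    # buffer layer: linear blend between the two limits (illustrative only)
    w = (y_plus - 5.0) / 25.0
    return (1.0 - w) * y_plus + w * (math.log(y_plus) / KAPPA + B)
```

At y+ = 100 this gives u+ ≈ 16.2, the familiar log-layer magnitude.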
Global Solar Magnetology and Reference Points of the Solar Cycle
NASA Astrophysics Data System (ADS)
Obridko, V. N.; Shelting, B. D.
2003-11-01
The solar cycle can be described as a complex interaction of large-scale/global and local magnetic fields. In general, this approach agrees with the traditional dynamo scheme, although there are numerous discrepancies in the details. Integrated magnetic indices introduced earlier are studied over long time intervals, and the epochs of the main reference points of the solar cycles are refined. A hypothesis proposed earlier concerning global magnetometry and the natural scale of the cycles is verified. Variations of the heliospheric magnetic field are determined by both the integrated photospheric i(B_r)_ph and source-surface i(B_r)_ss indices; however, their roles are different. Local fields contribute significantly to the photospheric index determining the total increase in the heliospheric magnetic field. The i(B_r)_ss index (especially the partial index ZO, which is related to the quasi-dipolar field) determines narrow extrema. These integrated indices supply us with a “passport” for reference points, making it possible to identify them precisely. A prominent dip in the integrated indices is clearly visible at the cycle maximum, resulting in the typical double-peak form (the Gnevyshev dip), with the succeeding maximum always being higher than the preceding maximum. At the source surface, this secondary maximum significantly exceeds the primary maximum. Using these index data, we can estimate the progression expected for the 23rd cycle and predict the dates of the ends of the 23rd and 24th cycles (the middle of 2007 and December 2018, respectively).
QuickMap: a public tool for large-scale gene therapy vector insertion site mapping and analysis.
Appelt, J-U; Giordano, F A; Ecker, M; Roeder, I; Grund, N; Hotz-Wagenblatt, A; Opelz, G; Zeller, W J; Allgayer, H; Fruehauf, S; Laufs, S
2009-07-01
Several events of insertional mutagenesis in pre-clinical and clinical gene therapy studies have created intense interest in assessing the genomic insertion profiles of gene therapy vectors. For the construction of such profiles, vector-flanking sequences detected by inverse PCR, linear amplification-mediated-PCR or ligation-mediated-PCR need to be mapped to the host cell's genome and compared to a reference set. Although remarkable progress has been achieved in mapping gene therapy vector insertion sites, public reference sets are lacking, as are the possibilities to quickly detect non-random patterns in experimental data. We developed a tool termed QuickMap, which uniformly maps and analyzes human and murine vector-flanking sequences within seconds (available at www.gtsg.org). Besides information about hits in chromosomes and fragile sites, QuickMap automatically determines insertion frequencies in +/- 250 kb adjacency to genes, cancer genes, pseudogenes, transcription factor and (post-transcriptional) miRNA binding sites, CpG islands and repetitive elements (short interspersed nuclear elements (SINE), long interspersed nuclear elements (LINE), Type II elements and LTR elements). Additionally, all experimental frequencies are compared with the data obtained from a reference set, containing 1 000 000 random integrations ('random set'). Thus, for the first time a tool allowing high-throughput profiling of gene therapy vector insertion sites is available. It provides a basis for large-scale insertion site analyses, which is now urgently needed to discover novel gene therapy vectors with 'safe' insertion profiles.
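The ±250 kb adjacency frequencies described above amount to an interval-proximity count over annotated genomic features; a generic sketch of that computation (not QuickMap's actual implementation) using binary search over sorted feature positions:

```python
import bisect

# Count how many insertion sites fall within +/- `window` bp of at least
# one annotated feature position (e.g. gene TSSs) on the same chromosome.
# A generic stand-in for the adjacency statistics a tool like QuickMap
# reports, to be compared against the same count on a random set.

def near_feature_count(insertions, features_by_chrom, window=250_000):
    """insertions: list of (chrom, pos); features_by_chrom: chrom -> sorted positions."""
    hits = 0
    for chrom, pos in insertions:
        feats = features_by_chrom.get(chrom, [])
        i = bisect.bisect_left(feats, pos - window)
        # is there any feature position in [pos - window, pos + window]?
        if i < len(feats) and feats[i] <= pos + window:
            hits += 1
    return hits
```

Dividing the count by the number of insertions gives the experimental frequency, which is then contrasted with the frequency observed in the random reference set.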
Arnetz, J E; Hasson, H
2007-07-01
Lack of professional development opportunities among nursing staff is a major concern in elderly care and has been associated with work dissatisfaction and staff turnover. There is a lack of prospective, controlled studies evaluating the effects of educational interventions on nursing competence and work satisfaction. The aim of this study was to evaluate the possible effects of an educational "toolbox" intervention on nursing staff ratings of their competence, psychosocial work environment and overall work satisfaction. The study was a prospective, non-randomized, controlled intervention. Participants were nursing staff in two municipal elderly care organizations in western Sweden. In an initial questionnaire survey, nursing staff in the intervention municipality described several areas in which they felt a need for competence development. Measurement instruments and educational materials for improving staff knowledge and work practices were then collated by researchers and managers in a "toolbox." Nursing staff ratings of their competence and work were measured pre- and post-intervention by questionnaire. Staff ratings in the intervention municipality were compared to staff ratings in the reference municipality, where no toolbox was introduced. Nursing staff ratings of their competence and psychosocial work environment, including overall work satisfaction, improved significantly over time in the intervention municipality, compared to the reference group. Both competence and work environment ratings were largely unchanged among reference municipality staff. Multivariate analysis revealed a significant interaction effect between municipalities over time for nursing staff ratings of participation, leadership, performance feedback and skills development. Staff ratings for these four scales improved significantly in the intervention municipality as compared to the reference municipality. 
Compared to a reference municipality, nursing staff ratings of their competence and the psychosocial work environment improved in the municipality where the toolbox was introduced.
Janssen, Elisabeth M.-L.; Thompson, Janet K.; Luoma, Samuel N.; Luthy, Richard G.
2011-01-01
The benthic community was analyzed to evaluate pollution-induced changes for the polychlorinated biphenyl (PCB)-contaminated site at Hunters Point (HP) relative to 30 reference sites in San Francisco Bay, California, USA. An analysis based on functional traits of feeding, reproduction, and position in the sediment shows that HP is depauperate in deposit feeders, subsurface carnivores, and species with no protective barrier. Sediment chemistry analysis shows that PCBs are the major risk drivers at HP (1,570 ppb) and that the reference sites contain very low levels of PCB contamination (9 ppb). Different feeding traits support the existence of direct pathways of exposure, which can be mechanistically linked to PCB bioaccumulation by biodynamic modeling. The model shows that the deposit feeder Neanthes arenaceodentata accumulates approximately 20 times more PCBs in its lipids than the facultative deposit feeder Macoma balthica and up to 130 times more than the filter feeder Mytilus edulis. The comparison of different exposure scenarios suggests that PCB tissue concentrations at HP are two orders of magnitude higher than at the reference sites. At full scale, in situ sorbent amendment with activated carbon may reduce PCB bioaccumulation at HP by up to 85 to 90% under favorable field and treatment conditions. The modeling framework further demonstrates that such expected remedial success corresponds to exposure conditions suggested as the cleanup goal for HP. However, concentrations remain slightly higher than at the reference sites. The present study demonstrates how the remedial success of a sorbent amendment, which lowers the PCB availability, can be compared to reference conditions and traditional cleanup goals, which are commonly based on bulk sediment concentrations.
Evaluation of constant-Weber-number scaling for icing tests
NASA Technical Reports Server (NTRS)
Anderson, David N.
1996-01-01
Previous studies showed that for conditions simulating an aircraft encountering super-cooled water droplets the droplets may splash before freezing. Other surface effects dependent on the water surface tension may also influence the ice accretion process. Consequently, the Weber number appears to be important in accurately scaling ice accretion. A scaling method which uses a constant-Weber-number approach has been described previously; this study provides an evaluation of this scaling method. Tests are reported on cylinders of 2.5 to 15-cm diameter and NACA 0012 airfoils with chords of 18 to 53 cm in the NASA Lewis Icing Research Tunnel (IRT). The larger models were used to establish reference ice shapes, the scaling method was applied to determine appropriate scaled test conditions using the smaller models, and the ice shapes were compared. Icing conditions included warm glaze, horn glaze and mixed. The smallest size scaling attempted was 1/3, and scale and reference ice shapes for both cylinders and airfoils indicated that the constant-Weber-number scaling method was effective for the conditions tested.
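Constant-Weber-number scaling of the kind evaluated above fixes We = ρV²L/σ between the reference and scale tests; for the same fluid this pins the scaled test speed to the square root of the size ratio. A sketch (the choice of model chord or cylinder diameter as the length scale is an assumption; conventions vary):

```python
import math

# Weber number We = rho * V^2 * L / sigma. Holding We constant between a
# reference model (size l_ref, speed v_ref) and a scale model of size
# l_scale, with the same fluid (rho, sigma), requires
#   v_scale = v_ref * sqrt(l_ref / l_scale).

def weber(rho, v, length, sigma):
    return rho * v * v * length / sigma

def scaled_speed(v_ref, l_ref, l_scale):
    return v_ref * math.sqrt(l_ref / l_scale)
```

For a 1/3-scale model, the matched test speed is sqrt(3) ≈ 1.73 times the reference speed, and the Weber numbers of the two tests coincide by construction.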
Relativistic theory for syntonization of clocks in the vicinity of the Earth
NASA Technical Reports Server (NTRS)
Wolf, Peter; Petit, G.
1995-01-01
A well known prediction of Einstein's general theory of relativity states that two ideal clocks that move with a relative velocity, and are submitted to different gravitational fields will, in general, be observed to run at different rates. Similarly the rate of a clock with respect to the coordinate time of some spacetime reference system is dependent on the velocity of the clock in that reference system and on the gravitational fields it is submitted to. For the syntonization of clocks and the realization of coordinate times (like TAI) this rate shift has to be taken into account at an accuracy level which should be below the frequency stability of the clocks in question, i.e. all terms that are larger than the instability of the clocks should be corrected for. We present a theory for the calculation of the relativistic rate shift for clocks in the vicinity of the Earth, including all terms larger than one part in 10(exp 18). This, together with previous work on clock synchronization (Petit & Wolf 1993, 1994), amounts to a complete relativistic theory for the realization of coordinate time scales at picosecond synchronization and 10(exp -18) syntonization accuracy, which should be sufficient to accommodate future developments in time transfer and clock technology.
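To lowest order, the rate shift discussed above combines a gravitational and a velocity term, dτ/dt ≈ 1 − U/c² − v²/(2c²). The sketch below applies this to the familiar GPS case as an illustration; the point-mass potential, neglect of Earth's oblateness and of the ground clock's rotation, and the rounded orbital radius are all simplifying assumptions far cruder than the paper's 10^-18 treatment.

```python
# Lowest-order fractional rate of a proper clock relative to coordinate
# time: d(tau)/dt ~= 1 - U/c^2 - v^2/(2 c^2), with U = GM/r. Difference
# between a GPS satellite clock and a clock on the ground.

GM = 3.986004e14        # m^3/s^2, Earth's gravitational parameter
C = 2.99792458e8        # m/s
R_EARTH = 6.371e6       # m, mean Earth radius
R_GPS = 2.656e7         # m, approximate GPS orbital radius

def fractional_rate_diff():
    v_gps2 = GM / R_GPS                                  # circular-orbit speed squared
    grav = (GM / C**2) * (1.0 / R_EARTH - 1.0 / R_GPS)   # higher potential -> faster
    kinetic = v_gps2 / (2.0 * C**2)                      # orbital motion -> slower
    return grav - kinetic

shift = fractional_rate_diff()
microseconds_per_day = shift * 86400 * 1e6
```

This reproduces the well-known result: about 4.5 × 10^-10 in fractional rate, i.e. the roughly 38 microseconds per day by which GPS clocks run fast relative to the ground.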
A causal contiguity effect that persists across time scales.
Kiliç, Asli; Criss, Amy H; Howard, Marc W
2013-01-01
The contiguity effect refers to the tendency to recall an item from nearby study positions of the just recalled item. Causal models of contiguity suggest that recalled items are used as probes, causing a change in the memory state for subsequent recall attempts. Noncausal models of the contiguity effect assume the memory state is unaffected by recall per se, relying instead on the correlation between the memory states at study and at test to drive contiguity. We examined the contiguity effect in a probed recall task in which the correlation between the study context and the test context was disrupted. After study of several lists of words, participants were given probe words in a random order and were instructed to recall a word from the same list as the probe. The results showed both short-term and long-term contiguity effects. Because study order and test order are uncorrelated, these contiguity effects require a causal contiguity mechanism that operates across time scales.
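The contiguity effect described above is conventionally quantified by tallying the study-position lag between successively recalled items; a minimal sketch of that tally (a simplification of the authors' full conditional-response-probability analysis, which also normalizes by available lags):

```python
from collections import Counter

# Tally study-position lags between successive recalls. A peak at small
# |lag| is the contiguity effect; asymmetry toward +1 reflects the usual
# forward bias. `recalls` holds the study positions in output order.

def lag_counts(recalls):
    counts = Counter()
    for prev, curr in zip(recalls, recalls[1:]):
        counts[curr - prev] += 1
    return counts
```

For example, the recall sequence of study positions [3, 4, 5, 2, 3] produces three +1 transitions and one -3 transition.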
Mcdonald, P. Sean; Galloway, Aaron W.E.; McPeek, Kathleen C.; VanBlaricom, Glenn R.
2015-01-01
In Washington state, commercial culture of geoducks (Panopea generosa) involves large-scale out-planting of juveniles to intertidal habitats, and installation of PVC tubes and netting to exclude predators and increase early survival. Structures associated with this nascent aquaculture method are examined to determine whether they affect patterns of use by resident and transient macrofauna. Results are summarized from regular surveys of aquaculture operations and reference beaches from 2009 to 2011 at three sites during three phases of culture: (1) pregear (-geoducks, -structure), (2) gear present (+geoducks, +structures), and (3) postgear (+geoducks, -structures). Resident macroinvertebrates (infauna and epifauna) were sampled monthly (in most cases) using coring methods at low tide during all three phases. Differences in community composition between culture plots and reference areas were examined with permutational analysis of variance and homogeneity of multivariate dispersion tests. Scuba and shoreline transect surveys were used to examine habitat use by transient fish and macroinvertebrates. Analysis of similarity and complementary nonmetric multidimensional scaling were used to compare differences between species functional groups and habitat type during different aquaculture phases. Results suggest that resident and transient macrofauna respond differently to structures associated with geoduck aquaculture. No consistent differences in the community of resident macrofauna were observed at culture plots or reference areas at the three sites during any year. Conversely, total abundance of transient fish and macroinvertebrates was more than two times greater at culture plots than reference areas when aquaculture structures were in place. Community composition differed (analysis of similarity) between culture and reference plots during the gear-present phase, but did not persist to the next farming stage (postgear). 
Habitat complexity associated with shellfish aquaculture may attract some structure-associated transient species observed infrequently on reference beaches, and may displace other species that typically occur in areas lacking epibenthic structure. This study provides a first look at the effects of multiple phases of geoduck farming on macrofauna, and has important implications for the management of a rapidly expanding sector of the aquaculture industry.
NASA Astrophysics Data System (ADS)
Mortensen, Henrik Lund; Sørensen, Jens Jakob W. H.; Mølmer, Klaus; Sherson, Jacob Friis
2018-02-01
We propose an efficient strategy to find optimal control functions for state-to-state quantum control problems. Our procedure first chooses an input state trajectory that can realize the desired transformation by adiabatic variation of the system Hamiltonian. The shortcut-to-adiabaticity formalism then provides a control Hamiltonian that realizes the reference trajectory exactly but on a finite time scale. As the final state is achieved with certainty, we define a cost functional that incorporates the resource requirements and a perturbative expression for robustness. We optimize this functional by systematically varying the reference trajectory. We demonstrate the method by application to population transfer in a laser driven three-level Λ-system, where we find solutions that are fast and robust against perturbations while maintaining a low peak laser power.
NASA Astrophysics Data System (ADS)
Noyes, H. P.; Gefwert, C.; Manthey, M. J.
1983-06-01
The discretization of physics brought about by the advent of quantum mechanics has replaced, by counting, the continuum standards of time, length and mass which brought physics to maturity. The (arbitrary in the sense of conventional dimensional analysis) standards were replaced by three dimensional constants: the limiting velocity c, the unit of action h, and either a reference mass (e.g., m_p) or a coupling constant (e.g., G, related to the mass scale by hc/(2*pi*G*m_p^2) approx. 1.7 x 10^38). Once these physical and experimental reference standards are accepted, the conventional approach is to connect physics to mathematics by means of dimensionless ratios. A program for physics which will meet these rigid criteria while preserving, in so far as possible, the successes that conventional physics has already achieved is outlined.
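The dimensionless gravitational coupling quoted above can be checked directly from standard CODATA-style constant values:

```python
# hbar*c / (G * m_p^2) = h*c / (2*pi*G*m_p^2): the dimensionless inverse
# gravitational coupling of the proton, ~1.7e38 as quoted in the abstract.

HBAR = 1.054572e-34   # J s, reduced Planck constant
C = 2.99792458e8      # m/s, speed of light
G = 6.67430e-11       # m^3 kg^-1 s^-2, gravitational constant
M_P = 1.672622e-27    # kg, proton mass

ratio = HBAR * C / (G * M_P**2)
```

The result is about 1.69 x 10^38, consistent with the value in the text.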
Monitoring Active Atmospheres on Uranus and Neptune
NASA Astrophysics Data System (ADS)
Rages, Kathy
2009-07-01
We propose Snapshot observations of Uranus and Neptune to monitor changes in their atmospheres on time scales of weeks and months, as we have been doing for the past seven years. Previous Hubble Space Telescope observations {including previous Snapshot programs 8634, 10170, 10534, and 11156}, together with near-IR images obtained using adaptive optics on the Keck Telescope, reveal both planets to be dynamic worlds which change on time scales ranging from hours to {terrestrial} years. Uranus equinox occurred in December 2007, and the northern hemisphere is becoming fully visible for the first time since the early 1960s. HST observations during the past several years {Hammel et al. 2005, Icarus 175, 284 and references therein} have revealed strongly wavelength-dependent latitudinal structure, the presence of numerous visible-wavelength cloud features in the northern hemisphere, at least one very long-lived discrete cloud in the southern hemisphere, and in 2006 the first clearly defined dark spot seen on Uranus. Long-term ground-based observations {Lockwood and Jerzekiewicz, 2006, Icarus 180, 442; Hammel and Lockwood 2007, Icarus 186, 291} reveal seasonal brightness changes that seem to demand the appearance of a bright northern polar cap within the next few years. Recent HST and Keck observations of Neptune {Sromovsky et al. 2003, Icarus 163, 256 and references therein} show a general increase in activity at south temperate latitudes until 2004, when Neptune returned to a rather Voyager-like appearance with discrete bright spots rather than active latitude bands. Further Snapshot observations of these two dynamic planets will elucidate the nature of long-term changes in their zonal atmospheric bands and clarify the processes of formation, evolution, and dissipation of discrete albedo features.
Dushaw, Brian D; Sagen, Hanne
2017-12-01
Ocean acoustic tomography depends on a suitable reference ocean environment with which to set the basic parameters of the inverse problem. Some inverse problems may require a reference ocean that includes the small-scale variations from internal waves, small mesoscale, or spice. Tomographic inversions that employ data of stable shadow zone arrivals, such as those that have been observed in the North Pacific and Canary Basin, are an example. Estimating temperature from the unique acoustic data that have been obtained in Fram Strait is another example. The addition of small-scale variability to augment a smooth reference ocean is essential to understanding the acoustic forward problem in these cases. Rather than a hindrance, the stochastic influences of the small scale can be exploited to obtain accurate inverse estimates. Inverse solutions are readily obtained, and they give computed arrival patterns that match the observations. The approach is not ad hoc, but universal, and it has allowed inverse estimates for ocean temperature variations in Fram Strait to be readily computed on several acoustic paths for which tomographic data were obtained.
Comprehensive geo-spatial data creation for Najran region in the KSA
NASA Astrophysics Data System (ADS)
Alrajhi, M.; Hawarey, M.
2009-04-01
The General Directorate for Surveying and Mapping (GDSM) of the Deputy Ministry for Land and Surveying (DMLS) of the Ministry of Municipal and Rural Affairs (MOMRA) in the Kingdom of Saudi Arabia (KSA) has the exclusive mandate to carry out aerial photography and produce large-scale detailed maps for about 220 cities and villages in the KSA. This presentation is about the comprehensive geo-spatial data creation for the Najran region, South KSA, that was founded on country-wide horizontal geodetic ground control using Global Navigation Satellite Systems (GNSS) within the MOMRA's Terrestrial Reference Frame 2000 (MTRF2000) that is tied to International Terrestrial Reference Frame 2000 (ITRF2000) Epoch 2004.0, and vertical geodetic ground control using precise digital leveling in reference to Jeddah 1969 mean sea level, and included aerial photography of area 917 km2 at 1:5,500 scale and 14,304 km2 at 1:45,000 scale, full aerial triangulation, and production of orthophoto maps at scale of 1:10,000 (298 sheets) for 14,304 km2, with aerial photography lasting from May 2006 until July 2006.
Comprehensive geo-spatial data creation for Ar-Riyadh region in the KSA
NASA Astrophysics Data System (ADS)
Alrajhi, M.; Hawarey, M.
2009-04-01
The General Directorate for Surveying and Mapping (GDSM) of the Deputy Ministry for Land and Surveying (DMLS) of the Ministry of Municipal and Rural Affairs (MOMRA) in the Kingdom of Saudi Arabia (KSA) has the exclusive mandate to carry out aerial photography and produce large-scale detailed maps for about 220 cities and villages in the KSA. This presentation is about the comprehensive geo-spatial data creation for the Ar-Riyadh region, Central KSA, that was founded on country-wide horizontal geodetic ground control using Global Navigation Satellite Systems (GNSS) within the MOMRA's Terrestrial Reference Frame 2000 (MTRF2000) that is tied to International Terrestrial Reference Frame 2000 (ITRF2000) Epoch 2004.0, and vertical geodetic ground control using precise digital leveling in reference to Jeddah 1969 mean sea level, and included aerial photography of area 3,000 km2 at 1:5,500 scale and 10,000 km2 at 1:45,000 scale, full aerial triangulation, and production of orthophoto maps at scale of 1:10,000 (480 sheets) for 10,000 km2, with aerial photography lasting from July 2007 through August 2007.
Comprehensive geo-spatial data creation for Asir region in the KSA
NASA Astrophysics Data System (ADS)
Alrajhi, M.; Hawarey, M.
2009-04-01
The General Directorate for Surveying and Mapping (GDSM) of the Deputy Ministry for Land and Surveying (DMLS) of the Ministry of Municipal and Rural Affairs (MOMRA) in the Kingdom of Saudi Arabia (KSA) has the exclusive mandate to carry out aerial photography and produce large-scale detailed maps for about 220 cities and villages in the KSA. This presentation is about the comprehensive geo-spatial data creation for the Asir region, South West KSA, that was founded on country-wide horizontal geodetic ground control using Global Navigation Satellite Systems (GNSS) within the MOMRA's Terrestrial Reference Frame 2000 (MTRF2000) that is tied to International Terrestrial Reference Frame 2000 (ITRF2000) Epoch 2004.0, and vertical geodetic ground control using precise digital leveling in reference to Jeddah 1969 mean sea level, and included aerial photography of area 2,188 km2 at 1:5,500 scale and 32,640 km2 at 1:45,000 scale, full aerial triangulation, and production of orthophoto maps at scale of 1:10,000 (680 sheets) for 32,640 km2, with aerial photography lasting from July 2007 through October 2007.
Aqueous solvation from the water perspective.
Ahmed, Saima; Pasti, Andrea; Fernández-Terán, Ricardo J; Ciardi, Gustavo; Shalit, Andrey; Hamm, Peter
2018-06-21
The response of water re-solvating a charge-transfer dye (deprotonated Coumarin 343) after photoexcitation has been measured by means of transient THz spectroscopy. Two steps of increasing THz absorption are observed, a first ∼10 ps step on the time scale of Debye relaxation of bulk water and a much slower step on a 3.9 ns time scale, the latter reflecting heating of the bulk solution upon electronic relaxation of the dye molecules from the S1 back into the S0 state. As an additional reference experiment, the hydroxyl vibration of water has been excited directly by a short IR pulse, establishing that the THz signal measures an elevated temperature within ∼1 ps. This result shows that the first step upon dye excitation (10 ps) is not limited by the response time of the THz signal; it rather reflects the reorientation of water molecules in the solvation layer. The apparent discrepancy between the relatively slow reorientation time and the general notion that water is among the fastest solvents with a solvation time in the sub-picosecond regime is discussed. Furthermore, non-equilibrium molecular dynamics simulations have been performed, revealing a close-to-quantitative agreement with experiment, which allows one to disentangle the contribution of heating to the overall THz response from that of water orientation.
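The two observed steps can be modeled as a sum of two exponential rises with the reported time constants; a sketch in which the amplitudes are illustrative placeholders, not the measured signal sizes:

```python
import math

# Two-step THz absorption rise: a fast ~10 ps solvation-layer component
# and a slow 3.9 ns heating component upon S1 -> S0 relaxation.

TAU1_PS = 10.0          # ps, Debye-like solvation-shell reorientation
TAU2_PS = 3900.0        # ps, bulk heating (3.9 ns)

def delta_abs(t_ps, a1=1.0, a2=1.0):
    """Transient absorption change at time t_ps (picoseconds)."""
    return (a1 * (1.0 - math.exp(-t_ps / TAU1_PS))
            + a2 * (1.0 - math.exp(-t_ps / TAU2_PS)))
```

By ~50 ps the fast component is essentially complete while the heating term has barely started, which is what visually separates the two steps in the data.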
Capacitor-Chain Successive-Approximation ADC
NASA Technical Reports Server (NTRS)
Cunningham, Thomas
2003-01-01
A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2(exp n-1) times as much capacitance, and hence, approximately 2(exp n-1) times as much area as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be 2(exp n) times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
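The area argument above can be made concrete, counting area in units of the smallest (LSB) capacitor: the conventional binary-scaled bank totals 2^n - 1 (approximately 2^n) units, while the capacitor chain needs 3 identical capacitors per cell times n cells.

```python
# Relative capacitor area, in units of the smallest (LSB) capacitor.
# Conventional binary-scaled bank: 1 + 2 + ... + 2^(n-1) = 2^n - 1 units.
# Proposed capacitor-chain ADC: 3 identical capacitors per cell, one cell
# per bit -> 3n units.

def conventional_area(n_bits):
    return 2 ** n_bits - 1

def chain_area(n_bits):
    return 3 * n_bits
```

For a 12-bit converter this is 4095 versus 36 units, a saving of more than two orders of magnitude, and the gap widens exponentially with resolution.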
Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide
2014-06-01
The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result provides confirmation that the constructed mathematical model satisfactorily agrees with the time-series datasets of seven metabolite concentrations.
fMRI Evidence for Strategic Decision-Making during Resolution of Pronoun Reference
ERIC Educational Resources Information Center
McMillan, Corey T.; Clark, Robin; Gunawardena, Delani; Ryant, Neville; Grossman, Murray
2012-01-01
Pronouns are extraordinarily common in daily language yet little is known about the neural mechanisms that support decisions about pronoun reference. We propose a large-scale neural network for resolving pronoun reference that consists of two components. First, a core language network in peri-Sylvian cortex supports syntactic and semantic…
NASA Astrophysics Data System (ADS)
Paula Leite, Rodolfo; Freitas, Rodrigo; Azevedo, Rodolfo; de Koning, Maurice
2016-11-01
The Uhlenbeck-Ford (UF) model was originally proposed for the theoretical study of imperfect gases, given that all its virial coefficients can be evaluated exactly, in principle. Here, in addition to computing the previously unknown coefficients B11 through B13, we assess its applicability as a reference system in fluid-phase free-energy calculations using molecular simulation techniques. Our results demonstrate that, although the UF model itself is too soft, appropriately scaled Uhlenbeck-Ford (sUF) models provide robust reference systems that allow accurate fluid-phase free-energy calculations without the need for an intermediate reference model. Indeed, in addition to the accuracy with which their free energies are known and their convenient scaling properties, the fluid is the only thermodynamically stable phase for a wide range of sUF models. This set of favorable properties may potentially put the sUF fluid-phase reference systems on par with the standard role that harmonic and Einstein solids play as reference systems for solid-phase free-energy calculations.
NASA Astrophysics Data System (ADS)
Hargrove, W. W.; Norman, S. P.; Kumar, J.; Hoffman, F. M.
2017-12-01
National-scale polar analysis of MODIS NDVI allows quantification of the degree of seasonality expressed by local vegetation, and also selects the optimal start/end of a local "phenological year" that is empirically customized for the vegetation growing at each location. Interannual differences in timing of phenology make direct comparisons of vegetation health and performance between years difficult, whether at the same or different locations. By "sliding" the two phenologies in time using a Procrustean linear time shift, any particular phenological event or "completion milestone" can be synchronized, allowing direct comparison of differences in timing of the remaining milestones. Going beyond a simple linear translation, time can be "rubber-sheeted," compressed or dilated. Considering one phenology curve to be a reference, the second phenology can be "rubber-sheeted" to fit that baseline as well as possible by stretching or shrinking time to match multiple control points, which can be any recognizable phenological events. Similar to "rubber sheeting" to georectify a map inside a GIS, rubber sheeting a phenology curve also yields a warping signature that shows, at every time and every location, how many days the adjusted phenology is ahead of or behind the phenological development of the reference vegetation. Using such temporal methods to "adjust" phenologies may help to quantify vegetation impacts from frost, drought, wildfire, insects and diseases by permitting the most commensurate quantitative comparisons with unaffected vegetation.
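The "rubber-sheeting" idea can be sketched as a piecewise-linear time warp between matched control points; the milestone days below are invented for illustration, and `np.interp` stands in for whatever warping function the authors actually use:

```python
import numpy as np

# Control points: day-of-year of matched phenological milestones (illustrative).
ref_milestones    = np.array([90.0, 150.0, 250.0, 300.0])   # reference phenology
target_milestones = np.array([100.0, 155.0, 240.0, 310.0])  # phenology to adjust

days = np.arange(1, 366, dtype=float)
# For each day on the reference time axis, the corresponding day on the target
# axis, stretched/shrunk piecewise-linearly between the control points.
warped_days = np.interp(days, ref_milestones, target_milestones)

# Warping signature: days the target is behind (+) or ahead (-) of the reference.
signature = warped_days - days
print(round(float(signature[89]), 1))   # lag at reference day 90
```

At reference day 90 the warp maps to target day 100, so the signature is +10 days there: the adjusted phenology reaches that milestone ten days later than the reference.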
International Linear Collider Reference Design Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brau, James; Okada, Yasuhiro; Walker, Nicholas J.
2007-08-13
What is the universe? How did it begin? What are matter and energy? What are space and time? These basic questions have been the subject of scientific theories and experiments throughout human history. The answers have revolutionized the enlightened view of the world, transforming society and advancing civilization. Universal laws and principles govern everyday phenomena, some of them manifesting themselves only at scales of time and distance far beyond everyday experience. Particle physics experiments using particle accelerators transform matter and energy, to reveal the basic workings of the universe. Other experiments exploit naturally occurring particles, such as solar neutrinos or cosmic rays, and astrophysical observations, to provide additional insights.
Effect of spatial averaging on multifractal properties of meteorological time series
NASA Astrophysics Data System (ADS)
Hoffmann, Holger; Baranowski, Piotr; Krzyszczak, Jaromir; Zubik, Monika
2016-04-01
Introduction The process-based models for large-scale simulations require input of agro-meteorological quantities that are often in the form of time series of coarse spatial resolution. Therefore, knowledge of their scaling properties is fundamental for transferring locally measured fluctuations to larger scales and vice versa. However, the scaling analysis of these quantities is complicated by the presence of localized trends and non-stationarities. Here we assess how spatially aggregating meteorological data to coarser resolutions affects the data's temporal scaling properties. While it is known that spatial aggregation may affect spatial data properties (Hoffmann et al., 2015), it is unknown how it affects temporal data properties. Therefore, the objective of this study was to characterize the aggregation effect (AE) with regard to both temporal and spatial input data properties, considering the scaling properties (i.e. statistical self-similarity) of the chosen agro-meteorological time series through multifractal detrended fluctuation analysis (MFDFA). Materials and Methods Time series from the years 1982-2011 were spatially averaged from 1 to 10, 25, 50 and 100 km resolution to assess the impact of spatial aggregation. Daily minimum, mean and maximum air temperature (2 m), precipitation, global radiation, wind speed and relative humidity (Zhao et al., 2015) were used. To reveal the multifractal structure of the time series, we used the procedure described in Baranowski et al. (2015). The diversity of the studied multifractals was evaluated by the parameters of the time series spectra. In order to analyse differences in multifractal properties relative to the 1 km resolution grids, data at coarser resolutions were disaggregated to 1 km.
Results and Conclusions Analysing the effect of spatial averaging on multifractal properties, we observed that the spatial patterns of the multifractal spectrum (MS) of all meteorological variables differed from the 1 km grids, and MS parameters were biased by -29.1% (precipitation; width of MS) up to >4% (minimum temperature, radiation; asymmetry of MS). Also, the spatial variability of MS parameters was strongly affected at the highest aggregation (100 km). The obtained results confirm that spatial data aggregation may strongly affect temporal scaling properties. This should be taken into account when upscaling for large-scale studies. Acknowledgements The study was conducted within FACCE MACSUR. Please see Baranowski et al. (2015) for details on funding. References Baranowski, P., Krzyszczak, J., Sławiński, C. et al. (2015). Climate Research 65, 39-52. Hoffmann, H., Zhao, G., Van Bussel, L.G.J. et al. (2015). Climate Research 65, 53-69. Zhao, G., Siebert, S., Rezaei, E. et al. (2015). Agricultural and Forest Meteorology 200, 156-171.
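The spatial averaging step (1 km to coarser grids) can be sketched as simple block averaging; the field below is synthetic and the factor-of-10 aggregation is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
fine = rng.normal(size=(100, 100))          # synthetic 100 x 100 "1 km" field

def block_average(field, factor):
    # average non-overlapping factor x factor blocks (grid must divide evenly)
    ny, nx = field.shape
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

coarse = block_average(fine, 10)            # "10 km" resolution
print(coarse.shape, bool(np.isclose(fine.mean(), coarse.mean())))
```

The grand mean is preserved by construction; what aggregation changes is the variability and, as the abstract reports, the temporal scaling properties extracted from the averaged series.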
Measurement of New Observables from the pi+pi- Electroproduction off the Proton
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trivedi, Arjun
Knowledge of the Universe as constructed by human beings, in order to tackle its complexity, can be thought to be organized at varying scales at which it is observed. Implicit in such an approach is the idea of a smooth evolution of knowledge between scales and, therefore, access to how Nature constructs the visible Universe beginning from its most fundamental constituents. New and, in a sense, fundamental phenomena may typically be emergent as the scale of observation changes. The study of the Strong Interaction, which is responsible for the construction of the bulk of the visible matter in the Universe (98% by mass), in this sense, is a labor of exploring evolutions and unifying aspects of its knowledge found at varying scales ranging from interaction of quarks and gluons as represented by the theory of Quantum Chromodynamics (QCD) at small space-time scale to emerging dressed quark and even meson-baryon degrees of freedom mostly described by effective models as the space-time scale increases. A direct effort to study the Strong Interaction over this scale forms the basis of an international collaborative effort often referred to as the N* program. The core work of this thesis is an experimental analysis prompted by the need to measure experimental observables that are of particular interest to the theory-experiment epistemological framework of this collaboration. While the core of this thesis, therefore, discusses the nature of the experimental analysis and presents its results which will serve as input to the N* program's epistemological framework, the particular nature of this framework in the context of not only the Strong Interaction, but also that of the physical science and human knowledge in general will be used to motivate and introduce the experimental analysis and its related observables.
NASA Astrophysics Data System (ADS)
Phillips, D. A.; Herring, T.; Melbourne, T. I.; Murray, M. H.; Szeliga, W. M.; Floyd, M.; Puskas, C. M.; King, R. W.; Boler, F. M.; Meertens, C. M.; Mattioli, G. S.
2017-12-01
The Geodesy Advancing Geosciences and EarthScope (GAGE) Facility, operated by UNAVCO, provides a diverse suite of geodetic data, derived products and cyberinfrastructure services to support community Earth science research and education. GPS data and products including decadal station position time series and velocities are provided for 2000+ continuous GPS stations from the Plate Boundary Observatory (PBO) and other networks distributed throughout the high Arctic, North America, and Caribbean regions. The position time series contain a multitude of signals in addition to the secular motions, including coseismic and postseismic displacements, interseismic strain accumulation, and transient signals associated with hydrologic and other processes. We present our latest velocity field solutions, new time series offset estimate products, and new time series examples associated with various phenomena. Position time series, and the signals they contain, are inherently dependent upon analysis parameters such as network scaling and reference frame realization. The estimation of scale changes, for example, a common practice, has large impacts on vertical motion estimates. GAGE/PBO velocities and time series are currently provided in IGS (IGb08) and North America (NAM08, IGb08 rotated to a fixed North America Plate) reference frames. We are reprocessing all data (1996 to present) as part of the transition from IGb08 to IGS14 that began in 2017. New NAM14 and IGS14 data products are discussed. GAGE/PBO GPS data products are currently generated using onsite computing clusters. As part of an NSF funded EarthCube Building Blocks project called "Deploying Multi-Facility Cyberinfrastructure in Commercial and Private Cloud-based Systems (GeoSciCloud)", we are investigating performance, cost, and efficiency differences between local computing resources and cloud based resources.
Test environments include a commercial cloud provider (Amazon/AWS), NSF cloud-like infrastructures within XSEDE (TACC, the Texas Advanced Computing Center), and in-house cyberinfrastructures. Preliminary findings from this effort are presented. Web services developed by UNAVCO to facilitate the discovery, customization and dissemination of GPS data and products are also presented.
Generalizability of Scaling Gradients on Direct Behavior Ratings
ERIC Educational Resources Information Center
Chafouleas, Sandra M.; Christ, Theodore J.; Riley-Tillman, T. Chris
2009-01-01
Generalizability theory is used to examine the impact of scaling gradients on a single-item Direct Behavior Rating (DBR). A DBR refers to a type of rating scale used to efficiently record target behavior(s) following an observation occasion. Variance components associated with scale gradients are estimated using a random effects design for persons…
Bendo, Cristiane B.; Shulman, Robert J.; Self, Mariella M.; Nurko, Samuel; Franciosi, James P.; Saps, Miguel; Saeed, Shehzad; Zacur, George M.; Vaughan Dark, Chelsea; Pohl, John F.
2015-01-01
Objective The present study investigates the clinical interpretability of the Pediatric Quality of Life Inventory™ (PedsQL™) Gastrointestinal Symptoms Scales and Worry Scales in pediatric patients with functional gastrointestinal disorders or organic gastrointestinal diseases in comparison with healthy controls. Methods The PedsQL™ Gastrointestinal Scales were completed by 587 patients with gastrointestinal disorders/diseases and 685 parents, and 513 healthy children and 337 parents. Minimal important difference (MID) scores were derived from the standard error of measurement (SEM). Cut-points were derived based on one and two standard deviations (SDs) from the healthy reference means. Results The percentages of patients below the scales’ cut-points were significantly greater than the healthy controls (most p values ≤ .001). Scale scores 2 SDs from the healthy reference means were within the range of scores for pediatric patients with a gastrointestinal disorder. MID values were generated using the SEM. Conclusions The findings support the clinical interpretability of the new PedsQL™ Gastrointestinal Symptoms Scales and Worry Scales. PMID:25682210
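The MID and cut-point derivations described in the Methods follow standard formulas (MID = SEM = SD·√(1 − reliability); cut-points at 1 and 2 SDs from the healthy reference mean); the numbers below are invented for illustration and are not the study's values:

```python
import math

# Illustrative healthy-reference statistics (not from the paper)
healthy_mean = 80.0
healthy_sd = 15.0
reliability = 0.88          # e.g. an internal-consistency coefficient (assumed)

# Minimal important difference taken as the standard error of measurement
sem = healthy_sd * math.sqrt(1 - reliability)
mid = sem

# Cut-points 1 and 2 SDs below the healthy reference mean
cut_1sd = healthy_mean - 1 * healthy_sd
cut_2sd = healthy_mean - 2 * healthy_sd
print(round(mid, 2), cut_1sd, cut_2sd)
```

Patients scoring below `cut_1sd` or `cut_2sd` would be flagged relative to the healthy reference distribution, which is how the abstract's "percentages of patients below the scales' cut-points" are obtained.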
United time-frequency spectroscopy for dynamics and global structure.
Marian, Adela; Stowe, Matthew C; Lawall, John R; Felinto, Daniel; Ye, Jun
2004-12-17
Ultrashort laser pulses have thus far been used in two distinct modes. In the time domain, the pulses have allowed probing and manipulation of dynamics on a subpicosecond time scale. More recently, phase stabilization has produced optical frequency combs with absolute frequency reference across a broad bandwidth. Here we combine these two applications in a spectroscopic study of rubidium atoms. A wide-bandwidth, phase-stabilized femtosecond laser is used to monitor the real-time dynamic evolution of population transfer. Coherent pulse accumulation and quantum interference effects are observed and well modeled by theory. At the same time, the narrow linewidth of individual comb lines permits a precise and efficient determination of the global energy-level structure, providing a direct connection among the optical, terahertz, and radio-frequency domains. The mechanical action of the optical frequency comb on the atomic sample is explored and controlled, leading to precision spectroscopy with an appreciable reduction in systematic errors.
A Manual of Instruction for Log Scaling and the Measurement of Timber Products.
ERIC Educational Resources Information Center
Idaho State Board of Vocational Education, Boise. Div. of Trade and Industrial Education.
This manual was developed by a state advisory committee in Idaho to improve and standardize log scaling and provide a reference in training men for the job of log scaling in timber measurement. The content includes: (1) an introduction containing the scope of the manual, a definition and history of scaling, the reasons for scaling, and the…
NASA Astrophysics Data System (ADS)
Gowda, P. H.
2016-12-01
Evapotranspiration (ET) is an important process in ecosystems' water budgets and is closely linked to their productivity. Therefore, regional-scale daily time series ET maps developed at high and medium resolutions have large utility in studying the carbon-energy-water nexus and managing water resources. There are efforts to develop such datasets on regional to global scales, but they often face the limitations of spatial-temporal resolution tradeoffs in satellite remote sensing technology. In this study, we developed frameworks for generating high and medium resolution daily ET maps from Landsat and MODIS (Moderate Resolution Imaging Spectroradiometer) data, respectively. For developing high resolution (30-m) daily time series ET maps with Landsat TM data, the series version of the Two Source Energy Balance (TSEB) model was used to compute sensible and latent heat fluxes of soil and canopy separately. Landsat 5 (2000-2011) and Landsat 8 (2013-2014) imagery for paths/rows 28/35 and 27/36 covering central Oklahoma was used. MODIS data (2001-2014) covering Oklahoma and the Texas Panhandle were used to develop medium resolution (250-m) time series daily ET maps with the SEBS (Surface Energy Balance System) model. An extensive network of weather stations managed by the Texas High Plains ET Network and the Oklahoma Mesonet was used to generate spatially interpolated inputs of air temperature, relative humidity, wind speed, solar radiation, pressure, and reference ET. A linear interpolation sub-model was used to estimate the daily ET between image acquisition days. Accuracy assessment of the daily ET maps was done against eddy covariance data from two grassland sites at El Reno, OK. Statistical results indicated good performance by the modeling frameworks developed for deriving time series ET maps. The results indicate that the proposed ET mapping framework is suitable for deriving daily time series ET maps at regional scale with Landsat and MODIS data.
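A minimal sketch of a linear-interpolation sub-model, assuming the common approach of interpolating the fraction of reference ET (ETrF) between acquisition days; the day numbers, ETrF values, and reference ET below are illustrative, not the study's data:

```python
import numpy as np

acq_days = np.array([10.0, 26.0, 42.0])    # satellite acquisition days (assumed)
etrf_acq = np.array([0.55, 0.70, 0.60])    # ET / reference-ET on those days (assumed)

days = np.arange(10, 43, dtype=float)
etrf_daily = np.interp(days, acq_days, etrf_acq)  # linearly interpolated ETrF
ref_et = np.full(days.shape, 6.0)                 # daily reference ET, mm (assumed)
et_daily = etrf_daily * ref_et                    # daily ET estimate, mm/day
print(round(float(et_daily[8]), 2))               # day 18, between the first two scenes
```

Interpolating the ETrF rather than ET itself lets the daily weather (through reference ET) drive day-to-day variability between overpasses, which is why this gap-filling scheme is widely used.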
NASA Technical Reports Server (NTRS)
Mair, R. W.; Sen, P. N.; Hurlimann, M. D.; Patz, S.; Cory, D. G.; Walsworth, R. L.
2002-01-01
We report a systematic study of xenon gas diffusion NMR in simple model porous media, random packs of mono-sized glass beads, and focus on three specific areas peculiar to gas-phase diffusion. These topics are: (i) diffusion of spins on the order of the pore dimensions during the application of the diffusion encoding gradient pulses in a PGSE experiment (breakdown of the narrow pulse approximation and imperfect background gradient cancellation), (ii) the ability to derive long length scale structural information, and (iii) effects of finite sample size. We find that the time-dependent diffusion coefficient, D(t), of the imbibed xenon gas at short diffusion times in small beads is significantly affected by the gas pressure. In particular, as expected, we find smaller deviations between measured D(t) and theoretical predictions as the gas pressure is increased, resulting from reduced diffusion during the application of the gradient pulse. The deviations are then completely removed when water D(t) is observed in the same samples. The use of gas also allows us to probe D(t) over a wide range of length scales and observe the long time asymptotic limit which is proportional to the inverse tortuosity of the sample, as well as the diffusion distance where this limit takes effect (approximately 1-1.5 bead diameters). The Padé approximation can be used as a reference for expected xenon D(t) data between the short and the long time limits, allowing us to explore deviations from the expected behavior at intermediate times as a result of finite sample size effects. Finally, the application of the Padé interpolation between the long and the short time asymptotic limits yields a fitted length scale (the Padé length), which is found to be approximately 0.13b for all bead packs, where b is the bead diameter.
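The Padé interpolation between the short- and long-time limits of D(t) is commonly written in the form introduced by Latour et al.; below is a sketch with illustrative parameters (the paper's fitted values are not reproduced here):

```python
import numpy as np

def pade_D(t, D0, alpha, c, theta):
    """Pade interpolant for D(t): D0 as t -> 0, D0/alpha as t -> infinity.

    c matches the short-time surface-to-volume expansion; theta sets the
    crossover, with the 'Pade length' L_p = sqrt(D0 * theta)."""
    x = c * np.sqrt(t) + (1.0 - 1.0 / alpha) * t / theta
    return D0 * (1.0 - (1.0 - 1.0 / alpha) * x / (1.0 + x))

D0, alpha, c, theta = 1.0, 2.5, 1.0, 4.0      # illustrative parameters
t = np.array([1e-6, 1e6])
print(pade_D(t, D0, alpha, c, theta))         # ~[D0, D0/alpha]
```

The limiting behavior is built in: at very short times D(t) ≈ D0, while at long times D(t) → D0/α, the inverse-tortuosity plateau described in the abstract; fitting θ to data between the limits yields the Padé length.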
NASA Astrophysics Data System (ADS)
Magic, Z.; Collet, R.; Hayek, W.; Asplund, M.
2013-12-01
Aims: We study the implications of averaging methods with different reference depth scales for 3D hydrodynamical model atmospheres computed with the Stagger-code. The temporally and spatially averaged (hereafter denoted as ⟨3D⟩) models are explored in the light of local thermodynamic equilibrium (LTE) spectral line formation by comparing spectrum calculations using full 3D atmosphere structures with those from ⟨3D⟩ averages. Methods: We explored methods for computing mean ⟨3D⟩ stratifications from the Stagger-grid time-dependent 3D radiative hydrodynamical atmosphere models by considering four different reference depth scales (geometrical depth, column-mass density, and two optical depth scales). Furthermore, we investigated the influence of alternative averages (logarithmic, enforced hydrostatic equilibrium, flux-weighted temperatures). For the line formation we computed curves of growth for Fe i and Fe ii lines in LTE. Results: The resulting ⟨3D⟩ stratifications for the four reference depth scales can be very different. We typically find that in the upper atmosphere and in the superadiabatic region just below the optical surface, where the temperature and density fluctuations are highest, the differences become considerable and increase for higher Teff, lower log g, and lower [Fe / H]. The differential comparison of spectral line formation shows distinctive differences depending on which ⟨3D⟩ model is applied. The averages over layers of constant column-mass density yield the best mean ⟨3D⟩ representation of the full 3D models for LTE line formation, while the averages on layers at constant geometrical height are the least appropriate. Unexpectedly, the usually preferred averages over layers of constant optical depth are prone to increasing interference by reversed granulation towards higher effective temperature, in particular at low metallicity. 
Appendix A is available in electronic form at http://www.aanda.org. Mean ⟨3D⟩ models are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/560/A8, as well as at http://www.stagger-stars.net
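Averaging on a reference depth scale amounts to interpolating every column of the 3D snapshot onto a common grid of that variable before averaging horizontally; here is a sketch using a column-mass scale with synthetic columns (not Stagger-grid data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_col, n_z = 16, 40
log_m_ref = np.linspace(-2.0, 4.0, n_z)     # common log column-mass grid

T_avg = np.zeros(n_z)
for _ in range(n_col):
    # each simulated column has its own (monotonic) column-mass stratification
    log_m = np.sort(np.linspace(-2.2, 4.2, n_z) + rng.normal(0.0, 0.02, n_z))
    T = 4000.0 + 1500.0 * log_m + rng.normal(0.0, 50.0, n_z)  # toy T(log m)
    # interpolate this column's T onto the reference depth scale, then accumulate
    T_avg += np.interp(log_m_ref, log_m, T)
T_avg /= n_col
print(T_avg.shape)   # one mean <3D> stratification on the reference scale
```

Swapping `log_m` for geometrical height or optical depth changes which layers are averaged together, which is exactly why the four reference depth scales in the abstract yield different ⟨3D⟩ stratifications.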
Izrael, Y A; Nazarov, I M; Ryaboshapko, A G
1982-12-01
The authors consider possible ways of regulating three types of atmospheric emission of pollutants: (1) emission of substances causing pollution of the natural environment on a global scale (global pollutants); (2) emission of substances causing pollution on a regional scale, most often including the territories of several countries (international pollutants); (3) emission of substances causing negative effects in a relatively limited region, for example within the border area of two adjoining countries. Substances (gaseous, as a rule) with a long lifetime in the atmosphere, which can contaminate natural media on a global scale irrespective of the place of emission, belong to the first class of pollutants; these are subject to emission regulation at an international level and to quota establishment for individual countries. Examples are carbon dioxide, freons, and krypton-85. Various approaches to determining permissible emission and to establishing quotas are discussed in the paper. The second group includes substances of a limited, yet rather long, lifetime whose emission intensity makes a notable contribution to environmental pollution of a large region including the territories of several countries. Here it is necessary to regulate internationally not the atmospheric emission as such but pollutant transport over national boundaries (sulphur and nitrogen oxides, pesticides, heavy metals). The third group includes substances with a relatively short lifetime producing local effects. Emission regulation in such cases should be based on bilateral agreements with due account of the countries' mutual interests.
A laboratory assessment of the measurement accuracy of weighing type rainfall intensity gauges
NASA Astrophysics Data System (ADS)
Colli, M.; Chan, P. W.; Lanza, L. G.; La Barbera, P.
2012-04-01
In recent years the WMO Commission for Instruments and Methods of Observation (CIMO) fostered noticeable advancements in the accuracy of precipitation measurement by providing recommendations on the standardization of equipment and exposure, instrument calibration and data correction, following various comparative campaigns involving manufacturers and national meteorological services from the participating countries (Lanza et al., 2005; Vuerich et al., 2009). Extreme event analysis is highly affected by the accuracy of on-site rainfall intensity (RI) measurements (see e.g. Molini et al., 2004), and the time resolution of the available RI series certainly constitutes another key factor in constructing hyetographs that are representative of real rain events. The OTT Pluvio2 weighing gauge (WG) and the GEONOR T-200 vibrating-wire precipitation gauge demonstrated very good performance under previous constant flow rate calibration efforts (Lanza et al., 2005). Although WGs do provide better performance than more traditional tipping-bucket rain gauges (TBRs) under continuous and constant reference intensity, dynamic effects seem to affect the accuracy of WG measurements under real-world, time-varying rainfall conditions (Vuerich et al., 2009). The most relevant is due to the response time of the acquisition system and the resulting systematic delay of the instrument in assessing the exact weight of the bin containing cumulated precipitation. This delay assumes a relevant role when high-resolution rain intensity time series are sought from the instrument, as is the case in many hydrologic and meteo-climatic applications. This work reports the laboratory evaluation of the accuracy of Pluvio2 and T-200 rainfall intensity measurements. Tests are carried out by simulating different artificial precipitation events, namely non-stationary rainfall intensities, using a highly accurate dynamic rainfall generator.
Time series measured by an Ogawa drop counter (DC) at a field test site located within the Hong Kong International Airport (HKIA) were aggregated at a 1-minute scale and used as the reference for the artificial rain generation (Colli et al., 2012). The preliminary development and validation of the rainfall simulator for the generation of variable-time-step reference intensities are also presented. The generator is characterized by a sufficiently short time response with respect to the expected weighing-gauge behavior, in order to ensure an effective comparison of the measured and reference intensities at very high resolution in time.
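The systematic delay of the acquisition system can be sketched as a first-order lag applied to a step in rainfall intensity; the response time below is an assumed value for illustration, not a measured Pluvio2 or T-200 characteristic:

```python
import numpy as np

tau = 6.0                                   # assumed acquisition response time, s
dt = 1.0                                    # 1 s sampling
t = np.arange(0.0, 60.0, dt)
ri_true = np.where(t < 30.0, 0.0, 100.0)    # step: 0 -> 100 mm/h at t = 30 s

ri_meas = np.zeros_like(ri_true)
for i in range(1, len(t)):
    # discrete first-order lag: d(RI_meas)/dt = (RI_true - RI_meas) / tau
    ri_meas[i] = ri_meas[i - 1] + (dt / tau) * (ri_true[i] - ri_meas[i - 1])

# the measured series smears the step over a time of order tau, the delay that
# matters when 1-minute (or finer) RI series are derived from the gauge
print(round(float(ri_meas[36]), 1))
```

The sharper the true intensity fluctuations relative to τ, the larger the distortion, which is why a dynamic (time-varying) reference generator is needed rather than a constant flow rate.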
Multi-scale simulations of space problems with iPIC3D
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano
The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamic scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular we show a number of simulations done for large-scale 3D systems using the physical mass ratio for hydrogen. Most notably, one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from the University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: S. Markidis, G. Lapenta and Rizwan-uddin, "Multi-scale simulations of plasma with iPIC3D", Mathematics and Computers in Simulation, available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038
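Why implicit time integration removes the stability constraint can be illustrated on a linear oscillator (a sketch of the principle only, not the implicit-moment method used in iPIC3D): with a step far above the explicit stability limit, explicit Euler diverges while an implicit scheme stays bounded.

```python
# omega*dt = 5, far above the explicit stability limit
omega, dt = 1.0, 5.0

# explicit Euler on x' = v, v' = -omega^2 x: energy grows every step
x, v = 1.0, 0.0
for _ in range(20):
    x, v = x + dt * v, v - dt * omega ** 2 * x
explicit_energy = v ** 2 + omega ** 2 * x ** 2

# implicit midpoint rule, solved exactly for the linear update:
# unconditionally stable and exactly energy-conserving for this system
a = 0.5 * dt * omega
x, v = 1.0, 0.0
for _ in range(200):
    x, v = (((1 - a * a) * x + dt * v) / (1 + a * a),
            ((1 - a * a) * v - dt * omega ** 2 * x) / (1 + a * a))
implicit_energy = v ** 2 + omega ** 2 * x ** 2

print(explicit_energy > 1e6, round(implicit_energy, 6))  # True 1.0
```

The analogue in PIC is the plasma-frequency and light-wave constraints on the explicit time step: making the field/particle update implicit allows steps sized to the magnetohydrodynamic scales of interest.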
NASA Astrophysics Data System (ADS)
Kumar, Anil; Kumar, Harish; Mandal, Goutam; Das, M. B.; Sharma, D. C.
The present paper discusses the establishment of traceability of reference grade hydrometers at the National Physical Laboratory, India (NPLI). The reference grade hydrometers are calibrated and traceable to the primary solid density standard. The calibration has been done according to a standard procedure based on Cuckow's method, and the calibrated reference grade hydrometers cover a wide range. The uncertainty of the reference grade hydrometers has been computed, and corrections are also calculated for the scale readings at which observations are taken.
NASA Astrophysics Data System (ADS)
Lamontagne, J. R.; Reed, P. M.
2017-12-01
Impacts of and adaptations to global change largely occur at regional scales, yet they are shaped globally through the interdependent evolution of the climate, energy, agriculture, and industrial systems. It is important for regional actors to account for the impacts of global changes on their systems in a globally consistent but regionally relevant way. This can be challenging because emerging global reference scenarios may not reflect regional challenges. Likewise, regionally specific scenarios may miss important global feedbacks. In this work, we contribute a scenario discovery framework to identify regionally specific, decision-relevant scenarios from an ensemble of scenarios of global change. To this end, we generated a large ensemble of time-evolving regional, multi-sector global change scenarios through a full factorial sampling of the underlying assumptions in the emerging shared socio-economic pathways (SSPs), using the Global Change Assessment Model (GCAM). Statistical and visual analytics were then used to discover which SSP assumptions are particularly consequential for various regions, considering a broad range of time-evolving metrics that encompass multiple spatial scales and sectors. In an illustrative example, we identify the most important global change narratives to inform water resource scenarios for several geographic regions using the proposed scenario discovery framework. Our results highlight the importance of demographic and agricultural evolution compared to technical improvements in the energy sector. We show that narrowly sampling a few canonical reference scenarios provides a very narrow view of the consequence space, increasing the risk of tacitly ignoring major impacts. Even optimistic scenarios contain unintended, disproportionate regional impacts and intergenerational transfers of consequence.
Formulating consequential scenarios of deeply and broadly uncertain futures requires a better exploration of which quantitative measures of consequence are important, for whom they are important, where, and when. To this end, we have contributed a large database of climate change futures that can support 'backwards' scenario generation techniques that capture a broader array of consequences than those that emerge from limited sampling of a few reference scenarios.
Green, C E L; Freeman, D; Kuipers, E; Bebbington, P; Fowler, D; Dunn, G; Garety, P A
2008-01-01
Paranoia is increasingly being studied in clinical and non-clinical populations. However, there is no multi-dimensional measure of persecutory ideas developed for use across the general population-psychopathology continuum. This paper reports the development of such a questionnaire: the 'Green et al. Paranoid Thought Scales'. The aim was to devise a tool to assess ideas of persecution and social reference in a simple self-report format, guided by a current definition of persecutory ideation, and incorporating assessment of conviction, preoccupation and distress. A total of 353 individuals without a history of mental illness and 50 individuals with current persecutory delusions completed a pool of paranoid items and additional measures to assess validity. Items were devised from a recent definition of persecutory delusions, current assessments of paranoia and the authors' clinical experience, and incorporated dimensions of conviction, preoccupation and distress. Test-retest reliability in the non-clinical group was assessed at 2 weeks' follow-up, and clinical change in the deluded group at 6 months' follow-up. Two 16-item scales were extracted, assessing ideas of social reference and persecution. Good internal consistency and validity were established for both scales and their dimensions. The scales were sensitive to clinical change. A hierarchical relationship between social reference and persecution was found. The data provide further evidence for a continuum of paranoid ideas between deluded and healthy individuals. A reliable and valid tool for assessing paranoid thoughts is presented. It will provide an effective way for researchers to ensure consistency in research and for clinicians to assess change with treatment.
Interpretation of positive troponin results among patients with and without myocardial infarction
Tecson, Kristen M.; Arnold, William; Barrett, Tyler; Birkhahn, Robert; Daniels, Lori B.; DeFilippi, Christopher; Headden, Gary; Peacock, W. Frank; Reed, Michael; Singer, Adam J.; Schussler, Jeffrey M.; Smith, Stephen; Than, Martin P.
2017-01-01
Measuring cardiac troponins is integral to diagnosing acute myocardial infarction (AMI); however, troponins may be elevated without AMI, and the use of multiple different assays confounds comparisons. We considered characteristics and serial troponin values in emergency department chest pain patients with and without AMI to interpret troponin excursions. We compared serial troponin in 124 AMI and non-AMI patients from the observational Performance of Triage Cardiac Markers in the Clinical Setting (PEARL) study who presented with chest pain and had at least one troponin value exceeding the 99th percentile of normal. Because 8 assays were used during data collection, we employed a method of scaling the troponin value to the corresponding assay's 99th percentile upper reference limit to standardize the results. In 81 AMI patients, 96% had elevated troponin at the first test following initial elevation, compared to 73% of the 43 non-AMI patients (P < 0.001). Scaling troponin to the 99th percentile of normal yielded a median value that was 4.8 [2.2, 14.1] times higher than the 99th percentile cutpoint among AMI patients, compared to 2.3 [1.5, 6.5] times higher among non-AMI patients (P = 0.04). The rise in serial scaled troponin values distinguished the AMI patients. Scaling to the 99th percentile was useful for comparing troponin when different assays were utilized. PMID:28127121
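The scaling step described above is straightforward arithmetic; a minimal sketch, with hypothetical assay names and illustrative 99th-percentile cutpoints (the study's eight assays and their actual upper reference limits are not reproduced here):

```python
# Hypothetical sketch of the scaling method: each troponin value is divided
# by its assay's 99th-percentile upper reference limit (URL), so results
# from different assays land on one dimensionless scale.
# Assay names and URL values below are illustrative, not the PEARL study's.

ASSAY_URL_NG_L = {
    "assay_a": 14.0,   # illustrative 99th-percentile cutpoints, ng/L
    "assay_b": 26.2,
}

def scale_troponin(value_ng_l: float, assay: str) -> float:
    """Return troponin as a multiple of the assay's 99th-percentile URL."""
    return value_ng_l / ASSAY_URL_NG_L[assay]

# A scaled value > 1.0 means the result exceeds the assay's 99th percentile.
print(scale_troponin(67.2, "assay_a"))   # ~4.8x the cutpoint
```

With every patient's result expressed as a multiple of the relevant cutpoint, medians such as the 4.8 vs. 2.3 reported above become directly comparable across assays.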
Abatzoglou, John T; Dobrowski, Solomon Z; Parks, Sean A; Hegewisch, Katherine C
2018-01-09
We present TerraClimate, a dataset of high-spatial resolution (1/24°, ~4-km) monthly climate and climatic water balance for global terrestrial surfaces from 1958-2015. TerraClimate uses climatically aided interpolation, combining high-spatial resolution climatological normals from the WorldClim dataset, with coarser resolution time varying (i.e., monthly) data from other sources to produce a monthly dataset of precipitation, maximum and minimum temperature, wind speed, vapor pressure, and solar radiation. TerraClimate additionally produces monthly surface water balance datasets using a water balance model that incorporates reference evapotranspiration, precipitation, temperature, and interpolated plant extractable soil water capacity. These data provide important inputs for ecological and hydrological studies at global scales that require high spatial resolution and time varying climate and climatic water balance data. We validated spatiotemporal aspects of TerraClimate using annual temperature, precipitation, and calculated reference evapotranspiration from station data, as well as annual runoff from streamflow gauges. TerraClimate datasets showed noted improvement in overall mean absolute error and increased spatial realism relative to coarser resolution gridded datasets.
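A toy sketch of the climatically aided interpolation step, assuming a nearest-neighbour regridding on illustrative arrays (TerraClimate's actual grids and interpolation scheme are more elaborate): the coarse-grid monthly anomaly relative to the coarse normal is regridded to the fine grid and added to the high-resolution climatological normal.

```python
import numpy as np

# Toy grids: 2x2 coarse cells, each covering a 4x4 block of the fine grid.
coarse_month = np.array([[10., 12.], [14., 16.]])    # one month, coarse grid
coarse_normal = np.array([[9., 11.], [13., 15.]])    # long-term normal, coarse
fine_normal = np.random.default_rng(0).normal(10, 2, (8, 8))  # WorldClim-like normal (illustrative)

# 1) anomaly on the coarse grid; 2) regrid to the fine grid (block
#    replication here for brevity); 3) add to the high-resolution normal.
anomaly = coarse_month - coarse_normal               # +1 everywhere in this toy case
anomaly_fine = np.kron(anomaly, np.ones((4, 4)))     # nearest-neighbour "regrid"
monthly_fine = fine_normal + anomaly_fine            # high-resolution monthly field
```

The result inherits the fine spatial structure of the normals while tracking the coarse dataset's month-to-month variability.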
De Pauw, Ruben; Shoykhet Choikhet, Konstantin; Desmet, Gert; Broeckhoven, Ken
2016-08-12
When using compressible mobile phases such as fluidic CO2, the density, the volumetric flow rates and volumetric fractions are pressure dependent. The pressure and temperature definition of these volumetric parameters (referred to as the reference conditions) may alter between systems, manufacturers and operating conditions. A supercritical fluid chromatography system was modified to operate in two modes with different definition of the eluent delivery parameters, referred to as fixed and variable mode. For the variable mode, the volumetric parameters are defined with reference to the pump operating pressure and actual pump head temperature. These conditions may vary when, e.g. changing the column length, permeability, flow rate, etc. and are thus variable reference conditions. For the fixed mode, the reference conditions were set at 150bar and 30°C, resulting in a mass flow rate and mass fraction of modifier definition which is independent of the operation conditions. For the variable mode, the mass flow rate of carbon dioxide increases with system pump operating pressure, decreasing the fraction of modifier. Comparing the void times and retention factor shows that the deviation between the two modes is almost independent of modifier percentage, but depends on the operating pressure. Recalculating the set volumetric fraction of modifier to the mass fraction results in the same retention behaviour for both modes. This shows that retention in SFC can be best modelled using the mass fraction of modifier. The fixed mode also simplifies method scaling as it only requires matching average column pressure. Copyright © 2016 Elsevier B.V. All rights reserved.
Schneider, Valerie A; Graves-Lindsay, Tina; Howe, Kerstin; Bouk, Nathan; Chen, Hsiu-Chuan; Kitts, Paul A; Murphy, Terence D; Pruitt, Kim D; Thibaud-Nissen, Françoise; Albracht, Derek; Fulton, Robert S; Kremitzki, Milinn; Magrini, Vincent; Markovic, Chris; McGrath, Sean; Steinberg, Karyn Meltz; Auger, Kate; Chow, William; Collins, Joanna; Harden, Glenn; Hubbard, Timothy; Pelan, Sarah; Simpson, Jared T; Threadgold, Glen; Torrance, James; Wood, Jonathan M; Clarke, Laura; Koren, Sergey; Boitano, Matthew; Peluso, Paul; Li, Heng; Chin, Chen-Shan; Phillippy, Adam M; Durbin, Richard; Wilson, Richard K; Flicek, Paul; Eichler, Evan E; Church, Deanna M
2017-05-01
The human reference genome assembly plays a central role in nearly all aspects of today's basic and clinical research. GRCh38 is the first coordinate-changing assembly update since 2009; it reflects the resolution of roughly 1000 issues and encompasses modifications ranging from thousands of single base changes to megabase-scale path reorganizations, gap closures, and localization of previously orphaned sequences. We developed a new approach to sequence generation for targeted base updates and used data from new genome mapping technologies and single haplotype resources to identify and resolve larger assembly issues. For the first time, the reference assembly contains sequence-based representations for the centromeres. We also expanded the number of alternate loci to create a reference that provides a more robust representation of human population variation. We demonstrate that the updates render the reference an improved annotation substrate, alter read alignments in unchanged regions, and impact variant interpretation at clinically relevant loci. We additionally evaluated a collection of new de novo long-read haploid assemblies and conclude that although the new assemblies compare favorably to the reference with respect to continuity, error rate, and gene completeness, the reference still provides the best representation for complex genomic regions and coding sequences. We assert that the collected updates in GRCh38 make the newer assembly a more robust substrate for comprehensive analyses that will promote our understanding of human biology and advance our efforts to improve health. © 2017 Schneider et al.; Published by Cold Spring Harbor Laboratory Press.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj
We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. Here, we further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.
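As a generic illustration of the quadrature family named above (not SQDFT's spatially localized implementation), a Clenshaw-Curtis rule can be built by sampling the integrand at Chebyshev points and integrating the resulting interpolant exactly:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def clenshaw_curtis(f, n):
    """Integrate f on [-1, 1] via Clenshaw-Curtis quadrature:
    interpolate f at the n+1 Chebyshev extreme points, then integrate
    the degree-n Chebyshev interpolant exactly."""
    k = np.arange(n + 1)
    x = np.cos(np.pi * k / n)          # Chebyshev extreme points
    coeffs = C.chebfit(x, f(x), n)     # exact interpolation (square system)
    integ = C.chebint(coeffs)          # coefficients of the antiderivative
    return C.chebval(1.0, integ) - C.chebval(-1.0, integ)

# Smooth integrands converge spectrally fast:
print(clenshaw_curtis(np.exp, 16))     # ~ e - 1/e
```

Production codes compute the weights directly (e.g. via an FFT) rather than fitting, but the numerical result is the same rule.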
NASA Astrophysics Data System (ADS)
Antonenko, I.; Osinski, G. R.; Battler, M.; Beauchamp, M.; Cupelli, L.; Chanou, A.; Francis, R.; Mader, M. M.; Marion, C.; McCullough, E.; Pickersgill, A. E.; Preston, L. J.; Shankar, B.; Unrau, T.; Veillette, D.
2013-07-01
Remote robotic data provides different information than that obtained from immersion in the field. This significantly affects the geological situational awareness experienced by members of a mission control science team. In order to optimize science return from planetary robotic missions, these limitations must be understood and their effects mitigated to fully leverage the field experience of scientists at mission control. Results from a 13-day analogue deployment at the Mistastin Lake impact structure in Labrador, Canada, suggest that scale, relief, geological detail, and time are intertwined issues that impact the mission control science team's effectiveness in interpreting the geology of an area. These issues are evaluated and several mitigation options are suggested. Scale was found to be difficult to interpret without the reference of known objects, even when numerical scale data were available. For this reason, embedding intuitive scale-indicating features into image data is recommended. Since relief is not conveyed in 2D images, both 3D data and observations from multiple angles are required. Furthermore, the 3D data must be observed in animation or as anaglyphs, since without such assistance much of the relief information in 3D data is not communicated. Geological detail may also be missed due to the time required to collect, analyze, and request data. We also suggest that these issues can be addressed, in part, by an improved understanding of the operational time costs and benefits of scientific data collection. Robotic activities operate on inherently slow time-scales. This fact needs to be embraced and accommodated. Instead of focusing too quickly on the details of a target of interest, thereby potentially minimizing science return, time should be allocated at first to more broad data collection at that target, including preliminary surveys, multiple observations from various vantage points, and progressively smaller scale of focus.
This operational model more closely follows techniques employed by field geologists and is fundamental to the geologic interpretation of an area. Even so, an operational time cost/benefit analysis should be carefully considered in each situation, to determine when such comprehensive data collection would maximize the science return. Finally, it should be recognized that analogue deployments cannot faithfully model the time scales of robotic planetary missions. Analogue missions are limited by the difficulty and expense of fieldwork. Thus, analogue deployments should focus on smaller aspects of robotic missions and test components in a modular way (e.g., dropping communications constraints, limiting mission scope, focusing on a specific problem, spreading the mission over several field seasons, etc.).
NASA Astrophysics Data System (ADS)
Pirveli, Marika; Lewczuk, Barbara
2013-12-01
The proposed text presents conceptual changes in the scope of some key concepts in the light of two dictionaries (the Britannica and the Dictionary of Human Geography) and Anglo-Saxon publications on the future of geography. It then relates these concepts to ongoing interdisciplinary studies situated within the structure of the Second- and Third-Generation University. The conclusions built this way are of two types: (1) those referring to a fundamental change in the process of human perception of the environment for generations X and Y, and (2) those referring to the process of glocalization, the glocal scale and the premises of the Third Generation University (3GU).
Gonçalves, Rui Soles; Pinheiro, João Páscoa; Cabri, Jan
2012-08-01
The purpose of this cross-sectional study was to estimate the contributions of potentially modifiable physical factors to variations in perceived health status in knee osteoarthritis (OA) patients referred for physical therapy. Health status was measured by three questionnaires: Knee injury and Osteoarthritis Outcome Score (KOOS); Knee Outcome Survey - Activities of Daily Living Scale (KOS-ADLS); and Medical Outcomes Study - 36 item Short Form (SF-36). Physical factors were measured by a battery of tests: body mass index (BMI); visual analog scale (VAS) of pain intensity; isometric dynamometry; universal goniometry; step test (ST); timed "up and go" test (TUGT); 20-meter walk test (20MWT); and 6-minute walk test (6MWT). All tests were administered to 136 subjects with symptomatic knee OA (94 females, 42 males; age: 67.2 ± 7.1 years). Multiple stepwise regression analyses revealed that knee muscle strength, VAS of pain intensity, 6MWT, degree of knee flexion and BMI were moderate predictors of health status. In the final models, selected combinations of these potentially modifiable physical factors explained 22% to 37% of the variance in KOOS subscale scores, 40% of the variance in the KOS-ADLS scale score, and 21% to 34% of the variance in physical health SF-36 subscale scores. More research is required to evaluate whether therapeutic interventions targeting these potentially modifiable physical factors would improve health status in knee OA patients. Copyright © 2011 Elsevier B.V. All rights reserved.
The relationship between reference canopy conductance and simplified hydraulic architecture
NASA Astrophysics Data System (ADS)
Novick, Kimberly; Oren, Ram; Stoy, Paul; Juang, Jehn-Yih; Siqueira, Mario; Katul, Gabriel
2009-06-01
Terrestrial ecosystems are dominated by vascular plants that form a mosaic of hydraulic conduits for water movement from the soil to the atmosphere. Together with canopy leaf area, canopy stomatal conductance regulates plant water use and thereby photosynthesis and growth. Although stomatal conductance is coordinated with plant hydraulic conductance, a governing relationship across species has not yet been formulated at a practical level that can be employed in large-scale models. Here, combinations of published conductance measurements obtained with several methodologies across boreal to tropical climates were used to explore relationships between canopy conductance rates and hydraulic constraints. A parsimonious hydraulic model requiring only the sapwood-to-leaf area ratio and canopy height generated acceptable agreement with measurements across a range of biomes (r2 = 0.75). The results suggest that, at long time scales, the functional convergence among ecosystems in the relationship between water use and hydraulic architecture eclipses inter-specific variation in the physiology and anatomy of the transport system. Prognostic applicability of this model requires independent knowledge of the sapwood-to-leaf area ratio. In this study, we did not find a strong relationship between the sapwood-to-leaf area ratio and physical or climatic variables that are readily determinable at coarse scales, though the results suggest that climate may have a mediating influence on the relationship between the sapwood-to-leaf area ratio and height. Within temperate forests, canopy height alone explained a large amount of the variance in reference canopy conductance (r2 = 0.68), and this relationship may be more immediately applicable in terrestrial ecosystem models.
THE CONCEPT OF REFERENCE CONDITION, REVISITED ...
Ecological assessments of aquatic ecosystems depend on the ability to compare current conditions against some expectation of how they would be in the absence of significant human disturbance. The concept of a "reference condition" is often used to describe the standard or benchmark against which current condition is compared. If assessments are to be conducted consistently, then a common understanding of the definitions and complications of reference condition is necessary. A 2006 paper (Stoddard et al., 2006, Ecological Applications 16:1267-1276) made an early attempt at codifying the reference condition concept; in this presentation we will revisit the points raised in that paper (and others) and examine how our thinking has changed in a little over 10 years. Among the issues to be discussed are: (1) the "moving target" created when reference site data are used to set thresholds in large-scale assessments; (2) natural vs. human disturbance and their effects on reference site distributions; (3) circularity and the use of biological data to assist in reference site identification; and (4) using site-scale (in-stream or in-lake) measurements vs. landscape-level human activity to identify reference conditions.
Stage line diagram: an age-conditional reference diagram for tracking development.
van Buuren, Stef; Ooms, Jeroen C L
2009-05-15
This paper presents a method for calculating stage line diagrams, a novel type of reference diagram useful for tracking developmental processes over time. Potential fields of application include: dentistry (tooth eruption), oncology (tumor grading, cancer staging), virology (HIV infection and disease staging), psychology (stages of cognitive development), human development (pubertal stages) and chronic diseases (stages of dementia). Transition probabilities between successive stages are modeled as smoothly varying functions of age. Age-conditional references are calculated from the modeled probabilities by the mid-P value. It is possible to eliminate the influence of age by calculating standard deviation scores (SDS). The method is applied to empirical data to produce reference charts on secondary sexual maturation. The mean of the empirical SDS in the reference population is close to zero, whereas the variance depends on age. The stage line diagram provides quick insight into both the status (in SDS) and tempo (in SDS/year) of development of an individual child. Other measures (e.g. height SDS, body mass index SDS) from the same child can be added to the chart. Diagrams for sexual maturation are available as a web application at http://vps.stefvanbuuren.nl/puberty. The stage line diagram expresses status and tempo of discrete changes on a continuous scale. Wider application of these measures opens up new analytic possibilities. (c) 2009 John Wiley & Sons, Ltd.
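The status and tempo quantities mentioned above reduce to simple formulas; a sketch with hypothetical reference values (in the paper the reference mean and SD are age-conditional and modeled, not constants):

```python
def sds(value: float, ref_mean: float, ref_sd: float) -> float:
    """Standard deviation score: distance of a measurement from the
    age-specific reference mean, in units of the reference SD."""
    return (value - ref_mean) / ref_sd

def tempo(sds_now: float, sds_prev: float, years: float) -> float:
    """Tempo of development in SDS per year between two visits."""
    return (sds_now - sds_prev) / years

# Hypothetical child: height 140 cm where the age reference is 134 +/- 6 cm
status = sds(140.0, 134.0, 6.0)   # 1.0 SDS above the reference mean
rate = tempo(status, 0.4, 1.5)    # ~0.4 SDS/year since the previous visit
```

An SDS near zero means development typical for age; the tempo in SDS/year is what the stage line diagram reads off between successive measurements.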
Distribution of shortest path lengths in a class of node duplication network models
NASA Astrophysics Data System (ADS)
Steinbock, Chanania; Biham, Ofer; Katzav, Eytan
2017-09-01
We present analytical results for the distribution of shortest path lengths (DSPL) in a network growth model which evolves by node duplication (ND). The model captures essential properties of the structure and growth dynamics of social networks, acquaintance networks, and scientific citation networks, where duplication mechanisms play a major role. Starting from an initial seed network, at each time step a random node, referred to as a mother node, is selected for duplication. Its daughter node is added to the network, forming a link to the mother node and, with probability p, to each one of its neighbors. The degree distribution of the resulting network turns out to follow a power law, thus the ND network is a scale-free network. To calculate the DSPL we derive a master equation for the time evolution of the probability P_t(L = ℓ), ℓ = 1, 2, ..., where L is the distance between a pair of nodes and t is the time. Finding an exact analytical solution of the master equation, we obtain a closed-form expression for P_t(L = ℓ). The mean distance ⟨L⟩_t and the diameter Δ_t are found to scale like ln t, namely, the ND network is a small-world network. The variance of the DSPL is also found to scale like ln t. Interestingly, the mean distance and the diameter exhibit the properties of a small-world network, rather than the ultrasmall-world network behavior observed in other scale-free networks, in which ⟨L⟩_t ~ ln ln t.
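The growth rule can be sketched in a short simulation (an illustrative implementation; the parameter values and the single-edge seed network below are arbitrary choices, not the paper's):

```python
import random
from collections import deque

def grow_nd_network(steps: int, p: float, seed: int = 0) -> dict:
    """Node-duplication growth: at each step a random mother node is
    chosen; the daughter links to the mother and, with probability p,
    to each of the mother's neighbors."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}                 # seed network: a single edge
    for new in range(2, steps + 2):
        mother = rng.randrange(new)
        links = {mother}
        for nbr in adj[mother]:
            if rng.random() < p:
                links.add(nbr)
        adj[new] = links
        for nbr in links:                  # keep adjacency symmetric
            adj[nbr].add(new)
    return adj

def mean_distance(adj: dict) -> float:
    """Mean shortest-path length over ordered node pairs (BFS per node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

net = grow_nd_network(500, p=0.3)
print(len(net), mean_distance(net))        # mean distance grows slowly, like ln t
```

Because each daughter always links to its mother, the network stays connected, so the empirical ⟨L⟩_t can be compared directly with the ln t scaling derived analytically.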
2011-01-01
Background The criteria for deciding who should be admitted first from a waiting list to a forensic secure hospital are not necessarily the same as those for assessing need. Criteria were drafted qualitatively and tested in a prospective 'real life' observational study over a 6-month period. Methods A researcher rated all those presented at the weekly referrals meeting using the DUNDRUM-1 triage security scale and the DUNDRUM-2 triage urgency scale. The key outcome measure was whether or not the individual was admitted. Results Inter-rater reliability and internal consistency for the DUNDRUM-2 were acceptable. The DUNDRUM-1 triage security score and the DUNDRUM-2 triage urgency score correlated r = 0.663. At the time of admission, after a mean of 23.9 (SD 35.9) days on the waiting list, those admitted had higher scores on the DUNDRUM-2 triage urgency scale than those not admitted, with no significant difference between locations (remand or sentenced prisoners, less secure hospitals) at the time of admission. Those admitted also had higher DUNDRUM-1 triage security scores. At baseline the receiver operating characteristic area under the curve for a combined score was the best predictor of admission, while at the time of admission the DUNDRUM-2 triage urgency score had the largest AUC (0.912, 95% CI 0.838 to 0.986). Conclusions The triage urgency items and scale add predictive power to the decision to admit. This is particularly true in maintaining equitability between those referred from different locations. PMID:21722397
Ripple scalings in geothermal facilities, a key to understand the scaling process
NASA Astrophysics Data System (ADS)
Köhl, Bernhard; Grundy, James; Baumann, Thomas
2017-04-01
Scalings are a widespread problem among geothermal plants which exploit the Malm aquifer in the Bavarian Molasse Zone. They affect the technical and economic efficiency of geothermal plants. The majority of the scalings observed at geothermal facilities exploiting the Malm aquifer in the Bavarian Molasse Basin are carbonates. They are formed by a disruption of the lime-carbonic acid equilibrium during production, caused by degassing of CO2. These scalings are found in the production pipes, at the pumps and at filters, and can be well described using existing hydrogeochemical models. This study proposes a second mechanism for the formation of scalings in ground-level facilities. We investigated scalings which had accumulated at the inlet to the heat exchanger. Interestingly, the scalings were recovered after the ground-level facilities had been cleaned. The scalings showed distinct ripple structures, likely a result of solid particle deposition. From the ripple features, the flow conditions during their formation were calculated based on empirical equations (Soulsby et al., 2012). The calculations suggest that the deposits were formed during maintenance works. Thin section images of the sediments indicate a two-step process: deposition of sediment grains, followed by stabilization with a calcite layer. The latter likely occurred during maintenance. To prevent this type of scaling from blocking the heat exchangers, the maintenance procedure has to be revised. Reference: Soulsby, R. L.; Whitehouse, R. J. S.; Marten, K. V.: Prediction of time-evolving sand ripples in shelf seas. Continental Shelf Research 2012, 38, 47-62.
Multi-chord fiber-coupled interferometer with a long coherence length laser
NASA Astrophysics Data System (ADS)
Merritt, Elizabeth C.; Lynn, Alan G.; Gilmore, Mark A.; Hsu, Scott C.
2012-03-01
This paper describes a 561 nm laser heterodyne interferometer that provides time-resolved measurements of line-integrated plasma electron density within the range of 10^15-10^18 cm^-2. Such plasmas are produced by railguns on the Plasma Liner Experiment, which aims to produce μs-, cm-, and Mbar-scale plasmas through the merging of 30 plasma jets in a spherically convergent geometry. A long coherence length, 320 mW laser allows for a strong, sub-fringe phase-shift signal without the need for closely matched probe and reference path lengths. Thus, only one reference path is required for all eight probe paths, and an individual probe chord can be altered without altering the reference or other probe path lengths. Fiber-optic decoupling of the probe chord optics on the vacuum chamber from the rest of the system allows the probe paths to be easily altered to focus on different spatial regions of the plasma. We demonstrate that sub-fringe resolution capability allows the interferometer to operate down to line-integrated densities of the order of 5 × 10^15 cm^-2.
GPA, GMAT, and Scale: A Method of Quantification of Admissions Criteria.
ERIC Educational Resources Information Center
Sobol, Marion G.
1984-01-01
Multiple regression analysis was used to establish a scale, measuring college student involvement in campus activities, work experience, technical background, references, and goals. This scale was tested to see whether it improved the prediction of success in graduate school. (Author/MLW)
Viable tensor-to-scalar ratio in a symmetric matter bounce
NASA Astrophysics Data System (ADS)
Nath Raveendran, Rathul; Chowdhury, Debika; Sriramkumar, L.
2018-01-01
Matter bounces refer to scenarios wherein the universe contracts at early times as in a matter dominated epoch until the scale factor reaches a minimum, after which it starts expanding. While such scenarios are known to lead to scale invariant spectra of primordial perturbations after the bounce, the challenge has been to construct completely symmetric bounces that lead to a tensor-to-scalar ratio which is small enough to be consistent with the recent cosmological data. In this work, we construct a model involving two scalar fields (a canonical field and a non-canonical ghost field) to drive the symmetric matter bounce and study the evolution of the scalar perturbations in the model. We find that the model can be completely described in terms of a single parameter, viz. the ratio of the scale associated with the bounce to the value of the scale factor at the bounce. We evolve the scalar perturbations numerically across the bounce and evaluate the scalar power spectra after the bounce. We show that, while the scalar and tensor perturbation spectra are scale invariant over scales of cosmological interest, the tensor-to-scalar ratio proves to be much smaller than the current upper bound from the observations of the cosmic microwave background anisotropies by the Planck mission. We also support our numerical analysis with analytical arguments.
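For orientation, a commonly used symmetric matter-bounce parameterization of the scale factor (a generic form from the matter-bounce literature, not necessarily the exact one adopted in this work) is

```latex
a(t) = a_0 \left[\, 1 + \left(\frac{t}{t_0}\right)^{2} \,\right]^{1/3}
```

which reduces to the matter-dominated behaviour $a \propto |t|^{2/3}$ far from the bounce and attains its minimum value $a_0$ at $t = 0$.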
Characterization of clay scales forming in Philippine geothermal wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reyes, A.G.; Cardile, C.M.
1989-01-01
Smectite scales occur in 24 out of the 36 blocked wells located in Tongonan, Palinpinon and Bacon-Manito. These comprise 2-85% of the well scales and form at depths of 33-2620 m, where measured and fluid inclusion temperatures are 40-320 °C. Most, however, occur below the production casing shoe, where temperatures are ≥230 °C, often at depths coinciding with aquifers. The clay scales are compositionally and structurally different from the bentonite used in drilling, which is essentially sodium-rich montmorillonite. The clay deposits are expanding, generally disordered, and combine the characteristics of a montmorillonite, saponite and vermiculite in terms of reaction to cationic exchange treatments, structure and composition. Six types of clay scales are identified, but the predominant one, comprising 60-100% of the clay deposits in a well, is Mg- and Fe-rich and referred to as a vermiculitic species. The crystallinity, degree of disorder, textures, optical characteristics, structure and relative amounts of structural Al, Mg and Fe vary with time, temperature and fluid composition, but not with depth and measured pressure. Despite its variance from bentonite characteristics, one of the dominant suggested mechanisms of clay scale formation uses the drilling mud in the well as a substrate, from which the Mg- and Fe-rich clay evolves.
Timmermann, Carsten
2013-09-01
To use the history of the Karnofsky Performance Scale as a case study illustrating the emergence of interest in the measurement and standardisation of quality of life; to understand the origins of current-day practices. Articles referring to the Karnofsky scale and quality of life measurements published from the 1940s to the 1990s were identified by searching databases and screening journals, and analysed using close-reading techniques. Secondary literature was consulted to understand the context in which articles were written. The Karnofsky scale was devised for a different purpose than measuring quality of life: as a standardisation device that helped quantify effects of chemotherapeutic agents less easily measurable than survival time. Interest in measuring quality of life only emerged around 1970. When quality of life measurements were increasingly widely discussed in the medical press from the late 1970s onwards, a consensus emerged that the Karnofsky scale was not a very good tool. More sophisticated approaches were developed, but Karnofsky continued to be used. I argue that the scale provided a quick and simple, approximate assessment of the 'soft' effects of treatment by physicians, overlapping but not identical with quality of life.
Timmermann, Carsten
2013-01-01
Objectives: To use the history of the Karnofsky Performance Scale as a case study illustrating the emergence of interest in the measurement and standardisation of quality of life; to understand the origins of current-day practices. Methods: Articles referring to the Karnofsky scale and quality of life measurements published from the 1940s to the 1990s were identified by searching databases and screening journals, and analysed using close-reading techniques. Secondary literature was consulted to understand the context in which articles were written. Results: The Karnofsky scale was devised for a different purpose than measuring quality of life: as a standardisation device that helped quantify effects of chemotherapeutic agents less easily measurable than survival time. Interest in measuring quality of life only emerged around 1970. Discussion: When quality of life measurements were increasingly widely discussed in the medical press from the late 1970s onwards, a consensus emerged that the Karnofsky scale was not a very good tool. More sophisticated approaches were developed, but Karnofsky continued to be used. I argue that the scale provided a quick and simple, approximate assessment of the ‘soft’ effects of treatment by physicians, overlapping but not identical with quality of life. PMID:23239756
Soydaş, Emine; Bozkaya, Uğur
2013-03-12
An assessment of the OMP3 method and its spin-component and spin-scaled variants for thermochemistry and kinetics is presented. For reaction energies of closed-shell systems, the CCSD, SCS-MP3, and SCS-OMP3 methods show better performances than other considered methods, and no significant improvement is observed due to orbital optimization. For barrier heights, OMP3 and SCS-OMP3 provide the lowest mean absolute deviations. The MP3 method yields considerably higher errors, and the spin-scaling approaches do not help to improve upon MP3, but worsen it. For radical stabilization energies, the CCSD, OMP3, and SCS-OMP3 methods exhibit noticeably better performances than MP3 and its variants. Our results demonstrate that if the reference wave function suffers from spin contamination, then the MP3 methods fail dramatically. On the other hand, the OMP3 method and its variants can tolerate spin contamination in the reference wave function. For an overall evaluation, we conclude that OMP3 is quite helpful, especially in electronically challenging systems, such as free radicals or transition states, where spin contamination dramatically deteriorates the quality of the canonical MP3 and SCS-MP3 methods. Both the OMP3 and CCSD methods scale as n^6, where n is the number of basis functions. However, the OMP3 method generally converges in far fewer iterations than CCSD. In practice, OMP3 is several times faster than CCSD in energy computations. Further, the stationary properties of OMP3 make it much more favorable than CCSD in the evaluation of analytic derivatives. For OMP3, the analytic gradient computations are much less expensive than for CCSD. For frequency computations, both methods require the evaluation of the perturbed amplitudes and orbitals. However, in the OMP3 case there are still significant computational time savings due to simplifications in the analytic Hessian expression owing to the stationary property of OMP3.
Hence, the OMP3 method emerges as a very useful tool for computational quantum chemistry.
NASA Technical Reports Server (NTRS)
Lee, Sam; Addy, Harold E. Jr.; Broeren, Andy P.; Orchard, David M.
2017-01-01
A test was conducted at the NASA Icing Research Tunnel (IRT) to evaluate altitude scaling methods for thermal ice protection systems. Two new scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with the previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel (AIWT), where the three methods of scaling were also tested and compared along with reference (altitude) icing conditions. In those tests, the Weber number-based scaling methods yielded results much closer to those observed at the reference icing conditions than the Reynolds number-based scaling. The test in the NASA IRT used a much larger, asymmetric airfoil with an ice protection system that more closely resembled designs used in commercial aircraft. Following the trends observed during the AIWT tests, the Weber number-based scaling methods resulted in smaller runback ice than the Reynolds number-based scaling, and the ice formed farther upstream. The results show that the new Weber number-based scaling methods, particularly the Weber number with water loading scaling, continue to show promise for ice protection system development and evaluation in atmospheric icing tunnels.
Farrand, Sarah; Evans, Andrew H; Mangelsdorf, Simone; Loi, Samantha M; Mocellin, Ramon; Borham, Adam; Bevilacqua, JoAnne; Blair-West, Scott; Walterfang, Mark A; Bittar, Richard G; Velakoulis, Dennis
2017-09-01
Deep brain stimulation can be of benefit in carefully selected patients with severe intractable obsessive-compulsive disorder. The aim of this paper is to describe the outcomes of the first seven deep brain stimulation procedures for obsessive-compulsive disorder undertaken at the Neuropsychiatry Unit, Royal Melbourne Hospital. The primary objective was to assess the response to deep brain stimulation treatment utilising the Yale-Brown Obsessive Compulsive Scale as a measure of symptom severity. Secondary objectives include assessment of depression and anxiety, as well as socio-occupational functioning. Patients with severe obsessive-compulsive disorder were referred by their treating psychiatrist for assessment of their suitability for deep brain stimulation. Following successful application to the Psychosurgery Review Board, patients proceeded to have deep brain stimulation electrodes implanted in either bilateral nucleus accumbens or bed nucleus of stria terminalis. Clinical assessment and symptom rating scales were undertaken pre- and post-operatively at 6- to 8-week intervals. Rating scales used included the Yale-Brown Obsessive Compulsive Scale, Obsessive Compulsive Inventory, Depression Anxiety Stress Scale and Social and Occupational Functioning Assessment Scale. Seven patients referred from four states across Australia underwent deep brain stimulation surgery and were followed for a mean of 31 months (range, 8-54 months). The sample included four females and three males, with a mean age of 46 years (range, 37-59 years) and mean duration of obsessive-compulsive disorder of 25 years (range, 15-38 years) at the time of surgery. The time from first assessment to surgery was on average 18 months. All patients showed improvement on symptom severity rating scales. Three patients showed a full response, defined as greater than 35% improvement in Yale-Brown Obsessive Compulsive Scale score, with the remaining showing responses between 7% and 20%. 
Deep brain stimulation was an effective treatment for obsessive-compulsive disorder in these highly selected patients. The extent of the response to deep brain stimulation varied between patients, as well as during the course of treatment for each patient. The results of this series are comparable with the literature, as well as having similar efficacy to ablative psychosurgery techniques such as capsulotomy and cingulotomy. Deep brain stimulation provides advantages over lesional psychosurgery but is more expensive and requires significant multidisciplinary input at all stages, pre- and post-operatively, ideally within a specialised tertiary clinical and/or academic centre. Ongoing research is required to better understand the neurobiological basis for obsessive-compulsive disorder and how this can be manipulated with deep brain stimulation to further improve the efficacy of this emerging treatment.
A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling
NASA Technical Reports Server (NTRS)
Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne
2003-01-01
Ideal cloud-resolving models accumulate little error. When their domain is large enough to accommodate synoptic large-scale circulations, they can be used to simulate the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors of thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Put another way, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in both entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.
Exploring the brain on multiple scales with correlative two-photon and light sheet microscopy
NASA Astrophysics Data System (ADS)
Silvestri, Ludovico; Allegra Mascaro, Anna Letizia; Costantini, Irene; Sacconi, Leonardo; Pavone, Francesco S.
2014-02-01
One of the unique features of the brain is that its activity cannot be framed in a single spatio-temporal scale, but rather spans many orders of magnitude both in space and time. A single imaging technique can reveal only a small part of this complex machinery. To obtain a more comprehensive view of brain functionality, complementary approaches should be combined into a correlative framework. Here, we describe a method to integrate data from in vivo two-photon fluorescence imaging and ex vivo light sheet microscopy, taking advantage of blood vessels as a reference chart. We show how the apical dendritic arbor of a single cortical pyramidal neuron imaged in living thy1-GFP-M mice can be found in the large-scale brain reconstruction obtained with light sheet microscopy. Starting from the apical portion, the whole pyramidal neuron can then be segmented. The correlative approach presented here allows contextualizing, within a three-dimensional anatomic framework, the neurons whose dynamics have been observed with high detail in vivo.
NASA Technical Reports Server (NTRS)
Tang, Ling; Hossain, Faisal; Huffman, George J.
2010-01-01
Hydrologists and other users need to know the uncertainty of the satellite rainfall data sets across the range of time/space scales over the whole domain of the data set. Here, 'uncertainty' refers to the general concept of the 'deviation' of an estimate from the reference (or ground truth), where the deviation may be defined in multiple ways. This uncertainty information can provide insight to the user on the realistic limits of utility, such as hydrologic predictability, that can be achieved with these satellite rainfall data sets. However, satellite rainfall uncertainty estimation requires ground validation (GV) precipitation data. On the other hand, satellite data will be most useful over regions that lack GV data, for example developing countries. This paper addresses the open issues for developing an appropriate uncertainty transfer scheme that can routinely estimate various uncertainty metrics across the globe by leveraging a combination of spatially-dense GV data and temporally sparse surrogate (or proxy) GV data, such as the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar and the Global Precipitation Measurement (GPM) mission Dual-Frequency Precipitation Radar. The TRMM Multi-satellite Precipitation Analysis (TMPA) products over the US spanning a record of 6 years are used as a representative example of satellite rainfall. It is shown that there exists a quantifiable spatial structure in the uncertainty of satellite data for spatial interpolation. Probabilistic analysis of sampling offered by the existing constellation of passive microwave sensors indicates that transfer of uncertainty for hydrologic applications may be effective at daily time scales or higher during the GPM era. Finally, a commonly used spatial interpolation technique (kriging), that leverages the spatial correlation of estimation uncertainty, is assessed at climatologic, seasonal, monthly and weekly timescales.
It is found that the effectiveness of kriging is sensitive to the type of uncertainty metric, the time scale of transfer and the density of GV data within the transfer domain. Transfer accuracy is lowest at weekly timescales, with the error doubling from monthly to weekly. However, at very low GV data density (<20% of the domain), the transfer accuracy is too low to show any distinction as a function of the timescale of transfer.
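As a concrete illustration of the kriging step assessed in this abstract, the following is a minimal ordinary-kriging sketch in Python/NumPy. The exponential variogram and its parameters are assumptions for illustration, not the configuration used in the study:

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_new, range_, sill, nugget=0.0):
    """Minimal ordinary kriging with an exponential variogram
    gamma(h) = nugget + sill * (1 - exp(-h / range_)).
    The kriging matrix is built once; the system is solved per target point."""
    def gamma(h):
        return nugget + sill * (1.0 - np.exp(-h / range_))

    n = len(xy_obs)
    # Pairwise distances between observation points
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # Kriging system with a Lagrange-multiplier row/column enforcing
    # that the weights sum to one (unbiasedness)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = gamma(d)
    K[n, n] = 0.0

    z_new = np.empty(len(xy_new))
    for i, p in enumerate(xy_new):
        h = np.linalg.norm(xy_obs - p, axis=1)
        rhs = np.append(gamma(h), 1.0)
        w = np.linalg.solve(K, rhs)
        z_new[i] = w[:n] @ z_obs
    return z_new

# Two observations on a line; predict back at an observation and at the midpoint
xy_obs = np.array([[0.0, 0.0], [1.0, 0.0]])
z_obs = np.array([1.0, 3.0])
z_new = ordinary_kriging(xy_obs, z_obs,
                         np.array([[0.0, 0.0], [0.5, 0.0]]),
                         range_=2.0, sill=1.0)
```

With a zero nugget the interpolator is exact at observation points, and the midpoint estimate is the average of the two values by symmetry.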
NASA Astrophysics Data System (ADS)
Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken
2016-07-01
Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
Preparations for the IGS realization of ITRF2014
NASA Astrophysics Data System (ADS)
Rebischung, Paul; Schmid, Ralf
2016-04-01
The International GNSS Service (IGS) currently prepares its own realization, called IGS14, of the latest release of the International Terrestrial Reference Frame (ITRF2014). This preparation involves: - a selection of the most suitable reference frame (RF) stations from the complete set of GNSS stations in ITRF2014; - the design of a well-distributed core network of RF stations for the purpose of aligning global GNSS solutions; - a re-evaluation of the GPS and GLONASS satellite antenna phase center offsets (PCOs), based on the SINEX files provided by the IGS Analysis Centers (ACs) in the frame of the second IGS reprocessing campaign repro2. This presentation will first cover the criteria used for the selection of the IGS14 and IGS14 core RF stations as well as preliminary station selection results. We will then use the preliminary IGS14 RF to re-align the daily IGS combined repro2 SINEX solutions and study the impact of the RF change on GNSS-derived geodetic parameter time series. In a second part, we will focus on the re-evaluation of the GNSS satellite antenna PCOs. A re-evaluation of at least their radial (z) components is indeed required, despite the negligible scale difference between ITRF2008 and ITRF2014, because of modeling changes recently introduced within the IGS which affect the scale of GNSS terrestrial frames (Earth radiation pressure, antenna thrust). Moreover, the 13 GPS and GLONASS satellites launched since September 2012 are currently assigned preliminary block-specific mean PCO values which need to be updated. From the daily AC repro2 SINEX files, we will therefore derive time series of satellite z-PCO estimates and analyze the resulting time series. Since several ACs provided all three components of the satellite PCOs in their SINEX files, we will additionally derive similar x- and y-PCO time series and discuss the relevance of their potential re-evaluation.
AnClim and ProClimDB software for data quality control and homogenization of time series
NASA Astrophysics Data System (ADS)
Stepanek, Petr
2015-04-01
During the last decade, a software package consisting of AnClim, ProClimDB and LoadData for processing (mainly climatological) data has been created. This software offers a complete solution for processing climatological time series, from loading the data from a central database (e.g. Oracle, software LoadData), through data quality control and homogenization, to time series analysis, extreme value evaluation and RCM output verification and correction (ProClimDB and AnClim software). The detection of inhomogeneities is carried out on a monthly scale through the application of AnClim, or newly by R functions called from ProClimDB, while quality control, the preparation of reference series and the correction of detected breaks are carried out by the ProClimDB software. The software combines many statistical tests, types of reference series and time scales (monthly, seasonal and annual, daily and sub-daily). These can be used to create an "ensemble" of solutions, which may be more reliable than any single method. AnClim is suitable for educational purposes, e.g. for students getting acquainted with methods used in climatology. Built-in graphical tools and comparison of various statistical tests help in better understanding a given method. ProClimDB is, by contrast, a tool aimed at processing large climatological datasets. Recently, functions from R may be used within the software, making it more efficient in data processing and capable of easy inclusion of new methods (when available under R). An example of usage is easy comparison of methods for the correction of inhomogeneities in daily data (HOM of Paul Della-Marta, the SPLIDHOM method of Olivier Mestre, DAP - own method, QM of Xiaolan Wang and others). The software is available, together with further information, at www.climahom.eu. Acknowledgement: this work was partially funded by the project "Building up a multidisciplinary scientific team focused on drought" No. CZ.1.07/2.3.00/20.0248.
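One of the classic inhomogeneity tests bundled in packages of this kind is Alexandersson's Standard Normal Homogeneity Test (SNHT). The following is a minimal single-break sketch of the test statistic in Python; it is an illustration of the method, not code from AnClim or ProClimDB:

```python
import statistics

def snht(series):
    """Single-break SNHT statistic. For standardised values z_i,
    T(k) = k * zbar1**2 + (n - k) * zbar2**2,
    where zbar1 and zbar2 are the means of z before and after a
    candidate break at position k. Returns (T_max, k_at_max);
    a large T_max suggests a shift in the mean at position k."""
    n = len(series)
    mu = statistics.fmean(series)
    sd = statistics.pstdev(series)
    z = [(x - mu) / sd for x in series]
    best_t, best_k = 0.0, 0
    for k in range(1, n):
        zbar1 = sum(z[:k]) / k
        zbar2 = sum(z[k:]) / (n - k)
        t = k * zbar1 * zbar1 + (n - k) * zbar2 * zbar2
        if t > best_t:
            best_t, best_k = t, k
    return best_t, best_k
```

In practice the series passed in would be a difference (or ratio) series between the candidate station and a reference series, and the statistic would be compared against tabulated critical values.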
NASA Technical Reports Server (NTRS)
Cezairliyan, Ared
1993-01-01
Rapid (subsecond) heating techniques developed at the National Institute of Standards and Technology for the measurement of selected thermophysical and related properties of metals and alloys at high temperatures (above 1000 °C) are described. The techniques are based on rapid resistive self-heating of the specimen from room temperature to the desired high temperature in short times and measuring the relevant experimental quantities, such as electrical current through the specimen, voltage across the specimen, specimen temperature, length, etc., with appropriate time resolution. The first technique, referred to as the millisecond-resolution technique, is for measurements on solid metals and alloys in the temperature range from 1000 °C to the melting temperature of the specimen. It utilizes a heavy battery bank for the energy source, and the total heating time of the specimen is typically in the range of 100-1000 ms. Data are recorded digitally every 0.5 ms with a full-scale resolution of about one part in 8000. The properties that can be measured with this system are as follows: specific heat, enthalpy, thermal expansion, electrical resistivity, normal spectral emissivity, hemispherical total emissivity, temperature and energy of solid-solid phase transformations, and melting temperature (solidus). The second technique, referred to as the microsecond-resolution technique, is for measurements on liquid metals and alloys in the temperature range 1200 to 6000 °C. It utilizes a capacitor bank for the energy source, and the total heating time of the specimen is typically in the range 50-500 μs. Data are recorded digitally every 0.5 μs with a full-scale resolution of about one part in 4000. The properties that can be measured with this system are: melting temperature (solidus and liquidus), heat of fusion, specific heat, enthalpy, and electrical resistivity.
The third technique is for measurements of the surface tension of liquid metals and alloys at their melting temperature. It utilizes a modified millisecond-resolution heating system designed for use in a microgravity environment.
GUIDANCE OF THE FIELD DEMONSTRATION OF REMEDIATION TECHNOLOGIES
This paper will focus on the demonstration of hazardous waste cleanup technologies in the field. The technologies will be at the pilot- or full-scale, and further referred to as field-scale. The main objectives of demonstration at the field-scale are development of reliable perfo...
Evaluation of Scaling Methods for Rotorcraft Icing
NASA Technical Reports Server (NTRS)
Tsao, Jen-Ching; Kreeger, Richard E.
2010-01-01
This paper reports results of an experimental study in the NASA Glenn Icing Research Tunnel (IRT) to evaluate how well the current recommended scaling methods developed for fixed-wing unprotected-surface icing applications might apply to representative rotor blades at finite angle of attack. Unlike the fixed-wing case, there is no single scaling method that has been systematically developed and evaluated for rotorcraft icing applications. In the present study, scaling was based on the modified Ruff method with scale velocity determined by maintaining constant Weber number. Models were unswept NACA 0012 wing sections. The reference model had a chord of 91.4 cm and the scale model had a chord of 35.6 cm. Reference tests were conducted with velocities of 76 and 100 kt (39 and 52 m/s), droplet MVDs of 150 and 195 μm, and with stagnation-point freezing fractions of 0.3 and 0.5 at angles of attack of 0° and 5°. It was shown that good ice shape scaling was achieved for NACA 0012 airfoils with angle of attack up to 5°.
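The constant-Weber-number velocity scaling mentioned above can be sketched as follows. This is an illustration of the Weber-number relation only, not the full modified Ruff method, and it assumes the fluid density and surface tension are held fixed between the reference and scale tests:

```python
import math

def scale_velocity_constant_weber(v_ref, length_ref, length_scale):
    """Velocity that preserves the Weber number We = rho * V**2 * L / sigma
    when the characteristic length changes from length_ref to length_scale.
    With rho and sigma held fixed, We_ref = We_scale implies
    V_scale = V_ref * sqrt(L_ref / L_scale)."""
    return v_ref * math.sqrt(length_ref / length_scale)

# Chords from the study: reference 91.4 cm, scale 35.6 cm, reference speed 76 kt
v_scale = scale_velocity_constant_weber(76.0, 91.4, 35.6)  # ~122 kt
```

Because the scale model is smaller, matching the Weber number requires a higher scale velocity than the reference velocity.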
Time Reference of Verbs in Biblical Hebrew Poetry
ERIC Educational Resources Information Center
Zwyghuizen, Jill E.
2012-01-01
This dissertation suggests that the time reference of verbs in Hebrew poetry can be determined from a combination of form (aspect) and "Aktionsart" (stative vs. fientive). Specifically, perfective forms of stative verbs have past or present time reference. Perfective forms of fientive verbs have past time reference. Imperfective forms of…
Computer-Based Indexing on a Small Scale: Bibliography.
ERIC Educational Resources Information Center
Douglas, Kimberly; Wismer, Don
The 131 references on small scale computer-based indexing cited in this bibliography are subdivided as follows: general, general (computer), index structure, microforms, specific systems, KWIC KWAC KWOC, and thesauri. (RAA)
Laboratory meter-scale seismic monitoring of varying water levels in granular media
NASA Astrophysics Data System (ADS)
Pasquet, S.; Bodet, L.; Bergamo, P.; Guérin, R.; Martin, R.; Mourgues, R.; Tournat, V.
2016-12-01
Laboratory physical modelling and non-contacting ultrasonic techniques are frequently proposed to tackle theoretical and methodological issues related to geophysical prospecting. Following recent developments illustrating the ability of seismic methods to image spatial and/or temporal variations of water content in the vadose zone, we developed laboratory experiments aimed at testing the sensitivity of seismic measurements (i.e., pressure-wave travel times and surface-wave phase velocities) to water saturation variations. Ultrasonic techniques were used to simulate typical seismic acquisitions on small-scale controlled granular media presenting different water levels. Travel times and phase velocity measurements obtained at the dry state were validated with both theoretical models and numerical simulations and serve as reference datasets. The increasing water level clearly affects the recorded wave field in both its phase and amplitude, but the collected data cannot yet be inverted in the absence of a comprehensive theoretical model for such partially saturated and unconsolidated granular media. The differences in travel time and phase velocity observed between the dry and wet models show patterns that are interestingly coincident with the observed water level and depth of the capillary fringe, thus offering attractive perspectives for studying soil water content variations in the field.
On-chip dual-comb source for spectroscopy
Dutt, Avik; Joshi, Chaitanya; Ji, Xingchen; Cardenas, Jaime; Okawachi, Yoshitomo; Luke, Kevin; Gaeta, Alexander L.; Lipson, Michal
2018-01-01
Dual-comb spectroscopy is a powerful technique for real-time, broadband optical sampling of molecular spectra, which requires no moving components. Recent developments with microresonator-based platforms have enabled frequency combs at the chip scale. However, the need to precisely match the resonance wavelengths of distinct high quality-factor microcavities has hindered the development of on-chip dual combs. We report the simultaneous generation of two microresonator combs on the same chip from a single laser, drastically reducing experimental complexity. We demonstrate broadband optical spectra spanning 51 THz and low-noise operation of both combs by deterministically tuning into soliton mode-locked states using integrated microheaters, resulting in narrow (<10 kHz) microwave beat notes. We further use one comb as a reference to probe the formation dynamics of the other comb, thus introducing a technique to investigate comb evolution without auxiliary lasers or microwave oscillators. We demonstrate high signal-to-noise ratio absorption spectroscopy spanning 170 nm using the dual-comb source over a 20-μs acquisition time. Our device paves the way for compact and robust spectrometers at nanosecond time scales enabled by large beat-note spacings (>1 GHz). PMID:29511733
Yamagata, Tetsuo; Zanelli, Ugo; Gallemann, Dieter; Perrin, Dominique; Dolgos, Hugues; Petersson, Carl
2017-09-01
1. We compared the direct scaling, regression model equation and the so-called "Poulin et al." methods to scale clearance (CL) from in vitro intrinsic clearance (CLint) measured in human hepatocytes, using two sets of compounds. One reference set comprised 20 compounds with known elimination pathways; one external evaluation set was based on 17 compounds in development at Merck (MS). 2. A 90% prospective confidence interval was calculated using the reference set. This interval was found relevant for the regression equation method. The three outliers identified were justified on the basis of their elimination mechanism. 3. The direct scaling method showed a systematic underestimation of clearance in both the reference and evaluation sets. The "Poulin et al." and regression equation methods showed no obvious bias in either set. 4. The regression model equation was slightly superior to the "Poulin et al." method in the reference set, with a better absolute average fold error (AAFE) of 1.3 compared with 1.6. A larger difference was observed in the evaluation set, where the regression method and "Poulin et al." resulted in an AAFE of 1.7 and 2.6, respectively (removing the three compounds with known issues mentioned above). A similar pattern was observed for the correlation coefficient. Based on these data we suggest the regression equation method, combined with a prospective confidence interval, as the first choice for the extrapolation of human in vivo hepatic metabolic clearance from in vitro systems.
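The AAFE values quoted above are conventionally computed as the antilog of the mean absolute log10 fold error between predicted and observed clearance. A minimal sketch (function name and the example CL values are ours, not from the paper):

```python
import math

def aafe(predicted, observed):
    """Absolute average fold error: 10**(mean |log10(pred/obs)|).
    A value of 1 means perfect prediction; 2 means a two-fold
    average deviation in either direction."""
    logs = [abs(math.log10(p / o)) for p, o in zip(predicted, observed)]
    return 10 ** (sum(logs) / len(logs))

# Hypothetical clearance values (mL/min/kg):
print(aafe([10.0, 5.0], [10.0, 5.0]))  # 1.0 (perfect predictions)
print(aafe([20.0, 2.5], [10.0, 5.0]))  # 2.0 (two-fold errors both ways)
```

Because the metric works on absolute log errors, over- and under-predictions do not cancel, which is why it is preferred over a simple mean fold error for comparing scaling methods.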
Evaluation of a 3-Year Time Series of Daily Actual Evapotranspiration over the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Faivre, R.; Menenti, M.
2016-08-01
The estimation of turbulent fluxes is of primary interest for hydrological and climatological studies. The use of optical remote sensing data in the VNIR and TIR domains has already proved suitable for the parameterization of the surface energy balance, leading to many algorithms. Their use over arid, high-elevation areas requires a detailed characterization of key surface physical properties and of the atmospheric state at a reference level. Satellite products acquired over the Tibetan Plateau and simulation results delivered in the frame of the CEOP-AEGIS project provide incentives for a regular analysis at medium scale. This work aims at evaluating the use of Feng-Yun 2 series and MODIS data (VNIR and TIR) for daily mapping of land surface evapotranspiration (ET) based on the SEBI algorithm, over the whole Tibetan Plateau (Faivre, 2014). An evaluation is performed over reference sites set up throughout the Tibetan Plateau.
Pressure Ratio to Thermal Environments
NASA Technical Reports Server (NTRS)
Lopez, Pedro; Wang, Winston
2012-01-01
A pressure ratio to thermal environments (PRatTlE.pl) program is a Perl-language code that estimates heating at requested body point locations by scaling the heating at a reference location by a pressure ratio factor. The pressure ratio factor is the ratio of the local pressures at the requested and reference points, taken from CFD (computational fluid dynamics) solutions. This innovation provides pressure-ratio-based thermal environments in an automated and traceable manner. Previously, the pressure ratio methodology was implemented via a Microsoft Excel spreadsheet and macro scripts. PRatTlE is able to calculate heating environments for 150 body points in less than two minutes. It is command-line-driven and has been successfully executed on both the HP and Linux platforms, and it supports multiple concurrent runs. PRatTlE contains error trapping and input file format verification, which allows clear visibility into the input data structure and intermediate calculations.
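The core scaling PRatTlE automates can be sketched in a few lines. This is a Python stand-in for the Perl code, with illustrative names, values, and units (the actual input format and unit conventions of PRatTlE are not described in the abstract):

```python
def scaled_heating(q_ref, p_ref, p_bp):
    """Estimate heating at a requested body point by scaling the
    heating at a reference location by the ratio of the local
    CFD pressures at the requested and reference points."""
    return q_ref * (p_bp / p_ref)

# Hypothetical inputs: reference heating 12.0 W/cm^2,
# CFD pressures 40.0 kPa (reference) and 30.0 kPa (body point).
print(scaled_heating(12.0, 40.0, 30.0))  # 9.0 W/cm^2
```

In a batch tool the same function would simply be applied over the full list of body points, which is why scaling 150 points takes well under the quoted two minutes.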
Position and morphology of the compact non-thermal radio source at the Galactic Center
NASA Technical Reports Server (NTRS)
Marcaide, J. M.; Alberdi, A.; Bartel, N.; Clark, T. A.; Corey, B. E.; Elosegui, P.; Gorenstein, M. V.; Guirado, J. C.; Kardashev, N.; Popov, M.
1992-01-01
We have determined with VLBI the position of the compact nonthermal radio source at the Galactic Center, commonly referred to as SgrA*, in the J2000.0 reference frame of extragalactic radio sources. We have also determined the size of SgrA* at 1.3, 3.6, and 13 cm wavelengths and found that the apparent size of the source increases proportionally to the observing wavelength squared, as expected from source size broadening by interstellar scattering and as reported previously by other authors. We have also established an upper limit of about 8 mJy at 3.6 cm wavelength for any ultracompact component. The actual size of the source is less than 15 AU. Fourier analysis of our very sensitive 3.6 cm observations of this source shows no significant variations of correlated flux density on time scales from 12 to 700 s.
Langner, Mildred Crowe
1967-01-01
How have the three services—indexing, reference (including history of medicine), and interlibrary loan—been provided throughout the years by NLM, and how have they been used? At the present time of great growth and development, the use of the computer has influenced these services and will continue to figure prominently in plans for the future. NLM's services often have not been well or correctly used by its public, even by librarians. Some of its services, however, need to be provided in more depth and on a higher scale, and they should be publicized more widely. History shows that NLM has been faithful to its basic charge and has gone far beyond it in its service to the medical, educational, and library communities. Medical librarians are most fortunate that such a great national resource exists to provide materials and services to fulfill the needs of their libraries. PMID:6016365
Using endmembers in AVIRIS images to estimate changes in vegetative biomass
NASA Technical Reports Server (NTRS)
Smith, Milton O.; Adams, John B.; Ustin, Susan L.; Roberts, Dar A.
1992-01-01
Field techniques for estimating vegetative biomass are labor intensive and rarely are used to monitor changes in biomass over time. Remote sensing offers an attractive alternative to field measurements; however, because there is no simple correspondence between encoded radiance in multispectral images and biomass, it is not possible to measure vegetative biomass directly from AVIRIS images. We investigate ways to estimate vegetative biomass by identifying community types and then applying biomass scalars derived from field measurements. Field measurements of community-scale vegetative biomass can be made, at least for local areas, but it is not always possible to identify vegetation communities unambiguously using remote measurements and conventional image-processing techniques. Furthermore, even when communities are well characterized in a single image, it typically is difficult to assess the extent and nature of changes in a time series of images, owing to uncertainties introduced by variations in illumination geometry, atmospheric attenuation, and instrumental responses. Our objective is to develop an improved method, based on spectral mixture analysis, to characterize and identify vegetative communities that can be applied to multi-temporal AVIRIS and other types of images. In previous studies, multi-temporal data sets (AVIRIS and TM) of Owens Valley, CA were analyzed and vegetation communities were defined in terms of fractions of reference (laboratory and field) endmember spectra. An advantage of converting an image to fractions of reference endmembers is that, although the fractions in a given pixel may vary from image to image in a time series, the endmembers themselves typically are constant, thus providing a consistent frame of reference.
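Converting a pixel to fractions of reference endmembers is, under the linear mixing model, a least-squares problem. A minimal sketch with made-up three-band spectra (real SMA typically uses many more bands and often adds sum-to-one and nonnegativity constraints, which are omitted here):

```python
import numpy as np

def endmember_fractions(pixel, endmembers):
    """Unconstrained least-squares estimate of endmember fractions
    for one pixel under the linear mixing model pixel ~ E @ f.
    endmembers: (n_bands, n_endmembers) matrix of reference spectra."""
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return f

# Two hypothetical 3-band reference spectra (e.g. vegetation, soil):
E = np.array([[0.1, 0.4],
              [0.5, 0.3],
              [0.8, 0.2]])
mixed = 0.7 * E[:, 0] + 0.3 * E[:, 1]   # synthetic 70/30 mixture
print(endmember_fractions(mixed, E))     # recovers ~[0.7, 0.3]
```

Because the endmember matrix stays fixed across a time series, only the recovered fractions change from image to image, which is the consistency property the abstract highlights.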
NASA Astrophysics Data System (ADS)
Plastino, A.; Rocca, M. C.
2018-05-01
We generalize several well known quantum equations to a Tsallis' q-scenario, and provide a quantum version of some classical fields associated with them in the recent literature. We refer to the q-Schrödinger, q-Klein-Gordon, q-Dirac, and q-Proca equations advanced in, respectively, Phys. Rev. Lett. 106, 140601 (2011), EPL 118, 61004 (2017) and references therein. We also introduce here equations corresponding to q-Yang-Mills fields, both in the Abelian and non-Abelian instances. We show how to define the q-quantum field theories corresponding to the above equations, introduce the pertinent actions, and obtain equations of motion via the minimum action principle. These q-fields are meaningful at very high energies (TeV scale) for q = 1.15, high energies (GeV scale) for q = 1.001, and low energies (MeV scale) for q = 1.000001 [Nucl. Phys. A 955 (2016) 16 and references therein] (see the ALICE experiment at the LHC). Surprisingly enough, these q-fields are simultaneously q-exponential functions of the usual linear fields' logarithms.
Final report on RMO Vickers key comparison COOMET M.H-K1
NASA Astrophysics Data System (ADS)
Aslanyan, E.; Menelao, F.; Herrmann, K.; Aslanyan, A.; Pivovarov, V.; Galat, E.; Dovzhenko, Y.; Zhamanbalin, M.
2013-01-01
This report describes a COOMET key comparison on Vickers hardness scales involving five National Metrology Institutes: PTB (Germany), BelGIM (Belarus), NSC IM (Ukraine), KazInMetr (Kazakhstan) and VNIIFTRI (Russia). The pilot laboratory was VNIIFTRI, and PTB acted as the linking institute to key comparisons CCM.H-K1.b and CCM.H-K1.c conducted for the Vickers hardness scales HV1 and HV30, respectively. The comparison was also conducted for the HV5 Vickers hardness scale, since this scale is the most frequently used in practice in Russia and the CIS countries that work according to GOST standards. In the key comparison, two sets of hardness reference blocks for the Vickers hardness scales HV1, HV5 and HV30 were used, each consisting of three hardness reference blocks with hardness levels of 450 HV and 750 HV. The measurement results and uncertainty assessments for the HV1 and HV30 hardness scales, as announced by BelGIM, NSC IM, KazInMetr and VNIIFTRI, are in good agreement with the key comparison reference values of CCM.H-K1.b and CCM.H-K1.c. The comparison results for the HV5 hardness scale are viewed as additional information, since no CCM key comparisons on this scale have been carried out to date. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Fagioli, F; Telesforo, L; Dell'Erba, A; Consolazione, M; Migliorini, V; Patanè, M; Boldrini, T; Graziani, R; Nicoletti, F; Fiori-Nastro, P
2015-07-01
"Depersonalization" (DP) is a common symptom in the general population and psychiatric patients (Michal et al., 2011 [1]). DP is characterized by an alteration in the experience of the self, so that one feels detached from his or her own mental processes or body (or from the world), feeling as being an outside observer of his or her own self, and loosing the experience of unity and identity (American Psychiatric Association, 2013 [2]). We performed an exploratory factor analysis of the Cambridge Depersonalization Scale Italian version (CDS-IV). We enrolled 149 inpatients and outpatients of psychiatric services located in two Italian regions, Lazio and Campania. Patients were aged between 15 and 65 and diagnosed with schizophrenic, depressive or anxiety disorders. Four factors accounted for 97.4% of the variance. Factor 1 (10, 24, 26, 1, 13, 23, 9, 2, 5, and 11), called "Detachment from the Self", captures experiences of detachment from actions and thoughts. Factor 2 (19, 20, 27, 3, 12, 23, 22, and 11), called "Anomalous bodily experiences", refers to unusual bodily experiences. Factor 3 (7, 28, 25, 6, 9, and 2), named "Numbing", describes the dampening of affects. Factor 4 (14, 17, and 16), named "Temporal blunting", refers to the subjective experience of time. We did not find any specific factor that refers to derealization; this suggests that the constructs of depersonalization/derealization (DP/DR) were strongly related to each other. Our results show that the constructs of DP/DR subsume several psychopathological dimensions; moreover, the above mentioned factors were broadly consistent with prior literature. Copyright © 2015. Published by Elsevier Inc.
Trade-offs across space, time, and ecosystem services
Rodriguez, J.P.; Beard, T.D.; Bennett, E.M.; Cumming, Graeme S.; Cork, S.J.; Agard, J.; Dobson, A.P.; Peterson, G.D.
2006-01-01
Ecosystem service (ES) trade-offs arise from management choices made by humans, which can change the type, magnitude, and relative mix of services provided by ecosystems. Trade-offs occur when the provision of one ES is reduced as a consequence of increased use of another ES. In some cases, a trade-off may be an explicit choice; but in others, trade-offs arise without premeditation or even awareness that they are taking place. Trade-offs in ES can be classified along three axes: spatial scale, temporal scale, and reversibility. Spatial scale refers to whether the effects of the trade-off are felt locally or at a distant location. Temporal scale refers to whether the effects take place relatively rapidly or slowly. Reversibility expresses the likelihood that the perturbed ES may return to its original state if the perturbation ceases. Across all four Millennium Ecosystem Assessment scenarios and selected case study examples, trade-off decisions show a preference for provisioning, regulating, or cultural services (in that order). Supporting services are more likely to be "taken for granted." Cultural ES are almost entirely unquantified in scenario modeling; therefore, the calculated model results do not fully capture losses of these services that occur in the scenarios. The quantitative scenario models primarily capture the services that are perceived by society as more important - provisioning and regulating ecosystem services - and thus do not fully capture trade-offs of cultural and supporting services. Successful management policies will be those that incorporate lessons learned from prior decisions into future management actions. Managers should complement their actions with monitoring programs that, in addition to monitoring the short-term provisions of services, also monitor the long-term evolution of slowly changing variables. Policies can then be developed to take into account ES trade-offs at multiple spatial and temporal scales. 
Successful strategies will recognize the inherent complexities of ecosystem management and will work to develop policies that minimize the effects of ES trade-offs. Copyright © 2006 by the author(s).
Towards a purely data-driven view of the global carbon cycle and its spatiotemporal variability
NASA Astrophysics Data System (ADS)
Zscheischler, Jakob; Mahecha, Miguel; Reichstein, Markus; Avitabile, Valerio; Carvalhais, Nuno; Ciais, Philippe; Gans, Fabian; Gruber, Nicolas; Hartmann, Jens; Herold, Martin; Jung, Martin; Landschützer, Peter; Laruelle, Goulven; Lauerwald, Ronny; Papale, Dario; Peylin, Philippe; Regnier, Pierre; Rödenbeck, Christian; Cuesta, Rosa Maria Roman; Valentini, Ricardo
2015-04-01
Constraining carbon (C) fluxes between the Earth's surface and the atmosphere at regional scale via observations is essential for understanding the Earth's carbon budget and predicting future atmospheric C concentrations. Carbon budgets have often been derived based on merging observations, statistical models and process-based models, for example in the Global Carbon Project (GCP). However, it would be helpful to derive global C budgets and fluxes at global scale as independent as possible from model assumptions to obtain an independent reference. Long-term in-situ measurements of land and ocean C stocks and fluxes have enabled the derivation of a new generation of data driven upscaled data products. Here, we combine a wide range of in-situ derived estimates of terrestrial and aquatic C fluxes for one decade. The data were produced and/or collected during the FP7 project GEOCARBON and include surface-atmosphere C fluxes from the terrestrial biosphere, fossil fuels, fires, land use change, rivers, lakes, estuaries and open ocean. By including spatially explicit uncertainties in each dataset we are able to identify regions that are well constrained by observations and areas where more measurements are required. Although the budget cannot be closed at the global scale, we provide, for the first time, global time-varying maps of the most important C fluxes, which are all directly derived from observations. The resulting spatiotemporal patterns of C fluxes and their uncertainties inform us about the needs for intensifying global C observation activities. Likewise, we provide priors for inversion exercises or to identify regions of high (and low) uncertainty of integrated C fluxes. We discuss the reasons for regions of high observational uncertainties, and for biases in the budget. Our data synthesis might also be used as empirical reference for other local and global C budgeting exercises.
The construction of standard gamble utilities.
van Osch, Sylvie M C; Stiggelbout, Anne M
2008-01-01
Health effects for cost-effectiveness analysis are best measured in life years, with quality of life in each life year expressed in terms of utilities. The standard gamble (SG) has been the gold standard for utility measurement. However, the biases of probability weighting, loss aversion, and scale compatibility have an inconclusive effect on SG utilities. We determined their effect on SG utilities using qualitative data to assess the reference point and the focus of attention. While thinking aloud, 45 healthy respondents provided SG utilities for six rheumatoid arthritis health states. Reference points, goals, and focuses of attention were coded. To assess the effect of scale compatibility, correlations were assessed between focus of attention and mean utility. The certain outcome served most frequently as the reference point, and the SG was perceived as a mixed gamble. Goals were mostly mentioned with respect to this outcome. Scale compatibility led to a significant upward bias in utilities; attention lay relatively more on the low outcome, and this was positively correlated with mean utility. SG utilities should be corrected for loss aversion and probability weighting with the mixed correction formula proposed by prospect theory. Scale compatibility will likely still bias SG utilities, calling for research on a correction. Copyright (c) 2007 John Wiley & Sons, Ltd.
Erwin, Susannah O.; Jacobson, Robert B.; Elliott, Caroline M.
2017-01-01
We present a quantitative analysis of habitat availability in a highly regulated lowland river, comparing a restored reach with two reference reaches: an un-restored, channelized reach, and a least-altered reach. We evaluate the effects of channel modifications in terms of distributions of depth and velocity as well as distributions and availability of habitats thought to be supportive of an endangered fish, the pallid sturgeon (Scaphirhynchus albus). It has been hypothesized that hydraulic conditions that support food production and foraging may limit growth and survival of juvenile pallid sturgeon. To evaluate conditions that support these habitats, we constructed two-dimensional hydrodynamic models for the three study reaches, two located in the Lower Missouri River (channelized and restored reaches) and one in the Yellowstone River (least-altered reach). Comparability among the reaches was improved by scaling by bankfull discharge and bankfull channel area. The analysis shows that construction of side-channel chutes and increased floodplain connectivity increase the availability of foraging habitat, resulting in a system that is more similar to the reference reach on the Yellowstone River. The availability of food-producing habitat is low in all reaches at flows less than bankfull, but the two reaches in the Lower Missouri River – channelized and restored – display a threshold-like response as flows overtop channel banks, reflecting the persistent effects of channelization on hydraulics in the main channel. These high lateral gradients result in punctuated ecological events corresponding to flows in excess of bankfull discharge. This threshold effect in the restored reach remains distinct from that of the least-altered reference reach, where hydraulic changes are less abrupt and overbank flows more gradually inundate the adjacent floodplain. 
The habitat curves observed in the reference reach on the Yellowstone River may not be attainable within the channelized system on the Missouri River, but the documented hydraulic patterns can be used to inform ongoing channel modifications. Although scaling to bankfull dimensions and discharges provides a basis for comparing the three reaches, implementation of the reference reach concept was complicated by differences in flow-frequency distributions among sites. In particular, habitat availability in the least-altered Yellowstone River reach is affected by increased frequency of low-flow events (less than 0.5 times bankfull flow) and moderately high-flow events (greater than 1.5 times bankfull flow) compared to downstream reaches on the Lower Missouri River.
NASA Astrophysics Data System (ADS)
Carlson, C. W.; Pluhar, C. J.; Glen, J. M.; Farner, M. J.
2012-12-01
Accommodating ~20-25% of the dextral motion between the Pacific and North American plates, the Walker Lane is an elongate, NW-oriented region of active tectonics positioned between the northwesterly-translating Sierra Nevada microplate and the east-west extension of the Basin and Range. This region of transtension is variably accommodated by regional-scale systems of predominantly strike-slip faulting. At the western edge of the central Walker Lane (ca. 38°-39°N latitude) is a region of crustal-scale blocks bounded by wedge-shaped depositional basins and normal-fault systems, here defined as the west-central Walker Lane (WCWL). Although the WCWL is devoid of obvious strike-slip faulting, tectonic-block vertical-axis rotations there represent unrecognized components of dextral shearing and/or changes of strain accommodation over time. We use paleomagnetic reference directions for Eureka Valley Tuff (EVT) members of the late Miocene Stanislaus Group as spatial and temporal markers to document tectonic-block vertical-axis rotations near Bridgeport, CA. Study-site rotations revealed discrete rotational domains with mean vertical-axis rotations ranging from ~10°-30° and a heterogeneous regional distribution. Additionally, the highest measured magnitudes of vertical-axis rotation (~50°-60° CW) define a 'Region of High Strain' that includes the wedge-shaped Bridgeport Valley (Basin). This study revealed previously unrecognized tectonic rotation of reference-direction sites from prior studies for two (By-Day and Upper) of the three members of the EVT, resulting in under-estimates of regional strain accommodation by those studies. Mean remanent directions and virtual geomagnetic poles utilized in our study yielded a recalculated reference direction for the By-Day member of Dec. = 353.2°, Inc. = 43.7°, α95 = 10.1°, in agreement with new measurements in the stable Sierra Nevada. 
This recalculated direction confirmed the presence of previously unrecognized reference-site rotations and provided an additional reference direction for determining vertical-axis rotation magnitudes. We present a kinematic model based on mean rotation magnitudes of ~30° CW for the Sweetwater Mountains and Bodie Hills that accounts for rotational strain accommodation of dextral shear in the WCWL since the late Miocene. This model considers rotational magnitudes, paleostrain indicators, edge effects, and strain-accommodating structures of rotating crustal blocks to represent changes in regional strain accommodation over time. The results and models presented here elucidate the complicated and evolving nature of the WCWL and further our understanding of variations in strain accommodation across the Walker Lane.
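To first order, the vertical-axis rotation magnitudes discussed above are declination differences between a site's mean remanent direction and the reference direction. A minimal sketch (the function name is ours; formal paleomagnetic practice also propagates inclination-dependent uncertainties, which this omits):

```python
def vertical_axis_rotation(dec_site, dec_ref):
    """Vertical-axis rotation in degrees (positive clockwise),
    as the site-minus-reference declination difference wrapped
    to the interval (-180, 180]."""
    r = (dec_site - dec_ref) % 360.0
    return r - 360.0 if r > 180.0 else r

# With the recalculated By-Day reference declination of 353.2 deg,
# a hypothetical site declination of 23.2 deg implies 30 deg CW rotation:
print(vertical_axis_rotation(23.2, 353.2))   # 30.0
print(vertical_axis_rotation(350.0, 10.0))   # -20.0 (counterclockwise)
```

The wrap-around step matters here precisely because reference declinations near 360° (like 353.2°) would otherwise produce spurious ~330° "rotations".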
Self-synchronization for spread spectrum audio watermarks after time scale modification
NASA Astrophysics Data System (ADS)
Nadeau, Andrew; Sharma, Gaurav
2014-02-01
De-synchronizing operations such as insertion, deletion, and warping pose significant challenges for watermarking. Because these operations are not typical for classical communications, watermarking techniques such as spread spectrum (SS) can perform poorly; conversely, specialized synchronization solutions can be challenging to analyze and optimize. This paper addresses desynchronization for blind spread spectrum watermarks, detected without reference to any unmodified signal, using the robustness properties of short blocks. Synchronization relies on dynamic time warping to search over block alignments for the sequence with maximum correlation to the watermark. This differs from synchronization schemes that must first locate invariant features of the original signal, or estimate and reverse the desynchronization before detection. Without these extra synchronization steps, analysis of the proposed scheme builds on classical SS concepts and characterizes the relationship between the size of the search space (the number of detection alignment tests) and intrinsic robustness (the continuous search-space region covered by each individual detection test). The critical metrics that determine the search space, robustness, and performance are the time-frequency resolution of the watermarking transform and the block-length resolution of the alignment. Simultaneous robustness to (a) MP3 compression, (b) insertion/deletion, and (c) time-scale modification is also demonstrated for a practical audio watermarking scheme developed in the proposed framework.
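The alignment search at the heart of this approach can be illustrated with a brute-force correlation scan over candidate shifts. This is only a simplified stand-in for the paper's dynamic-time-warping search over block alignments, with illustrative signal sizes and embedding strength:

```python
import numpy as np

def best_block_alignment(signal, watermark, max_shift):
    """Scan candidate shifts of a watermark block against the
    received signal and return the shift with maximum normalized
    correlation (blind detection: no original signal needed)."""
    n = len(watermark)
    best_shift, best_corr = 0, -np.inf
    for s in range(max_shift + 1):
        block = signal[s:s + n]
        corr = np.dot(block, watermark) / (
            np.linalg.norm(block) * np.linalg.norm(watermark))
        if corr > best_corr:
            best_shift, best_corr = s, corr
    return best_shift, best_corr

rng = np.random.default_rng(0)
wm = rng.choice([-1.0, 1.0], size=256)   # pseudo-random SS sequence
host = rng.normal(0.0, 1.0, size=1024)   # host audio stand-in
host[37:37 + 256] += 0.5 * wm            # embed with a 37-sample delay
shift, corr = best_block_alignment(host, wm, 128)
print(shift)  # expected to recover the 37-sample desynchronization
```

A DTW-based search generalizes this scan from a single rigid shift to a sequence of per-block alignments, which is what makes it robust to warping rather than just constant delay.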
NASA Astrophysics Data System (ADS)
Nippgen, F.; Ross, M. R. V.; Bernhardt, E. S.; McGlynn, B. L.
2017-12-01
Mountaintop mining (MTM) is an especially destructive form of surface coal mining. It is widespread in Central Appalachia and is practiced around the world. In the process of accessing coal seams up to several hundred meters below the surface, mountaintops and ridges are removed via explosives and heavy machinery, with the resulting overburden pushed into nearby valleys. This broken-up rock and soil material represents a largely unquantified store for incoming precipitation that facilitates enhanced chemical weathering rates and increased dissolved-solids exports to streams. However, assessing the independent impact of MTM can be difficult in the presence of other forms of mining, especially underground mining. Here, we evaluate the effect of MTM on water quantity and quality on annual, seasonal, and event time scales in two sets of paired watersheds in southwestern West Virginia impacted by MTM. On the annual time scale, the mined watersheds sustained baseflow throughout the year, while the first-order watersheds ceased flowing during the latter parts of the growing season. In fractionally mined watersheds that continued to flow, the stream water was generated exclusively from mined portions of the watersheds, leading to elevated total dissolved solids. On the event time scale, we analyzed 50 storm events over a water year for a range of hydrologic response metrics. The mined watersheds exhibited smaller runoff ratios and longer response times during the wet dormant season, but responded similarly to rainfall events during the growing season or even exceeded the runoff magnitude of the reference watersheds. Our research demonstrates a clear difference in hydrologic response between mined and unmined watersheds during the growing season and the dormant season that is detectable at annual, seasonal, and event time scales. For larger spatial scales (up to 2,000 km²), the effect of MTM on water quantity is not as easily detectable. 
At these larger scales, other land uses can mask possible alterations in hydrology, or the percentage of MTM-disturbed area becomes negligible.
On the functional design of the DTU10 MW wind turbine scale model of LIFES50+ project
NASA Astrophysics Data System (ADS)
Bayati, I.; Belloli, M.; Bernini, L.; Fiore, E.; Giberti, H.; Zasso, A.
2016-09-01
This paper illustrates the mechatronic design of the wind tunnel scale model of the DTU 10MW reference wind turbine for the LIFES50+ H2020 European project. This model was designed with the final goal of controlling the angle of attack of each blade by means of miniaturized servomotors, in order to implement advanced individual pitch control (IPC) laws on a 1/75 scale model of a Floating Offshore Wind Turbine (FOWT). Many design constraints had to be respected, among others: the overall rotor-nacelle mass imposed by aero-elastic scaling; the limited space of the nacelle, which must house three miniaturized servomotors and the main shaft motor with their inverters/controllers, as well as the slip rings for electrical rotary contacts; the highest possible stiffness of the nacelle support and the blade-rotor connections, to ensure the proper kinematic constraint given the first flapwise blade natural frequency; and the servomotor performance required to guarantee the wide frequency band resulting from the frequency scale factors. The design and technical solutions are presented and discussed, along with an overview of the building and verification process. A discussion is also given of the goals achieved and constraints respected for the rigid wind turbine scale model (LIFES50+ deliverable D.3.1), and of further possible improvements for the IPC aero-elastic scale model, which was being finalized at the time of writing.
Elasticity-Driven Backflow of Fluid-Driven Cracks
NASA Astrophysics Data System (ADS)
Lai, Ching-Yao; Dressaire, Emilie; Ramon, Guy; Huppert, Herbert; Stone, Howard A.
2016-11-01
Fluid-driven cracks are generated by the injection of pressurized fluid into an elastic medium. Once the injection pressure is released, the crack closes up due to elasticity and the fluid in the crack drains out of the crack through an outlet, which we refer to as backflow. We experimentally study the effects of crack size, elasticity of the matrix, and fluid viscosity on the backflow dynamics. During backflow, the volume of liquid remaining in the crack as a function of time exhibits a transition from a fast decay at early times to a power law behavior at late times. Our results at late times can be explained by scaling arguments balancing elastic and viscous stresses in the crack. This work may relate to the environmental issue of flowback in hydraulic fracturing. This work is supported by National Science Foundation via Grant CBET-1509347 and partially supported by Andlinger Center for Energy and the Environment at Princeton University.
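The late-time power-law behavior of the drained volume is typically quantified by a linear fit in log-log space. A small sketch with synthetic data (the −1/3 exponent below is purely illustrative, not the exponent measured in this work):

```python
import numpy as np

def powerlaw_exponent(t, v):
    """Fit V(t) ~ t**n by linear regression of log(v) on log(t);
    returns the fitted exponent n."""
    n, _ = np.polyfit(np.log(t), np.log(v), 1)
    return n

# Synthetic late-time data following a hypothetical scaling V ~ t**(-1/3):
t = np.linspace(10.0, 100.0, 50)
v = 5.0 * t ** (-1.0 / 3.0)
print(powerlaw_exponent(t, v))  # approx -0.333
```

In practice one restricts the fit to the late-time window, since the abstract notes that the early-time decay is faster and does not follow the power law.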
High precision laser ranging by time-of-flight measurement of femtosecond pulses
NASA Astrophysics Data System (ADS)
Lee, Joohyung; Lee, Keunwoo; Lee, Sanghyun; Kim, Seung-Woo; Kim, Young-Jin
2012-06-01
Time-of-flight (TOF) measurement of femtosecond light pulses was investigated for laser ranging of long distances with sub-micrometer precision in the air. The bandwidth limitation of the photo-detection electronics used in timing femtosecond pulses was overcome by adopting a type-II nonlinear second-harmonic crystal that permits the production of a balanced optical cross-correlation signal between two overlapping light pulses. This method offered a sub-femtosecond timing resolution in determining the temporal offset between two pulses through lock-in control of the pulse repetition rate with reference to the atomic clock. The exceptional ranging capability was verified by measuring various distances of 1.5, 60 and 700 m. This method is found well suited for future space missions based on formation-flying satellites as well as large-scale industrial applications for land surveying, aircraft manufacturing and shipbuilding.
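In two-way TOF ranging, the measured pulse delay maps to distance as d = c·Δt/2. A minimal sketch showing why sub-femtosecond timing resolution translates to sub-micrometer range precision (the group-index argument is a placeholder for the full air-dispersion correction, which this omits):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(delta_t, n_group=1.0):
    """Two-way time-of-flight distance from the measured pulse
    delay delta_t (s), divided by the group index of air."""
    return C * delta_t / (2.0 * n_group)

# A 1 fs timing offset corresponds to roughly 0.15 micrometers in range:
print(tof_distance(1e-15))  # ~1.499e-07 m
```

At the 700 m distances quoted above, it is the uncorrected air index (parts in 10^7 level) rather than the timing resolution that limits absolute accuracy, which is why precision ranging systems model atmospheric conditions.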
Fractal analysis on human dynamics of library loans
NASA Astrophysics Data System (ADS)
Fan, Chao; Guo, Jin-Li; Zha, Yi-Long
2012-12-01
In this paper, the fractal characteristics of human behavior are investigated from the perspective of time series constructed from library loan counts. The values of the Hurst exponent and the length of the non-periodic cycle, calculated through rescaled range analysis, indicate that the time series of human behaviors and their sub-series are fractal, with self-similarity and long-range dependence. The time series are then converted into complex networks by the visibility algorithm. The topological properties of the networks, such as the scale-free property and the small-world effect, imply that there is a close relationship among the numbers of repetitious behaviors performed by people during certain periods of time. Our work implies that there is intrinsic regularity in human collective repetitious behaviors. The conclusions may be helpful for developing new approaches to investigating the fractal features and mechanisms of human dynamics, and provide references for the management and forecasting of human collective behaviors.
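Rescaled range (R/S) analysis, used above to estimate the Hurst exponent, regresses the log of the average R/S statistic against the log of the window size. A minimal sketch; the non-overlapping windowing scheme and the white-noise test series are illustrative choices, not the paper's loan data:

```python
import numpy as np

def rescaled_range(series):
    """R/S statistic of one window: range of cumulative deviations / std."""
    x = np.asarray(series, dtype=float)
    y = np.cumsum(x - x.mean())
    s = x.std()
    return (y.max() - y.min()) / s if s > 0 else 0.0

def hurst_exponent(series, min_window=8):
    """Estimate the Hurst exponent H as the slope of log(R/S) vs log(n)."""
    x = np.asarray(series, dtype=float)
    sizes, rs_vals = [], []
    n = min_window
    while n <= len(x) // 2:
        # Average R/S over non-overlapping windows of length n.
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        sizes.append(n)
        rs_vals.append(np.mean([rescaled_range(c) for c in chunks]))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

# White noise has no long-range dependence, so H should be near 0.5;
# long-range-dependent series like the loan counts would give H > 0.5.
rng = np.random.default_rng(0)
h = hurst_exponent(rng.standard_normal(4096))
```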
Measurement Scales and Standard Systems in Psychology.
ERIC Educational Resources Information Center
Aftanas, Marion S.
Most discussions of measurement theory are focused on "scales" of measurement, but it is not clear whether reference is made to the mechanisms of measurement or the metric information derived from measurement. This emphasis on scales in measurement theory has not always provided a meaningful or fruitful description of measurement activities in…
Linear and Non-linear Information Flows In Rainfall Field
NASA Astrophysics Data System (ADS)
Molini, A.; La Barbera, P.; Lanza, L. G.
The rainfall process is the result of a complex framework of non-linear dynamical interactions between the different components of the atmosphere. It preserves the complexity and the intermittent features of the generating system in space and time as well as the strong dependence of these properties on the scale of observations. The understanding and quantification of how the non-linearity of the generating process comes to influence the single rain events constitute relevant research issues in the field of hydro-meteorology, especially in those applications where a timely and effective forecasting of heavy rain events is able to reduce the risk of failure. This work focuses on the characterization of the non-linear properties of the observed rain process and on the influence of these features on hydrological models. Among the goals of such a survey is the research of regular structures of the rainfall phenomenon and the study of the information flows within the rain field. The research focuses on three basic evolution directions for the system: in time, in space and between the different scales. In fact, the information flows that force the system to evolve represent in general a connection between the different locations in space, the different instants in time and, unless assuming the hypothesis of scale invariance is verified "a priori", the different characteristic scales. A first phase of the analysis is carried out by means of classic statistical methods, then a survey of the information flows within the field is developed by means of techniques borrowed from the Information Theory, and finally an analysis of the rain signal in the time and frequency domains is performed, with particular reference to its intermittent structure.
The methods adopted in this last part of the work are both the classic techniques of statistical inference and a few procedures for the detection of non-linear and non-stationary features within the process starting from measured data.
A new framework for climate sensitivity and prediction: a modelling perspective
NASA Astrophysics Data System (ADS)
Ragone, Francesco; Lucarini, Valerio; Lunkeit, Frank
2016-03-01
The sensitivity of climate models to increasing CO2 concentration and the climate response at decadal time-scales are still major sources of uncertainty for the assessment of the long- and short-term effects of anthropogenic climate change. While the relatively slow progress on these issues is partly due to the inherent inaccuracies of numerical climate models, it also hints at the need for stronger theoretical foundations for studying climate sensitivity and performing climate change predictions with numerical models. Here we demonstrate that it is possible to use Ruelle's response theory to predict the impact of an arbitrary CO2 forcing scenario on the global surface temperature of a general circulation model. Response theory puts the concept of climate sensitivity on firm theoretical grounds, and addresses rigorously the problem of predictability at different time-scales. Conceptually, these results show that performing climate change experiments with general circulation models is a well-defined problem from a physical and mathematical point of view. Practically, they show that a single CO2 forcing scenario is enough to construct operators able to predict the response of climatic observables to any other CO2 forcing scenario, without the need for additional numerical simulations. We also introduce a general relationship between climate sensitivity and climate response at different time scales, thus providing an explicit definition of the inertia of the system at different time scales. This technique also allows for studying systematically, for a large variety of forcing scenarios, the time horizon at which the climate change signal (in an ensemble sense) becomes statistically significant. While the results reported here refer to the linear response, the general theory allows for treating nonlinear effects as well. 
These results pave the way for redesigning and interpreting climate change experiments from a radically new perspective.
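In the linear regime, the response machinery described above reduces to convolving a Green's function, derived once from a reference experiment, with any new forcing scenario. A toy sketch with an assumed exponential Green's function, not the GCM-derived response operator of the paper:

```python
import numpy as np

# Toy Green's function: exponential relaxation with a 10-step timescale,
# normalized so the equilibrium (step) response is ~1. Illustrative only.
t = np.arange(200)
dt = 1.0
G = np.exp(-t / 10.0) / 10.0

def linear_response(G, forcing, dt=1.0):
    """Linear response R(t) = sum_s G(t - s) f(s) dt (causal convolution)."""
    return np.convolve(forcing, G)[:len(forcing)] * dt

# The same G predicts the response to two quite different scenarios:
f_ramp = 0.01 * t                 # e.g. a gradual CO2 increase
f_step = np.ones_like(t, float)   # e.g. an abrupt, sustained forcing

r_ramp = linear_response(G, f_ramp, dt)
r_step = linear_response(G, f_step, dt)  # saturates near 1 at late times
```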
Development and psychometric validation of social cognitive theory scales in an oral health context.
Jones, Kelly; Parker, Eleanor J; Steffens, Margaret A; Logan, Richard M; Brennan, David; Jamieson, Lisa M
2016-04-01
This study aimed to develop and evaluate scales reflecting potentially modifiable social cognitive theory-based risk indicators associated with homeless populations' oral health. The scales are referred to as the social cognitive theory risk scales in an oral health context (SCTOH) and are referred to as SCTOH(SE), SCTOH(K) and SCTOH(F), respectively. The three SCTOH scales assess the key constructs of social cognitive theory: self-efficacy, knowledge and fatalism. The reliability and validity of the three scales were evaluated in a convenience sample of 248 homeless participants (age range 17-78 years, 79% male) located in a metropolitan setting in Australia. The scales were supported by exploratory factor analysis and established three distinct and internally consistent domains of social cognition: oral health-related self-efficacy, oral health-related knowledge and oral health-related fatalism, with Cronbach's alphas of 0.95 and 0.85 and a Spearman-Brown coefficient of 0.69. Concurrent validity was confirmed by each SCTOH scale's association with oral health status in the expected directions. The three SCTOH scales appear to be internally valid and reliable. If confirmed by further research, these scales could potentially be used for tailored educational and cognitive-behavioural interventions to reduce oral health inequalities among homeless and other vulnerable populations. © 2015 Public Health Association of Australia.
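Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from an item-score matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with made-up scores, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Perfectly correlated items give alpha = 1.0 (upper bound);
# uncorrelated items drive alpha toward 0.
perfect = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
alpha = cronbach_alpha(perfect)
```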
NASA Astrophysics Data System (ADS)
Willmott, Jon R.; Lowe, David; Broughton, Mick; White, Ben S.; Machin, Graham
2016-09-01
A primary temperature scale requires realising a unit in terms of its definition. For high temperature radiation thermometry in terms of the International Temperature Scale of 1990 this means extrapolating from the signal measured at the freezing temperature of gold, silver or copper using Planck’s radiation law. The difficulty in doing this means that primary scales above 1000 °C require specialist equipment and careful characterisation in order to achieve the extrapolation with sufficient accuracy. As such, maintenance of the scale at high temperatures is usually only practicable for National Metrology Institutes, and calibration laboratories have to rely on a scale calibrated against transfer standards. At lower temperatures it is practicable for an industrial calibration laboratory to have its own primary temperature scale, which reduces the number of steps between the primary scale and end user. Proposed changes to the SI that will introduce internationally accepted high temperature reference standards might make it practicable to have a primary high temperature scale in a calibration laboratory. In this study such a scale was established by calibrating radiation thermometers directly to high temperature reference standards. The possible reduction in uncertainty to an end user as a result of the reduced calibration chain was evaluated.
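The extrapolation described above can be illustrated in the Wien approximation to Planck's law, where the signal ratio relative to a fixed point inverts in closed form: S(T)/S(T_ref) = exp[(c2/λ)(1/T_ref - 1/T)]. A sketch assuming a monochromatic thermometer at an illustrative wavelength; a real instrument has a finite bandwidth and needs the full characterisation the abstract mentions:

```python
import math

C2 = 1.4388e-2    # second radiation constant c2, m*K
T_CU = 1357.77    # copper freezing point on ITS-90, K

def wien_ratio(T, lam, T_ref=T_CU):
    """Signal ratio S(T)/S(T_ref) in the Wien approximation to Planck's law."""
    return math.exp(C2 / lam * (1.0 / T_ref - 1.0 / T))

def temperature_from_ratio(r, lam, T_ref=T_CU):
    """Invert the Wien ratio to extrapolate a temperature from a fixed point."""
    return 1.0 / (1.0 / T_ref - lam * math.log(r) / C2)

lam = 650e-9      # illustrative operating wavelength, m
r = wien_ratio(2000.0, lam)          # measured-signal ratio for a 2000 K source
T = temperature_from_ratio(r, lam)   # recovers 2000 K
```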
Ehrstedt, Christoffer; Rydell, Ann-Margret; Gabert Hallsten, Marina; Strömberg, Bo; Ahlsten, Gunnar
2018-06-01
The aim of this study was to investigate long-term cognitive outcome, health-related quality of life (HRQoL), and psychiatric symptoms in children and young adults diagnosed with a glioneuronal tumor in childhood. Twenty-eight children and adolescents (0-17.99 years) with a minimum postoperative follow-up time of five years were eligible for the study; four persons declined participation. A cross-sectional long-term follow-up evaluation was performed using the following study measures: Wechsler Intelligence Scale for Children (WISC-IV) or Wechsler Adult Intelligence Scale (WAIS-IV), Rey Complex Figure Test (RCFT), Short Form 36 version 2 (SF-36v2), Short Form 10 (SF-10), Quality of Life in Epilepsy 31 (QOLIE-31), Hospital Anxiety and Depression Scale (HADS) or Beck Youth Inventory Scales (BYI), and Rosenberg Self-Esteem Scale. Historical WISC-III and RCFT data were used for longitudinal cognitive comparisons. Mean follow-up time after surgery was 12.1 years. Sixty-three percent (15/24) were seizure-free. Despite a successive postoperative gain in cognitive function, a significant reduction relative to norms was seen in the seizure-free group with respect to perceptual reasoning index (PRI), working memory index (WMI), and full-scale intelligence quotient (FSIQ). Seizure freedom resulted in acceptable HRQoL. Thirty-two percent and 16% exceeded the threshold level of possible anxiety and depression, respectively, despite seizure freedom. Although lower than in corresponding reference groups, cognitive outcome and HRQoL are good provided that seizure freedom or at least a low seizure severity can be achieved. There is a risk of elevated levels of psychiatric symptoms. Long-term clinical follow-up is advisable. Copyright © 2018 Elsevier Inc. All rights reserved.
A coupled synoptic-hydrological model for climate change impact assessment
NASA Astrophysics Data System (ADS)
Wilby, Robert; Greenfield, Brian; Glenny, Cathy
1994-01-01
A coupled atmospheric-hydrological model is presented. Sequences of daily rainfall occurrence for the 20-year period 1971-1990 at sites in the British Isles are related to the Lamb Weather Types (LWT) using conditional probabilities. Time series of circulation patterns, and hence rainfall, were then generated using a Markov representation of the matrices of transition probabilities between weather types. The resultant precipitation data were used as input to a semidistributed catchment model to simulate daily flows. The combined model successfully reproduced aspects of the daily weather, precipitation, and flow regimes. A range of synoptic scenarios was further investigated, with particular reference to low flows in the River Coln, UK. The modelling approach represents a means of translating general circulation model (GCM) climate change predictions at the macro-scale into hydrological concerns at the catchment scale.
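The Markov step of this approach, generating a weather-type sequence from a transition matrix and then conditioning rainfall occurrence on the type, can be sketched as follows. The 3-type matrix and rain probabilities here are hypothetical and far smaller than the real LWT catalogue:

```python
import numpy as np

def simulate_weather_types(P, n_days, seed=0):
    """Generate a weather-type sequence from a first-order Markov chain.

    P[i, j] is the probability of moving from type i to type j.
    """
    rng = np.random.default_rng(seed)
    n_types = P.shape[0]
    states = np.empty(n_days, dtype=int)
    states[0] = 0
    for d in range(1, n_days):
        states[d] = rng.choice(n_types, p=P[states[d - 1]])
    return states

# Hypothetical 3-type transition matrix (rows sum to 1).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.3, 0.3, 0.4]])
seq = simulate_weather_types(P, 1000)

# Rainfall occurrence conditioned on the weather type (hypothetical values):
# the wet-day probability depends only on that day's circulation type.
p_rain = np.array([0.8, 0.2, 0.5])
rng = np.random.default_rng(1)
wet = rng.random(len(seq)) < p_rain[seq]
```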
Moessner, Anne; Malec, James F; Beveridge, Scott; Reddy, Cara Camiolo; Huffman, Tracy; Marton, Julia; Schmerzler, Audrey J
2016-01-01
To develop and provide initial validation of a measure for accurately determining the need for Constant Visual Observation (CVO) in patients with traumatic brain injury (TBI) admitted to inpatient rehabilitation. Rating scale development and evaluation through Rasch analysis and assessment of concurrent validity. One hundred and thirty-four individuals with moderate-severe TBI were studied in seven inpatient brain rehabilitation units associated with the National Institute for Disability, Independent Living and Rehabilitation Research (NIDILRR) TBI Model System. Participants were rated on the preliminary version of the CVO Needs Assessment scale (CVONA) and, by independent raters, on the Levels of Risk (LoR) and Supervision Rating Scale (SRS) at four time points during inpatient rehabilitation: admission, Days 2-3, Days 5-6 and Days 8-9. After pruning misfitting items, the CVONA showed satisfactory internal consistency (Person Reliability = 0.85-0.88) across time points. With reference to the LoR and SRS, low false negative rates (sensitivity > 90%) were associated with moderate-to-high false positive rates (29-56%). The CVONA may be a useful objective metric to complement clinical judgement regarding the need for CVO; however, further prospective study is desirable to further assess its utility in identifying at-risk patients, reducing adverse events and decreasing CVO costs.
İmamoğlu, Hakan; Doğan, Serap; Erdoğan, Nuri
2018-02-01
The aim of this study was to investigate the tendency of referring physicians to collaborate with radiologists in managing contrast media (CM)-related risk factors. The study was conducted at a single academic hospital. Among 150 referring physicians from various specialties, 51 referring physicians (34%) responded to the invitation letter asking for an interview with a radiologist. During the interview, a modified form of the Control Preferences Scale was administered, in which there were five preferences (each displayed on a separate card) that ranged from the fully active to fully passive involvement of referring physicians in managing CM-related risk factors. A descriptive analysis was performed through categorization of the results depending on the respondents' two most preferred roles. Thirty-six referring physicians (70.5%) preferred a collaborative role, and 15 (29.4%) preferred a noncollaborative role (i.e., remained on either the fully active or fully passive side). Among the referring physicians who preferred a collaborative role, the most common response (n = 15 [29.4%]) was collaborative-active. Referring physicians at the authors' institution have a basic cognitive and motivational-affective tone toward collaboration in future teamwork aimed at the management of CM-related risk factors. A modified form of the Control Preferences Scale, as in this study, can be used to investigate the tendency of referring physicians to collaborate with radiologists. The results are discussed from ethical and legal perspectives. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, H.S.; Stone, C.M.
A pretest reference calculation for the Overtest for Simulated Defense High-Level Waste (DHLW), or Room B, experiment is presented in this report. The overtest is one of several large-scale, in-situ experiments currently under construction near Carlsbad, New Mexico, at the site of the Waste Isolation Pilot Plant (WIPP). Room B, a single isolated room in the underground salt formation, is to be subjected to a thermal load of approximately four times the areal heat output anticipated for a future repository with DHLW. The load will be supplied for 3 years by canister heaters placed in the floor. Room B is heavily instrumented for monitoring both temperature increases due to the thermal loading and deformations due to creep of the salt. Data from the experiment are not yet available, but the measurements will eventually be compared to the results presented here to assess and improve thermal and mechanical modeling capabilities for the WIPP. The thermal/structural model used here represents the current state of the art. A large number of plots are included, since an appropriate result is presented for every Room B gauge location. 81 figs., 4 tabs.
Fink, John
2005-05-06
To determine the safety and efficiency of an acute stroke thrombolysis service in a New Zealand public hospital setting. A 12-month audit of patients referred to the Christchurch Hospital Stroke Thrombolysis Service (STS) between 1 April 2002 and 31 March 2003 was undertaken. Sixty-one patients were referred to the STS during the study period, of whom 16 were treated with tissue plasminogen activator (t-PA). For treated patients, the median time from stroke onset to hospital presentation was 60 minutes, the 'door-to-CT' time was 60 minutes, and the 'door-to-needle' time was 99 minutes. Minor protocol violations were recorded in two patients, but did not influence outcome. No patient was treated more than 3 hours after stroke onset. Intracerebral haemorrhage occurred in two patients: one patient was significantly improved compared with pre-treatment status; a minor temporary deterioration occurred in the other patient. Eight of 16 patients had improved by 4 or more points on the NIH Stroke Scale Score at 24 hours. Acute stroke thrombolysis can be delivered safely and in accordance with internationally accepted guidelines using the Christchurch Hospital STS model of emergency department screening and acute stroke service treatment. Further improvements in performance of the STS remain possible.
Safafar, Hamed; Hass, Michael Z.; Møller, Per; Holdt, Susan L.; Jacobsen, Charlotte
2016-01-01
Nannochloropsis salina was grown on a mixture of standard growth media and pre-gasified industrial process water representing effluent from a local biogas plant. The study aimed to investigate the effects of enriched growth media and cultivation time on nutritional composition of Nannochloropsis salina biomass, with a focus on eicosapentaenoic acid (EPA). Variations in fatty acid composition, lipids, protein, amino acids, tocopherols and pigments were studied and results compared to algae cultivated on F/2 media as reference. Mixed growth media and process water enhanced the nutritional quality of Nannochloropsis salina in laboratory scale when compared to algae cultivated in standard F/2 medium. Data from laboratory scale translated to the large scale using a 4000 L flat panel photo-bioreactor system. The algae growth rate in winter conditions in Denmark was slow, but results revealed that large-scale cultivation of Nannochloropsis salina at these conditions could improve the nutritional properties such as EPA, tocopherol, protein and carotenoids compared to laboratory-scale cultivated microalgae. EPA reached 44.2% ± 2.30% of total fatty acids, and α-tocopherol reached 431 ± 28 µg/g of biomass dry weight after 21 days of cultivation. Variations in chemical compositions of Nannochloropsis salina were studied during the course of cultivation. Nannochloropsis salina can be presented as a good candidate for winter time cultivation in Denmark. The resulting biomass is a rich source of EPA and also a good source of protein (amino acids), tocopherols and carotenoids for potential use in aquaculture feed industry. PMID:27483291
NASA Astrophysics Data System (ADS)
Neuburger, Martina; Gurgiser, Wolfgang; Maussion, Fabien; Singer, Katrin; Kaser, Georg
2017-04-01
Natural scientists observe and project changes in precipitation and temperature at different spatio-temporal scales and investigate impacts on glaciers and hydrological regimes. Simultaneously, social groups experience ecological phenomena as linked to climate change and integrate them into their understanding of nature and their logics of action, while political actors refer to scientific results as legitimization to focus on adaptation and mitigation strategies at the global, national, and regional/local levels. However, natural and socio-political changes at various scales (in time and space) are not directly interlinked, but are communicated by energy and material flows, by discourses, power relations, and institutional regulations. In this context, it remains unclear how natural dynamics are (dis)entangled with societal processes in their historical dimensions and in their interrelations from the global via the national to the regional and local scales. Taking the Cordillera Blanca region in Peru as an example, we analyze the intertwining of scales (global, national, regional, local) and spheres (natural, political, societal) to detect entanglements and disconnections among the observed processes. Using a time-line methodology, we present precipitation variability and glacier recession at different scales, estimate qualitative water availability, and investigate the links to the implementation of international and national political programs on climate change adaptation in the Cordillera Blanca region, focusing on water and agrarian programs. Finally, we include supposedly contradictory reports from the rural population on climate change and related impacts on water availability and agricultural production to analyze the (dis)entanglement due to changing power relations and dominant discourses.
NASA Astrophysics Data System (ADS)
Mishra, Gaurav; Ghosh, Karabi; Ray, Aditi; Gupta, N. K.
2018-06-01
Radiation hydrodynamic (RHD) simulations for four different potential high-Z hohlraum materials, namely Tungsten (W), Gold (Au), Lead (Pb), and Uranium (U), are performed in order to investigate their performance with respect to x-ray absorption, re-emission, and ablation properties when irradiated by constant temperature drives. A universal functional form is derived for estimating the time-dependent wall albedo of high-Z materials. Among the high-Z materials studied, it is observed that for a fixed simulation time the albedo is maximum for Au below 250 eV, whereas it is maximum for U above 250 eV. New scaling laws for shock speed vs. drive temperature, applicable over a wide temperature range of 100 eV to 500 eV, are proposed based on the physics of x-ray driven stationary ablation. The resulting scaling relation for the reference material Aluminium (Al) shows good agreement with Kauffman's power law for temperatures ranging from 100 eV to 275 eV. New scaling relations are also obtained for the temperature-dependent mass ablation rate and ablation pressure through RHD simulation. Finally, our study reveals that for temperatures above 250 eV, U serves as the better hohlraum material, since it offers maximum re-emission of x-rays along with a comparable mass ablation rate. Nevertheless, the traditional choice, Au, works well for temperatures below 250 eV. Besides inertial confinement fusion (ICF), the new scaling relations may find application in view-factor codes, which generally ignore atomic physics calculations of opacities and emissivities, details of laser-plasma interaction, and hydrodynamic motion.
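Scaling laws of the kind proposed here (shock speed, mass ablation rate, or ablation pressure as functions of drive temperature) are typically fitted as power laws in log-log space. A generic sketch with synthetic data; the coefficients are illustrative, not the paper's fitted values:

```python
import numpy as np

def fit_power_law(T, v):
    """Least-squares fit of v = a * T**b, done linearly in log-log space."""
    b, log_a = np.polyfit(np.log(T), np.log(v), 1)
    return np.exp(log_a), b

# Synthetic data obeying a known power law (illustrative coefficients):
T = np.linspace(100.0, 500.0, 20)   # drive temperature, eV
v = 2.5 * T**1.2                    # e.g. a shock-speed-like quantity
a, b = fit_power_law(T, v)          # recovers a ~ 2.5, b ~ 1.2
```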
On the status of IAEA delta-13C stable isotope reference materials.
NASA Astrophysics Data System (ADS)
Assonov, Sergey; Groening, Manfred; Fajgelj, Ales
2016-04-01
For practical reasons, all isotope measurements are performed on relative scales realized through the use of international, scale-defining primary standards. In fact, these standards were materials (artefacts, similar to the prototypes of the metre and the kilogram) selected on the basis of their properties. The VPDB delta-13C scale is realised via two highest-level reference materials, NBS19 and LSVEC, the first defining the scale and the second intended to normalise lab-to-lab calibrations. These two reference materials (RMs) have been maintained and distributed by the IAEA and NIST. The priority task is to maintain these primary RMs at the required uncertainty level, thus ensuring long-term scale consistency. The second task is to introduce replacements when needed (currently for the exhausted NBS19; work in progress). The next is to produce a family of lower-level RMs (secondary, tertiary) addressing the needs of various applications (with different delta values, in different physical-chemical forms) and their uncertainty requirements; these RMs should be traceable to the highest-level RMs. Presently there is a need for a range of RMs addressing existing and newly emerging analytical techniques (e.g. optical isotopic analysers) in the form of calibrated CO2 gases with different delta-13C values. All this implies creating a family of delta-13C stable isotope reference materials. Presently the IAEA is working on a replacement for NBS19 and planning new RMs. In addition, we found that LSVEC (introduced as the second anchor for the VPDB scale in 2006) demonstrates considerable scatter in its delta-13C value, which implies a potential bias of the property value and an increased value uncertainty that may conflict with the uncertainty requirements for atmospheric monitoring. This is not compatible with the status of LSVEC, and it should therefore be replaced as soon as possible. The presentation will give an overview of the current status, the strategic plan of developments, and the near-future steps.
Bayley-III Cognitive and Language Scales in Preterm Children.
Spencer-Smith, Megan M; Spittle, Alicia J; Lee, Katherine J; Doyle, Lex W; Anderson, Peter J
2015-05-01
This study aimed to assess the sensitivity and specificity of the Bayley Scales of Infant and Toddler Development, Third Edition (Bayley-III), Cognitive and Language scales at 24 months for predicting cognitive impairments in preterm children at 4 years. Children born <30 weeks' gestation completed the Bayley-III at 24 months and the Differential Ability Scale, Second Edition (DAS-II), at 4 years to assess cognitive functioning. Test norms and local term-born reference data were used to classify delay on the Bayley-III Cognitive and Language scales. Impairment on the DAS-II Global Conceptual Ability, Verbal, and Nonverbal Reasoning indices was classified relative to test norms. Scores < -1 SD relative to the mean were classified as mild/moderate delay or impairment, and scores < -2 SDs were classified as moderate delay or impairment. A total of 105 children completed the Bayley-III and DAS-II. The sensitivity of mild/moderate cognitive delay on the Bayley-III for predicting impairment on DAS-II indices ranged from 29.4% to 38.5% and specificity ranged from 92.3% to 95.5%. The sensitivity of mild/moderate language delay on the Bayley-III for predicting impairment on DAS-II indices ranged from 40% to 46.7% and specificity ranged from 81.1% to 85.7%. The use of local reference data at 24 months to classify delay increased sensitivity but reduced specificity. Receiver operating characteristic (ROC) curve analysis identified optimum cut-point scores for the Bayley-III that were more consistent with using local reference data than Bayley-III normative data. In our cohort of very preterm children, delay on the Bayley-III Cognitive and Language scales was not strongly predictive of future impairments. More children destined for later cognitive impairment were identified by using cut-points based on local reference data than Bayley-III norms. Copyright © 2015 by the American Academy of Pediatrics.
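Sensitivity and specificity, as used throughout this analysis, are simple ratios of classification counts: sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP). A minimal sketch with hypothetical screening outcomes, not the cohort data:

```python
def sensitivity_specificity(predicted, actual):
    """Sensitivity and specificity from paired boolean outcomes.

    `predicted` flags a positive screen (e.g. delay on the Bayley-III);
    `actual` flags the true outcome (e.g. impairment on the DAS-II).
    """
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum((not p) and (not a) for p, a in zip(predicted, actual))
    fp = sum(p and (not a) for p, a in zip(predicted, actual))
    fn = sum((not p) and a for p, a in zip(predicted, actual))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening outcomes for eight children:
pred   = [True, True, False, False, False, True, False, False]
actual = [True, False, True, False, False, True, False, True]
sens, spec = sensitivity_specificity(pred, actual)  # 0.5 and 0.75
```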
Jena Reference Air Set (JRAS): a multi-point scale anchor for isotope measurements of CO2 in air
NASA Astrophysics Data System (ADS)
Wendeberg, M.; Richter, J. M.; Rothe, M.; Brand, W. A.
2013-03-01
The need for a unifying scale anchor for isotopes of CO2 in air was brought to light at the 11th WMO/IAEA Meeting of Experts on Carbon Dioxide in Tokyo in 2001. During discussions about persistent discrepancies in isotope measurements between the world's leading laboratories, it was concluded that a unifying scale anchor for Vienna Pee Dee Belemnite (VPDB) of CO2 in air was desperately needed. Ten years later, at the 2011 Meeting of Experts on Carbon Dioxide in Wellington, it was recommended that the Jena Reference Air Set (JRAS) become the official scale anchor for isotope measurements of CO2 in air (Brailsford, 2012). The source of CO2 used for JRAS is two calcites. After releasing CO2 by reaction with phosphoric acid, the gases are mixed into CO2-free air. This procedure ensures both the isotopic stability and the longevity of the CO2. That the reference CO2 is generated from calcites and supplied as an air mixture is unique to JRAS. This is done to ensure that any measurement bias arising from the extraction procedure is eliminated. As every laboratory has its own procedure for extracting the CO2, this is of paramount importance if local scales are to be unified with a common anchor. For a period of four years, JRAS has been evaluated through the IMECC1 program, which made it possible to distribute sets of JRAS gases to 13 laboratories worldwide. A summary of data from the six laboratories that have reported the full set of results is given here, along with a description of the production and maintenance of the JRAS scale anchors. 1 IMECC refers to the EU project "Infrastructure for Measurements of the European Carbon Cycle" (http://imecc.ipsl.jussieu.fr/).
PCTDSE: A parallel Cartesian-grid-based TDSE solver for modeling laser-atom interactions
NASA Astrophysics Data System (ADS)
Fu, Yongsheng; Zeng, Jiaolong; Yuan, Jianmin
2017-01-01
We present a parallel Cartesian-grid-based time-dependent Schrödinger equation (TDSE) solver for modeling laser-atom interactions. It can simulate the single-electron dynamics of atoms in arbitrary time-dependent vector potentials. We use a split-operator method combined with fast Fourier transforms (FFT) on a three-dimensional (3D) Cartesian grid. Parallelization is realized using a 2D decomposition strategy based on the Message Passing Interface (MPI) library, which results in good parallel scaling on modern supercomputers. We give simple applications for the hydrogen atom using benchmark problems from the references and obtain reproducible results. The extension to other laser-atom systems is straightforward, requiring only minimal modifications of the source code.
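The split-operator/FFT scheme the solver is built on can be sketched in one dimension: exp(-iH dt) is approximated by a potential half-step, a kinetic step applied in momentum space via FFT, and another potential half-step. A serial toy version (harmonic potential, atomic units), not the 3D MPI-parallel code of the paper:

```python
import numpy as np

# Grid and momentum-space wavenumbers (atomic units, hbar = m = 1).
n = 512
L = 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
dt = 0.01

V = 0.5 * x**2                        # illustrative harmonic potential
half_v = np.exp(-0.5j * V * dt)       # potential half-step exp(-iV dt/2)
kin = np.exp(-0.5j * k**2 * dt)       # kinetic step exp(-i k^2 dt / 2)

# Normalized Gaussian wavepacket displaced from the potential minimum.
psi = np.exp(-(x - 1.0)**2 / 2.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(1000):
    psi = half_v * psi                            # V half-step
    psi = np.fft.ifft(kin * np.fft.fft(psi))      # kinetic step in k-space
    psi = half_v * psi                            # V half-step

norm = np.sum(np.abs(psi)**2) * dx   # unitary evolution preserves the norm
```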
Long working distance incoherent interference microscope
Sinclair, Michael B [Albuquerque, NM; De Boer, Maarten P [Albuquerque, NM
2006-04-25
A full-field imaging, long working distance, incoherent interference microscope suitable for three-dimensional imaging and metrology of MEMS devices and test structures on a standard microelectronics probe station. A long working distance greater than 10 mm allows standard probes or probe cards to be used. This enables nanometer-scale 3-dimensional height profiles of MEMS test structures to be acquired across an entire wafer while being actively probed and, optionally, through a transparent window. An optically identical pair of sample and reference arm objectives is not required, which reduces the overall system cost as well as the cost and time required to change sample magnifications. Using an LED source, high magnification (e.g., 50×) can be obtained with excellent image quality, straight fringes, and high fringe contrast.
Assessment of international reference materials for isotope-ratio analysis (IUPAC Technical Report)
Brand, Willi A.; Coplen, Tyler B.; Vogl, Jochen; Rosner, Martin; Prohaska, Thomas
2014-01-01
Since the early 1950s, the number of international measurement standards for anchoring stable isotope delta scales has mushroomed from 3 to more than 30, expanding to more than 25 chemical elements. With the development of new instrumentation, along with new and improved measurement procedures for studying naturally occurring isotopic abundance variations in natural and technical samples, the number of internationally distributed, secondary isotopic reference materials with a specified delta value has blossomed in the last six decades to more than 150 materials. More than half of these isotopic reference materials were produced for isotope-delta measurements of seven elements: H, Li, B, C, N, O, and S. The number of isotopic reference materials for other, heavier elements has grown considerably over the last decade. Nevertheless, even primary international measurement standards for isotope-delta measurements are still needed for some elements, including Mg, Fe, Te, Sb, Mo, and Ge. It is recommended that authors publish the delta values of internationally distributed, secondary isotopic reference materials that were used for anchoring their measurement results to the respective primary stable isotope scale.
Ideas for Future GPS Timing Improvements
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
Having recently met stringent criteria for full operational capability (FOC) certification, the Global Positioning System (GPS) now faces higher customer expectations than ever before. In order to maintain customer satisfaction, and to meet the even higher customer demands of the future, the GPS Master Control Station (MCS) must play a critical role in carefully refining the performance and integrity of the GPS constellation, particularly in the area of timing. This paper presents an operational perspective on several ideas for improving timing in GPS. These ideas include improved MCS-US Naval Observatory (USNO) data connectivity, an improved GPS-Coordinated Universal Time (UTC) prediction algorithm, a more robust Kalman filter, and more features in the GPS reference time algorithm (the GPS composite clock), including frequency step resolution, a more explicit use of the basic time scale equation, and dynamic clock weighting. Current MCS software meets the exceptional challenge of managing an extremely complex constellation of 24 navigation satellites. The GPS community will, however, always seek to improve upon this performance and integrity.
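The "basic time scale equation" mentioned above forms an ensemble time as a weighted average of each member clock's deviation from its prediction. A minimal sketch of that idea (the readings, predictions, and variance-based weights below are hypothetical, not GPS data):

```python
import numpy as np

def ensemble_time(readings, predictions, weights):
    """Basic time scale equation (sketch): the ensemble correction at one
    epoch is the weighted average of each clock's deviation from its
    predicted phase. readings/predictions are phase offsets in seconds;
    weights need not be pre-normalized."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # enforce normalization
    dev = np.asarray(readings, float) - np.asarray(predictions, float)
    return float(np.sum(w * dev))

# Three hypothetical clocks, weighted inversely to their noise variances.
readings    = [1.2e-9, -0.4e-9, 0.3e-9]   # observed phase offsets (s)
predictions = [1.0e-9, -0.5e-9, 0.2e-9]   # predicted phase offsets (s)
variances   = [1e-26, 4e-26, 2e-26]
weights = [1.0 / v for v in variances]

correction = ensemble_time(readings, predictions, weights)
```

Dynamic clock weighting, as discussed in the paper, amounts to updating the `weights` vector epoch by epoch as clock performance changes.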
Robust preview control for a class of uncertain discrete-time systems with time-varying delay.
Li, Li; Liao, Fucheng
2018-02-01
This paper proposes a concept of robust preview tracking control for uncertain discrete-time systems with time-varying delay. Firstly, a model transformation is employed for an uncertain discrete system with time-varying delay. Then, auxiliary variables related to the system state and input are introduced to derive an augmented error system that includes future information on the reference signal. This transforms the tracking problem into a regulator problem. Finally, for the augmented error system, a sufficient condition for asymptotic stability is derived and a preview controller design method is proposed based on the scaled small gain theorem and the linear matrix inequality (LMI) technique. The method proposed in this paper not only overcomes the difficulty of applying the difference operator to time-varying matrices but also simplifies the structure of the augmented error system. A numerical simulation example illustrates the effectiveness of the results presented in the paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
French, Jon; Payo, Andres; Murray, Brad; Orford, Julian; Eliot, Matt; Cowell, Peter
2016-03-01
Coastal and estuarine landforms provide a physical template that not only accommodates diverse ecosystem functions and human activities, but also mediates flood and erosion risks that are expected to increase with climate change. In this paper, we explore some of the issues associated with the conceptualisation and modelling of coastal morphological change at time and space scales relevant to managers and policy makers. Firstly, we revisit the question of how to define the most appropriate scales at which to seek quantitative predictions of landform change within an age defined by human interference with natural sediment systems and by the prospect of significant changes in climate and ocean forcing. Secondly, we consider the theoretical bases and conceptual frameworks for determining which processes are most important at a given scale of interest and the related problem of how to translate this understanding into models that are computationally feasible, retain a sound physical basis and demonstrate useful predictive skill. In particular, we explore the limitations of a primary scale approach and the extent to which these can be resolved with reference to the concept of the coastal tract and application of systems theory. Thirdly, we consider the importance of different styles of landform change and the need to resolve not only incremental evolution of morphology but also changes in the qualitative dynamics of a system and/or its gross morphological configuration. The extreme complexity and spatially distributed nature of landform systems means that quantitative prediction of future changes must necessarily be approached through mechanistic modelling of some form or another. Geomorphology has increasingly embraced so-called 'reduced complexity' models as a means of moving from an essentially reductionist focus on the mechanics of sediment transport towards a more synthesist view of landform evolution. 
However, there is little consensus on exactly what constitutes a reduced complexity model and the term itself is both misleading and, arguably, unhelpful. Accordingly, we synthesise a set of requirements for what might be termed 'appropriate complexity modelling' of quantitative coastal morphological change at scales commensurate with contemporary management and policy-making requirements: 1) The system being studied must be bounded with reference to the time and space scales at which behaviours of interest emerge and/or scientific or management problems arise; 2) model complexity and comprehensiveness must be appropriate to the problem at hand; 3) modellers should seek a priori insights into what kind of behaviours are likely to be evident at the scale of interest and the extent to which the behavioural validity of a model may be constrained by its underlying assumptions and its comprehensiveness; 4) informed by qualitative insights into likely dynamic behaviour, models should then be formulated with a view to resolving critical state changes; and 5) meso-scale modelling of coastal morphological change should reflect critically on the role of modelling and its relation to the observable world.
Albijanic, Boris; Ozdemir, Orhan; Nguyen, Anh V; Bradshaw, Dee
2010-08-11
Bubble-particle attachment in water is critical to the separation of particles by flotation, which is widely used in the recovery of valuable minerals, the deinking of wastepaper, water treatment, and oil recovery from tar sands. It involves the thinning and rupture of wetting thin films, and the expansion and relaxation of the gas-liquid-solid contact lines. The time scale of the first two processes is referred to as the induction time, whereas the time scale of the attachment involving all the processes is called the attachment time. This paper reviews experimental studies of the induction and attachment times between minerals and air bubbles, and between oil droplets and air bubbles. It also focuses on the experimental investigation and mathematical modelling of the elementary processes of wetting film thinning and rupture, and of three-phase contact line expansion relevant to flotation. It was confirmed that the time parameters obtained by various authors are sensitive enough to show changes in both the flotation surface chemistry and the physical properties of the solid surfaces of pure minerals. These findings should be extended to other systems. It is proposed that measurements of bubble-particle attachment can be used to interpret changes in flotation behaviour or, in conjunction with other factors such as particle size and gas dispersion, to predict flotation performance. Copyright 2010 Elsevier B.V. All rights reserved.
Hematological indices of injury to lightly oiled birds from the Deepwater Horizon oil spill
Fallon, Jesse A.; Smith, Eric P.; Schoch, Nina; Paruk, James D.; Adams, Evan A.; Evers, David C.; Jodice, Patrick G. R.; Perkins, Christopher; Schulte, Shiloh A.; Hopkins, William A.
2018-01-01
Avian mortality events are common following large‐scale oil spills. However, the sublethal effects of oil on birds exposed to light external oiling are not clearly understood. We found that American oystercatchers (area of potential impact n = 42, reference n = 21), black skimmers (area of potential impact n = 121, reference n = 88), brown pelicans (area of potential impact n = 91, reference n = 48), and great egrets (area of potential impact n = 57, reference n = 47) captured between 20 June 2010 and 23 February 2011 following the Deepwater Horizon oil spill experienced oxidative injury to erythrocytes, had decreased volume of circulating erythrocytes, and showed evidence of a regenerative hematological response in the form of increased reticulocytes compared with reference populations. Erythrocytic inclusions consistent with Heinz bodies were present almost exclusively in birds from sites impacted with oil, a finding pathognomonic for oxidative injury to erythrocytes. Average packed cell volumes were 4 to 19% lower and average reticulocyte counts were 27 to 40% higher in birds with visible external oil than birds from reference sites. These findings provide evidence that small amounts of external oil exposure are associated with hemolytic anemia. Furthermore, we found that some birds captured from the area impacted by the spill but with no visible oiling also had erythrocytic inclusion bodies, increased reticulocytes, and reduced packed cell volumes when compared with birds from reference sites. Thus, birds suffered hematologic injury despite no visible oil at the time of capture. Together, these findings suggest that adverse effects of oil spills on birds may be more widespread than estimates based on avian mortality or severe visible oiling.
NASA Astrophysics Data System (ADS)
Junghans, Cornelia; Schmitt, Franz-Josef; Vukojević, Vladana; Friedrich, Thomas
2016-12-01
Fluorescence correlation spectroscopy relies on temporal autocorrelation analysis of fluorescence intensity fluctuations that spontaneously arise in systems at equilibrium due to molecular motion and changes of state that cause changes in fluorescence, such as triplet state transition, photoisomerization and other photophysical transformations, to determine the rates of these processes. The stability of a fluorescent molecule against dark state conversion is of particular concern for chromophores intended to be used as reference tags for comparing diffusion processes on multiple time scales. In this work, we analyzed properties of two fluorescent proteins, the photoswitchable Dreiklang and its parental eGFP, in solvents of different viscosity to vary the diffusion time through the observation volume element by several orders of magnitude. In contrast to eGFP, Dreiklang undergoes a dark-state conversion on the time scale of tens to hundreds of microseconds under conditions of intense fluorescence excitation, which results in artificially shortened diffusion times if the diffusional motion through the observation volume is sufficiently slowed down. Such photophysical quenching processes have also been observed in FCS studies on other photoswitchable fluorescent proteins including Citrine, from which Dreiklang was derived by genetic engineering. This property readily explains the discrepancies observed previously between the diffusion times of eGFP- and Dreiklang-labeled plasma membrane protein complexes.
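The temporal autocorrelation analysis that FCS relies on can be sketched as follows. The trace below is synthetic, and the estimator is a plain normalized autocorrelation rather than the multi-tau hardware correlators used in practice:

```python
import numpy as np

def fcs_autocorrelation(F, max_lag):
    """Normalized fluorescence autocorrelation G(tau) = <dF(t)dF(t+tau)> / <F>^2
    for lags tau = 1..max_lag; this is the curve FCS fits with diffusion and
    dark-state (e.g., triplet or photoswitching) models."""
    F = np.asarray(F, dtype=float)
    dF = F - F.mean()
    denom = F.mean() ** 2 * len(F)
    return np.array([np.sum(dF[: len(F) - lag] * dF[lag:]) / denom
                     for lag in range(1, max_lag + 1)])

# Synthetic trace: brightness correlated over ~50 samples (mimicking passage
# through the observation volume), plus Poisson shot noise.
rng = np.random.default_rng(0)
slow = np.repeat(rng.normal(100.0, 10.0, 2000), 50)
trace = rng.poisson(np.clip(slow, 1.0, None))
G = fcs_autocorrelation(trace, max_lag=200)
# G decays from a positive amplitude at short lags toward ~0 beyond the
# correlation time (~50 samples here); G[0] is the lag-1 value.
```

A fast photophysical process such as the Dreiklang dark-state conversion would add an extra, faster decay component to G, which is what biases the apparent diffusion time when the two time scales are not well separated.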
NASA Astrophysics Data System (ADS)
Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V.; Rooney, William D.; Garzotto, Mark G.; Springer, Charles S.
2016-08-01
Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data, which extracts quantitative contrast reagent/tissue-specific model parameters, is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, the AIF measured from an individual artery sometimes requires amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-course. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). These approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol, and the perfect scaling factor for reconstructing the ground-truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water-exchange-sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both insensitive to AIF scaling.
This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging biomarkers for cross-platform, multicenter applications. Data from our limited study cohort show that kio correlates with Gleason scores, suggesting that it may be a useful biomarker for prostate cancer disease progression monitoring.
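The scaling behavior of the FXL Tofts parameters can be illustrated numerically. In the standard Tofts model, Ct(t) = Ktrans ∫ Cp(u) exp(-kep (t-u)) du, so multiplying the AIF Cp by a factor s is exactly absorbed by dividing Ktrans (and hence ve = Ktrans/kep) by s, while kep is untouched. A sketch with a hypothetical gamma-variate AIF and illustrative parameter values:

```python
import numpy as np

def tofts_ct(Ktrans, kep, Cp, t):
    """FXL Tofts model tissue curve via a discrete Riemann convolution:
    Ct(t) = Ktrans * integral_0^t Cp(u) exp(-kep (t - u)) du."""
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return Ktrans * np.convolve(Cp, kernel)[: len(t)] * dt

t = np.linspace(0.0, 5.0, 500)            # minutes
Cp = 5.0 * t * np.exp(-t)                 # hypothetical gamma-variate AIF
Ct = tofts_ct(Ktrans=0.25, kep=0.6, Cp=Cp, t=t)

# Scaling the AIF by s is exactly compensated by Ktrans -> Ktrans/s
# (and hence ve = Ktrans/kep -> ve/s), while kep is unchanged:
s = 2.0
Ct_scaled = tofts_ct(Ktrans=0.25 / s, kep=0.6, Cp=s * Cp, t=t)
```

Because the model is linear in Cp, the two tissue curves are identical, which is why an unknown AIF amplitude propagates into Ktrans and ve but leaves the AIF-scaling-insensitive rate constant kep (the ratio Ktrans/ve) unchanged.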
Moseholm, Ellen; Rydahl-Hansen, Susan; Lindhardt, Bjarne Ørskov
2016-01-01
Aim: Undergoing diagnostic evaluation for possible cancer can affect health-related quality of life (HRQoL). The aims of this study were to examine the HRQoL in patients undergoing a diagnostic evaluation for possible cancer due to non-specific symptoms and further to investigate the impact of socio-demographic and medical factors associated with HRQoL at the time of diagnosis. Methods: This was a prospective, multicenter survey study that included patients who were referred for a diagnostic evaluation due to non-specific cancer symptoms. Participants completed the EORTC-QLQ-C30 quality of life scale before and after completing the diagnostic evaluation. The baseline and follow-up EORTC-QLQ-C30 scores were compared with reference populations. The impact of socio-demographic and medical factors on HRQoL at follow-up was explored by bootstrapped multivariate linear regression. Results: A total of 838 patients participated in the study; 680 (81%) also completed follow-up. Twenty-two percent of the patients received a cancer diagnosis at the end of follow-up. Patients presented initially with a high burden of symptoms, less role and emotional functioning and a lower global health/QoL. Most domains improved after diagnosis and no clinically important difference between baseline and follow-up scores was found. Patients reported effects on HRQoL both at baseline and at follow-up compared with the Danish reference population and had similar scores as a cancer reference population. Co-morbidity, being unemployed and receiving a cancer diagnosis had the greatest effect on HRQoL around the time of diagnosis. Conclusions: Patients with non-specific symptoms reported an affected HRQoL while undergoing a diagnostic evaluation for possible cancer. Morbidity, being unemployed and receiving a cancer diagnosis had the greatest effect on HRQoL around the time of diagnosis. PMID:26840866
Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini
2014-12-01
The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales as these evolve in time. In this paper, wavelet-based multiscale performance measures for hydrological models are proposed and tested (i.e., the Multiscale Nash-Sutcliffe Criteria and the Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, modeled and observed time series are decomposed using the Discrete Wavelet Transform (here, the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing errors, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real-data case studies included simulation results from the process-based Soil Water Assessment Tool (SWAT) model, as well as statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the Wavelet-Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used.
The study also explored the effect of the choice of wavelet on multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are more reliable measures than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical), and ii) help in model calibration.
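The idea of scoring model fit scale by scale can be sketched with a simple smoothing-based decomposition standing in for the à trous wavelet transform used in the paper (the series, windows, and filter are illustrative, not the authors' implementation):

```python
import numpy as np

def smooth(x, w):
    """Centered moving average with zero-padded edges (w = 1 is the identity)."""
    return x if w == 1 else np.convolve(x, np.ones(w) / w, mode="same")

def multiscale_nse(obs, sim, windows=(1, 4, 16, 64)):
    """Nash-Sutcliffe efficiency of the simulation against the observations
    after both are smoothed to successively coarser time scales."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    scores = []
    for w in windows:
        o, s = smooth(obs, w), smooth(sim, w)
        scores.append(1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2))
    return scores

# Synthetic test: the "model" reproduces the slow flow component but entirely
# misses the fast one, so its score improves toward coarser scales.
t = np.arange(512)
obs = np.sin(2 * np.pi * t / 128) + 0.3 * np.sin(2 * np.pi * t / 8)
sim = np.sin(2 * np.pi * t / 128)
scores = multiscale_nse(obs, sim)
```

A single global NSC would report one middling number for this model; the scale-by-scale scores reveal that the error is confined to the fine scales, which is precisely the diagnostic value claimed for the MNSC.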
Universal Linear Scaling of Permeability and Time for Heterogeneous Fracture Dissolution
NASA Astrophysics Data System (ADS)
Wang, L.; Cardenas, M. B.
2017-12-01
Fractures change dynamically over geological time scales due to mechanical deformation and chemical reactions. However, the latter mechanism remains poorly understood with respect to expanding fractures, in which flow and reactive transport are positively coupled, i.e., as a fracture expands, so do its permeability (k) and thus the flow and reactive transport processes. To unravel this coupling, we consider a self-enhancing process in which acidic fluid expands a fracture, i.e., CO2-saturated brine dissolving a calcite fracture. We rigorously derive a theory, for the first time, showing that fracture permeability increases linearly with time [Wang and Cardenas, 2017]. To validate this theory, we resort to direct simulation that solves the Navier-Stokes and advection-diffusion equations with a mesh that moves according to the dynamic dissolution process in two-dimensional (2D) fractures. We find that k increases slowly at first, until the dissolution front breaks through the outlet boundary, at which point we observe a rapid increase in k, i.e., the linear time-dependence of k sets in. The theory agrees well with numerical observations across a broad range of Peclet and Damkohler numbers in homogeneous and heterogeneous 2D fractures. Moreover, the theoretical linear scaling relationship between k and time matches well with experimental observations of dissolution in three-dimensional (3D) fractures. To further attest to our theory's universality for 3D heterogeneous fractures across a broad range of roughness and correlation lengths of the aperture field, we develop a depth-averaged model that simulates the process-based reactive transport. The simulation results show that, regardless of a wide variety of dissolution patterns, such as the presence of dissolution fingers and preferential dissolution paths, the linear scaling relationship between k and time holds.
Our theory sheds light on predicting permeability evolution in many geological settings when the self-enhancing process is relevant. References: Wang, L., and M. B. Cardenas (2017), Linear permeability evolution of expanding conduits due to feedback between flow and fast phase change, Geophys. Res. Lett., 44(9), 4116-4123, doi: 10.1002/2017gl073161.
Radiology-led Follow-up System for IVC Filters: Effects on Retrieval Rates and Times
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, L.; Taylor, J.; Munneke, G.
Purpose: Successful IVC filter retrieval rates fall with time. Serious complications have been reported following attempts to remove filters after 3-18 months. Failed retrieval may be associated with adverse clinical sequelae. This study explored whether retrieval rates improve if interventional radiologists organize patient follow-up, rather than relying on the referring clinicians. Methods: Proactive follow-up of patients who undergo filter placement was implemented in May 2008. At the time of filter placement, a report was issued to the referring consultant notifying them of the advised timeframe for filter retrieval. Clinicians were contacted to arrange retrieval within 30 days. We compared this with our practice for the preceding year. Results: The numbers of filters inserted during the two time periods were similar, as were the numbers of retrieval attempts and the time scale at which they occurred. The rate of successful retrievals increased, but not significantly. The major changes were better documentation of filter types and better clinical follow-up. After the change in practice, only one patient was lost to follow-up, compared with six in the preceding year. Conclusions: Although there was no significant improvement in retrieval rates, the proactive, radiology-led approach improved follow-up and documentation, ensuring that a clinical decision was made about how long the filter was required and whether retrieval should be attempted, and ensuring that patients were not lost to follow-up.
NASA Astrophysics Data System (ADS)
Ritsema, Jeroen; Garnero, Edward; Lay, Thorne
1997-01-01
A new approach for constraining the seismic shear velocity structure above the core-mantle boundary is introduced, whereby SH-SKS differential travel times, amplitude ratios of SV/SKS, and Sdiff waveshapes are simultaneously modeled. This procedure is applied to the lower mantle beneath the central Pacific using data from numerous deep-focus southwest Pacific earthquakes recorded in North America. We analyze 90 broadband and 248 digitized analog recordings for this source-receiver geometry. SH-SKS times are highly variable and up to 10 s larger than standard reference model predictions, indicating the presence of laterally varying low shear velocities in the study area. The travel times, however, do not constrain the depth extent or velocity gradient of the low-velocity region. SV/SKS amplitude ratios and SH waveforms are sensitive to the radial shear velocity profile and, when analyzed simultaneously with SH-SKS times, reveal up to 3% shear velocity reductions restricted to the lowermost 190±50 km of the mantle. Our preferred model for the central-eastern Pacific region (M1) has a strong negative gradient (with 0.5% reduction in velocity relative to the preliminary reference Earth model (PREM) at 2700 km depth and 3% reduction at 2891 km depth) and slight velocity reductions from 2000 to 2700 km depth (0-0.5% lower than PREM). Significant small-scale (100-500 km) shear velocity heterogeneity (0.5%-1%) is required to explain scatter in the differential times and amplitude ratios.
An approach to an acute emotional stress reference scale.
Garzon-Rey, J M; Arza, A; de-la-Camara, C; Lobo, A; Armario, A; Aguilo, J
2017-06-16
The clinical diagnosis aims to identify the degree of affectation of the psycho-physical state of the patient as a guide to therapeutic intervention. In stress, the lack of a measurement tool based on a reference makes it difficult to quantitatively assess this degree of affectation. The objective is to define and perform a primary assessment of a standard reference for measuring acute emotional stress from the markers identified as indicators of its degree. Psychometric tests and biochemical variables are, in general, the stress measurements most accepted by the scientific community. Each of them probably responds to different and complementary processes related to the reaction to a stress stimulus. The proposed reference is a weighted mean of these indicators, with relative weights assigned in accordance with a principal components analysis. An experimental study was conducted on 40 healthy young people subjected to the psychosocial stress stimulus of the Trier Social Stress Test in order to perform a primary assessment and consistency check of the proposed reference. The proposed scale clearly differentiates between the induced relaxed and stressed states. Accepting the subjectivity of the definition and the lack of subsequent validation with new experimental data, the proposed standard differentiates between a relaxed state and an emotional stress state triggered by a moderate stress stimulus such as the Trier Social Stress Test. The scale is robust: variations in the percentage composition slightly affect the score but do not affect the valid differentiation between states.
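The construction described, a weighted mean of standardized markers with relative weights taken from a principal components analysis, can be sketched as follows (the marker names and data are hypothetical, not the study's measurements):

```python
import numpy as np

def composite_score(X):
    """Weighted-mean stress score: z-score each marker column, take the
    loadings of the first principal component (the covariance matrix's
    leading eigenvector), normalize them to sum to 1, and average."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    w = np.abs(vecs[:, -1])               # leading loadings, sign-fixed
    w = w / w.sum()                       # relative weights summing to 1
    return Z @ w

# Hypothetical markers (columns): cortisol, heart rate, STAI anxiety score,
# all driven by a shared latent "stress" factor plus independent noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=100)
X = np.column_stack([
    10.0 + 2.0 * latent + rng.normal(scale=0.5, size=100),
    70.0 + 8.0 * latent + rng.normal(scale=2.0, size=100),
    30.0 + 5.0 * latent + rng.normal(scale=1.5, size=100),
])
scores = composite_score(X)               # tracks the latent stress level
```

The point of the PCA weighting is visible here: markers that share the common stress-related variance receive larger weights, so the composite tracks the underlying state better than any single marker with its own noise.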
Example-Based Image Colorization Using Locality Consistent Sparse Representation.
Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L
2017-11-01
Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.
ERIC Educational Resources Information Center
Dorans, Neil J.
2002-01-01
The history of SAT® score scales is summarized, and the need for realigning SAT score scales is demonstrated. The process employed to produce the conversions that take scores from the original SAT scales to recentered scales in which reference group scores are centered near the midpoint of the score-reporting range is laid out. For the purposes of…
Analytical approach to impurity transport studies: Charge state dynamics in tokamak plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shurygin, V. A.
2006-08-15
Ionization and recombination of plasma impurities govern their charge-state kinetics, which, superimposed on the dynamics of the ions, implies a superposition of the appropriate probabilities and gives rise to impurity charge-state dynamics. The latter is considered in terms of a vector field of conditional probabilities and presented by a vector charge-state distribution function with coupled equations of the Kolmogorov type. Analytical solutions of a diffusion problem are derived with the basic spatial and temporal dimensionless parameters. Analysis shows that the empirical scaling D_A ∝ n_e^(-1) [K. Krieger, G. Fussmann, and the ASDEX Upgrade Team, Nucl. Fusion 30, 2392 (1990)] can be explained by the ratio of the diffusive and kinetic terms, D_A/(n_e a^2), being used instead of the diffusivity D_A. The derived time scales of charge-state dynamics are given by a sum of the diffusive and kinetic times. Detailed simulations of charge-state dynamics are performed for argon impurity and compared with the reference modeling.
NASA Astrophysics Data System (ADS)
Connell, J. R.
1982-01-01
The results of anemometer, hot-wire anemometer, and laser anemometer array and crosswind sampling of wind speed and turbulence in an area swept by intermediate-to-large wind turbine blades are presented, with comparisons made with a theoretical model for the wind fluctuations. A rotating frame of reference was simulated by timing the anemometric readings at different points of the actuator disk area to coincide with the moment a turbine blade would pass through the point. The hot-wire sensors were mounted on an actual rotating boom, while the laser scanned the wind velocity field in a vertical crosswind circle. The midfrequency region of the turbulence spectrum was found to be depleted, with energy shifted to the high end of the spectrum, with an additional peak at the rotation frequency of the rotor. A model is developed, assuming homogeneous, isotropic turbulence, to reproduce the observed spectra and verify and extend scaling relations using turbine and atmospheric length and time scales. The model is regarded as useful for selecting wind turbine hub heights and rotor rotation rates.
Lebon, G S Bruno; Tzanakis, I; Djambazov, G; Pericleous, K; Eskin, D G
2017-07-01
To address difficulties in treating large volumes of liquid metal with ultrasound, a fundamental study of acoustic cavitation in liquid aluminium, expressed in an experimentally validated numerical model, is presented in this paper. To improve the understanding of the cavitation process, a non-linear acoustic model is validated against reference water pressure measurements from acoustic waves produced by an immersed horn. A high-order method is used to discretize the wave equation in both space and time. These discretized equations are coupled to the Rayleigh-Plesset equation using two different time scales to couple the bubble and flow scales, resulting in a stable, fast, and reasonably accurate method for the prediction of acoustic pressures in cavitating liquids. This method is then applied to the context of treatment of liquid aluminium, where it predicts that the most intense cavitation activity is localised below the vibrating horn and estimates the acoustic decay below the sonotrode with reasonable qualitative agreement with experimental data. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
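The bubble-scale part of such a model rests on the Rayleigh-Plesset equation. Below is a minimal integration sketch for a single air bubble in water under a sinusoidal drive; all parameter values are illustrative, not the paper's aluminium set, and the high-order acoustic coupling is omitted:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative water properties and drive; not the paper's aluminium values.
rho, mu, sigma = 998.0, 1.0e-3, 0.0725     # density, viscosity, surface tension
p0, kappa = 101325.0, 1.4                  # ambient pressure, polytropic index
R0 = 5e-6                                  # equilibrium bubble radius [m]
pa, f = 0.5e5, 20e3                        # drive amplitude [Pa], frequency [Hz]

def rayleigh_plesset(t, y):
    """Rayleigh-Plesset dynamics of the bubble radius under an acoustic drive."""
    R, Rdot = y
    p_gas = (p0 + 2.0 * sigma / R0) * (R0 / R) ** (3.0 * kappa)
    p_ext = p0 + pa * np.sin(2.0 * np.pi * f * t)
    Rddot = ((p_gas - 2.0 * sigma / R - 4.0 * mu * Rdot / R - p_ext) / (rho * R)
             - 1.5 * Rdot ** 2 / R)
    return [Rdot, Rddot]

# Integrate over five acoustic cycles, resolving well below the drive period.
sol = solve_ivp(rayleigh_plesset, (0.0, 5.0 / f), [R0, 0.0],
                max_step=1.0 / (200.0 * f), rtol=1e-8, atol=1e-12)
```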
Turbulence as a contributor to intermediate energy storage during solar flares
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bornmann, P.L.
Turbulence is considered as a mechanism for converting the energy observed as mass motions during the impulsive phase into the thermal energy observed during the gradual phase of solar flares. The kinetic energy of the large-scale eddies driven by the upflowing material continuously cascades to smaller-scale eddies until viscosity is able to convert it into thermal energy. The general properties of steady-state, homogeneous fluid turbulence in a nonmagnetic plasma and the properties of turbulent decay are reviewed. The time-dependent behavior of the velocities and energies observed by the X-Ray Polychromator (XRP) instrument on the SMM during the November 5, 1980 flare is compared with the properties of turbulence. This study indicates that turbulence may play a role in flare energetics and may account for a fraction of the total amount of thermal energy observed during the gradual phase. The rate at which the observed flare velocities decrease is consistent with the decay of turbulent energy but may be too rapid to account for the entire time delay between the impulsive and gradual phases.
NASA Astrophysics Data System (ADS)
Massei, N.; Fournier, M.
2010-12-01
Daily Seine river flow from 1950 to 2008 was analyzed using the Hilbert-Huang Transform (HHT). Over the last ten years, this method, which combines the so-called Empirical Mode Decomposition (EMD) multiresolution analysis and the Hilbert transform, has proven its efficiency for the analysis of transient oscillatory signals, although the mathematical definition of the EMD is not yet fully established. HHT also provides an interesting alternative to other time-frequency or time-scale analyses of non-stationary signals, the most famous of which are wavelet-based approaches. In this application of HHT to the analysis of the hydrological variability of the Seine river, we seek to characterize the interannual patterns of daily flow, differentiate them from the short-term dynamics and eventually interpret them in the context of regional climate regime fluctuations. To this end, HHT is also applied to the North Atlantic Oscillation (NAO) through the annual winter-months NAO index time series. For both the hydrological and climatic signals, dominant variability scales are extracted and their temporal variations analyzed by determining the instantaneous frequency of each component. When compared to previous results obtained from the continuous wavelet transform (CWT) on the same data, the HHT results highlighted the same scales and broadly the same internal components for each signal. However, HHT allowed the identification and extraction of many more similar features (e.g., around 7 yr) between the NAO and Seine flow during the 1950-2008 period than was obtained from CWT, which is to say that variability scales in flow likely to originate from climatic regime fluctuations were more properly identified in river flow.
In addition, HHT allowed a more accurate determination of singularities in the natural processes analyzed than CWT, whose time-frequency resolution partly depends on the basic properties of the filter (i.e., the reference wavelet chosen initially). Compared to CWT, or even to discrete wavelet multiresolution analysis, HHT is auto-adaptive and non-parametric, allows an orthogonal decomposition of the analyzed signal, and provides a more accurate estimation of changing variability scales across time for highly transient signals.
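The instantaneous-frequency step of HHT can be sketched with the Hilbert transform alone, applied to a synthetic oscillation whose frequency drifts (the EMD decomposition that would normally precede this step is omitted):

```python
import numpy as np
from scipy.signal import hilbert

fs = 200.0                                  # sampling rate [Hz]; synthetic demo
t = np.arange(0.0, 10.0, 1.0 / fs)
x = np.sin(2 * np.pi * (2.0 + 0.3 * t) * t) # frequency drifts from 2 to 8 Hz

# The analytic signal from the Hilbert transform yields an instantaneous
# phase; its derivative is the instantaneous frequency that HHT reports
# for each EMD mode.
phase = np.unwrap(np.angle(hilbert(x)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)
```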
ERIC Educational Resources Information Center
Myers, Carl L.; Bour, Jennifer L.; Sidebottom, Kristina J.; Murphy, Sara B.; Hakman, Melissa
2010-01-01
Broad-band or multidimensional behavior-rating scales are common tools for evaluating children. Two popular behavior-rating scales, the Behavior Assessment System for Children, Second Edition (BASC-2; Reynolds & Kamphaus, 2004) and the Child Behavior Checklist (CBCL; Achenbach & Rescorla, 2000), have undergone downward extensions so that…
Evaluation of Icing Scaling on Swept NACA 0012 Airfoil Models
NASA Technical Reports Server (NTRS)
Tsao, Jen-Ching; Lee, Sam
2012-01-01
Icing scaling tests in the NASA Glenn Icing Research Tunnel (IRT) were performed on swept wing models using existing recommended scaling methods that were originally developed for straight wings. Some needed modifications to the stagnation-point local collection efficiency (i.e., β0) calculation and the corresponding convective heat transfer coefficient for swept NACA 0012 airfoil models were studied and reported in 2009, and those correlations are used in the current study. The reference tests used a 91.4-cm chord, 152.4-cm span, adjustable-sweep airfoil model of NACA 0012 profile at velocities of 100 and 150 knots and MVD of 44 and 93 μm. The scale-to-reference model size ratio was 1:2.4. All tests were conducted at 0° angle of attack (AoA) and 45° sweep angle. Ice shape comparison results are presented for stagnation-point freezing fractions in the range of 0.4 to 1.0. Preliminary results showed that good scaling was achieved for the conditions tested by using the modified scaling methods developed for swept wings.
No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.
Liu, Tsung-Jung; Liu, Kuan-Hsien
2018-03-01
A no-reference (NR) learning-based approach to assessing image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) that can predict scores. Scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, an ensemble method is used to combine the prediction results from the selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one; they turn out to perform better than the original single-scale method. Because they use features from five different domains at multiple image scales, and use the outputs (scores) from selected score-prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They can also be used to evaluate images with authentic distortions. Extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
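The scorer-ensemble idea can be sketched with one least-squares scorer per feature domain whose predictions are then averaged; the feature blocks and quality scores below are synthetic stand-ins, not the paper's features or learners:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
# Synthetic feature blocks, one per perceptual domain; real features would
# come from brightness/contrast/color/distortion/texture measurements.
domains = {name: rng.normal(size=(n, 4))
           for name in ["brightness", "contrast", "color", "distortion", "texture"]}
quality = sum(f[:, 0] for f in domains.values()) + rng.normal(scale=0.3, size=n)

# One least-squares "scorer" per domain; the ensemble averages their scores.
scores = []
for f in domains.values():
    A = np.column_stack([f, np.ones(n)])   # domain features plus intercept
    w, *_ = np.linalg.lstsq(A, quality, rcond=None)
    scores.append(A @ w)
ensemble = np.mean(scores, axis=0)
```

Each scorer only sees its own domain, so it explains part of the quality variation; averaging the scorers recovers far more of it than any single one.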
New guidelines for δ13C measurements
Coplen, Tyler B.; Brand, Willi A.; Gehre, Matthias; Groning, Manfred; Meijer, Harro A. J.; Toman, Blaza; Verkouteren, R. Michael
2006-01-01
Consistency of δ13C measurements can be improved 39-47% by anchoring the δ13C scale with two isotopic reference materials differing substantially in 13C/12C. It is recommended that δ13C values of both organic and inorganic materials be measured and expressed relative to VPDB (Vienna Peedee belemnite) on a scale normalized by assigning consensus values of −46.6‰ to L-SVEC lithium carbonate and +1.95‰ to NBS 19 calcium carbonate. Uncertainties of other reference material values on this scale are improved by factors of up to two or more, and the values of some have been notably shifted: the δ13C of NBS 22 oil is −30.03‰.
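The two-anchor normalization this recommendation implies is a linear mapping through the two consensus values; the instrument readings in the example below are invented:

```python
def normalize_d13c(raw, raw_lsvec, raw_nbs19):
    """Map a raw instrument delta13C reading onto the VPDB scale using
    two-point normalization anchored at the consensus values for L-SVEC
    (-46.6 permil) and NBS 19 (+1.95 permil). The raw_* arguments are the
    values the instrument reports for the two anchors in the same run."""
    LSVEC, NBS19 = -46.6, 1.95
    slope = (NBS19 - LSVEC) / (raw_nbs19 - raw_lsvec)
    return LSVEC + slope * (raw - raw_lsvec)

# Hypothetical run: the anchors read -46.1 and +2.2, and a sample reads -30.0.
d13c = normalize_d13c(-30.0, raw_lsvec=-46.1, raw_nbs19=2.2)
```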
Differential item functioning magnitude and impact measures from item response theory models.
Kleinman, Marjorie; Teresi, Jeanne A
2016-01-01
Measures of the magnitude and impact of differential item functioning (DIF) at the item and scale level, respectively, are presented and reviewed in this paper. Most measures are based on item response theory models. Magnitude refers to item-level effect sizes, whereas impact refers to differences between groups at the scale-score level. Reviewed are magnitude measures based on group differences in the expected item scores, and impact measures based on differences in the expected scale scores. The similarities among these indices are demonstrated. Various software packages that provide magnitude and impact measures are described, and new software is presented that computes all of the available statistics conveniently in one program, with explanations of their relationships to one another.
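A magnitude measure of the kind reviewed, the group difference in expected item scores under a 2PL item response model weighted by the focal group's ability density, can be sketched as follows (the item parameters are invented):

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item parameters for the reference and focal groups; the
# shift in difficulty b represents uniform DIF.
a_ref, b_ref = 1.2, 0.0
a_foc, b_foc = 1.2, 0.4

# Magnitude as the difference in expected item scores, weighted by the
# focal group's ability density (standard normal on a quadrature grid).
theta = np.linspace(-4.0, 4.0, 161)
w = np.exp(-0.5 * theta ** 2)
w /= w.sum()
dif_magnitude = float(np.sum(w * (p_2pl(theta, a_ref, b_ref)
                                  - p_2pl(theta, a_foc, b_foc))))
```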
Zhao, Yonggui; Fang, Yang; Jin, Yanling; Huang, Jun; Bao, Shu; He, Zhiming; Wang, Feng; Zhao, Hai
2014-01-01
The effects of water depth, coverage rate and harvest regime on nutrient removal from wastewater and high-protein biomass production were assessed in a duckweed-based (Lemna aequinoctialis) pilot-scale wastewater treatment system (10 basins × 12 m(2)) that is located near Dianchi Lake in China. The results indicated that a water depth of 50 cm, a coverage rate of 150% and a harvest regime of 4 days were preferable conditions, under which excellent records of high-protein duckweed (dry matter production of 6.65 g/m(2)/d with crude protein content of 36.16% and phosphorus content of 1.46%) were obtained at a temperature of 12-21 °C. At the same time, the system achieved a removal efficiency of 66.16, 23.1, 48.3 and 76.52% for NH4(+)-N, TN, TP and turbidity, respectively, with the considerable removal rate of 0.465 g/m(2)/d for TN and 0.134 g/m(2)/d for TP at a hydraulic retention time of 6 days. Additionally, it was found that a lower duckweed density could lead to higher dissolved oxygen in the water and thus a higher removal percentage of NH4(+)-N by nitrobacteria. This study identifies the preferable operating conditions for wastewater treatment and high-protein biomass production in a duckweed-based pilot-scale system, supplying an important reference for further large-scale applications of duckweed.
Chen, Ruifeng; Zhu, Lijun; Lv, Lihuo; Yao, Su; Li, Bin; Qian, Junqing
2017-06-01
Optimization of compatible solute (ectoine) extraction and purification from Halomonas elongata cell fermentation was investigated in laboratory tests for a large-scale commercial production project. After culturing H. elongata cells in the developed medium at 28 °C for 23-30 h, we obtained an average ectoine yield and biomass of 15.9 g/L and 92.9 (OD 600), respectively. Cell lysis was performed with acid treatment at moderately high temperature (60-70 °C). The downstream processing operations were designed as follows: filtration, desalination, cation exchange, extraction of crude product and three rounds of refining. Of these, the cation exchange and crude-product extraction achieved high average recovery rates of 95 and 96%, whereas substantial loss rates of 19 and 15% were observed during the filtration and desalination, respectively. Combined with the recovery of ectoine from the mother liquor of the three refining rounds, the average overall yield (referring to the amount of ectoine synthesized in cells) and purity of the final product were 43% and over 98%, respectively. However, the key factor limiting production efficiency was not yield but the time used in the extraction of the crude product, in particular the crystallization step from water, which took 24-72 h depending on the production scale. Although, with regard to productivity and simplicity on the laboratory scale, the method described here cannot compete with other investigations, in this study we obtained a higher purity of ectoine and provided downstream processes that are capable of operating on an industrial scale.
NASA Astrophysics Data System (ADS)
Fortiz, V.; Thirumalai, K.; Richey, J. N.; Quinn, T. M.
2014-12-01
We present a replicated record of paired foraminiferal δ18O and Mg/Ca variations in multi-cores collected from the Garrison Basin (26º43'N, 93º55'W) in the northern Gulf of Mexico (GOM). Using δ18O (a sea surface temperature, SST, and sea surface salinity, SSS, proxy) and Mg/Ca (an SST proxy) variations in the non-encrusted planktic foraminifer Globorotalia truncatulinoides, we produce a time series spanning the last two millennia that is characterized by centennial-scale climate variability. We interpret geochemical variations in G. truncatulinoides to reflect winter climate variability because data from a sediment trap, located ~350 km east of the core site, reveal that the annual flux of G. truncatulinoides is heavily weighted towards winter (peak production in January-February; Spear et al., 2011). Similar centennial-scale variability is also observed in the foraminiferal geochemistry of Globigerinoides ruber in the same multi-cores, which likely reflects mean annual climate variations. Our replicated results and comparisons to other SST reconstructions from the region lend confidence that the northern GOM surface ocean underwent large, centennial-scale variability, most likely dominated by changes in winter climate. This variability occurred in a time period in which climate forcing was small and background conditions were similar to pre-industrial times. References: Spear, J.W., Poore, R.Z., and Quinn, T.M., 2011, Globorotalia truncatulinoides (dextral) Mg/Ca as a proxy for Gulf of Mexico winter mixed-layer temperature: Evidence from a sediment trap in the northern Gulf of Mexico. Marine Micropaleontology, 80, 53-61.
Coplen, T.B.; Qi, H.
2012-01-01
Because there are no internationally distributed stable hydrogen and oxygen isotopic reference materials of human hair, the U.S. Geological Survey (USGS) has prepared two such materials, USGS42 and USGS43. These reference materials span values commonly encountered in human hair stable isotope analysis and are isotopically homogeneous at sample sizes larger than 0.2 mg. The USGS42 and USGS43 human-hair isotopic reference materials are intended for calibration of δ(2)H and δ(18)O measurements of unknown human hair by quantifying (1) drift with time, (2) mass-dependent isotopic fractionation, and (3) isotope-ratio-scale contraction. While they are intended for measurements of the stable isotopes of hydrogen and oxygen, they also are suitable for measurements of the stable isotopes of carbon, nitrogen, and sulfur in human and mammalian hair. Preliminary isotopic compositions of the non-exchangeable fractions of these materials are USGS42 (Tibetan hair): δ(2)H(VSMOW-SLAP) = -78.5 ± 2.3‰ (n = 62) and δ(18)O(VSMOW-SLAP) = +8.56 ± 0.10‰ (n = 18); USGS43 (Indian hair): δ(2)H(VSMOW-SLAP) = -50.3 ± 2.8‰ (n = 64) and δ(18)O(VSMOW-SLAP) = +14.11 ± 0.10‰ (n = 18). Using the recommended analytical protocols presented herein for δ(2)H(VSMOW-SLAP) and δ(18)O(VSMOW-SLAP) measurements, the least-squares regression of 11 human hair reference materials is δ(2)H(VSMOW-SLAP) = 6.085 δ(18)O(VSMOW-SLAP) - 136.0‰, with an R-squared value of 0.95. The δ(2)H difference between the calibrated results of human hair in this investigation and a commonly accepted human-hair relationship is a remarkable 34‰. It is critical that readers pay attention to the δ(2)H(VSMOW-SLAP) and δ(18)O(VSMOW-SLAP) values of the isotopic reference materials in publications, and they need to adjust the δ(2)H(VSMOW-SLAP) and δ(18)O(VSMOW-SLAP) measurement results of human hair in previous publications, as needed, to ensure all results are on the same scales.
An update to the analysis of the Canadian Spatial Reference System
NASA Astrophysics Data System (ADS)
Ferland, R.; Piraszewski, M.; Craymer, M.
2015-12-01
The primary objective of the Canadian Spatial Reference System (CSRS) is to provide users access to a consistent geo-referencing infrastructure over the Canadian landmass. Global Navigation Satellite System (GNSS) positioning accuracy requirements range from the meter level to the mm level (e.g., crustal deformation). The highest level of the Canadian infrastructure consists of a network of continuously operating GPS and GNSS receivers, referred to as active control stations. The network includes all Canadian public active control stations, some bordering US CORS and Alaska stations, Greenland active control stations, as well as a selection of IGS reference frame stations. The Bernese analysis software is used for the daily processing and the combination into weekly solutions, which form the basis for this analysis. IGS weekly final orbit, Earth Rotation Parameters (ERPs) and coordinate products are used in the processing. For the more demanding users, the time-dependent change of station coordinates is often more important. All station coordinate estimates and related covariance information are used in this analysis. For each input solution, a variance factor, translation, rotation and scale (and, if needed, their rates), or subsets of these, are estimated. In the combination of these weekly solutions, station positions and velocities are estimated. Since the time series from the stations in these networks often experience changes in behavior, new parameters (or reuse of existing ones) are generally introduced in these situations. As is often the case with real data, unrealistic coordinates may occur; automatic detection and removal of outliers is used in these cases. For the transformation, position, and velocity parameters, loose a priori estimates and uncertainties are provided. Alignment to the latest IGb08 realization of ITRF using the usual Helmert transformation is also performed during the adjustment.
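The alignment step mentioned at the end relies on a Helmert (similarity) transformation. A minimal sketch of the standard small-angle form follows, with translation in metres, scale in parts per billion and rotations in milliarcseconds; all parameter values and the station position are invented for illustration:

```python
import numpy as np

def helmert(xyz, t, scale_ppb, rot_mas):
    """Small-angle 7-parameter Helmert transformation of an ECEF position.
    Translation t in metres, scale in parts per billion, rotations in
    milliarcseconds. Parameter values used below are invented."""
    mas = np.radians(1.0 / 3.6e6)           # one milliarcsecond in radians
    rx, ry, rz = np.asarray(rot_mas, dtype=float) * mas
    R = np.array([[0.0, -rz,  ry],
                  [ rz, 0.0, -rx],
                  [-ry,  rx, 0.0]])
    xyz = np.asarray(xyz, dtype=float)
    return xyz + np.asarray(t, dtype=float) + scale_ppb * 1e-9 * xyz + R @ xyz

# Rough ECEF coordinates of a hypothetical Canadian station [m].
station = np.array([1113000.0, -4308000.0, 4520000.0])
aligned = helmert(station, t=[0.01, -0.005, 0.02],
                  scale_ppb=1.1, rot_mas=[0.1, -0.2, 0.05])
```

At these parameter magnitudes the position shift is at the centimetre level, which is why ppb-scale and sub-mas rotation consistency matters for a mm-level frame.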
Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald
2007-05-01
(R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling, due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).
Realizing a terrestrial reference frame using the Global Positioning System
NASA Astrophysics Data System (ADS)
Haines, Bruce J.; Bar-Sever, Yoaz E.; Bertiger, Willy I.; Desai, Shailen D.; Harvey, Nate; Sibois, Aurore E.; Weiss, Jan P.
2015-08-01
We describe a terrestrial reference frame (TRF) realization based on Global Positioning System (GPS) data alone. Our approach rests on a highly dynamic, long-arc (9 day) estimation strategy and on GPS satellite antenna calibrations derived from Gravity Recovery and Climate Experiment and TOPEX/Poseidon low Earth orbit receiver GPS data. Based on nearly 17 years of data (1997-2013), our solution for scale rate agrees with International Terrestrial Reference Frame (ITRF)2008 to 0.03 ppb yr-1, and our solution for 3-D origin rate agrees with ITRF2008 to 0.4 mm yr-1. Absolute scale differs by 1.1 ppb (7 mm at the Earth's surface) and 3-D origin by 8 mm. These differences lie within estimated error levels for the contemporary TRF.
Visentin, G; Penasa, M; Gottardo, P; Cassandro, M; De Marchi, M
2016-10-01
Milk minerals and coagulation properties are important for both consumers and processors, and they can aid in increasing milk added value. However, large-scale monitoring of these traits is hampered by expensive and time-consuming reference analyses. The objective of the present study was to develop prediction models for major mineral contents (Ca, K, Mg, Na, and P) and milk coagulation properties (MCP: rennet coagulation time, curd-firming time, and curd firmness) using mid-infrared spectroscopy. Individual milk samples (n=923) of Holstein-Friesian, Brown Swiss, Alpine Grey, and Simmental cows were collected from single-breed herds between January and December 2014. Reference analysis for the determination of both mineral contents and MCP was undertaken with standardized methods. For each milk sample, the mid-infrared spectrum in the range from 900 to 5,000cm(-1) was stored. Prediction models were calibrated using partial least squares regression coupled with a wavenumber selection technique called uninformative variable elimination, to improve model accuracy, and validated both internally and externally. The average reduction of wavenumbers used in partial least squares regression was 80%, which was accompanied by an average increment of 20% of the explained variance in external validation. The proportion of explained variance in external validation was about 70% for P, K, Ca, and Mg, and it was lower (40%) for Na. Milk coagulation properties prediction models explained between 54% (rennet coagulation time) and 56% (curd-firming time) of the total variance in external validation. The ratio of standard deviation of each trait to the respective root mean square error of prediction, which is an indicator of the predictive ability of an equation, suggested that the developed models might be effective for screening and collection of milk minerals and coagulation properties at the population level. 
Although prediction equations were not accurate enough to be proposed for analytic purposes, mid-infrared spectroscopy predictions could be evaluated as phenotypic information to genetically improve milk minerals and MCP on a large scale. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.