Day, Douglas G; Walters, Thomas R; Schwartz, Gail F; Mundorf, Thomas K; Liu, Charlie; Schiffman, Rhett M; Bejanian, Marina
2013-01-01
Background/Aim To evaluate the efficacy and safety of bimatoprost 0.03% preservative-free (PF) ophthalmic solution versus bimatoprost 0.03% (Lumigan) ophthalmic solution for glaucoma or ocular hypertension. Methods In this double-masked, parallel-group study, patients were randomised to bimatoprost PF or bimatoprost for 12 weeks. The primary analysis for non-inferiority was change from baseline in worse eye intraocular pressure (IOP) in the per-protocol population at week 12. For equivalence, it was average eye IOP in the intent-to-treat population at each time point at weeks 2, 6 and 12. Results 597 patients were randomised (bimatoprost PF, n=302; bimatoprost, n=295). The 95% CI upper limit for worse eye IOP change from baseline was <1.5 mm Hg at each week 12 time point, meeting prespecified non-inferiority criteria. The 95% CI upper limit for the treatment difference for average IOP was 0.69 mm Hg and the lower limit was −0.50 mm Hg at all follow-up time points (hours 0, 2 and 8 at weeks 2, 6 and 12), meeting equivalence criteria. Both treatments showed decreases in mean average eye IOP at all follow-up time points (p<0.001) and were safe and well tolerated. Conclusions Bimatoprost PF is non-inferior and equivalent to bimatoprost in its ability to lower IOP, with a safety profile similar to that of bimatoprost. PMID:23743437
Normal aging reduces motor synergies in manual pointing.
Verrel, Julius; Lövdén, Martin; Lindenberger, Ulman
2012-01-01
Depending upon its organization, movement variability may reflect poor or flexible control of a motor task. We studied adult age-related differences in the structure of postural variability in manual pointing using the uncontrolled manifold (UCM) method. Participants from 2 age groups (younger: 20-30 years; older: 70-80 years; 12 subjects per group) completed a total of 120 pointing trials to 2 different targets presented according to 3 schedules: blocked, alternating, and random. The age groups were similar with respect to basic kinematic variables, end point precision, as well as the accuracy of the biomechanical forward model of the arm. Following the uncontrolled manifold approach, goal-equivalent and nongoal-equivalent components of postural variability (goal-equivalent variability [GEV] and nongoal-equivalent variability [NGEV]) were determined for 5 time points of the movements (start, 10%, 50%, 90%, and end) and used to define a synergy index reflecting the flexibility/stability aspect of motor synergies. Toward the end of the movement, younger adults showed higher synergy indexes than older adults. Effects of target schedule were not reliable. We conclude that normal aging alters the organization of common multidegree-of-freedom movements, with older adults making less flexible use of motor abundance than younger adults. Copyright © 2012 Elsevier Inc. All rights reserved.
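The goal-equivalent/nongoal-equivalent split described above can be sketched numerically: joint-configuration variance is projected onto the null space of a task Jacobian (deviations there leave the end point unchanged) and onto its complement. The two-line synergy-index formula and the toy 3-joint, 1-D task below are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

# Sketch of the uncontrolled manifold (UCM) variance decomposition.
# GEV: variance per DOF within the null space of the task Jacobian J
# (leaves the pointing goal unchanged); NGEV: variance per DOF in the
# orthogonal complement. The synergy index convention used here,
# (GEV - NGEV) / (GEV + NGEV), is one common normalization.

def ucm_decompose(joint_configs, jacobian):
    """Split per-trial joint deviations into (GEV, NGEV), variance per DOF."""
    n_dof = joint_configs.shape[1]
    dev = joint_configs - joint_configs.mean(axis=0)
    # Orthonormal basis of the null space of J (goal-equivalent directions).
    _, s, vt = np.linalg.svd(jacobian)
    rank = int(np.sum(s > 1e-12))
    null_basis = vt[rank:].T                  # n_dof x d_ucm
    proj_ucm = dev @ null_basis               # components within the UCM
    d_ucm = null_basis.shape[1]
    d_ort = n_dof - d_ucm
    gev = np.sum(proj_ucm ** 2) / (len(dev) * d_ucm)
    ngev = (np.sum(dev ** 2) - np.sum(proj_ucm ** 2)) / (len(dev) * d_ort)
    return gev, ngev

# Toy data: 3 joint angles, a 1-D task (Jacobian is 1 x 3), 200 trials.
rng = np.random.default_rng(0)
J = np.array([[1.0, 0.5, 0.2]])
trials = rng.normal(size=(200, 3))
gev, ngev = ucm_decompose(trials, J)
synergy_index = (gev - ngev) / (gev + ngev)
```

For isotropic noise, GEV and NGEV are comparable and the index sits near zero; the paper's finding corresponds to a lower index (less goal-equivalent variance) in older adults near movement end.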
Spatial and temporal distribution of trunk-injected 14C-imidacloprid in Fraxinus trees.
Tanis, Sara R; Cregg, Bert M; Mota-Sanchez, David; McCullough, Deborah G; Poland, Therese M
2012-04-01
Since the discovery of Agrilus planipennis Fairmaire (emerald ash borer) in 2002, researchers have tested several methods of chemical control. Soil drench or trunk injection products containing imidacloprid are commonly used to control adults. However, efficacy can be highly variable, which may be due to uneven translocation of systemic insecticides. The purpose of this study was to determine whether sectored xylem anatomy might influence imidacloprid distribution in tree crowns. Imidacloprid equivalent concentrations were higher in leaves from branches in the plane of the injection point (0°) than in leaves from branches on the opposite side of the injection point (180°). Leaves from branches 90° to the right of injection points had higher imidacloprid equivalent concentrations than leaves from branches 90° to the left of injection points. Leaves and shoots had higher imidacloprid equivalent concentrations than roots and trunk cores, indicating that imidacloprid moves primarily through the xylem. Imidacloprid equivalent concentration in leaves varied over time and in relation to injection points. It is concluded that ash trees have sectored 'zigzag' xylem architecture patterns consistent with sectored flow distribution. This could lead to variable distribution of imidacloprid in tree crowns and therefore to variable control of A. planipennis. Copyright © 2012 Society of Chemical Industry.
ERIC Educational Resources Information Center
Green, Leonard; Myerson, Joel; Shah, Anuj K.; Estle, Sara J.; Holt, Daniel D.
2007-01-01
The current experiment examined whether adjusting-amount and adjusting-delay procedures provide equivalent measures of discounting. Pigeons' discounting on the two procedures was compared using a within-subject yoking technique in which the indifference point (number of pellets or time until reinforcement) obtained with one procedure determined…
Modelling and structural analysis of skull/cranial implant: beyond mid-line deformities.
Bogu, V Phanindra; Kumar, Y Ravi; Kumar Khanara, Asit
2017-01-01
This computational study explores modelling and finite element analysis of a cranial implant under intracranial pressure (ICP) conditions, covering both the normal ICP range (7 mm Hg to 15 mm Hg) and increased ICP (>15 mm Hg). The implant fixation points govern implant behaviour under intracranial pressure; increasing the number of fixation points changes the deformation and equivalent stress, and finite element analysis provides valuable insight into both. The patient's computed tomography (CT) data are processed in Mimics software to obtain the mesh model. The implant is modelled using a modified reverse engineering technique with the help of Rhinoceros software. This modelling method is applicable to all types of defects, including those beyond the mid-line and multiple defects. The implant is designed with eight fixation points and with ten fixation points. The mechanical deformation and equivalent (von Mises) stress are then calculated in ANSYS 15 software for distinct material properties: titanium alloy (Ti6Al4V), polymethyl methacrylate (PMMA) and polyether-ether-ketone (PEEK). It is observed that Ti6Al4V shows the lowest deformation and PEEK the lowest equivalent stress; among all materials, PEEK gives noticeably good results. A concept was thus established, and more clinically relevant results can be expected with the implementation of realistic 3D printed models in the future. This will allow physicians to gain knowledge and decrease surgery time through proper planning.
Thermal averages in a quantum point contact with a single coherent wave packet.
Heller, E J; Aidala, K E; LeRoy, B J; Bleszynski, A C; Kalben, A; Westervelt, R M; Maranowski, K D; Gossard, A C
2005-07-01
A novel formal equivalence between thermal averages of coherent properties (e.g., conductance) and time averages of a single wave packet arises for Fermi gases and certain geometries. In the case of one open channel in a quantum point contact (QPC), only one wave packet history, with the wave packet width equal to the thermal length, completely determines the thermally averaged conductance. The formal equivalence moreover allows very simple physical interpretations of interference features surviving under thermal averaging. Simply put, pieces of the thermal wave packet returning to the QPC along independent paths must arrive at the same time in order to interfere. Remarkably, one immediate result of this approach is that higher temperature leads to narrower wave packets and therefore better resolution of events in the time domain. In effect, experiments at 4.2 K are performing time-gated experiments at better than a gigahertz. Experiments involving thermally averaged ballistic conductance in 2DEGS are presented as an application of this picture.
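The closing claim, that 4.2 K thermal averaging amounts to sub-nanosecond time gating, can be checked at the order-of-magnitude level: the thermal energy scale k_B·T corresponds to a frequency f = k_B·T/h. Using k_B·T/h (rather than, say, k_B·T/ħ) is an assumption made only to get the order of magnitude.

```python
# Order-of-magnitude check: thermal energy at 4.2 K as a gating frequency.
K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s

def thermal_gating_frequency(temperature_k):
    """Frequency equivalent of the thermal energy scale, f = k_B*T/h."""
    return K_B * temperature_k / H

f = thermal_gating_frequency(4.2)   # roughly 9e10 Hz, i.e. tens of GHz
```

At 4.2 K this gives roughly 90 GHz, comfortably "better than a gigahertz" as the abstract states; higher temperature shortens the thermal wave packet and sharpens the time resolution further.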
Method for detecting water equivalent of snow using secondary cosmic gamma radiation
Condreva, K.J.
1997-01-14
Determination of the water equivalent of accumulated snow by measuring the attenuation of secondary background cosmic radiation by the snowpack. By measuring the attenuation of 3-10 MeV secondary gamma radiation it is possible to determine the water equivalent of a snowpack. The apparatus is designed to operate remotely to determine the water equivalent of snow in areas which are difficult or hazardous to access during winter, to accumulate the data as a function of time, and to transmit, by means of an associated telemetry system, the accumulated data back to a central data collection point for analysis. The electronic circuitry is designed so that a battery pack can be used to supply power. 4 figs.
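The attenuation principle behind the patent can be sketched as a Beer-Lambert inversion: if the snowpack attenuates the secondary gamma flux as I = I0·exp(-μ·SWE), the snow water equivalent follows from the count-rate ratio. The attenuation coefficient below is a made-up placeholder; a real system would calibrate it for the 3-10 MeV band.

```python
import math

# Minimal sketch: recover snow water equivalent (SWE) from the ratio of
# gamma count rates with and without snow, assuming Beer-Lambert
# attenuation I = I0 * exp(-mu_w * SWE). mu_w_per_cm is illustrative.

def swe_from_counts(i_no_snow, i_with_snow, mu_w_per_cm=0.03):
    """Return water equivalent (cm of water) from gamma count rates."""
    return math.log(i_no_snow / i_with_snow) / mu_w_per_cm

# Example: the snowpack halves the count rate.
swe = swe_from_counts(1000.0, 500.0)   # ~23.1 cm of water equivalent
```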
A simple calculation method for determination of equivalent square field.
Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad
2012-04-01
Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula based on analysis of scatter reduction due to inverse square law to obtain equivalent field. Tables are published by different agencies such as ICRU (International Commission on Radiation Units and measurements), which are based on experimental data; but there exist mathematical formulas that yield the equivalent square field of an irregular rectangular field which are used extensively in computation techniques for dose determination. These processes lead to some complicated and time-consuming formulas for which the current study was designed. In this work, considering the portion of scattered radiation in absorbed dose at a point of measurement, a numerical formula was obtained based on which a simple formula was developed to calculate equivalent square field. Using polar coordinate and inverse square law will lead to a simple formula for calculation of equivalent field. The presented method is an analytical approach based on which one can estimate the equivalent square field of a rectangular field and may be used for a shielded field or an off-axis point. Besides, one can calculate equivalent field of rectangular field with the concept of decreased scatter radiation with inverse square law with a good approximation. This method may be useful in computing Percentage Depth Dose and Tissue-Phantom Ratio which are extensively used in treatment planning.
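The paper derives its own scatter-based formula, which the abstract does not reproduce. For context, the widely used area-to-perimeter (Sterling) rule gives the equivalent square side s = 4A/P = 2ab/(a+b) for an a × b rectangular field; the sketch below implements that standard rule, not the paper's new expression.

```python
# Sterling (area-to-perimeter) rule for the equivalent square of a
# rectangular radiotherapy field: s = 4A/P = 2ab/(a+b).

def equivalent_square_sterling(a_cm, b_cm):
    """Side length (cm) of the square field equivalent to an a x b field."""
    return 2.0 * a_cm * b_cm / (a_cm + b_cm)

side = equivalent_square_sterling(10.0, 20.0)   # ~13.3 cm
```

A square field maps to itself under this rule, which is a quick sanity check for any alternative equivalent-field formula.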
Quality factor and dose equivalent investigations aboard the Soviet Space Station Mir
NASA Astrophysics Data System (ADS)
Bouisset, P.; Nguyen, V. D.; Parmentier, N.; Akatov, Ia. A.; Arkhangel'Skii, V. V.; Vorozhtsov, A. S.; Petrov, V. M.; Kovalev, E. E.; Siegrist, M.
1992-07-01
Since December 1988, the date of the French-Soviet joint space mission 'ARAGATZ', the CIRCE device has recorded dose equivalent and quality factor values inside the Mir station (380-410 km, 51.5 deg inclination). After the initial gas filling two years ago, the low-pressure tissue-equivalent proportional counter is still in good working condition. Results from three periods are presented. The average dose equivalent rates measured are 0.6, 0.8 and 0.6 mSv/day, respectively, with a quality factor equal to 1.9. Detailed measurements show the increase in dose equivalent rates through the South Atlantic Anomaly (SAA) and near the polar horns. Real-time determination of the quality factors makes it possible to identify high linear energy transfer events with quality factors in the range 10-20.
Periodic equivalence ratio modulation method and apparatus for controlling combustion instability
Richards, George A.; Janus, Michael C.; Griffith, Richard A.
2000-01-01
The periodic equivalence ratio modulation (PERM) method and apparatus significantly reduces and/or eliminates unstable conditions within a combustion chamber. The method involves modulating the equivalence ratio for the combustion device, such that the combustion device periodically operates outside of an identified unstable oscillation region. The equivalence ratio is modulated between preselected reference points, according to the shape of the oscillation region and operating parameters of the system. Preferably, the equivalence ratio is modulated from a first stable condition to a second stable condition, and, alternatively, the equivalence ratio is modulated from a stable condition to an unstable condition. The method is further applicable to multi-nozzle combustor designs, whereby individual nozzles are alternately modulated from stable to unstable conditions. Periodic equivalence ratio modulation (PERM) is accomplished by active control involving periodic, low frequency fuel modulation, whereby low frequency fuel pulses are injected into the main fuel delivery. Importantly, the fuel pulses are injected at a rate so as not to affect the desired time-average equivalence ratio for the combustion device.
Experimental measurement and modeling of snow accumulation and snowmelt in a mountain microcatchment
NASA Astrophysics Data System (ADS)
Danko, Michal; Krajčí, Pavel; Hlavčo, Jozef; Kostka, Zdeněk; Holko, Ladislav
2016-04-01
Fieldwork is a very useful source of data in all geosciences, and this naturally applies to snow hydrology as well. Snow accumulation and snowmelt are spatially very heterogeneous, especially in non-forested mountain environments, and direct field measurements provide the most accurate information about them. Quantifying and understanding the processes that cause these spatial differences are crucial for predicting and modelling runoff volumes in the spring snowmelt period. This study presents possibilities for detailed measurement and modelling of snow cover characteristics in a mountain experimental microcatchment located in the Western Tatra Mountains in northern Slovakia. The catchment area is 0.059 km2 and the mean altitude is 1500 m a.s.l. The measurement network consists of 27 snow poles, 3 small snow lysimeters, a discharge measurement device and a standard automatic weather station. Snow depth and snow water equivalent (SWE) were measured twice a month near the snow poles; these measurements were used to estimate spatial differences in the accumulation of SWE. Snowmelt outflow was measured by the small snow lysimeters. Measurements were performed in winter 2014/2015. Snow water equivalent variability was very high for such a small area: differences between particular measuring points reached 600 mm at the time of maximum SWE. The results indicated good performance of the snow lysimeters for identifying snowmelt timing. The increase in snowmelt measured by the snow lysimeter had the same timing as the increase in discharge at the catchment outlet and as the rise in air temperature above the freezing point. The measured data were afterwards used in the distributed rainfall-runoff model MIKE SHE. Several methods were used for the spatial distribution of precipitation and snow water equivalent. The model was able to simulate snow water equivalent and snowmelt timing at a daily step reasonably well; simulated discharges were slightly overestimated in later spring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor-Pashow, Kathryn M. L.; Jones, Daniel H.
A non-aqueous titration method has been used for quantifying the suppressor concentration in the MCU solvent hold tank (SHT) monthly samples since the Next Generation Solvent (NGS) was implemented in 2013. The titration method measures the concentration of the NGS suppressor (TiDG) as well as the residual tri-n-octylamine (TOA) that is a carryover from the previous solvent. As the TOA concentration has decreased over time, it has become difficult to resolve the TiDG equivalence point because the TOA equivalence point has moved closer. In recent samples, the TiDG equivalence point could not be resolved, and therefore the TiDG concentration was determined by subtracting the TOA concentration, as measured by semi-volatile organic analysis (SVOA), from the total base concentration as measured by titration. In order to improve the titration method so that the TiDG concentration can be measured directly, without the need for the SVOA data, a new method has been developed that involves spiking the sample with additional TOA to further separate the two equivalence points in the titration. This method has been demonstrated on four recent SHT samples, and comparison with results obtained using the SVOA TOA-subtraction method shows good agreement. It is therefore recommended that the titration procedure be revised to include the TOA spike addition and that this become the primary method for quantifying TiDG.
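The fallback method described above is simple arithmetic: when the TiDG equivalence point cannot be resolved in the titration curve, TiDG is taken as total titratable base minus the independently measured TOA. The concentration values below are invented for illustration.

```python
# Sketch of the SVOA TOA-subtraction method: TiDG concentration as
# total base (by titration) minus TOA (by semi-volatile organic
# analysis). Values are illustrative, not sample data.

def tidg_by_subtraction(total_base_molar, toa_molar_svoa):
    """TiDG concentration (M) = total titratable base - residual TOA."""
    return total_base_molar - toa_molar_svoa

tidg = tidg_by_subtraction(0.0125, 0.0022)   # 0.0103 M
```

The TOA-spike method replaces this indirect difference with a direct reading, by pushing the two equivalence points far enough apart on the titration curve to resolve each one separately.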
Multiply Degenerate Exceptional Points and Quantum Phase Transitions
NASA Astrophysics Data System (ADS)
Borisov, Denis I.; Ružička, František; Znojil, Miloslav
2015-12-01
The realization of a genuine phase transition in quantum mechanics requires that at least one of Kato's exceptional-point parameters becomes real. A new family of finite-dimensional and time-parametrized quantum-lattice models with such a property is proposed and studied. All of them exhibit, at a real exceptional-point time t = 0, the Jordan-block spectral degeneracy structure of some of their observables, sampled by the Hamiltonian H(t) and site-position Q(t). The passes through the critical instant t = 0 are interpreted as schematic simulations of non-equivalent versions of Big-Bang-like quantum catastrophes.
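The Jordan-block degeneracy at an exceptional point can be illustrated with a textbook 2×2 toy matrix (not one of the lattice models in the paper): for H(t) = [[0, 1], [t, 0]] the eigenvalues ±√t coalesce at t = 0, where H becomes a rank-1 Jordan block with a single eigenvector.

```python
import numpy as np

# Minimal exceptional-point illustration: eigenvalues of
# H(t) = [[0, 1], [t, 0]] are +sqrt(t) and -sqrt(t); they merge at
# t = 0, where H degenerates into the Jordan block [[0, 1], [0, 0]].

def eigen_gap(t):
    """Absolute gap between the two eigenvalues of the toy H(t)."""
    h = np.array([[0.0, 1.0], [t, 0.0]])
    ev = np.linalg.eigvals(h)
    return abs(ev[0] - ev[1])

gap_away = eigen_gap(0.25)    # eigenvalues are +/-0.5, so the gap is 1.0
gap_at_ep = eigen_gap(0.0)    # degenerate: gap collapses to 0
# At the exceptional point the matrix has rank 1: one eigenvector only.
rank_at_ep = np.linalg.matrix_rank(np.array([[0.0, 1.0], [0.0, 0.0]]))
```

Unlike an ordinary (diagonalizable) degeneracy, the eigenvectors also coalesce here, which is the defining feature of a Kato exceptional point.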
Unified dead-time compensation structure for SISO processes with multiple dead times.
Normey-Rico, Julio E; Flesch, Rodolfo C C; Santos, Tito L M
2014-11-01
This paper proposes a dead-time compensation structure for processes with multiple dead times. The controller is based on the filtered Smith predictor (FSP) dead-time compensator structure and is able to control stable, integrating, and unstable processes with multiple input/output dead times. An equivalent model of the process is first computed in order to define the predictor structure. Using this equivalent model, the primary controller and the predictor filter are tuned to obtain an internally stable closed-loop system that also meets closed-loop specifications in terms of set-point tracking, disturbance rejection, and robustness. Simulation case studies illustrate the good properties of the proposed approach. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Albers, D. J.; Hripcsak, George
2012-01-01
A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database. PMID:22536009
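The bias-estimation idea above can be sketched directly: a plug-in estimate of the time-delayed mutual information does not decay to zero for a finite series but to an estimator bias, which can be approximated by evaluating the same estimator at a delay long enough that the points are effectively independent, then subtracted. The histogram estimator and bin count below are assumptions for illustration.

```python
import numpy as np

# Plug-in (histogram) estimate of time-delayed mutual information
# I(x_t; x_{t+d}), in nats. For white noise the true MI is zero at any
# delay, so the estimate at a very large delay approximates the bias.

def delayed_mi(x, delay, bins=16):
    """Histogram MI between x_t and x_{t+delay}."""
    a, b = x[:-delay], x[delay:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x_t
    py = pxy.sum(axis=0, keepdims=True)   # marginal of x_{t+delay}
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
x = rng.normal(size=20000)                 # white noise: true MI is 0
bias = delayed_mi(x, delay=5000)           # large-delay bias estimate
corrected = delayed_mi(x, delay=1) - bias  # near zero after correction
```

The plug-in MI is always non-negative, so the raw short-delay estimate overstates the dependence; subtracting the large-delay value removes most of that offset.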
Relevance and limits of the principle of "equivalence of care" in prison medicine.
Niveau, Gérard
2007-10-01
The principle of "equivalence of care" in prison medicine is a principle by which prison health services are obliged to provide prisoners with care of a quality equivalent to that provided for the general public in the same country. It is cited in numerous national and international directives and recommendations. The principle of equivalence is extremely relevant from the point of view of normative ethics but requires adaptation from the point of view of applied ethics. From a clinical point of view, the principle of equivalence is often insufficient to take account of the adaptations necessary for the organization of care in a correctional setting. The principle of equivalence is cost-effective in general, but has to be overstepped to ensure the humane management of certain special cases.
77 FR 12764 - POSTNET Barcode Discontinuation
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-02
... routing code appears in the lower right corner. * * * * * [Delete current 5.6, DPBC Numeric Equivalent, in... correct ZIP Code, ZIP+4 code, or numeric equivalent to the delivery point routing code and which meets... equivalent to the delivery point routing code is formed by [[Page 12766
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence the metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution of this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in [0, 1] interval and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. Potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
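The equivalent-dimension (ED) transformation can be sketched in a few lines: each earthquake parameter is replaced by its cumulative distribution value, mapping every axis onto [0, 1] so that Euclidean distance becomes meaningful across otherwise incommensurable parameters (time, magnitude, depth, ...). The paper estimates smooth kernel CDFs; the plain empirical CDF used here is a simplification, and the tiny catalogue is invented.

```python
import numpy as np

# Transform each column of an earthquake catalogue to its equivalent
# dimension via the empirical CDF (rank / n), so all parameters live on
# a common linear [0, 1] scale with Euclidean distances.

def to_equivalent_dimensions(catalog):
    """catalog: (n_events, n_params) array -> ED coordinates in (0, 1]."""
    n = catalog.shape[0]
    ranks = catalog.argsort(axis=0).argsort(axis=0)
    return (ranks + 1) / n

catalog = np.array([
    [1.2, 10.0],   # [magnitude, depth_km] -- illustrative values
    [3.4, 5.0],
    [2.1, 30.0],
    [4.8, 2.0],
])
ed = to_equivalent_dimensions(catalog)
# Distance between events 0 and 1 in the ED space is plain Euclidean:
d01 = float(np.linalg.norm(ed[0] - ed[1]))
```

After the transform, "1 unit" along any axis carries the same probability content, which is exactly the probabilistic equivalence of parameter intervals the paper proposes.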
Nocon, Robert S; Sharma, Ravi; Birnberg, Jonathan M; Ngo-Metzger, Quyen; Lee, Sang Mee; Chin, Marshall H
2012-07-04
Little is known about the cost associated with a health center's rating as a patient-centered medical home (PCMH). To determine whether PCMH rating is associated with operating cost among health centers funded by the US Health Resources and Services Administration. Cross-sectional study of PCMH rating and operating cost in 2009. PCMH rating was assessed through surveys of health center administrators conducted by Harris Interactive of all 1009 Health Resources and Services Administration–funded community health centers. The survey provided scores from 0 (worst) to 100 (best) for total PCMH score and 6 subscales: access/communication, care management, external coordination, patient tracking, test/referral tracking, and quality improvement. Costs were obtained from the Uniform Data System reports submitted to the Health Resources and Services Administration. We used generalized linear models to determine the relationship between PCMH rating and operating cost. Operating cost per physician full-time equivalent, operating cost per patient per month, and medical cost per visit. Six hundred sixty-nine health centers (66%) were included in the study sample, with 340 excluded because of nonresponse or incomplete data. Mean total PCMH score was 60 (SD, 12; range, 21-90). For the average health center, a 10-point higher total PCMH score was associated with a $2.26 (4.6%) higher operating cost per patient per month (95% CI, $0.86-$4.12). Among PCMH subscales, a 10-point higher score for patient tracking was associated with higher operating cost per physician full-time equivalent ($27,300; 95% CI, $3047-$57,804) and higher operating cost per patient per month ($1.06; 95% CI, $0.29-$1.98). A 10-point higher score for quality improvement was also associated with higher operating cost per physician full-time equivalent ($32,731; 95% CI, $1571-$73,670) and higher operating cost per patient per month ($1.86; 95% CI, $0.54-$3.61). 
A 10-point higher PCMH subscale score for access/communication was associated with lower operating cost per physician full-time equivalent ($39,809; 95% CI, $1893-$63,169). According to a survey of health center administrators, higher scores on a scale that assessed 6 aspects of the PCMH were associated with higher health center operating costs. Two subscales of the medical home were associated with higher cost and 1 with lower cost.
NASA Technical Reports Server (NTRS)
Hinterkeuser, E. G.; Sternfeld, H., Jr.
1974-01-01
A study was conducted to forecast the noise restrictions which may be imposed on civil transport helicopters in the 1975-1985 time period. Certification and community acceptance criteria were predicted. A 50-passenger tandem rotor helicopter based on the Boeing-Vertol Model 347 was studied to determine the noise reductions required and the means of achieving them. Some of the important study recommendations are: (1) certification limits should be equivalent to 95 EPNdB at data points located 500 feet to each side of the touchdown/takeoff point and 1000 feet from this point directly under the approach and departure flight path; (2) community acceptance should be measured as equivalent noise level (Leq), based on dBA, with separate limits for day and night operations; and (3) in order to comply with the above guidelines, the Model 347 helicopter will require studies and tests leading to several modifications.
NASA Astrophysics Data System (ADS)
Smith, J. Torquil; Morrison, H. Frank; Doolittle, Lawrence R.; Tseng, Hung-Wen
2007-03-01
Equivalent dipole polarizabilities are a succinct way to summarize the inductive response of an isolated conductive body at distances greater than the scale of the body. Their estimation requires measurement of secondary magnetic fields due to currents induced in the body by time varying magnetic fields in at least three linearly independent (e.g., orthogonal) directions. Secondary fields due to an object are typically orders of magnitude smaller than the primary inducing fields near the primary field sources (transmitters). Receiver coils may be oriented orthogonal to primary fields from one or two transmitters, nulling their response to those fields, but simultaneously nulling to fields of additional transmitters is problematic. If transmitter coils are constructed symmetrically with respect to inversion in a point, their magnetic fields are symmetric with respect to that point. If receiver coils are operated in pairs symmetric with respect to inversion in the same point, then their differenced output is insensitive to the primary fields of any symmetrically constructed transmitters, allowing nulling to three (or more) transmitters. With a sufficient number of receivers pairs, object equivalent dipole polarizabilities can be estimated in situ from measurements at a single instrument sitting, eliminating effects of inaccurate instrument location on polarizability estimates. The method is illustrated with data from a multi-transmitter multi-receiver system with primary field nulling through differenced receiver pairs, interpreted in terms of principal equivalent dipole polarizabilities as a function of time.
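The symmetry argument above can be checked numerically: the field of a magnetic dipole (standing in here for a symmetrically constructed transmitter centered on the inversion point) satisfies B(-r) = B(r), so the differenced output of a receiver pair placed at r and -r is blind to the primary field. The geometry and moment below are arbitrary test values in unit-free form.

```python
import numpy as np

# Field of a point magnetic dipole with moment m at the origin,
# evaluated at position r (constant prefactors dropped). Under
# inversion r -> -r this field is unchanged, so a symmetric receiver
# pair differenced about the origin cancels it exactly.

def dipole_field(m, r):
    """B(r) ~ (3 (m . rhat) rhat - m) / |r|^3, prefactor omitted."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return (3.0 * np.dot(m, rhat) * rhat - m) / rn ** 3

m = np.array([0.3, -1.2, 2.0])
r = np.array([1.5, 0.4, -0.8])
differenced = dipole_field(m, r) - dipole_field(m, -r)   # ~zero vector
```

Fields from a compact buried object, which is generally not centered on the inversion point, do not share this symmetry and so survive the differencing, which is what lets the instrument null three or more transmitters at once.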
Olsen, Joy E; Allinson, Leesa G; Doyle, Lex W; Brown, Nisha C; Lee, Katherine J; Eeles, Abbey L; Cheong, Jeanie L Y; Spittle, Alicia J
2018-01-01
To examine the associations between Prechtl's General Movements Assessment (GMA), conducted from birth to term-equivalent age, and neurodevelopmental outcomes at 12 months corrected age, in infants born very preterm. One hundred and thirty-seven infants born before 30 weeks' gestation had serial GMA (categorized as 'normal' or 'abnormal') before term and at term-equivalent age. At 12 months corrected age, neurodevelopment was assessed using the Alberta Infant Motor Scale (AIMS); Neurological, Sensory, Motor, Developmental Assessment (NSMDA); and Touwen Infant Neurological Examination (TINE). The relationships between GMA at four time points and 12-month neurodevelopmental assessments were examined using regression models. Abnormal GMA at all time points were associated with worse continuous scores on the AIMS, NSMDA, and TINE (p<0.05). Abnormal GMA before term and at term-equivalent age were associated with increased odds of mild-severe dysfunction on the NSMDA (odds ratio [OR] 4.26, 95% confidence interval [CI] 1.55-11.71, p<0.01; and OR 4.16, 95% CI 1.55-11.17, p<0.01 respectively) and abnormal GMA before term with increased odds of suboptimal-abnormal motor function on the TINE (OR 2.75, 95% CI 1.10-6.85, p=0.03). Abnormal GMA before term and at term-equivalent age were associated with worse neurodevelopment at 12 months corrected age in children born very preterm. Abnormal general movements before term predict developmental deficits at 1 year in infants born very preterm. General Movements Assessment before term identifies at-risk infants born very preterm. © 2017 Mac Keith Press.
NASA Astrophysics Data System (ADS)
Viviani, M.; Glisic, B.; Smith, I. F. C.
2006-12-01
This article presents an experimental system developed to determine the kinetic parameters of hardening materials. Kinetic parameters allow computation of the degree of reaction indices (DRIs). DRIs are used in predictive formulae for strength and are used to decouple the autogenous deformation (AD) and thermal deformation (TD). Although there are several methods to determine values for kinetic reaction parameters, most require extensive testing and large databases. A measurement system has been developed in order to determine kinetic parameters. The measurement system consists of optical fiber sensors embedded in specimens that are cured at varying temperatures and conditions. Sensors are used in pairs inside each specimen, and each pair has two deformation sensors that, aside from their axial stiffness, have the same characteristics. The study of the interaction between sensors and hardening material leads to establishment of a link between the deformations measured and the degree of reaction, by means of the newly developed concept of the equivalency point. The equivalency point is assumed to be an indicator of the degree of reaction and it allows the determination of the apparent activation energy (Ea) which defines the equivalent time. Equivalent time is a degree of reaction index (DRI) and it accounts for the combined effect of time and temperature in concrete. This new methodology has been used to predict the compressive strength and separate the AD and thermal expansion coefficient (TEC) in seven types of concrete. The measurement system allows gathering of data necessary for fast and efficient predictions. Due to its robustness and reduced dimensions it also has potential for in situ application.
Dependence of the pour point of diesel fuels on the properties of the initial components
NASA Technical Reports Server (NTRS)
Ostashov, V. M.; Bobrovskiy, S. A.
1979-01-01
An analytical expression is obtained for the dependence of the pour point of diesel fuels on the pour points and weight proportions of the initial components. To determine the pour point of a multicomponent fuel mixture, the mixture of two components is assumed to have the pour point of a single equivalent component; the pour point of that equivalent component mixed with a third component is then calculated, and so on.
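The pairwise-reduction procedure described above can be sketched as a fold over the component list. The linear-by-weight blending rule used here is a placeholder, not the analytical correlation derived in the report; a real pour-point blend would supply a nonlinear blending index and its inverse.

```python
def blend_pair(pp1, w1, pp2, w2, index=lambda p: p, inverse=lambda x: x):
    """Combine two components into one 'equivalent component'.

    index/inverse map a pour point to an additive blending index and back.
    The identity mapping (a simple weighted average) is a stand-in for the
    correlation obtained in the report.
    """
    w = w1 + w2
    mixed = (w1 * index(pp1) + w2 * index(pp2)) / w
    return inverse(mixed), w

def pour_point(components):
    """Reduce a multicomponent fuel pairwise, as the abstract describes:
    fold components 2..n into a running equivalent component."""
    pp, w = components[0]
    for pp_i, w_i in components[1:]:
        pp, w = blend_pair(pp, w, pp_i, w_i)
    return pp
```

With a linear blending index the result is independent of the order in which components are folded in, which is what makes the "equivalent component" shortcut consistent.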
Predict or classify: The deceptive role of time-locking in brain signal classification
NASA Astrophysics Data System (ADS)
Rusconi, Marco; Valleriani, Angelo
2016-06-01
Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predicting the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate that the high classification accuracy is a consequence of time-locking and that its time behavior is simply related to the large relaxation time of the process. We conclude that when time-locking is a crucial step in the analysis of neural activity patterns, both the emergence and the timing of the classification accuracy are affected by structural properties of the network that generates the signal.
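The effect can be reproduced with a toy simulation (this is an illustration of the time-locking artifact, not the authors' exact stochastic model): generate unbiased random walks, define the "decision" as the sign of the state at the final, time-locked event, and classify the signal well before that event. The classifier succeeds far above chance purely because of the autocorrelation of the process.

```python
import random

random.seed(1)

def trial(n_steps=200):
    # Unbiased random walk between two "equivalent alternatives";
    # the decision is simply the sign of the state at the final event.
    x, path = 0, []
    for _ in range(n_steps):
        x += random.choice((-1, 1))
        path.append(x)
    return path, path[-1] > 0

trials = [trial() for _ in range(2000)]

# "Classify" the decision from the signal 50 steps BEFORE the
# time-locked endpoint, using a trivial sign classifier.
accuracy = sum((path[-50] > 0) == label for path, label in trials) / len(trials)
# accuracy sits well above the 0.5 chance level, even though no
# choice-predictive information exists at that earlier time point
```

The accuracy here reflects nothing but the relaxation (autocorrelation) of the walk, mirroring the paper's point that classification after time-locking is not the same as prediction.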
A passive pendulum wobble damper for a low spin rate Jupiter flyby spacecraft
NASA Technical Reports Server (NTRS)
Fowler, R. C.
1972-01-01
When the spacecraft has a low spin rate and precise pointing requirements, the wobble angle must be damped in a time period equivalent to a very few wobble cycles. The design, analysis, and test of a passive pendulum wobble damper are described.
On the design of a radix-10 online floating-point multiplier
NASA Astrophysics Data System (ADS)
McIlhenny, Robert D.; Ercegovac, Milos D.
2009-08-01
This paper describes an approach to designing and implementing a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up tables) as well as cycle time and total latency. The routing delay, which was not optimized, is the major component of the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.
Transitions between refrigeration regions in extremely short quantum cycles
NASA Astrophysics Data System (ADS)
Feldmann, Tova; Kosloff, Ronnie
2016-05-01
The relation between the geometry of refrigeration cycles and their performance is explored. The model studied is based on a coupled spin system. Small cycle times, termed sudden refrigerators, develop coherence and inner friction. We explore the interplay between coherence and energy of the working medium employing a family of sudden cycles with decreasing cycle times. At the point of maximum coherence the cycle changes geometry. This region of cycle times is characterized by a dissipative resonance where heat is dissipated to both the hot and cold baths. We rationalize the change of geometry of the cycle as a result of a half-integer quantization which maximizes coherence. From this point on, increasing or decreasing the cycle time eventually leads to refrigeration cycles. The transition point between refrigerators and short-circuit cycles is characterized by a transition from finite to singular dynamical temperature. Extremely short cycle times reach a universal limit where all cycle types are equivalent.
Nocon, Robert S.; Sharma, Ravi; Birnberg, Jonathan M.; Ngo-Metzger, Quyen; Lee, Sang Mee; Chin, Marshall H.
2013-01-01
Context Little is known about the cost associated with a health center’s rating as a patient-centered medical home (PCMH). Objective To determine whether PCMH rating is associated with operating cost among health centers funded by the US Health Resources and Services Administration. Design, Setting, and Participants Cross-sectional study of PCMH rating and operating cost in 2009. PCMH rating was assessed through surveys of health center administrators conducted by Harris Interactive of all 1009 Health Resources and Services Administration–funded community health centers. The survey provided scores from 0 (worst) to 100 (best) for total PCMH score and 6 subscales: access/communication, care management, external coordination, patient tracking, test/referral tracking, and quality improvement. Costs were obtained from the Uniform Data System reports submitted to the Health Resources and Services Administration. We used generalized linear models to determine the relationship between PCMH rating and operating cost. Main Outcome Measures Operating cost per physician full-time equivalent, operating cost per patient per month, and medical cost per visit. Results Six hundred sixty-nine health centers (66%) were included in the study sample, with 340 excluded because of nonresponse or incomplete data. Mean total PCMH score was 60 (SD, 12; range, 21–90). For the average health center, a 10-point higher total PCMH score was associated with a $2.26 (4.6%) higher operating cost per patient per month (95% CI, $0.86–$4.12). Among PCMH subscales, a 10-point higher score for patient tracking was associated with higher operating cost per physician full-time equivalent ($27 300; 95% CI, $3047–$57 804) and higher operating cost per patient per month ($1.06; 95% CI, $0.29–$1.98).
A 10-point higher score for quality improvement was also associated with higher operating cost per physician full-time equivalent ($32 731; 95% CI, $1571–$73 670) and higher operating cost per patient per month ($1.86; 95% CI, $0.54–$3.61). A 10-point higher PCMH subscale score for access/communication was associated with lower operating cost per physician full-time equivalent ($39 809; 95% CI, $1893–$63 169). Conclusions According to a survey of health center administrators, higher scores on a scale that assessed 6 aspects of the PCMH were associated with higher health center operating costs. Two subscales of the medical home were associated with higher cost and 1 with lower cost. PMID:22729481
Equivalence studies for complex active ingredients and dosage forms.
Bhattycharyya, Lokesh; Dabbah, Roger; Hauck, Walter; Sheinin, Eric; Yeoman, Lynn; Williams, Roger
2005-11-17
This article examines the United States Pharmacopeia (USP) and its role in assessing the equivalence and inequivalence of biological and biotechnological drug substances and products-a role USP has played since its founding in 1820. A public monograph in the United States Pharmacopeia-National Formulary helps practitioners and other interested parties understand how an article's strength, quality, and purity should be controlled. Such a monograph is a standard to which all manufactured ingredients and products should conform, and it is a starting point for subsequent-entry manufacturers, recognizing that substantial additional one-time characterization studies may be needed to document equivalence. Review of these studies is the province of the regulatory agency, but compendial tests can provide clarity and guidance in the process.
Adegbija, Odewumi; Hoy, Wendy E; Wang, Zhiqiang
2015-11-13
There have been suggestions that currently recommended waist circumference (WC) cut-off points for Australians of European origin may not be applicable to Aboriginal people who have different body habitus profiles. We aimed to generate equivalent WC values that correspond to body mass index (BMI) points for identifying absolute cardiovascular disease (CVD) risks. Prospective cohort study. An Aboriginal community in Australia's Northern Territory. From 1992 to 1998, 920 adults without CVD, with age, WC and BMI measurements, were followed up for up to 20 years. Incident CVD, coronary artery disease (CAD) and heart failure (HF) events during the follow-up period were ascertained from hospitalisation data. We generated WC values with 10-year absolute risks equivalent for the development of CVD as BMI values (20-34 kg/m(2)) using the Weibull accelerated failure-time model. There were 211 incident cases of CVD over 13,669 person-years of follow-up. At the average age of 35 years, WC values with absolute CVD, CAD and HF risks equivalent to BMI of 25 kg/m(2) were 91.5, 91.8 and 91.7 cm, respectively, for males, and corresponding WC values were 92.5, 92.7 and 93 cm for females. WC values with equal absolute CVD, CAD and HF risks to BMI of 30 kg/m(2) were 101.7, 103.1 and 102.6 cm, respectively, for males, and corresponding values were 99.2, 101.6 and 101.5 cm for females. Association between WC and CVD did not depend on gender (p=0.54). WC ranging from 91 to 93 cm was equivalent to BMI 25 kg/m(2) for overweight, and 99 to 103 cm was equivalent to BMI of 30 kg/m(2) for obesity, in terms of predicting 10-year absolute CVD risk. Replicating the absolute risk method in other Aboriginal communities will further validate the WC values generated for future development of WC cut-off points for Aboriginal people. Published by the BMJ Publishing Group Limited.
Membrane voltage changes in passive dendritic trees: a tapering equivalent cylinder model.
Poznański, R R
1988-01-01
An exponentially tapering equivalent cylinder model is employed in order to approximate the loss of the dendritic trunk parameter observed from anatomical data on apical and basilar dendrites of CA1 and CA3 hippocampal pyramidal neurons. This model allows dendritic trees with a relative paucity of branching to be treated. In particular, terminal branches are not required to end at the same electrotonic distance. The Laplace transform method is used to obtain analytic expressions for the Green's function corresponding to an instantaneous pulse of current injected at a single point along a tapering equivalent cylinder with sealed ends. The time course of the voltage in response to an arbitrary input is computed using the Green's function in a convolution integral. Examples of current input considered are (1) an infinitesimally brief (Dirac delta function) pulse and (2) a step pulse. It is demonstrated that inputs located on a tapering equivalent cylinder are more effective at the soma than identically placed inputs on a nontapering equivalent cylinder. Asymptotic solutions are derived to enable the voltage response behaviour over both relatively short and long time periods to be analysed. Semilogarithmic plots of these solutions provide a basis for estimating the membrane time constant tau m from experimental transients. Transient voltage decrement from a clamped soma reveals that tapering tends to reduce the error associated with inadequate voltage clamping of the dendritic membrane. A formula is derived which shows that tapering tends to increase the estimate of the electrotonic length parameter L.
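The convolution step described above (voltage response as the Green's function convolved with an arbitrary input current) can be sketched numerically. The single-exponential kernel below is a stand-in for illustration only, not the tapering-cylinder Green's function derived in the paper.

```python
import math

def voltage_response(green, current, t_end, dt=0.01):
    """Compute V(t) = integral_0^t G(t - s) I(s) ds by a left Riemann sum,
    i.e. the convolution of a Green's function with an input current."""
    n = int(t_end / dt)
    times = [i * dt for i in range(n)]
    volts = []
    for i, t in enumerate(times):
        v = sum(green(t - times[j]) * current(times[j]) for j in range(i + 1)) * dt
        volts.append(v)
    return times, volts

# Placeholder kernel: a single exponential decay (a point membrane),
# NOT the tapering-cylinder Green's function from the paper.
g = lambda t: math.exp(-t)
step = lambda s: 1.0  # step-pulse input, one of the two examples considered
times, volts = voltage_response(g, step, t_end=5.0)
# for this kernel the analytic response is V(t) = 1 - exp(-t)
```

Swapping in the paper's Green's function for `g` and a Dirac-like pulse for `current` reproduces the two input cases the abstract mentions.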
Yang, Qing; Fan, Liu-Yin; Huang, Shan-Sheng; Zhang, Wei; Cao, Cheng-Xi
2011-04-01
In this paper, we developed a novel method of acid-base titration, viz. the electromigration acid-base titration (EABT), via a moving neutralization boundary (MNB). With HCl and NaOH as the model strong acid and base, respectively, we conducted experiments on the EABT via the method of moving neutralization boundary for the first time. The experiments revealed that (i) the concentration of agarose gel, the voltage used and the content of background electrolyte (KCl) had an evident influence on the boundary movement; (ii) the movement length was a function of the running time under constant acid and base concentrations; and (iii) there was a good linearity between the length and the natural logarithmic concentration of HCl under the optimized conditions, and this linearity could be used to detect the concentration of acid. The experiments further showed that (i) the RSD values of intra-day and inter-day runs were less than 1.59 and 3.76%, respectively, indicating precision and stability similar to capillary electrophoresis or HPLC; (ii) indicators with different pK(a) values had no obvious effect on the EABT, in contrast to their strong influence on the judgment of the equivalence point in classic titration; and (iii) a constant equivalence-point titration always existed in the EABT, unlike in classic volumetric analysis. Additionally, the EABT could be put to good use for the determination of actual acid concentrations. The experimental results achieved herein provide new general guidance for the development of classic volumetric analysis and element (e.g. nitrogen) content analysis in protein chemistry. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
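The reported linearity between boundary-movement length and ln[HCl] implies a simple calibrate-and-invert workflow for detecting an unknown acid concentration. The calibration numbers below are synthetic, chosen only to make the sketch self-contained.

```python
import math

def fit_line(xs, ys):
    # ordinary least squares for y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

# Hypothetical calibration run: boundary movement length (mm) at known [HCl] (M).
conc = [0.01, 0.02, 0.05, 0.10]
length = [20.0 + 3.0 * math.log(c) for c in conc]  # synthetic, exactly linear

a, b = fit_line([math.log(c) for c in conc], length)

def acid_concentration(measured_length):
    """Invert the length-vs-ln(c) calibration to detect an unknown acid."""
    return math.exp((measured_length - a) / b)
```

An unknown sample's boundary length, fed through `acid_concentration`, recovers its HCl concentration, which is the detection scheme the abstract describes.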
von Oertzen, Timo; Brandmaier, Andreas M
2013-06-01
Structural equation models have become a broadly applied data-analytic framework. Among them, latent growth curve models have become a standard method in longitudinal research. However, researchers often rely solely on rules of thumb about statistical power in their study designs. The theory of power equivalence provides an analytical answer to the question of how design factors, for example, the number of observed indicators and the number of time points assessed in repeated measures, trade off against each other while holding the power for likelihood-ratio tests on the latent structure constant. In this article, we present applications of power-equivalent transformations on a model with data from a previously published study on cognitive aging, and highlight consequences of participant attrition on power. PsycINFO Database Record (c) 2013 APA, all rights reserved.
The toxic equivalency (TEQ) values of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) are predicted with a model based on the homologue concentrations measured from a laboratory-scale reactor (124 data points), a package boiler (61 data points), and ...
NASA Astrophysics Data System (ADS)
Guo, Liyan; Xia, Changliang; Wang, Huimin; Wang, Zhiqiang; Shi, Tingna
2018-05-01
As is well known, the armature current will be ahead of the back electromotive force (back-EMF) under load conditions in an interior permanent magnet (PM) machine. This advanced armature current produces a demagnetizing field, which may easily cause irreversible demagnetization of the PMs. To estimate the working points of the PMs more accurately and to take demagnetization into consideration in the early design stage of a machine, an improved equivalent magnetic network model is established in this paper. Each PM under each magnetic pole is segmented, and the networks in the rotor pole shoe are refined, which makes possible a more precise model of the flux path in the rotor pole shoe. The working point of each PM under each magnetic pole can be calculated accurately with the established model. The calculated results are compared with those obtained by FEM, and the effects of the d-axis and q-axis components of the armature current, the air-gap length and the flux barrier size on the working points of the PMs are analyzed by means of the improved equivalent magnetic network model.
Wardlaw, Bruce R.; Ellwood, Brooks B.; Lambert, Lance L.; Tomkin, Jonathan H.; Bell, Gordon L.; Nestell, Galina P.
2012-01-01
Here we establish a magnetostratigraphy susceptibility zonation for the three Middle Permian Global boundary Stratotype Sections and Points (GSSPs) that have recently been defined, located in Guadalupe Mountains National Park, West Texas, USA. These GSSPs, all within the Middle Permian Guadalupian Series, define (1) the base of the Roadian Stage (base of the Guadalupian Series), (2) the base of the Wordian Stage and (3) the base of the Capitanian Stage. Data from two additional stratigraphic successions in the region, equivalent in age to the Kungurian–Roadian and Wordian–Capitanian boundary intervals, are also reported. Based on low-field, mass specific magnetic susceptibility (χ) measurements of 706 closely spaced samples from these stratigraphic sections and time-series analysis of one of these sections, we (1) define the magnetostratigraphy susceptibility zonation for the three Guadalupian Series Global boundary Stratotype Sections and Points; (2) demonstrate that χ datasets provide a proxy for climate cyclicity; (3) give quantitative estimates of the time it took for some of these sediments to accumulate; (4) give the rates at which sediments were accumulated; (5) allow more precise correlation to equivalent sections in the region; (6) identify anomalous stratigraphic horizons; and (7) give estimates for timing and duration of geological events within sections.
POLARBEAR constraints on cosmic birefringence and primordial magnetic fields
Ade, Peter A. R.; Arnold, Kam; Atlas, Matt; ...
2015-12-08
Here, we constrain anisotropic cosmic birefringence using four-point correlations of even-parity E-mode and odd-parity B-mode polarization in the cosmic microwave background measurements made by the POLARization of the Background Radiation (POLARBEAR) experiment in its first season of observations. We find that the anisotropic cosmic birefringence signal from any parity-violating processes is consistent with zero. The Faraday rotation from anisotropic cosmic birefringence can be compared with the equivalent quantity generated by primordial magnetic fields if they existed. The POLARBEAR nondetection translates into a 95% confidence level (C.L.) upper limit of 93 nanogauss (nG) on the amplitude of an equivalent primordial magnetic field, inclusive of systematic uncertainties. This four-point correlation constraint on Faraday rotation is about 15 times tighter than the upper limit of 1380 nG inferred from constraining the contribution of Faraday rotation to two-point correlations of B-modes measured by Planck in 2015. Metric perturbations sourced by primordial magnetic fields would also contribute to the B-mode power spectrum. Using the POLARBEAR measurements of the B-mode power spectrum (two-point correlation), we set a 95% C.L. upper limit of 3.9 nG on primordial magnetic fields assuming a flat prior on the field amplitude. This limit is comparable to what was found in the Planck 2015 two-point correlation analysis with both temperature and polarization. Finally, we perform a set of systematic error tests and find no evidence for contamination. This work marks the first time that anisotropic cosmic birefringence or primordial magnetic fields have been constrained from the ground at subdegree scales.
A Nitration Reaction Puzzle for the Organic Chemistry Laboratory
ERIC Educational Resources Information Center
Wieder, Milton J.; Barrows, Russell
2008-01-01
Treatment of phenylacetic acid with 90% HNO[subscript 3] yields a product, I, whose observed melting point is 175-179 degrees C and whose equivalent weight is approximately 226 grams. Treatment of phenylacetic acid with 70% HNO[subscript 3] yields a product, II, whose observed melting point is 106-111 degrees C and whose equivalent weight is…
5 CFR 9901.345 - Accelerated Compensation for Developmental Positions (ACDP).
Code of Federal Regulations, 2011 CFR
2011-01-01
... applicable control point, unless the criteria for exceeding the control point are met. (f) To qualify for an ACDP, an employee must have a rating of record of Level 3 (or equivalent non-NSPS rating of record) or... performing at the equivalent of Level 3 or higher. This performance assessment does not constitute a rating...
5 CFR 9901.345 - Accelerated Compensation for Developmental Positions (ACDP).
Code of Federal Regulations, 2010 CFR
2010-01-01
... applicable control point, unless the criteria for exceeding the control point are met. (f) To qualify for an ACDP, an employee must have a rating of record of Level 3 (or equivalent non-NSPS rating of record) or... performing at the equivalent of Level 3 or higher. This performance assessment does not constitute a rating...
Referent control and motor equivalence of reaching from standing
Tomita, Yosuke; Feldman, Anatol G.
2016-01-01
Motor actions may result from central changes in the referent body configuration, defined as the body posture at which muscles begin to be activated or deactivated. The actual body configuration deviates from the referent configuration, particularly because of body inertia and environmental forces. Within these constraints, the system tends to minimize the difference between these configurations. For pointing movement, this strategy can be expressed as the tendency to minimize the difference between the referent trajectory (RT) and actual trajectory (QT) of the effector (hand). This process may underlie motor equivalent behavior that maintains the pointing trajectory regardless of the number of body segments involved. We tested the hypothesis that the minimization process is used to produce pointing in standing subjects. With eyes closed, 10 subjects reached from a standing position to a remembered target located beyond arm length. In randomly chosen trials, hip flexion was unexpectedly prevented, forcing subjects to take a step during pointing to prevent falling. The task was repeated when subjects were instructed to intentionally take a step during pointing. In most cases, reaching accuracy and trajectory curvature were preserved due to adaptive condition-specific changes in interjoint coordination. Results suggest that referent control and the minimization process associated with it may underlie motor equivalence in pointing. NEW & NOTEWORTHY Motor actions may result from minimization of the deflection of the actual body configuration from the centrally specified referent body configuration, in the limits of neuromuscular and environmental constraints. The minimization process may maintain reaching trajectory and accuracy regardless of the number of body segments involved (motor equivalence), as confirmed in this study of reaching from standing in young healthy individuals. 
Results suggest that the referent control process may underlie motor equivalence in reaching. PMID:27784802
ERIC Educational Resources Information Center
Fletcher, Edward C., Jr.
2018-01-01
The purpose of this article was to examine faculty characteristics of CTE programs across the nation as well as identify the challenges and successes of implementing programs. Findings pointed to the overall decline of CTE full-time-equivalent faculty and the increase of adjunct faculty. In addition, findings demonstrated a lack of ethnic and…
The Geological Grading Scale: Every Million Points Counts!
NASA Astrophysics Data System (ADS)
Stegman, D. R.; Cooper, C. M.
2006-12-01
The concept of geological time, ranging from thousands to billions of years, is naturally quite difficult for students to grasp initially, as it is much longer than the timescales over which they experience everyday life. Moreover, universities operate on a few key timescales (hourly lectures, weekly assignments, mid-term examinations) on which students' maximum attention is focused, largely driven by graded assessment. The geological grading scale exploits the overwhelming interest students have in grades as an opportunity to instill familiarity with geological time. With the geological grading scale, the number of possible points/marks/grades available in the course is scaled to 4.5 billion points, collapsing the entirety of Earth history into one semester. Alternatively, geological time can be compressed into each assignment, with scores for weekly homeworks not worth 100 points each, but 4.5 billion! Homeworks left incomplete with questions unanswered lose hundreds of millions of points, equivalent to missing the Paleozoic era. The expected quality of presentation for problem sets can be established with great impact in the first week by docking messy work an insignificant number of points, though likely more points than students have lost in their entire schooling history combined. Use this grading scale and your students will gradually begin to appreciate exactly how much time represents a geological blink of the eye.
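The scaling behind the scheme is plain proportionality, which can be sketched in a few lines. The conversion functions and names here are illustrative, not part of the abstract.

```python
EARTH_AGE_POINTS = 4_500_000_000  # the whole course, scaled to Earth history

def geological_points(fraction_of_course):
    """Map a conventional score fraction onto the 4.5-billion-point scale."""
    return fraction_of_course * EARTH_AGE_POINTS

def as_millions_of_years(points):
    """Read a point total as an equivalent span of Earth history (Myr)."""
    return points / 1_000_000

# Losing 10% on one 4.5-billion-point homework costs roughly 450 million
# points, i.e. hundreds of millions, on the order of the ~290-Myr Paleozoic
# era the abstract invokes.
lost = geological_points(0.10)
```

The same proportionality works in reverse: a student can translate any deduction back into "millions of years missed", which is the pedagogical hook.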
Effect of virtual reality training on laparoscopic surgery: randomised controlled trial
Soerensen, Jette L; Grantcharov, Teodor P; Dalsgaard, Torur; Schouenborg, Lars; Ottosen, Christian; Schroeder, Torben V; Ottesen, Bent S
2009-01-01
Objective To assess the effect of virtual reality training on an actual laparoscopic operation. Design Prospective randomised controlled and blinded trial. Setting Seven gynaecological departments in the Zeeland region of Denmark. Participants 24 first and second year registrars specialising in gynaecology and obstetrics. Interventions Proficiency based virtual reality simulator training in laparoscopic salpingectomy and standard clinical education (controls). Main outcome measure The main outcome measure was technical performance assessed by two independent observers blinded to trainee and training status using a previously validated general and task specific rating scale. The secondary outcome measure was operation time in minutes. Results The simulator trained group (n=11) reached a median total score of 33 points (interquartile range 32-36 points), equivalent to the experience gained after 20-50 laparoscopic procedures, whereas the control group (n=10) reached a median total score of 23 (22-27) points, equivalent to the experience gained from fewer than five procedures (P<0.001). The median total operation time in the simulator trained group was 12 minutes (interquartile range 10-14 minutes) and in the control group was 24 (20-29) minutes (P<0.001). The observers’ inter-rater agreement was 0.79. Conclusion Skills in laparoscopic surgery can be increased in a clinically relevant manner using proficiency based virtual reality simulator training. The performance level of novices was increased to that of intermediately experienced laparoscopists and operation time was halved. Simulator training should be considered before trainees carry out laparoscopic procedures. Trial registration ClinicalTrials.gov NCT00311792. PMID:19443914
40 CFR 86.1804-01 - Acronyms and abbreviations.
Code of Federal Regulations, 2012 CFR
2012-07-01
...—Nonmethane Hydrocarbons. NMHCE—Non-Methane Hydrocarbon Equivalent. NMOG—Non-methane organic gases. NO—nitric....—Degree(s). DNPH—2,4-dinitrophenylhydrazine. EDV—Emission Data Vehicle. EP—End point. ETW—Equivalent test...—dispensed fuel temperature. THC—Total Hydrocarbons. THCE—Total Hydrocarbon Equivalent. TLEV—Transitional Low...
40 CFR 86.1804-01 - Acronyms and abbreviations.
Code of Federal Regulations, 2014 CFR
2014-07-01
...—Nonmethane Hydrocarbons. NMHCE—Non-Methane Hydrocarbon Equivalent. NMOG—Non-methane organic gases. NO—nitric....—Degree(s). DNPH—2,4-dinitrophenylhydrazine. EDV—Emission Data Vehicle. EP—End point. ETW—Equivalent test...—dispensed fuel temperature. THC—Total Hydrocarbons. THCE—Total Hydrocarbon Equivalent. TLEV—Transitional Low...
NASA Astrophysics Data System (ADS)
Zheng, Zhen-Yu; Li, Peng
2018-04-01
We consider the time evolution of two-point correlation function in the transverse-field Ising chain (TFIC) with ring frustration. The time-evolution procedure we investigated is equivalent to a quench process in which the system is initially prepared in a classical kink state and evolves according to the time-dependent Schrödinger equation. Within a framework of perturbative theory (PT) in the strong kink phase, the evolution of the correlation function is disclosed to demonstrate a qualitatively new behavior in contrast to the traditional case without ring frustration.
Analysis and design of wedge projection display system based on ray retracing method.
Lee, Chang-Kun; Lee, Taewon; Sung, Hyunsik; Min, Sung-Wook
2013-06-10
A design method for a wedge projection display system based on the ray retracing method is proposed. To analyze the principle of image formation on the inclined surface of the wedge-shaped waveguide, a bundle of rays is retraced from an imaging point on the inclined surface back to the aperture of the waveguide. As a consequence of ray retracing, we obtain the incident conditions of the ray, such as the position and the angle at the aperture, which provide clues to image formation. To describe the image formation, the concept of the equivalent imaging point is proposed: the intersection where the incident rays, extended over space regardless of the refraction and reflection in the waveguide, meet. Since the initial value of the rays arriving at the equivalent imaging point corresponds to that of the rays converging into the imaging point on the inclined surface, the image formation can be visualized by calculating the equivalent imaging point over the entire inclined surface. We can then find image characteristics, such as size, position and degree of blur, by analyzing the distribution of the equivalent imaging point, and design an optimized wedge projection system by attaching a prism structure at the aperture. The simulation results show the feasibility of the ray retracing analysis and characterize the numerical relation between the waveguide parameters and the aperture structure for the on-axis configuration. The experimental results verify the system designed with the proposed method.
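Since the equivalent imaging point is defined as the intersection of incident rays extended through space (ignoring refraction and reflection in the waveguide), the core computation reduces to intersecting ray lines. A minimal 2D sketch, with hypothetical ray data:

```python
def equivalent_imaging_point(p1, d1, p2, d2):
    """Intersect two rays p + t*d in 2D. This intersection plays the role of
    the 'equivalent imaging point': where the incident rays would meet if
    extended over space with no refraction or reflection in the waveguide."""
    (x1, y1), (dx1, dy1) = p1, d1
    (x2, y2), (dx2, dy2) = p2, d2
    det = dx1 * dy2 - dy1 * dx2
    if abs(det) < 1e-12:
        return None  # parallel rays: no finite equivalent point
    t = ((x2 - x1) * dy2 - (y2 - y1) * dx2) / det
    return (x1 + t * dx1, y1 + t * dy1)
```

Sampling ray pairs across the aperture and mapping their intersections traces out the distribution of equivalent imaging points over the inclined surface, from which size, position and blur can be read off as the abstract describes.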
Fessenden, S W; Hackmann, T J; Ross, D A; Foskolos, A; Van Amburgh, M E
2017-09-01
Microbial samples from 4 independent experiments in lactating dairy cattle were obtained and analyzed for nutrient composition, AA digestibility, and AA profile after multiple hydrolysis times ranging from 2 to 168 h. Similar bacterial and protozoal isolation techniques were used for all isolations. Omasal bacteria and protozoa samples were analyzed for AA digestibility using a new in vitro technique. Multiple time point hydrolysis and least squares nonlinear regression were used to determine the AA content of omasal bacteria and protozoa, and equivalency comparisons were made against single time point hydrolysis. Formalin was used in 1 experiment, which negatively affected AA digestibility and likely limited the complete release of AA during acid hydrolysis. The mean AA digestibility was 87.8 and 81.6% for non-formalin-treated bacteria and protozoa, respectively. Preservation of microbe samples in formalin likely decreased recovery of several individual AA. Results from the multiple time point hydrolysis indicated that Ile, Val, and Met hydrolyzed at a slower rate compared with other essential AA. Single time point hydrolysis was found to be nonequivalent to multiple time point hydrolysis when considering biologically important changes in estimated microbial AA profiles. Several AA, including Met, Ile, and Val, were underpredicted using AA determination after a single 24-h hydrolysis. Models for predicting postruminal supply of AA might need to consider potential bias present in postruminal AA flow literature when AA determinations are performed after single time point hydrolysis and when using formalin as a preservative for microbial samples. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
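The multiple-time-point estimation described above can be sketched with a toy release-and-degradation model. The model form, rate constants, and data below are hypothetical (not the study's), and the fit is a simple grid search rather than the authors' nonlinear regression:

```python
import math

# Release-and-degradation model for acid hydrolysis of an amino acid:
# observed(t) = A0 * kh/(kh-kd) * (exp(-kd*t) - exp(-kh*t)),
# where A0 is the true AA content, kh the hydrolysis (release) rate and
# kd the degradation rate. All parameter values here are hypothetical.
def observed(t, a0, kh, kd):
    return a0 * kh / (kh - kd) * (math.exp(-kd * t) - math.exp(-kh * t))

times = [2, 4, 8, 16, 24, 48, 72, 120, 168]          # hydrolysis times, h
y = [observed(t, 10.0, 0.15, 0.002) for t in times]  # synthetic multi-point data

# Least-squares fit by grid search over (kh, kd); A0 enters linearly,
# so it is solved in closed form for each rate pair.
best = None
for kh in [0.05 + 0.01 * i for i in range(31)]:
    for kd in [0.0005 * j for j in range(1, 21)]:
        basis = [kh / (kh - kd) * (math.exp(-kd * t) - math.exp(-kh * t))
                 for t in times]
        a0 = sum(b * v for b, v in zip(basis, y)) / sum(b * b for b in basis)
        sse = sum((a0 * b - v) ** 2 for b, v in zip(basis, y))
        if best is None or sse < best[0]:
            best = (sse, a0, kh, kd)

a0_multi = best[1]                    # content recovered from all time points
a0_single_24h = y[times.index(24)]    # what a single 24-h hydrolysis reports
```

With noiseless synthetic data the multi-point fit recovers the true content, while the single 24-h value under-reports it by about 6% here, mirroring the underprediction reported for slowly hydrolyzing AA such as Met, Ile, and Val.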
Porter, Charlotte A; Bradley, Kevin M; McGowan, Daniel R
2018-05-01
The aim of this study was to verify, with a large dataset of 1394 51Cr-EDTA glomerular filtration rate (GFR) studies, the equivalence of slope-intercept and single-sample GFR. Raw data from 1394 patient studies were used to calculate four-sample slope-intercept GFR in addition to four individual single-sample GFR values (blood samples taken at 90, 150, 210 and 270 min after injection). The percentage differences between the four-sample slope-intercept and each of the single-sample GFR values were calculated, to identify the optimum single-sample time point. Having identified the optimum time point, the percentage difference between the slope-intercept and optimal single-sample GFR was calculated across a range of GFR values to investigate whether there was a GFR value below which the two methodologies cannot be considered equivalent. It was found that the lowest percentage difference between slope-intercept and single-sample GFR was for the third blood sample, taken at 210 min after injection. The median percentage difference was 2.5% and only 6.9% of patient studies had a percentage difference greater than 10%. Above a GFR value of 30 ml/min/1.73 m2, the median percentage difference between the slope-intercept and optimal single-sample GFR values was below 10%, and so it was concluded that, above this value, the two techniques are sufficiently equivalent. This study supports the recommendation of performing single-sample GFR measurements for GFRs greater than 30 ml/min/1.73 m2.
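For illustration, the slope-intercept calculation can be sketched as follows. The dose, sampling times, and concentrations are hypothetical, and the usual refinements (body-surface-area normalization, Brochner-Mortensen correction) are omitted:

```python
import math

# Slope-intercept GFR: fit ln(conc) = ln(C0) - k*t to late blood samples,
# then clearance = dose * k / C0 (ml/min). Input numbers are hypothetical.
def slope_intercept_gfr(dose_kbq, times_min, conc_kbq_per_ml):
    n = len(times_min)
    ys = [math.log(c) for c in conc_kbq_per_ml]
    xbar = sum(times_min) / n
    ybar = sum(ys) / n
    k = -sum((x - xbar) * (y - ybar) for x, y in zip(times_min, ys)) \
        / sum((x - xbar) ** 2 for x in times_min)
    c0 = math.exp(ybar + k * xbar)    # back-extrapolated concentration at t=0
    return dose_kbq * k / c0

# Synthetic mono-exponential data: C0 = 0.12 kBq/ml, k = 0.004 /min,
# dose = 3000 kBq, so the true clearance is 3000*0.004/0.12 = 100 ml/min.
times = [90, 150, 210, 270]           # the study's four sampling times
conc = [0.12 * math.exp(-0.004 * t) for t in times]
gfr = slope_intercept_gfr(3000.0, times, conc)
```

A single-sample estimate replaces the fitted exponential with a population model evaluated at one sampling time; the study's comparison is between that shortcut and the four-sample fit above.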
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-01
... of a Supported Direct FDA Work Hour for FY 2013 FDA is required to estimate 100 percent of its costs... operating costs. A. Estimating the Full Cost per Direct Work Hour in FY 2011 In general, the starting point for estimating the full cost per direct work hour is to estimate the cost of a full-time-equivalent...
Galactic Shapiro delay to the Crab pulsar and limit on weak equivalence principle violation
NASA Astrophysics Data System (ADS)
Desai, Shantanu; Kahya, Emre
2018-02-01
We calculate the total galactic Shapiro delay to the Crab pulsar by including the contributions from the dark matter as well as baryonic matter along the line of sight. The total delay due to the dark matter potential is about 3.4 days. For baryonic matter, we included the contributions from both the bulge and the disk, which are approximately 0.12 and 0.32 days, respectively. The total delay from all the matter distribution is therefore 3.84 days. We also calculate the limit on violations of the weak equivalence principle by using observations of "nano-shot" giant pulses from the Crab pulsar with time-delay <0.4 ns, as well as using time differences between radio and optical photons observed from this pulsar. Using the former, we obtain a limit on violation of the weak equivalence principle in terms of the PPN parameter Δγ < 2.41 × 10^{-15}. From the time difference between simultaneous optical and radio observations, we get Δγ < 1.54 × 10^{-9}. We also point out differences between our calculation of the Shapiro delay and that from two recent papers (Yang and Zhang, Phys Rev D 94(10):101501, 2016; Zhang and Gong, Astrophys J 837:134, 2017), which used the same observations to obtain a corresponding limit on Δγ.
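The quoted delay contributions and timing limit combine as below. The factor of 2 reflects the usual (1+γ)/2 scaling of the Shapiro delay and is our reading of the convention; it reproduces the quoted bound:

```python
# Sum the line-of-sight Shapiro delay contributions quoted in the abstract
# and derive the PPN bound: dgamma = 2*dt / t_shapiro, since the delay
# scales as (1+gamma)/2 (assumed convention).
delay_days = 3.4 + 0.12 + 0.32      # dark matter + bulge + disk (days)
t_shapiro = delay_days * 86400.0    # total delay in seconds
dt_nanoshot = 0.4e-9                # s, "nano-shot" giant-pulse timing limit
dgamma = 2.0 * dt_nanoshot / t_shapiro   # ~2.4e-15, matching the abstract
```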
Delayed benefit of naps on motor learning in preschool children.
Desrochers, Phillip C; Kurdziel, Laura B F; Spencer, Rebecca M C
2016-03-01
Sleep benefits memory consolidation across a variety of domains in young adults. However, while declarative memories benefit from sleep in young children, such improvements are not consistently seen for procedural skill learning. Here we examined whether performance improvements on a procedural task, although not immediately observed, are evident after a longer delay when augmented by overnight sleep (24 h after learning). We trained 47 children, aged 33-71 months, on a serial reaction time task and, using a within-subject design, evaluated performance at three time points: immediately after learning, after a daytime nap (nap condition) or equivalent wake opportunity (wake condition), and 24 h after learning. Consistent with previous studies, performance improvements following the nap did not differ from performance improvements following an equivalent interval spent awake. However, significant benefits of the nap were found when performance was assessed 24 h after learning. This research demonstrates that motor skill learning is benefited by sleep, but that this benefit is only evident after an extended period of time.
Sohrabi, Mehdi; Hakimi, Amir
2018-02-01
Photoneutron (PN) dosimetry in the fast, epithermal and thermal energy ranges, originating from the beam and albedo neutrons in high-energy X-ray medical accelerators, is highly important from scientific, technical, radiation protection and medical physics points of view. Detailed dose equivalents in the fast, epithermal and thermal PN energy ranges in air up to 2 m, as well as at 35 positions from the central axis of 12 cross sections of the phantom at different depths, were determined in 18 MV X-ray beams of a Siemens ONCOR accelerator. A novel dosimetry method based on polycarbonate track dosimeters (PCTD)/10B (with/without cadmium cover) was used to determine and separate different PN dose equivalents in air and in a multilayer polyethylene phantom. Dose equivalent distributions of PNs, as originated from the main beam and/or albedo PNs, on cross-plane, in-plane and diagonal axes in 10 cm × 10 cm fields are reported. PN dose equivalent distributions on the 3 axes have their maxima at the isocenter. Epithermal and thermal PN depth dose equivalent distributions in the phantom for the different positions studied peak at ∼3 cm depth. The neutron dosimeters used for the first time in such studies are highly effective for separating dose equivalents of PNs in the studied energy ranges (beam and/or albedo). The PN dose equivalent data matrix made available in this paper is highly essential for detailed patient dosimetry in general and for estimating secondary cancer risks in particular. Copyright © 2017. Published by Elsevier GmbH.
Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.
Renner, Ian W; Warton, David I
2013-03-01
Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
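A minimal Poisson regression of the kind MAXENT is shown to be equivalent to (up to the intercept) can be sketched as follows; the grid-cell counts and environmental covariate are hypothetical, and the fit is plain Newton-Raphson on the log-likelihood:

```python
import math

# Poisson GLM: log lambda_i = a + b*x_i for counts y_i in grid cells with
# covariate x_i (hypothetical data roughly following exp(x)).
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [1, 2, 3, 5, 7, 12, 20]

a = math.log(sum(ys) / len(ys))   # start at the intercept-only MLE
b = 0.0
for _ in range(25):               # Newton-Raphson iterations
    mu = [math.exp(a + b * x) for x in xs]
    ga = sum(y - m for y, m in zip(ys, mu))               # score w.r.t. a
    gb = sum((y - m) * x for x, y, m in zip(xs, ys, mu))  # score w.r.t. b
    haa = sum(mu)                                         # Fisher information
    hab = sum(m * x for m, x in zip(mu, xs))
    hbb = sum(m * x * x for m, x in zip(mu, xs))
    det = haa * hbb - hab * hab
    a += (ga * hbb - gb * hab) / det
    b += (gb * haa - ga * hab) / det
```

At convergence the score equations hold, so the fitted intensity reproduces the total count exactly; the article's point is that MAXENT solves the same estimating equations apart from a scale-dependent intercept.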
Exploring the reference point in prospect theory: gambles for length of life.
van Osch, Sylvie M C; van den Hout, Wilbert B; Stiggelbout, Anne M
2006-01-01
Attitude toward risk is an important factor determining patient preferences. Risk behavior has been shown to be strongly dependent on the perception of the outcome as either a gain or a loss. According to prospect theory, the reference point determines how an outcome is perceived. However, no theory on the location of the reference point exists, and for the health domain, there is no direct evidence for the location of the reference point. This article combines qualitative with quantitative data to provide evidence of the reference point in life-year certainty equivalent (CE) gambles and to explore the psychology behind the reference point. The authors argue that goals (aspirations) in life influence the reference point. While thinking aloud, 45 healthy respondents gave certainty equivalents for life-year CE gambles with long and short durations of survival. Contrary to suggestions from the literature, qualitative data argued that the offered certainty equivalent most frequently served as the reference point. Thus, respondents perceived life-year CE gambles as mixed. Framing of the question and goals set in life appeared to be important factors behind the psychology of the reference point. On the basis of the authors' quantitative and qualitative data, they argue that goals alter the perception of outcomes as described by prospect theory by influencing the reference point. This relationship is more apparent for the near future as opposed to the remote future, as goals are mostly set for the near future.
Increasing radiographer productivity by an incentive point system.
Williams, B; Chacko, P T
1982-01-01
Because of a very low technologist productivity in their Radiology Department, the authors describe a Productive Point System they developed and implemented to solve this personnel problem. After establishing the average time required to perform all exams, point credits (one point for every ten minutes utilized) were assigned to each exam performed, thereby determining an index of production. A Productive Index of 80% was considered realistic and was the equivalent of 192 points for a 40-hour work week. From 1975 to 1978 personal productivity increased from 79% to 113%. This resulted in an average yearly fiscal savings of over $20,000.00 for this three-year period. There was also a significant improvement in exam efficiency and quality, job attitude, personnel morale, and public relations. This program was highly successful because technologist acceptance and cooperation was complete, and this occurred mainly because the system supports the normal occupational goals and expectations of technologists.
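The point arithmetic reported above works out as follows (one point per ten minutes of established exam time, so a 40-hour week supports 240 points and the 80% target is 192):

```python
# Incentive point system arithmetic from the abstract.
MINUTES_PER_POINT = 10
week_points = 40 * 60 // MINUTES_PER_POINT      # 240 points in a 40-hour week
target_points = int(0.80 * week_points)         # 80% Productive Index = 192

def productive_index(points_earned):
    # percent of the full-capacity weekly point total actually earned
    return 100.0 * points_earned / week_points
```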
NASA Astrophysics Data System (ADS)
Bin Mansoor, Saad; Sami Yilbas, Bekir
2015-08-01
Laser short-pulse heating of an aluminum thin film is considered and energy transfer in the film is formulated using the Boltzmann equation. Since the heating duration is short and the film thickness is considerably small, thermal separation of electron and lattice sub-systems is incorporated in the analysis. The electron-phonon coupling is used to formulate thermal communication of both sub-systems during the heating period. Equivalent equilibrium temperature is introduced to account for the average energy of all phonons around a local point when they redistribute adiabatically to an equilibrium state. Temperature predictions of the Boltzmann equation are compared with those obtained from the two-equation model. It is found that temperature predictions from the Boltzmann equation differ slightly from the two-equation model results. Temporal variation of equivalent equilibrium temperature does not follow the laser pulse intensity in the electron sub-system. The time occurrence of the peak equivalent equilibrium temperature differs for electron and lattice sub-systems, which is attributed to phonon scattering in the irradiated field in the lattice sub-system. In this case, time shift is observed for occurrence of the peak temperature in the lattice sub-system.
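The two-equation (electron-lattice) model the Boltzmann results are compared against can be sketched as a pair of coupled ODEs. The material constants and pulse parameters below are order-of-magnitude placeholders for a metal film, not the paper's values:

```python
import math

# Two-equation model: Ce*dTe/dt = -G*(Te - Tl) + S(t),  Cl*dTl/dt = G*(Te - Tl).
# Te, Tl: electron and lattice temperatures; G: electron-phonon coupling.
Ce, Cl, G = 2.0e4, 2.5e6, 3.0e17        # J m^-3 K^-1, J m^-3 K^-1, W m^-3 K^-1
S0, t0, tau = 1.0e21, 0.5e-12, 0.1e-12  # W m^-3 peak; pulse centre, width (s)
dt = 1.0e-15                            # explicit Euler step (1 fs)

Te = Tl = 300.0
t = 0.0
te_hist, tl_hist = [], []
for _ in range(3000):                   # integrate over 3 ps
    S = S0 * math.exp(-((t - t0) / tau) ** 2)   # Gaussian laser source
    dTe = (-G * (Te - Tl) + S) / Ce
    dTl = G * (Te - Tl) / Cl
    Te += dt * dTe
    Tl += dt * dTl
    t += dt
    te_hist.append(Te)
    tl_hist.append(Tl)

peak_te_step = te_hist.index(max(te_hist))
peak_tl_step = tl_hist.index(max(tl_hist))
```

With these numbers the electron temperature peaks near the pulse while the lattice temperature keeps rising afterwards, reproducing the time shift between the two peak temperatures noted in the abstract.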
Paint-only is equivalent to scrub-and-paint in preoperative preparation of abdominal surgery sites.
Ellenhorn, Joshua D I; Smith, David D; Schwarz, Roderich E; Kawachi, Mark H; Wilson, Timothy G; McGonigle, Kathryn F; Wagman, Lawrence D; Paz, I Benjamin
2005-11-01
Antiseptic preoperative skin site preparation is used to prepare the operative site before making a surgical incision. The goal of this preparation is a reduction in postoperative wound infection. The most straightforward technique necessary to achieve this goal remains controversial. A prospective randomized trial was designed to prove equivalency for two commonly used techniques of surgical skin site preparation. Two hundred thirty-four patients undergoing nonlaparoscopic abdominal operations were consented for the trial. Exclusion criteria included presence of active infection at the time of operation, neutropenia, history of skin reaction to iodine, or anticipated insertion of prosthetic material at the time of operation. Patients were randomized to receive either a vigorous 5-minute scrub with povidone-iodine soap, followed by absorption with a sterile towel, and a paint with aqueous povidone-iodine or surgical site preparation with a povidone-iodine paint only. The primary end point of the study was wound infection rate at 30 days, defined as presence of clinical signs of infection requiring therapeutic intervention. Patients randomized to the scrub-and-paint arm (n = 115) and the paint-only arm (n = 119) matched at baseline with respect to age, comorbidity, wound classification, mean operative time, placement of drains, prophylactic antibiotic use, and surgical procedure (all p > 0.09). Wound infection occurred in 12 (10%) scrub-and-paint patients, and 12 (10%) paint-only patients. Based on our predefined equivalency parameters, we conclude equivalence of infection rates between the two preparations. Preoperative preparation of the abdomen with a scrub with povidone-iodine soap followed by a paint with aqueous povidone-iodine can be abandoned in favor of a paint with aqueous povidone-iodine alone. This change will result in reductions in operative times and costs.
Unsteady steady-states: Central causes of unintentional force drift
Ambike, Satyajit; Mattos, Daniela; Zatsiorsky, Vladimir M.; Latash, Mark L.
2016-01-01
We applied the theory of synergies to analyze the processes that lead to unintentional decline in isometric fingertip force when visual feedback of the produced force is removed. We tracked the changes in hypothetical control variables involved in single fingertip force production based on the equilibrium-point hypothesis, namely, the fingertip referent coordinate (RFT) and its apparent stiffness (CFT). The system's state is defined by a point in the {RFT; CFT} space. We tested the hypothesis that, after visual feedback removal, this point (1) moves along directions leading to drop in the output fingertip force, and (2) has even greater motion along directions that leaves the force unchanged. Subjects produced a prescribed fingertip force using visual feedback, and attempted to maintain this force for 15 s after the feedback was removed. We used the “inverse piano” apparatus to apply small and smooth positional perturbations to fingers at various times after visual feedback removal. The time courses of RFT and CFT showed that force drop was mostly due to a drift in RFT towards the actual fingertip position. Three analysis techniques, namely, hyperbolic regression, surrogate data analysis, and computation of motor-equivalent and non-motor-equivalent motions, suggested strong co-variation in RFT and CFT stabilizing the force magnitude. Finally, the changes in the two hypothetical control variables {RFT; CFT} relative to their average trends also displayed covariation. On the whole the findings suggest that unintentional force drop is associated with (a) a slow drift of the referent coordinate that pulls the system towards a low-energy state, and (b) a faster synergic motion of RFT and CFT that tends to stabilize the output fingertip force about the slowly-drifting equilibrium point. PMID:27540726
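The motor-equivalent/non-motor-equivalent split behind this kind of synergy analysis can be sketched for a toy two-joint, one-dimensional task; the Jacobian and mean-free trial deviations are hypothetical:

```python
import math

# Deviations along the null space of the task Jacobian J leave the output
# unchanged (goal-equivalent, GEV); deviations along J change it (NGEV).
J = [1.0, 1.0]                                  # d(output)/d(joint), hypothetical
norm = math.sqrt(J[0] ** 2 + J[1] ** 2)
u = [J[0] / norm, J[1] / norm]                  # task-relevant direction

# mean-free per-trial joint deviations (hypothetical)
trials = [(1.0, -1.2), (0.8, -0.6), (-1.1, 0.9), (-0.7, 0.9)]
gev = ngev = 0.0
for d in trials:
    c = d[0] * u[0] + d[1] * u[1]               # component that changes the output
    ngev += c * c
    gev += d[0] ** 2 + d[1] ** 2 - c * c
gev /= len(trials)    # per trial, per UCM dimension (dim = 1 here)
ngev /= len(trials)   # per trial, per orthogonal dimension (dim = 1 here)
synergy_index = (gev - ngev) / (gev + ngev)     # +1: fully synergic output stabilization
```

Here most variance lies along the goal-equivalent direction, so the index is close to +1; a drift of the referent coordinate shifts the mean state, while high GEV-to-NGEV covariation stabilizes the output about it.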
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podkaminer, Kara; Xie, Fei; Lin, Zhenhong
This analysis represents the biogas-to-electricity pathway under the Renewable Fuel Standard (RFS) as a point of purchase incentive and tests the impact of this incentive on EV deployment using a vehicle consumer choice model. The credit value generated under this policy was calculated in a number of scenarios based on electricity use of each power train choice on a yearly basis over the 15-year vehicle lifetime, accounting for the average electric vehicle miles travelled and vehicle efficiency, competition for biogas-derived electricity among electric vehicles (EVs), the RIN equivalence value and the time value of money. The credit value calculation in each of these scenarios is offered upfront as a point of purchase incentive for EVs using the Market Acceptance of Advanced Automotive Technologies (MA3T) vehicle choice model, which tracks sales, fleet size and energy use over time. The majority of the scenarios use a proposed RIN equivalence value, which increases the credit value as a way to explore the analysis space. Additional model runs show the relative impact of the equivalence value on EV deployment. The MA3T model output shows that a consumer incentive accelerates the deployment of EVs for all scenarios relative to the baseline (no policy) case. In the scenario modeled to represent the current biogas-to-electricity generation capacity (15 TWh/year) with a 5.24 kWh/RIN equivalence value, the policy leads to an additional 1.4 million plug-in hybrid electric vehicles (PHEVs) and 3.5 million battery electric vehicles (BEVs) in 2025 beyond the no-policy case of 1.3 million PHEVs and 2.1 million BEVs when the full value of the credit is passed on to the consumer. In 2030, this increases to 2.4 million PHEVs and 7.3 million BEVs beyond the baseline. This larger impact on BEVs relative to PHEVs is due in part to the larger credit that BEVs receive in the model based on the greater percentage of electric vehicle miles traveled by BEVs relative to PHEVs.
In this scenario 2025 also represents the last year in which biogas-derived electricity is able to fully supply the transportation electricity demand in the model. After 2025, the credit value declines on a per vehicle basis. At the same time a larger fraction of the credit may shift towards biogas producers in order to incent additional biogas production. The expanded 41 TWh/year biogas availability scenarios represent an increase beyond today's generation capacity and allow greater eRIN generation. With a 5.24 kWh/RIN equivalence value, when all of the credit is directed towards reducing vehicle purchase prices, the 41 TWh/year biogas scenario results in 4.1 million additional PHEVs and 12.2 million additional BEVs on the road in 2030 beyond the baseline of 2.5 million PHEVs and 6.1 million BEVs. Under this expanded biogas capacity, biogas-derived electricity generation is able to fully supply electricity for a fleet of over 21 million EVs (15.6 million BEVs and 5.8 million PHEVs) on a yearly basis. In addition to assessing the full value credit scenarios described above, multiple scenarios were analyzed to determine the impact if only a fraction of the credit value was passed on to the consumer. In all of these cases, the EV deployment was scaled back as the fraction of the credit that was passed on to the consumer was reduced. These scenarios can be used to estimate the impact if the credit value is reduced in other ways as well, as demonstrated by the scenarios where the current (22.6 kWh/RIN) equivalence value was used. The EV deployment that results from an equivalence value of 22.6 kWh/RIN is roughly equivalent to the EV deployment observed in the 25% case using the 5.24 kWh/RIN equivalence value. A higher equivalence value means that a smaller number of credits, and therefore value, is created for each kWh, and therefore the impact on EV deployment is reduced.
This analysis shows several of the drivers that will impact eRIN generation and credit value, and tests the impact of an eRIN point of purchase incentive on EV deployment. This additional incentive can accelerate the deployment of EVs when it is used to reduce vehicle purchase prices. However, the ultimate impact of this policy, as modeled here, will be determined by future RIN prices, the extent to which eRIN credit value can be passed on to the consumer as a point of purchase incentive and the equivalence value.
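One way to sketch the point-of-purchase credit calculation is as a discounted sum of lifetime RINs. The annual electricity use, RIN price, and discount rate below are hypothetical; 5.24 and 22.6 kWh/RIN are the equivalence values discussed above:

```python
# Upfront credit = present value of the RINs a vehicle's electricity use
# would generate over its life (15 years, per the analysis).
def upfront_credit(kwh_per_year, kwh_per_rin, rin_price, life_years=15, rate=0.07):
    rins_per_year = kwh_per_year / kwh_per_rin
    return sum(rins_per_year * rin_price / (1.0 + rate) ** yr
               for yr in range(1, life_years + 1))

credit_524 = upfront_credit(3000.0, 5.24, 1.00)   # proposed equivalence value
credit_226 = upfront_credit(3000.0, 22.6, 1.00)   # current equivalence value
```

The credit scales linearly, so the 22.6 kWh/RIN case yields a credit smaller by exactly the ratio of equivalence values, consistent with the reduced EV deployment in those scenarios.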
A restricted proof that the weak equivalence principle implies the Einstein equivalence principle
NASA Technical Reports Server (NTRS)
Lightman, A. P.; Lee, D. L.
1973-01-01
Schiff has conjectured that the weak equivalence principle (WEP) implies the Einstein equivalence principle (EEP). A proof is presented of Schiff's conjecture, restricted to: (1) test bodies made of electromagnetically interacting point particles, that fall from rest in a static, spherically symmetric gravitational field; (2) theories of gravity within a certain broad class - a class that includes almost all complete relativistic theories that have been found in the literature, but with each theory truncated to contain only point particles plus electromagnetic and gravitational fields. The proof shows that every nonmetric theory in the class (every theory that violates EEP) must violate WEP. A formula is derived for the magnitude of the violation. It is shown that WEP is a powerful theoretical and experimental tool for constraining the manner in which gravity couples to electromagnetism in gravitation theories.
Gauge equivalence of two different Ansätze for non-Abelian charged vortices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, S.K.
1987-05-15
Recently the existence of non-Abelian charged vortices has been established by taking two different Ansätze in SU(2) gauge theories. We point out that these two Ansätze are in two topologically equivalent prescriptions. We show that they are gauge equivalent only at infinity. We also show that this gauge equivalence is not possible for Z_N vortices in SU(N) gauge theories for N ≥ 3.
Prabhu, Malavika; Clapp, Mark A; McQuaid-Hanson, Emily; Ona, Samsiya; O'Donnell, Taylor; James, Kaitlyn; Bateman, Brian T; Wylie, Blair J; Barth, William H
2018-07-01
To evaluate whether a liposomal bupivacaine incisional block decreases postoperative pain and represents an opioid-minimizing strategy after scheduled cesarean delivery. In a single-blind, randomized controlled trial among opioid-naive women undergoing cesarean delivery, liposomal bupivacaine or placebo was infiltrated into the fascia and skin at the surgical site, before fascial closure. Using an 11-point numeric rating scale, the primary outcome was pain score with movement at 48 hours postoperatively. A sample size of 40 women per group was needed to detect a 1.5-point reduction in pain score in the intervention group. Pain scores and opioid consumption, in oral morphine milligram equivalents, at 48 hours postoperatively were summarized as medians (interquartile range) and compared using the Wilcoxon rank-sum test. Between March and September 2017, 249 women were screened, 103 women enrolled, and 80 women were randomized. One woman in the liposomal bupivacaine group was excluded after randomization as a result of a vertical skin incision, leaving 39 patients in the liposomal bupivacaine group and 40 in the placebo group. Baseline characteristics between groups were similar. The median (interquartile range) pain score with movement at 48 hours postoperatively was 4 (2-5) in the liposomal bupivacaine group and 3.5 (2-5.5) in the placebo group (P=.72). The median (interquartile range) opioid use was 37.5 (7.5-60) morphine milligram equivalents in the liposomal bupivacaine group and 37.5 (15-75) morphine milligram equivalents in the placebo group during the first 48 hours postoperatively (P=.44). Compared with placebo, a liposomal bupivacaine incisional block at the time of cesarean delivery resulted in similar postoperative pain scores in the first 48 hours postoperatively. ClinicalTrials.gov, NCT02959996.
Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.
1981-01-01
To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies or POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
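The equivalent-point-source step can be sketched as a small least-squares inversion. The observation geometry, kernel, and source strengths below are hypothetical, and the gravitational constant is omitted:

```python
# Solve A m = g in least squares for two equivalent point-source strengths m,
# where A[i][j] is the geometric attraction kernel from source j to obs i.
obs = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]   # surface points (x, z)
srcs = [(0.5, -2.0), (2.5, -2.0)]                        # fixed sources at depth

def kernel(o, s):
    # vertical attraction per unit source strength (constant omitted)
    dx, dz = o[0] - s[0], o[1] - s[1]
    r2 = dx * dx + dz * dz
    return abs(dz) / r2 ** 1.5

true_m = [2.0, 1.0]                                      # hypothetical strengths
g = [sum(kernel(o, s) * m for s, m in zip(srcs, true_m)) for o in obs]

# Normal equations (A^T A) m = A^T g for the 2-source case, solved directly.
ata = [[sum(kernel(o, srcs[i]) * kernel(o, srcs[j]) for o in obs)
        for j in range(2)] for i in range(2)]
atg = [sum(kernel(o, srcs[i]) * gi for o, gi in zip(obs, g)) for i in range(2)]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
m0 = (atg[0] * ata[1][1] - atg[1] * ata[0][1]) / det
m1 = (atg[1] * ata[0][0] - atg[0] * ata[0][1]) / det
```

Once the source strengths are recovered, the linear transformations mentioned above (derivatives, continuations, pole reductions) amount to re-evaluating the same sources with different kernels.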
Tillin, T; Sattar, N; Godsland, I F; Hughes, A D; Chaturvedi, N; Forouhi, N G
2015-01-01
Aims Conventional definitions of obesity, e.g. body mass index (BMI) ≥ 30 kg/m2 or waist circumference cut-points of 102 cm (men) and 88 cm (women), may underestimate metabolic risk in non-Europeans. We prospectively identified equivalent ethnicity-specific obesity cut-points for the estimation of diabetes risk in British South Asians, African-Caribbeans and Europeans. Methods We studied a population-based cohort from London, UK (1356 Europeans, 842 South Asians, 335 African-Caribbeans) who were aged 40–69 years at baseline (1988–1991), when they underwent anthropometry, fasting and post-load (75 g oral glucose tolerance test) blood tests. Incident Type 2 diabetes was identified from primary care records, participant recall and/or follow-up biochemistry. Ethnicity-specific obesity cut-points in association with diabetes incidence were estimated using negative binomial regression. Results Diabetes incidence rates (per 1000 person years) at a median follow-up of 19 years were 20.8 (95% CI: 18.4, 23.6) and 12.0 (8.3, 17.2) in South Asian men and women, 16.5 (12.7, 21.4) and 17.5 (13.0, 23.7) in African-Caribbean men and women, and 7.4 (6.3, 8.7), and 7.2 (5.3, 9.8) in European men and women. For incidence rates equivalent to those at a BMI of 30 kg/m2 in European men and women, age- and sex-adjusted cut-points were: South Asians, 25.2 (23.4, 26.6) kg/m2; and African-Caribbeans, 27.2 (25.2, 28.6) kg/m2. For South Asian and African-Caribbean men, respectively, waist circumference cut-points of 90.4 (85.0, 94.5) and 90.6 (85.0, 94.5) cm were equivalent to a value of 102 cm in European men. Waist circumference cut-points of 84.0 (74.0, 90.0) cm in South Asian women and 81.2 (71.4, 87.4) cm in African-Caribbean women were equivalent to a value of 88 cm in European women. Conclusions In prospective analyses, British South Asians and African-Caribbeans had equivalent diabetes incidence rates at substantially lower obesity levels than the conventional European cut-points. 
PMID:25186015
Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.
Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael
2014-09-01
In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.
Process for Assessing the Stability of HAN (Hydroxylammonium Nitrate)-Based Liquid Propellants
1989-02-09
Scholz; guidelines by Riedel-de Haën for titration according to the Karl Fischer method, 3. Auflage/3rd edition, 1982 /22/; Jander, G., and ... Potentiometric determination of the equivalence point is the most suitable method /15/. Time is saved by using automatically recording titrators ... The water content of liquid propellants on the basis of HAN can, according to Fig. 6, be determined directly by Karl Fischer titration.
NASA Astrophysics Data System (ADS)
Lusanna, Luca; Pauri, Massimo
"The last remnant of physical objectivity of space-time" is disclosed in the case of a continuous family of spatially non-compact models of general relativity (GR). The physical individuation of point-events is furnished by the autonomous degrees of freedom of the gravitational field (viz., the Dirac observables) which represent-as it were-the ontic part of the metric field. The physical role of the epistemic part (viz. the gauge variables) is likewise clarified as embodying the unavoidable non-inertial aspects of GR. At the end the philosophical import of the Hole Argument is substantially weakened and in fact the Argument itself dissolved, while a specific four-dimensional holistic and structuralist view of space-time (called point-structuralism) emerges, including elements common to the tradition of both substantivalism and relationism. The observables of our models undergo real temporal change: this gives new evidence to the fact that statements like the frozen-time character of evolution, as other ontological claims about GR, are model dependent.
Development and validation of a canine radius replica for mechanical testing of orthopedic implants.
Little, Jeffrey P; Horn, Timothy J; Marcellin-Little, Denis J; Harrysson, Ola L A; West, Harvey A
2012-01-01
To design and fabricate fiberglass-reinforced composite (FRC) replicas of a canine radius and compare their mechanical properties with those of radii from dog cadavers. The sample comprised replicas based on 3 FRC formulations with 33%, 50%, or 60% short-length discontinuous fiberglass by weight (7 replicas/group) and 5 radii from large (>30 kg) dog cadavers. Bones and FRC replicas underwent nondestructive mechanical testing including 4-point bending, axial loading, and torsion and destructive testing to failure during 4-point bending. Axial, internal and external torsional, and bending stiffnesses were calculated. Axial pullout loads for bone screws placed in the replicas and cadaveric radii were also assessed. Axial, internal and external torsional, and 4-point bending stiffnesses of FRC replicas increased significantly with increasing fiberglass content. The 4-point bending stiffness of 33% and 50% FRC replicas and axial and internal torsional stiffnesses of 33% FRC replicas were equivalent to the cadaveric bone stiffnesses. Ultimate 4-point bending loads did not differ significantly between FRC replicas and bones. Ultimate screw pullout loads did not differ significantly between 33% or 50% FRC replicas and bones. Mechanical property variability (coefficient of variation) of cadaveric radii was approximately 2 to 19 times that of FRC replicas, depending on loading protocols. Within the range of properties tested, FRC replicas had mechanical properties equivalent to, and mechanical property variability less than, those of radii from dog cadavers. Results indicated that FRC replicas may be a useful alternative to cadaveric bones for biomechanical testing of canine bone constructs.
Non-minimally coupled tachyon field in teleparallel gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fazlpour, Behnaz; Banijamali, Ali, E-mail: b.fazlpour@umz.ac.ir, E-mail: a.banijamali@nit.ac.ir
2015-04-01
We perform a full investigation of the dynamics of a new dark energy model in which the four-derivative of a non-canonical scalar field (tachyon) is non-minimally coupled to the vector torsion. Our analysis is done in the framework of the teleparallel equivalent of general relativity, which is based on torsion instead of curvature. We show that in our model there exists a late-time scaling attractor (point P_4) corresponding to an accelerating universe in which the dark energy and dark matter densities are of the same order. Such a point can help to alleviate the cosmological coincidence problem. The existence of this point is the most significant difference between our model and another model in which a canonical scalar field (quintessence) is used instead of the tachyon field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacGregor, B.R.; McCoy, A.E.; Wickramasekara, S., E-mail: wickrama@grinnell.edu
2012-09-15
We present a formalism of Galilean quantum mechanics in non-inertial reference frames and discuss its implications for the equivalence principle. This extension of quantum mechanics rests on the Galilean line group, the semidirect product of the real line and the group of analytic functions from the real line to the Euclidean group in three dimensions. This group provides transformations between all inertial and non-inertial reference frames and contains the Galilei group as a subgroup. We construct a certain class of unitary representations of the Galilean line group and show that these representations determine the structure of quantum mechanics in non-inertial reference frames. Our representations of the Galilean line group contain the usual unitary projective representations of the Galilei group, but have a more intricate cocycle structure. The transformation formula for the Hamiltonian under the Galilean line group shows that in a non-inertial reference frame it acquires a fictitious potential energy term that is proportional to the inertial mass, suggesting the equivalence of inertial mass and gravitational mass in quantum mechanics. - Highlights: • A formulation of Galilean quantum mechanics in non-inertial reference frames is given. • The key concept is the Galilean line group, an infinite-dimensional group. • Unitary, cocycle representations of the Galilean line group are constructed. • A non-central extension of the group underlies these representations. • The quantum equivalence principle and gravity emerge from these representations.
Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.
Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo
2018-04-01
In this paper, a novel adaptive dynamic programming (ADP) algorithm, called the "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis guarantees that the upper and lower iterative value functions converge to the upper and lower optimums, respectively. When a saddle-point equilibrium exists, both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, without requiring existence criteria for the saddle-point equilibrium. If a saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained separately, and they are proved not to be equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
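The upper/lower value construction for zero-sum games can be sketched in the simplest static (matrix-game) setting; this is an illustrative reduction, not the paper's iterative ADP algorithm for nonlinear dynamic systems:

```python
import numpy as np

def upper_lower_values(payoff):
    """Upper and lower values of a two-player zero-sum matrix game
    (row player maximizes, column player minimizes). A pure saddle-point
    equilibrium exists exactly when the two values coincide."""
    lower = payoff.min(axis=1).max()  # row player's best guaranteed payoff
    upper = payoff.max(axis=0).min()  # column player's best guaranteed cap
    return upper, lower

# Saddle point at entry (1, 1): upper == lower == 3
A = np.array([[4.0, 2.0], [5.0, 3.0]])
u, l = upper_lower_values(A)
print(u, l)  # 3.0 3.0

# Matching-pennies-style game: no pure saddle point, so upper > lower
B = np.array([[1.0, -1.0], [-1.0, 1.0]])
u2, l2 = upper_lower_values(B)
print(u2, l2)  # 1.0 -1.0
```

The gap between upper and lower values in the second game is the static analogue of the non-equivalent upper and lower performance index functions described in the abstract.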
Investigation of the Parameters of Sealed Triple-Point Cells for Cryogenic Gases
NASA Astrophysics Data System (ADS)
Fellmuth, B.; Wolber, L.
2011-01-01
An overview is given of the parameters of a large number of sealed triple-point cells for the cryogenic gases hydrogen, oxygen, neon, and argon, determined within the framework of an international star intercomparison, in order to optimize the measurement of melting curves and to establish complete and reliable uncertainty budgets for the realization of temperature fixed points. Special emphasis is given to the question of whether the parameters are primarily influenced by the cell design or by the properties of the fixed-point samples. To explain the surprisingly long thermal-recovery periods after the heat pulses of the intermittent heating through the melting range, a simple model is developed based on a newly defined heat-capacity equivalent, which accounts for the heat of fusion and a melting-temperature inhomogeneity. The analysis of the recovery using a graded set of exponential functions containing different time constants is also explained in detail.
Aaron, Stacey E; Gregory, Chris M; Simpson, Annie N
2016-08-01
One-third of individuals with stroke report symptoms of depression, which has a negative impact on recovery. Physical activity (PA) is a potentially effective therapy. Our objective was to examine the associations between subjectively assessed PA levels and symptoms of depression in a nationally representative stroke sample. We conducted a cross-sectional study of 175 adults in the National Health and Nutrition Examination Survey 2011-2012 cycle. Average moderate, vigorous, and combination-equivalent PA metabolic equivalent (MET)-minutes per week were derived from the Global Physical Activity Questionnaire, and the 2008 Physical Activity Guidelines/American College of Sports Medicine recommendations of ≥500 MET-minutes per week of moderate, vigorous, or combination-equivalent PA were used as cut points. Depression symptoms were measured using the Patient Health Questionnaire-9. Meeting moderate PA guidelines was associated with 74% lower odds of having depression symptoms (P < .0001) and 89% lower odds of major symptoms of depression (P = .0003). Meeting vigorous guidelines was associated with 91% lower odds of having mild symptoms of depression (P = .04). Participating in only some moderate, vigorous, or combination-equivalent PA was associated with odds of depression symptoms 13 times greater than those for meeting guidelines (P = .005); odds of mild symptoms of depression were 9 times greater (P = .01), and odds of major symptoms of depression were 15 times greater (P = .006). There is a lower risk of mild symptoms of depression when vigorous PA guidelines are met and of major symptoms of depression when moderate guidelines are met. Participating in some PA is not enough to reduce the risk of depression symptoms.
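The ≥500 MET-minutes/week cut point can be sketched as follows, assuming the GPAQ-style convention of weighting moderate activity at 4 METs and vigorous activity at 8 METs (an assumption here; the survey's own scoring may differ in detail):

```python
def weekly_met_minutes(moderate_min=0.0, vigorous_min=0.0):
    """Combined-equivalent MET-minutes per week, weighting moderate
    activity at 4 METs and vigorous at 8 METs (GPAQ-style assumption)."""
    return 4.0 * moderate_min + 8.0 * vigorous_min

def meets_guideline(met_minutes, threshold=500.0):
    """Guideline cut point used in the study: >= 500 MET-minutes/week."""
    return met_minutes >= threshold

# 150 min/week of moderate activity clears the cut point; 60 min does not.
print(meets_guideline(weekly_met_minutes(moderate_min=150)))  # True
print(meets_guideline(weekly_met_minutes(moderate_min=60)))   # False
```

Note that 62.5 min/week of vigorous activity also reaches 500 MET-minutes under this weighting, which is why vigorous activity satisfies the guideline in less than half the time.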
Bhaskaran, Krishnan; Forbes, Harriet J; Douglas, Ian; Leon, David A; Smeeth, Liam
2013-01-01
Objectives To assess the completeness and representativeness of body mass index (BMI) data in the Clinical Practice Research Datalink (CPRD), and determine an optimal strategy for their use. Design Descriptive study. Setting Electronic healthcare records from primary care. Participants A random sample of one million patients from the UK CPRD primary care database, aged ≥16 years. Primary and secondary outcome measures BMI completeness in CPRD was evaluated by age, sex and calendar period. CPRD-based summary BMI statistics for each calendar year (2003–2010) were age-standardised and sex-standardised and compared with equivalent statistics from the Health Survey for England (HSE). Results BMI completeness increased over calendar time from 37% in 1990–1994 to 77% in 2005–2011, was higher among females and increased with age. When BMI at specific time points was assigned based on the most recent record, calendar-year-specific mean BMI statistics underestimated equivalent HSE statistics by 0.75–1.1 kg/m2. Restriction to those with a recent (≤3 years) BMI resulted in mean BMI estimates closer to HSE (≤0.28 kg/m2 underestimation), but excluded up to 47% of patients. An alternative strategy of imputing up-to-date BMI based on modelled changes in BMI over time since the last available record also led to mean BMI estimates that were close to HSE (≤0.37 kg/m2 underestimation). Conclusions Completeness of BMI in CPRD increased over time and varied by age and sex. At a given point in time, a large proportion of the most recent BMIs are unlikely to reflect current BMI; the consequent BMI misclassification might be reduced by employing model-based imputation of current BMI. PMID:24038008
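The "most recent BMI within 3 years" restriction strategy described above might be sketched like this (a hypothetical helper for illustration, not CPRD code):

```python
from datetime import date

def bmi_at(records, index_date, max_lag_years=3):
    """Most recent BMI recorded on or before index_date, kept only if it
    falls within max_lag_years (the restriction strategy in the abstract).
    records: list of (date, bmi) pairs; returns None when no usable value."""
    prior = [(d, b) for d, b in records if d <= index_date]
    if not prior:
        return None
    d, b = max(prior)  # latest record; dates assumed distinct
    return b if (index_date - d).days <= 365.25 * max_lag_years else None

records = [(date(2005, 1, 1), 25.0), (date(2009, 6, 1), 27.0)]
print(bmi_at(records, date(2010, 1, 1)))    # 27.0 (record ~7 months old)
print(bmi_at(records, date(2013, 12, 31)))  # None (last BMI > 3 years old)
```

Patients who return None under this rule are exactly the ones the abstract notes are excluded (up to 47%), motivating the model-based imputation alternative.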
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
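For context, the likelihood computation whose cost motivates the paper is the standard forward algorithm for hidden Markov models. A minimal unscaled sketch (an illustration of the dynamic-programming recursion, not the paper's vicarious-map construction):

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Forward algorithm for a discrete-emission HMM (no scaling, so only
    suitable for short sequences). pi: (K,) initial distribution,
    A: (K, K) transition matrix, B: (K, M) emission matrix."""
    alpha = pi * B[:, obs[0]]               # alpha_1(k) = pi_k * b_k(o_1)
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]  # sum over previous states
    return float(np.log(alpha.sum()))

# Degenerate 1-state, 2-symbol model: each symbol has probability 0.5,
# so a length-3 sequence has likelihood 0.5**3 = 0.125.
ll = hmm_log_likelihood([0, 1, 0], np.array([1.0]),
                        np.array([[1.0]]), np.array([[0.5, 0.5]]))
print(round(ll, 4))  # -2.0794
```

The recursion costs O(T·K²) per sequence; repeating it inside a model-selection loop is the expense the simplified (e.g. length-limited) feature space is meant to reduce.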
Goodrich, J Marc; Lonigan, Christopher J; Kleuver, Cherie G; Farver, Joann M
2016-09-01
In this study we evaluated the predictive validity of conceptual scoring. Two independent samples of Spanish-speaking language minority preschoolers (Sample 1: N = 96, mean age = 54·51 months, 54·3% male; Sample 2: N = 116, mean age = 60·70 months, 56·0% male) completed measures of receptive, expressive, and definitional vocabulary in their first (L1) and second (L2) languages at two time points approximately 9-12 months apart. We examined whether unique L1 and L2 vocabulary at time 1 predicted later L2 and L1 vocabulary, respectively. Results indicated that unique L1 vocabulary did not predict later L2 vocabulary after controlling for initial L2 vocabulary. An identical pattern of results emerged for L1 vocabulary outcomes. We also examined whether children acquired translational equivalents for words known in one language but not the other. Results indicated that children acquired translational equivalents, providing partial support for the transfer of vocabulary knowledge across languages.
NASA Astrophysics Data System (ADS)
Engdahl, N.
2017-12-01
Backward in time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling, but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward in time models, and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent but that the linear-flood BIT models are reasonable approximations. Point-based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. The BIT approximation is not as robust when nonlinear reactive transport is considered, and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in both the forward and backward models. This allows particles to enter low-velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of existing Lagrangian methods.
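The reversibility behaviour under uniform flow can be sketched with a 1-D advective-diffusive particle random walk (an illustrative reduction of the particle methods used in the study; parameters are arbitrary). Reversing the velocity returns the particle cloud's mean to the origin, while diffusion spreads the cloud in both legs, illustrating why mixing itself is not undone:

```python
import numpy as np

def random_walk(x0, v, dt, n_steps, D, rng):
    """Particle random walk for 1-D advection-diffusion:
    x <- x + v*dt + sqrt(2*D*dt) * N(0, 1)."""
    x = x0.copy()
    step = np.sqrt(2.0 * D * dt)
    for _ in range(n_steps):
        x = x + v * dt + step * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)
x0 = np.zeros(10_000)
xf = random_walk(x0, v=1.0, dt=0.1, n_steps=50, D=1e-3, rng=rng)   # forward
xb = random_walk(xf, v=-1.0, dt=0.1, n_steps=50, D=1e-3, rng=rng)  # reversed

# The cloud's mean returns to the origin, but its spread has grown during
# BOTH legs -- diffusion acts the same way in forward and backward time.
print(abs(xb.mean()) < 0.05, xb.std() > xf.std())  # True True
```

In heterogeneous velocity fields the reversed particles would sample different low-velocity regions than the forward ones did, which is the source of the forward/backward discrepancies described in the abstract.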
Interferometric Creep Testing.
1985-03-01
FIGURES (continued): 16. Temperature of Zerodur sample and apparent strain as a function of time with PZT-modulated mirror (point b) ... moves vertically if all mirrors are at 45 deg. The lower beam path remains constant if the prism moves up or down or if the Zerodur plate expands ... using a 2-in. Zerodur test sample at room temperature and no load except that from the weight of the top steel mirror disk, equivalent to 0.5 psi.
Epitope Specificity Delimits the Functional Capabilities of Vaccine-Induced CD8 T Cell Populations
Hill, Brenna J.; Darrah, Patricia A.; Ende, Zachary; Ambrozak, David R.; Quinn, Kylie M.; Darko, Sam; Gostick, Emma; Wooldridge, Linda; van den Berg, Hugo A.; Venturi, Vanessa; Larsen, Martin; Davenport, Miles P.; Seder, Robert A.
2014-01-01
Despite progress toward understanding the correlates of protective T cell immunity in HIV infection, the optimal approach to Ag delivery by vaccination remains uncertain. We characterized two immunodominant CD8 T cell populations generated in response to immunization of BALB/c mice with a replication-deficient adenovirus serotype 5 vector expressing the HIV-derived Gag and Pol proteins at equivalent levels. The Gag-AI9/H-2Kd epitope elicited high-avidity CD8 T cell populations with architecturally diverse clonotypic repertoires that displayed potent lytic activity in vivo. In contrast, the Pol-LI9/H-2Dd epitope elicited motif-constrained CD8 T cell repertoires that displayed lower levels of physical avidity and lytic activity despite equivalent measures of overall clonality. Although low-dose vaccination enhanced the functional profiles of both epitope-specific CD8 T cell populations, greater polyfunctionality was apparent within the Pol-LI9/H-2Dd specificity. Higher proportions of central memory-like cells were present after low-dose vaccination and at later time points. However, there were no noteworthy phenotypic differences between epitope-specific CD8 T cell populations across vaccine doses or time points. Collectively, these data indicate that the functional and phenotypic properties of vaccine-induced CD8 T cell populations are sensitive to dose manipulation, yet constrained by epitope specificity in a clonotype-dependent manner. PMID:25348625
Celeste, Roger Keller; Bastos, João Luiz
2013-12-01
To estimate the mid-point of an open-ended income category and to assess the impact of two equivalence scales on income-health associations. Data were obtained from the 2010 Brazilian Oral Health Survey (Pesquisa Nacional de Saúde Bucal--SBBrasil 2010). Income was converted from a categorical variable to two continuous variables (per capita and equivalized) for each mid-point estimate. The median mid-point was R$ 14,523.50 and the mean, R$ 24,507.10. When per capita income was applied, 53% of the population fell below the poverty line, compared with 15% under equivalized income. The magnitude of income-health associations was similar for continuous income, but categorized equivalized income tended to decrease the strength of the association.
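The contrast between the two income measures can be sketched with the OECD-modified equivalence scale (one common choice, assumed here for illustration; the paper's own scales may differ):

```python
def per_capita(income, household_size):
    """Household income divided by the number of members."""
    return income / household_size

def equivalized(income, n_adults, n_children):
    """OECD-modified equivalence scale (an assumed choice for illustration):
    1.0 for the first adult, 0.5 per extra adult, 0.3 per child."""
    scale = 1.0 + 0.5 * (n_adults - 1) + 0.3 * n_children
    return income / scale

# Two adults and two children with household income 2100: equivalization
# treats the household as 2.1 "adult equivalents" rather than 4 people.
print(per_capita(2100.0, 4))             # 525.0
print(round(equivalized(2100.0, 2, 2)))  # 1000
```

Because the equivalence scale grows more slowly than household size, equivalized income is systematically higher than per capita income for multi-person households, which is why far fewer people fall below a fixed poverty line under the equivalized measure.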
Dimension improvement in Dhar's refutation of the Eden conjecture
NASA Astrophysics Data System (ADS)
Bertrand, Quentin; Pertinand, Jules
2018-03-01
We consider the Eden model on the d-dimensional unoriented hypercubical lattice, for large d. Initially, every lattice point is healthy except the origin, which is infected. Each infected lattice point then contaminates any of its neighbours at rate 1. The Eden model is equivalent to first-passage percolation with exponential passage times on edges. The Eden conjecture states that the limit shape of the Eden model is a Euclidean ball. By pushing the computations of Dhar [5] a little further with modern computers and an efficient implementation, we obtain improved bounds for the speed of infection. This shows that the Eden conjecture does not hold in any dimension greater than 22 (the lowest previously known dimension was 35).
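A minimal 2-D sketch of the Eden growth rule (an illustration only; the paper's computations concern high-dimensional asymptotics). Choosing uniformly among (infected, healthy) edges reproduces the rate-1 contamination dynamics, since each healthy site appears once per infected neighbour:

```python
import random

def eden_cluster(n_sites, seed=0):
    """Grow an Eden cluster on Z^2 from the origin. Each step picks a
    uniformly random (infected, healthy) edge and infects its healthy
    endpoint; a healthy site is listed once per infected neighbour, which
    matches the rate-1 contamination dynamics. O(n^2) -- demo only."""
    rng = random.Random(seed)
    infected = {(0, 0)}
    while len(infected) < n_sites:
        frontier = [(x + dx, y + dy)
                    for (x, y) in infected
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if (x + dx, y + dy) not in infected]
        infected.add(rng.choice(frontier))
    return infected

cluster = eden_cluster(200)
print(len(cluster), (0, 0) in cluster)  # 200 True
```

The equivalence to first-passage percolation follows from the memorylessness of exponential edge times: conditioned on the current cluster, every boundary edge is equally likely to be crossed next, which is exactly the uniform edge choice above.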
Sprays and Cartan projective connections
NASA Astrophysics Data System (ADS)
Saunders, D. J.
2004-10-01
Around 80 years ago, several authors (for instance H. Weyl, T.Y. Thomas, J. Douglas and J.H.C. Whitehead) studied the projective geometry of paths, using the methods of tensor calculus. The principal object of study was a spray, namely a homogeneous second-order differential equation, or more generally a projective equivalence class of sprays. At around the same time, E. Cartan studied the same topic from a different point of view, by imagining a projective space attached to a manifold, or, more generally, attached to a `manifold of elements'; the infinitesimal `glue' may be interpreted in modern language as a Cartan projective connection on a principal bundle. This paper describes the geometrical relationship between these two points of view.
Acoustic field in unsteady moving media
NASA Technical Reports Server (NTRS)
Bauer, F.; Maestrello, L.; Ting, L.
1995-01-01
In the interaction of an acoustic field with a moving airframe, the authors encounter a canonical initial value problem for an acoustic field induced by an unsteady source distribution q(t,x), with q ≡ 0 for t ≤ 0, in a medium moving with a uniform unsteady velocity U(t)i in the coordinate system x fixed on the airframe. Signals issued from a source point S in the domain of dependence D of an observation point P at time t will arrive at P more than once, corresponding to different retarded times τ in the interval (0, t). The number of arrivals is called the multiplicity of the point S. The multiplicity equals 1 if the velocity U remains subsonic and can be greater when U becomes supersonic. For an unsteady uniform flow U(t)i, rules are formulated for defining the smallest number I of subdomains V_i of D whose union equals D. Each subdomain has multiplicity 1 and a formula for the corresponding retarded time. The number of subdomains V_i with nonempty intersection is the multiplicity m of the intersection. The multiplicity is at most I. Examples demonstrating these rules are presented for media at accelerating and/or decelerating supersonic speed.
20 CFR 332.5 - Equivalent of full-time work.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Equivalent of full-time work. 332.5 Section... INSURANCE ACT MILEAGE OR WORK RESTRICTIONS AND STAND-BY OR LAY-OVER RULES § 332.5 Equivalent of full-time work. An employee who has the equivalent of full-time work with respect to service on days within a...
NASA Technical Reports Server (NTRS)
Butler, Thomas G.
1990-01-01
In modeling a complex structure the researcher was faced with a component that would have logical appeal if it were modeled as a beam. The structure was a mast of a robot controlled gantry crane. The structure up to this point already had a large number of degrees of freedom, so the idea of conserving grid points by modeling the mast as a beam was attractive. The researcher decided to make a separate problem of the mast and model it in three dimensions with plates, then extract the equivalent beam properties by setting up the loading to simulate beam-like deformation and constraints. The results could then be used to represent the mast as a beam in the full model. A comparison was made of properties derived from models of different constraints versus manual calculations. The researcher shows that the three-dimensional model is ineffective in trying to conform to the requirements of an equivalent beam representation. If a full 3-D plate model were used in the complete representation of the crane structure, good results would be obtained. Since the attempt is to economize on the size of the model, a better way to achieve the same results is to use substructuring and condense the mast to equivalent end boundary and intermediate mass points.
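The extraction of an equivalent beam property from a detailed model can be sketched with the textbook cantilever tip-load relation δ = F·L³ / (3·E·I) (the plate model in the abstract is, of course, more involved; the numbers below are hypothetical):

```python
def equivalent_bending_stiffness(force, length, tip_deflection):
    """Invert the cantilever tip-load relation delta = F*L**3 / (3*E*I)
    to recover an equivalent EI from a measured or simulated deflection."""
    return force * length**3 / (3.0 * tip_deflection)

# Round-trip check with an assumed stiffness EI = 1.0e6 N*m^2:
EI = 1.0e6
delta = 1000.0 * 2.0**3 / (3.0 * EI)  # tip deflection under 1 kN at L = 2 m
print(round(equivalent_bending_stiffness(1000.0, 2.0, delta)))  # 1000000
```

In practice one would apply several independent load cases (bending about each axis, torsion, axial load) to the detailed model and invert the corresponding beam formulas to fill in the full set of equivalent section properties.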
Samsonov, Boris F
2013-04-28
One of the simplest non-Hermitian Hamiltonians, first proposed by Schwartz in 1960, that may possess a spectral singularity is analysed from the point of view of the non-Hermitian generalization of quantum mechanics. It is shown that the η operator, being a second-order differential operator, has supersymmetric structure. Asymptotic behaviour of the eigenfunctions of a Hermitian Hamiltonian equivalent to the given non-Hermitian one is found. As a result, the corresponding scattering matrix and cross section are given explicitly. It is demonstrated that the possible presence of a spectral singularity in the spectrum of the non-Hermitian Hamiltonian may be detected as a resonance in the scattering cross section of its Hermitian counterpart. Nevertheless, just at the singular point, the equivalent Hermitian Hamiltonian becomes undetermined.
Loh, Yince; Duckwiler, Gary R
2010-10-01
The Onyx liquid embolic system (Onyx) was approved in the European Union in 1999 for embolization of lesions in the intracranial and peripheral vasculature, including brain arteriovenous malformations (AVMs) and hypervascular tumors. In 2001 a prospective, equivalence, multicenter, randomized controlled trial was initiated to support a submission for FDA approval. The objective of this study was to verify the safety and efficacy of Onyx compared with N-butyl cyanoacrylate (NBCA) for the presurgical treatment of brain AVMs. One hundred seventeen patients with brain AVMs were treated with either Onyx (54 patients) or NBCA (63 patients) for presurgical endovascular embolization between May 2001 and April 2003. The primary end point was technical success in achieving ≥ 50% reduction in AVM volume. Secondary end points were operative blood loss and resection time. All adverse events (AEs) were reported and assigned a relationship to the Onyx or NBCA system, treatment, disease, surgery, or other/unknown. The Data Safety Monitoring Board adjudicated AEs, and a blinded, independent core lab assessed volume measurements. Patients were monitored through discharge after the final surgery or through a 3- and/or 12-month follow-up if resection had not been performed or was incomplete. The use of Onyx led to ≥ 50% AVM volume reduction in 96% of cases versus 85% for NBCA (p = not significant). The secondary end points of resection time and blood loss were similar. Serious AEs were also similar between the 2 treatment groups. Onyx is equivalent to NBCA in safety and efficacy as a preoperative embolic agent in reducing brain AVM volume by at least 50%.
Code of Federal Regulations, 2012 CFR
2012-10-01
... pounds applied within two inches of the top edge, in any outward or downward direction, at any point along the top edge. (3) Top edge height of toprails, or equivalent guardrail system member, shall be 42..., solid panels, and equivalent structural members shall be capable of withstanding, without failure, a...
Application of closed-form solutions to a mesh point field in silicon solar cells
NASA Technical Reports Server (NTRS)
Lamorte, M. F.
1985-01-01
A computer simulation method is discussed that provides equivalent simulation accuracy but exhibits significantly lower CPU running time per bias point compared with other techniques. This new method is applied to a mesh point field, as is customary in numerical integration (NI) techniques. The assumption of a linear approximation for the dependent variable, which is typically used in the finite difference and finite element NI methods, is not required. Instead, the set of device transport equations is applied to, and closed-form solutions are obtained for, each mesh point. The mesh point field is generated so that the coefficients in the set of transport equations exhibit small changes between adjacent mesh points. Application of this method to high-efficiency silicon solar cells is described, as is the treatment of Auger recombination, ambipolar considerations, built-in and induced electric fields, bandgap narrowing, carrier confinement, and carrier diffusivities. Bandgap narrowing has been investigated using Fermi-Dirac statistics; these results show that bandgap narrowing is more pronounced, and temperature-dependent, in contrast to results based on Boltzmann statistics.
Boundary stress tensor and asymptotically AdS3 non-Einstein spaces at the chiral point
NASA Astrophysics Data System (ADS)
Giribet, Gaston; Goya, Andrés; Leston, Mauricio
2011-09-01
Chiral gravity admits asymptotically AdS3 solutions that are not locally equivalent to AdS3; meaning that solutions do exist which, while obeying the strong boundary conditions usually imposed in general relativity, happen not to be Einstein spaces. In topologically massive gravity (TMG), the existence of non-Einstein solutions is particularly connected to the question about the role played by complex saddle points in the Euclidean path integral. Consequently, studying (the existence of) nonlocally AdS3 solutions to chiral gravity is relevant to understanding the quantum theory. Here, we discuss a special family of nonlocally AdS3 solutions to chiral gravity. In particular, we show that such solutions persist when one deforms the theory by adding the higher-curvature terms of the so-called new massive gravity. Moreover, the addition of higher-curvature terms to the gravity action introduces new nonlocally AdS3 solutions that have no analogues in TMG. Both stationary and time-dependent, axially symmetric solutions that asymptote AdS3 space without being locally equivalent to it appear. Defining the boundary stress tensor for the full theory, we show that these non-Einstein geometries have associated vanishing conserved charges.
2014-09-01
we contacted pointed out that their catchment area covers 147,000 square miles, and some of their caregivers live over 8 hours away, requiring...geographical area. A caregiver whose veteran is rated tier 2 receives the equivalent of 25 hours per week of the wage for a home health aide, and a...location we contacted told us that home visits to remote areas require long driving times, which are challenging to accommodate. Staff at one VAMC
DORIS-based point mascons for the long term stability of precise orbit solutions
NASA Astrophysics Data System (ADS)
Cerri, L.; Lemoine, J. M.; Mercier, F.; Zelensky, N. P.; Lemoine, F. G.
2013-08-01
In recent years non-tidal Time Varying Gravity (TVG) has emerged as the most important contributor to the error budget of Precision Orbit Determination (POD) solutions for altimeter satellites' orbits. The Gravity Recovery And Climate Experiment (GRACE) mission has provided POD analysts with static and time-varying gravity models that are very accurate over the 2002-2012 time interval, but whose linear rates cannot be safely extrapolated before and after the GRACE lifespan. One such model, based on a combination of data from GRACE and Lageos from 2002-2010, is used in the dynamic POD solutions developed for the Geophysical Data Records (GDRs) of the Jason series of altimeter missions and the equivalent products from lower altitude missions such as Envisat, Cryosat-2, and HY-2A. In order to accommodate long-term time-variable gravity variations not included in the background geopotential model, we assess the feasibility of using DORIS data to observe local mass variations using point mascons. In particular, we show that the point-mascon approach can stabilize the geographically correlated orbit errors which are of fundamental interest for the analysis of regional Mean Sea Level trends based on altimeter data, and can therefore provide an interim solution in the event of GRACE data loss. The time series of point-mass solutions for Greenland and Antarctica show good agreement with independent series derived from GRACE data, indicating mass losses at rates of 210 Gt/year and 110 Gt/year, respectively.
[Job-sharing in postgraduate medical training: not automatically a nice duet].
Levi, M
2004-02-14
Part-time work is an increasingly common phenomenon amongst medical professionals. Therefore many postgraduate training programmes for resident physicians also offer the opportunity of part-time work, which is usually in the form of an 80% full-time equivalent post. A new initiative has created the possibility of job-sharing, in which each of the participants fulfills 50% of one training position. Although the experience of the participants is mainly positive, it is unclear how this development will impact the quality of patient care and how it will affect the fulfillment of the training objectives. A more systematic evaluation of job-sharing in postgraduate medical training programmes is required to clarify these points.
The tropopause cold trap in the Australian Monsoon during STEP/AMEX 1987
NASA Technical Reports Server (NTRS)
Selkirk, Henry B.
1993-01-01
The relationship between deep convection and tropopause cold trap conditions is examined for the tropical northern Australia region during the 1986-87 summer monsoon season, emphasizing the Australia Monsoon Experiment (AMEX) period when the NASA Stratosphere-Troposphere Exchange Project (STEP) was being conducted. The factors related to the spatial and temporal variability of the cold point potential temperature (CPPT) are investigated. A framework is developed for describing the relationships among the average equivalent potential temperature in the surface layer (AEPTSL), the height of deep convection, and stratosphere-troposphere exchange. The time-mean pattern of convection, large-scale circulation, and AEPTSL in the Australian monsoon, and the evolution of the convective environment during the monsoon period and the extended transition season which preceded it, are described. The time-mean fields of cold point level variables are examined, and the statistical relationships between mean CPPT, AEPTSL, and deep convection are described. Day-to-day variations of CPPT are examined in terms of these time-mean relationships.
NASA Astrophysics Data System (ADS)
Calvert, Nick; Betcke, Marta M.; Cresswell, John R.; Deacon, Alick N.; Gleeson, Anthony J.; Judson, Daniel S.; Mason, Peter; McIntosh, Peter A.; Morton, Edward J.; Nolan, Paul J.; Ollier, James; Procter, Mark G.; Speller, Robert D.
2015-05-01
Using a short pulse width X-ray source and measuring the time-of-flight of photons that scatter from an object under inspection allows the point of interaction to be determined, and a profile of the object to be sampled along the path of the beam. A three-dimensional image can be formed by interrogating the entire object. Using high-energy X-rays enables the inspection of cargo containers with steel walls in the search for concealed items. A longer pulse width X-ray source can also be used with deconvolution techniques to determine the points of interaction. We present time-of-flight results from both short (picosecond) and long (hundreds of nanoseconds) pulse width X-ray sources, and show that the position of scatter can be localised with a resolution of 2 ns, equivalent to 30 cm, for a 3 cm thick plastic test object.
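The timing-to-position conversion quoted above follows from the speed of light and the two-way path geometry; a minimal sketch (assuming the scatter point moves along the beam, so a displacement dx changes the total path by roughly 2·dx):

```python
# Convert a time-of-flight resolution into a spatial resolution along the
# beam. The factor 1/2 reflects the two-way geometry assumed here; the exact
# factor depends on the scattering angle.
C = 299_792_458.0  # speed of light, m/s

def tof_to_position_resolution(dt_seconds: float) -> float:
    """Spatial resolution (m) for a given timing resolution (s)."""
    return C * dt_seconds / 2.0

dx = tof_to_position_resolution(2e-9)
print(f"{dx * 100:.0f} cm")  # ~30 cm, consistent with the quoted figure
```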
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.
2001-01-01
Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius.
Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
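The forward-and-inverse procedure described above can be sketched numerically: sum the potentials of a belt of small dipoles at fixed electrodes, then fit a single equivalent dipole by least squares. An infinite homogeneous medium is assumed (the bounded-sphere correction is omitted), and the conductivity, geometry, and source strengths are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import least_squares

SIGMA = 0.2  # assumed conductivity, S/m (illustrative)

def dipole_potential(electrodes, pos, moment):
    """Potential of one point dipole at each electrode, infinite medium."""
    r = electrodes - pos                     # (n, 3) vectors to electrodes
    d = np.linalg.norm(r, axis=1)
    return (r @ moment) / (4.0 * np.pi * SIGMA * d**3)

rng = np.random.default_rng(0)
electrodes = rng.uniform(-1.0, 1.0, size=(32, 3))
electrodes *= 0.15 / np.linalg.norm(electrodes, axis=1, keepdims=True)  # 15 cm sphere

# Distributed "belt" source: 20 small dipoles on a ring, common moment
# along the axis (toy stand-in for the propagation direction).
theta = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
belt = 0.02 * np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
belt[:, 2] = 0.03
moment = np.array([0.0, 0.0, 1e-8])
v_measured = sum(dipole_potential(electrodes, p, moment) for p in belt)

def residuals(x):
    """Misfit of a single equivalent dipole (position x[:3], moment x[3:])."""
    return dipole_potential(electrodes, x[:3], x[3:]) - v_measured

fit = least_squares(residuals, x0=[0.0, 0.0, 0.01, 0.0, 0.0, 1e-8],
                    x_scale=[0.1, 0.1, 0.1, 1e-7, 1e-7, 1e-7])
print("equivalent dipole position (m):", np.round(fit.x[:3], 4))
```

Minimising the sum of squared residuals is equivalent (up to a constant noise variance) to the chi-squared minimisation the abstract describes.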
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method 010403, GeneQuence Listeria Test (DNAH method), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C, and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there were statistically significant differences in method performance between the DNAH method and reference culture procedures for only 2 foods (pasteurized crab meat and lettuce) at the 27 h enrichment time point and for only a single food (pasteurized crab meat) in one trial at the 30 h enrichment time point. Independent laboratory testing with 3 foods showed statistical equivalence between the methods for all foods, and results support the findings of the internal trials. Overall, considering both internal and independent laboratory trials, sensitivity of the DNAH method relative to the reference culture procedures was 90.5%. Results of testing 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the DNAH method was more productive than the reference U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the DNAH method at the 24 h time point. Overall, sensitivity of the DNAH method at 24 h relative to that of the USDA-FSIS method was 152%. 
The DNAH method exhibited extremely high specificity, with only 1% false-positive reactions overall.
78 FR 76126 - Application for New Awards; High School Equivalency Program
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-16
... DEPARTMENT OF EDUCATION Application for New Awards; High School Equivalency Program AGENCY: Office... an application can receive under this competition is 15 points. This priority is: Prior Experience of... in Grants.gov and before you can submit an application through Grants.gov . If you are currently...
NASA Astrophysics Data System (ADS)
Heinonen, Martti; Anagnostou, Miltiadis; Bell, Stephanie; Stevens, Mark; Benyon, Robert; Bergerud, Reidun Anita; Bojkovski, Jovan; Bosma, Rien; Nielsen, Jan; Böse, Norbert; Cromwell, Plunkett; Kartal Dogan, Aliye; Aytekin, Seda; Uytun, Ali; Fernicola, Vito; Flakiewicz, Krzysztof; Blanquart, Bertrand; Hudoklin, Domen; Jacobson, Per; Kentved, Anders; Lóio, Isabel; Mamontov, George; Masarykova, Alexandra; Mitter, Helmut; Mnguni, Regina; Otych, Jan; Steiner, Anton; Szilágyi Zsófia, Nagyné; Zvizdic, Davor
2012-09-01
In the field of humidity quantities, the first CIPM key comparison, CCT-K6, is at its end. The corresponding European regional key comparison, EUROMET.T-K6, was completed in early 2008, about 4 years after the initial measurements in the project started. In total, 24 NMIs from different countries took part in the comparison. This number includes 22 EURAMET countries, plus Russia and South Africa. The comparison covered the dew-point temperature range from -50 °C to +20 °C. It was carried out in three parallel loops, each with two chilled-mirror hygrometers as transfer standards. The comparison scheme was designed to ensure high quality results with an evenly spread workload for the participants. It is shown that the standard uncertainty due to long-term instability was smaller than 0.008 °C in all loops. The standard uncertainties due to links between the loops were found to be smaller than 0.025 °C at -50 °C and 0.010 °C elsewhere. Conclusions on the equivalence of the dew-point temperature standards are drawn on the basis of calculated bilateral degrees of equivalence and deviations from the EURAMET comparison reference values (ERV). Taking into account 16 different primary dew-point realizations and 8 secondary realizations, the results demonstrate the equivalence of a large number of laboratories at an uncertainty level that is better than achieved in other multilateral comparisons so far in the humidity field.
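A bilateral degree of equivalence, as used in key comparisons like the one above, is the difference between two laboratories' deviations from the reference value together with its expanded (k = 2) uncertainty. The numbers below are illustrative, not values from the EUROMET.T-K6 report:

```python
import math

def bilateral_doe(d_i, u_i, d_j, u_j, k=2.0):
    """Return (d_ij, U_ij): pair difference and expanded uncertainty.
    Assumes the two standard uncertainties are uncorrelated."""
    d_ij = d_i - d_j
    u_ij = math.sqrt(u_i**2 + u_j**2)
    return d_ij, k * u_ij

# Hypothetical deviations from the reference value, in degrees Celsius.
d, U = bilateral_doe(0.010, 0.015, -0.005, 0.020)
print(f"d_ij = {d:+.3f} C, U(k=2) = {U:.3f} C, equivalent: {abs(d) <= U}")
```

Two laboratories are conventionally judged equivalent at a dew-point temperature when |d_ij| does not exceed U_ij.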
Malik, Azhar H; Shimazoe, Kenji; Takahashi, Hiroyuki
2013-01-01
In order to obtain the plasma time activity curve (PTAC), the input function for almost all quantitative PET studies, patient blood is sampled manually from an artery or vein, which has various drawbacks. Recently, a novel compact Time over Threshold (ToT) based Pr:LuAG-APD animal PET tomograph was developed in our laboratory, which has 10% energy resolution, 4.2 ns time resolution and 1.76 mm spatial resolution. The measured spatial resolution shows much promise for imaging blood vessels, i.e., arteries of diameter 2.3-2.4 mm, and hence for measuring the PTAC for quantitative PET studies. To find the measurement time required to obtain reasonable counts for image reconstruction, the most important parameter is the sensitivity of the system. Usually, small animal PET systems are characterized using a point source in air. We used the Electron Gamma Shower 5 (EGS5) code to simulate a point source at different positions inside the sensitive volume of the tomograph, and the axial and radial variations in sensitivity were studied in air and in a phantom-equivalent water cylinder. An average sensitivity difference of 34% in the axial direction and 24.6% in the radial direction is observed when the point source is placed inside the water cylinder instead of air.
The Use of Pro/Engineer CAD Software and Fishbowl Tool Kit in Ray-tracing Analysis
NASA Technical Reports Server (NTRS)
Nounu, Hatem N.; Kim, Myung-Hee Y.; Ponomarev, Artem L.; Cucinotta, Francis A.
2009-01-01
This document is designed as a manual for a user who wants to operate the Pro/ENGINEER (ProE) Wildfire 3.0 with the NASA Space Radiation Program's (SRP) custom-designed Toolkit, called 'Fishbowl', for the ray tracing of complex spacecraft geometries given by a ProE CAD model. The analysis of spacecraft geometry through ray tracing is a vital part in the calculation of health risks from space radiation. Space radiation poses severe risks of cancer, degenerative diseases and acute radiation sickness during long-term exploration missions, and shielding optimization is an important component in the application of radiation risk models. Ray tracing is a technique in which 3-dimensional (3D) vehicle geometry can be represented as the input for the space radiation transport code and subsequent risk calculations. In ray tracing a certain number of rays (on the order of 1000) are used to calculate the equivalent thickness, say of aluminum, of the spacecraft geometry seen at a point of interest called the dose point. The rays originate at the dose point and terminate at a homogeneously distributed set of points lying on a sphere that circumscribes the spacecraft and that has its center at the dose point. The distance a ray traverses in each material is converted to aluminum or other user-selected equivalent thickness. Then all equivalent thicknesses are summed up for each ray. Since each ray points to a direction, the aluminum equivalent of each ray represents the shielding that the geometry provides to the dose point from that particular direction. This manual will first list for the user the contact information for help in installing ProE and Fishbowl in addition to notes on the platform support and system requirements information. Second, the document will show the user how to use the software to ray trace a Pro/E-designed 3-D assembly and will serve later as a reference for troubleshooting. The user is assumed to have previous knowledge of ProE and CAD modeling.
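The ray-tracing step described above can be sketched in two pieces: distributing ray end points quasi-uniformly on the circumscribing sphere, and converting each material path length to an aluminum-equivalent thickness (here by matching areal density). The single polyethylene shell is a toy geometry standing in for a CAD model, and the density-ratio conversion is one common convention, not necessarily Fishbowl's:

```python
import numpy as np

RHO_AL = 2.70  # density of aluminum, g/cm^3

def fibonacci_directions(n):
    """n quasi-uniform unit vectors on the sphere (Fibonacci lattice)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle increment
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

def aluminum_equivalent(path_cm, rho):
    """Material path length -> Al-equivalent thickness (areal density match)."""
    return path_cm * rho / RHO_AL

# Toy geometry: dose point at the center of a uniform 0.5 cm polyethylene
# shell (rho ~ 0.94 g/cm^3), so every ray sees the same thickness.
dirs = fibonacci_directions(1000)
thickness = np.full(len(dirs), aluminum_equivalent(0.5, 0.94))
print(f"{len(dirs)} rays, mean Al-equivalent thickness: {thickness.mean():.3f} cm")
```

With a real CAD model, the per-ray thickness would instead come from summing the traversed segment lengths in each material along that ray's direction.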
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. 
monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.
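The relative sensitivities above (and how a value can exceed 100%) follow directly from the definition: the ratio of confirmed positives found by the test method to those found by the reference method. The counts below are illustrative, not the trial data:

```python
def relative_sensitivity(test_positives: int, reference_positives: int) -> float:
    """Sensitivity of a test method relative to a reference method, in %."""
    return 100.0 * test_positives / reference_positives

# Hypothetical counts: the test method recovers more confirmed positives
# than the reference, so relative sensitivity exceeds 100%.
print(f"{relative_sensitivity(95, 62):.0f}%")  # -> 153%
```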
Lévêque, E; Koudella, C R
2001-04-30
An eddy-viscous term is added to the Navier-Stokes dynamics at wave numbers k greater than the inflection point k_c of the energy flux F(log(k)). The eddy viscosity is fixed so that the energy spectrum satisfies E(k) = E(k_c)(k/k_c)^(-3) for k > k_c. The resulting forcing induces a rapid depletion of the energy cascade at k > k_c. It is observed numerically that the model reproduces turbulence energetics at k <= k_c and statistics of two-point velocity correlations at scales r > lambda (the Taylor microscale). Compared to a direct numerical simulation at R_lambda = 130, an equivalent run with the present model results in a gain of a factor of 20 in CPU time.
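The target spectrum imposed by the model above can be written down directly: values at k <= k_c are left untouched, and the tail is pinned to a k^-3 law anchored at E(k_c). The input spectrum and cutoff below are illustrative:

```python
import numpy as np

def modeled_spectrum(k, E, kc):
    """Replace the k > kc tail of E with E(kc) * (k/kc)**-3."""
    k = np.asarray(k, dtype=float)
    E = np.asarray(E, dtype=float).copy()
    E_kc = np.interp(kc, k, E)       # spectrum value at the cutoff
    tail = k > kc
    E[tail] = E_kc * (k[tail] / kc) ** -3.0
    return E

k = np.arange(1.0, 65.0)
E = k ** (-5.0 / 3.0)                # Kolmogorov-like input spectrum
Em = modeled_spectrum(k, E, kc=16.0)
print(f"E(32)/E(16) = {Em[31] / Em[15]:.4f}")  # (32/16)^-3 = 0.1250
```

One octave above the cutoff the spectrum has dropped by 2^-3 = 1/8, which is the rapid depletion of the cascade the abstract describes.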
Regularity results for the minimum time function with Hörmander vector fields
NASA Astrophysics Data System (ADS)
Albano, Paolo; Cannarsa, Piermarco; Scarinci, Teresa
2018-03-01
In a bounded domain of Rn with boundary given by a smooth (n - 1)-dimensional manifold, we consider the homogeneous Dirichlet problem for the eikonal equation associated with a family of smooth vector fields {X1 , … ,XN } subject to Hörmander's bracket generating condition. We investigate the regularity of the viscosity solution T of such problem. Due to the presence of characteristic boundary points, singular trajectories may occur. First, we characterize these trajectories as the closed set of all points at which the solution loses point-wise Lipschitz continuity. Then, we prove that the local Lipschitz continuity of T, the local semiconcavity of T, and the absence of singular trajectories are equivalent properties. Finally, we show that the last condition is satisfied whenever the characteristic set of {X1 , … ,XN } is a symplectic manifold. We apply our results to several examples.
Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.
1981-01-01
Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
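The core idea above is that evaluating the volume integral at Gauss-Legendre nodes is equivalent to replacing the body with weighted point sources at those nodes. A minimal sketch for the gravity case uses a uniform sphere, so the quadrature result can be checked against the exact external field GM/d²; the radius, density, and observation distance are illustrative:

```python
import numpy as np

G = 6.674e-11                        # gravitational constant, m^3 kg^-1 s^-2
R, d, rho = 1000.0, 2000.0, 2800.0   # sphere radius, observation distance, density

# Gauss-Legendre nodes/weights for the radial and angular (u = cos theta)
# integrals; the azimuthal integral of this axisymmetric problem is 2*pi.
xr, wr = np.polynomial.legendre.leggauss(16)
r_nodes = 0.5 * R * (xr + 1.0)       # map [-1, 1] -> [0, R]
r_weights = 0.5 * R * wr
u_nodes, u_weights = np.polynomial.legendre.leggauss(16)

g = 0.0
for ri, wi in zip(r_nodes, r_weights):
    # vertical attraction of the ring of "equivalent point masses" at (ri, u)
    s3 = (ri**2 + d**2 - 2.0 * ri * d * u_nodes) ** 1.5
    g += wi * ri**2 * np.sum(u_weights * (d - ri * u_nodes) / s3)
g *= 2.0 * np.pi * G * rho

g_exact = G * (4.0 / 3.0) * np.pi * R**3 * rho / d**2
print(f"quadrature g = {g:.6e} m/s^2, exact GM/d^2 = {g_exact:.6e} m/s^2")
```

For an arbitrarily shaped body, the fixed limits here would be replaced by the interpolated variable limits described in the abstract, but the node-and-weight structure is unchanged.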
Water equivalent path length measurement in proton radiotherapy using time resolved diode dosimetry
Gottschalk, B.; Tang, S.; Bentefour, E. H.; Cascio, E. W.; Prieels, D.; Lu, H.-M.
2011-01-01
Purpose: To verify water equivalent path length (WEPL) before treatment in proton radiotherapy using time resolved in vivo diode dosimetry. Methods: Using a passively scattered range modulated proton beam, the output of a diode driving a fast current-to-voltage amplifier is recorded at a number of depths in a water tank. At each depth, a burst of overlapping single proton pulses is observed. The rms duration of the burst is computed and the resulting data set is fitted with a cubic polynomial. Results: When the diode is subsequently set to an arbitrary depth and the polynomial is used as a calibration curve, the “unknown” depth is determined within 0.3 mm rms. Conclusions: A diode or a diode array, placed (for instance) in the rectum in conjunction with a rectal balloon, can potentially determine the WEPL at that point, just prior to treatment, with submillimeter accuracy, allowing the beam energy to be adjusted. The associated unwanted dose is about 0.2% of a typical single fraction treatment dose. PMID:21626963
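The calibration-and-lookup procedure above (fit a cubic to rms burst duration versus depth, then invert it for an unknown depth) can be sketched as follows. The synthetic calibration data are illustrative, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import brentq

# Synthetic, monotonic calibration data: rms burst duration (ns) vs depth (cm).
depth_cm = np.linspace(0.0, 20.0, 11)
rms_ns = 4.0 + 0.35 * depth_cm - 0.004 * depth_cm**2

# Cubic calibration curve, as in the abstract.
coeffs = np.polyfit(depth_cm, rms_ns, deg=3)

def depth_from_rms(rms, lo=0.0, hi=20.0):
    """Invert the calibration polynomial for one measured rms value."""
    return brentq(lambda z: np.polyval(coeffs, z) - rms, lo, hi)

# Pretend measurement at an "unknown" depth of 12.3 cm.
rms_measured = 4.0 + 0.35 * 12.3 - 0.004 * 12.3**2
print(f"recovered depth: {depth_from_rms(rms_measured):.2f} cm")
```

Root-bracketing inversion works here because the calibration curve is monotonic over the scanned depth range; a non-monotonic fit would need the search interval restricted.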
NAIRAS aircraft radiation model development, dose climatology, and initial validation.
Mertens, Christopher J; Meier, Matthias M; Brown, Steven; Norman, Ryan B; Xu, Xiaojing
2013-10-01
The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low.
Nevertheless, analysis suggests that these single-point differences will be within 30% when a new deterministic pion-initiated electromagnetic cascade code is integrated into NAIRAS, an effort which is currently underway.
NASA Astrophysics Data System (ADS)
Jakoby, Bjoern W.; Bercier, Yanic; Watson, Charles C.; Bendriem, Bernard; Townsend, David W.
2009-06-01
A new combined lutetium oxyorthosilicate (LSO) PET/CT scanner with an extended axial field-of-view (FOV) of 21.8 cm has been developed (Biograph TruePoint PET/CT with TrueV; Siemens Molecular Imaging) and introduced into clinical practice. The scanner includes the recently announced point spread function (PSF) reconstruction algorithm. The PET components incorporate four rings of 48 detector blocks, 5.4 cm × 5.4 cm in cross-section. Each block comprises a 13 × 13 matrix of 4 × 4 × 20 mm³ elements. Data are acquired with a 4.5 ns coincidence time window and an energy window of 425-650 keV. The physical performance of the new scanner has been evaluated according to the recently revised National Electrical Manufacturers Association (NEMA) NU 2-2007 standard and the results have been compared with a previous PET/CT design that incorporates three rings of block detectors with an axial coverage of 16.2 cm (Biograph TruePoint PET/CT; Siemens Molecular Imaging). In addition to the phantom measurements, patient Noise Equivalent Count Rates (NECRs) have been estimated for a range of patients with different body weights (42-154 kg). The average spatial resolution is the same for both scanners: 4.4 mm (FWHM) and 5.0 mm (FWHM) at 1 cm and 10 cm respectively from the center of the transverse FOV. The scatter fractions of the Biograph TruePoint and Biograph TruePoint TrueV are comparable at 32%. Compared to the three ring design, the system sensitivity and peak NECR with smoothed randoms correction (1R) increase by 82% and 73%, respectively. The increase in sensitivity from the extended axial coverage of the Biograph TruePoint PET/CT with TrueV should allow a decrease in either scan time or injected dose without compromising diagnostic image quality. The contrast improvement with the PSF reconstruction potentially offers enhanced detectability for small lesions.
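The noise equivalent count rate quoted above follows the standard PET definition; the "(1R)" qualifier corresponds to the smoothed-randoms convention (k = 1), while online delayed-window subtraction would use k = 2. The count rates below are illustrative, not the scanner's measured values:

```python
def necr(trues, scatters, randoms, k=1.0):
    """Noise Equivalent Count Rate: NECR = T^2 / (T + S + k*R).
    All arguments are rates in counts/s; k=1 for smoothed randoms (1R)."""
    return trues**2 / (trues + scatters + k * randoms)

# Hypothetical rates at one activity concentration.
T, S, R = 250e3, 120e3, 180e3
print(f"NECR(1R) = {necr(T, S, R) / 1e3:.1f} kcps, "
      f"NECR(2R) = {necr(T, S, R, k=2.0) / 1e3:.1f} kcps")
```

Because randoms enter the denominator weighted by k, the 1R convention always reports a higher NECR than 2R for the same raw rates.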
Beekman, Christopher R.; Matta, Murali K.; Thomas, Christopher D.; Mohammad, Adil; Stewart, Sharron; Xu, Lin; Chockalingam, Ashok; Shea, Katherine; Sun, Dajun; Jiang, Wenlei; Patel, Vikram; Rouse, Rodney
2017-01-01
Relative biodistribution of FDA-approved innovator and generic sodium ferric gluconate (SFG) drug products was investigated to identify differences in tissue distribution of iron after intravenous dosing to rats. Three equal cohorts of 42 male Sprague-Dawley rats were created with each cohort receiving one of three treatments: (1) the innovator SFG product dosed intravenously at a concentration of 40 mg/kg; (2) the generic SFG product dosed intravenously at a concentration of 40 mg/kg; (3) saline dosed intravenously at equivalent volume to SFG products. Sampling time points were 15 min, 1 h, 8 h, 1 week, two weeks, four weeks, and six weeks post-treatment. Six rats from each group were sacrificed at each time point. Serum, femoral bone marrow, lungs, brain, heart, kidneys, liver, and spleen were harvested and evaluated for total iron concentration by ICP-MS. The ICP-MS analytical method was validated with linearity, range, accuracy, and precision. Results were determined for mean iron concentrations (µg/g) and mean total iron (whole tissue) content (µg/tissue) for each tissue of all groups at each time point. A percent of total distribution to each tissue was calculated for both products. At any given time point, the overall percent iron concentration distribution did not vary between the two SFG drugs by more than 7% in any tissue. Overall, this study demonstrated similar tissue biodistribution for the two SFG products in the examined tissues. PMID:29283393
Song, Hee-eun; Kirmaier, Christine; Taniguchi, Masahiko; Diers, James R; Bocian, David F; Lindsey, Jonathan S; Holten, Dewey
2008-11-19
Excited-state charge separation in molecular architectures has been widely explored, yet ground-state hole (or electron) transfer, particularly involving equivalent pigments, has been far less studied, and direct quantitation of the rate of transfer often has proved difficult. Prior studies of ground-state hole transfer between equivalent zinc porphyrins using electron paramagnetic resonance techniques give a lower limit of approximately (50 ns)^-1 on the rates. Related transient optical studies of hole transfer between inequivalent sites [zinc porphyrin (Zn) and free base porphyrin (Fb)] give an upper limit of approximately (20 ps)^-1. Thus, a substantial window remains for the unknown rates of ground-state hole transfer between equivalent sites. Herein, the ground-state hole-transfer processes are probed in a series of oxidized porphyrin triads (ZnZnFb) with the focus being on determination of the rates between the nominally equivalent sites (Zn/Zn). The strategy builds upon recent time-resolved optical studies of the photodynamics of dyads wherein a zinc porphyrin is electrochemically oxidized and the attached free base porphyrin is photoexcited. The resulting energy- and hole-transfer processes in the oxidized ZnFb dyads are typically complete within 100 ps of excitation. Such processes are also present in the triads and serve as a starting point for determining the rates of ground-state hole transfer between equivalent sites in the triads. The rate constant of the Zn/Zn hole transfer is found to be (0.8 ns)^-1 for diphenylethyne-linked zinc porphyrins and increases only slightly to (0.6 ns)^-1 when a shorter phenylene linker is utilized. The rate decreases slightly to (1.1 ns)^-1 when steric constraints are introduced in the diarylethyne linker. In general, the rate constants for ground-state Zn/Zn hole transfer in oxidized arrays are a factor of 40 slower than those for Zn/Fb transfer.
Collectively, the findings should aid the design of next-generation molecular architectures for applications in solar-energy conversion.
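The (τ)^-1 notation used above converts directly to absolute rate constants. A small arithmetic sketch using only the time constants quoted in the abstract:

```python
def k_per_s(tau_s):
    """Rate constant k = 1/tau, with tau in seconds, returned in s^-1."""
    return 1.0 / tau_s

k_znzn = k_per_s(0.8e-9)        # Zn/Zn transfer, diphenylethyne linker
k_znfb = 40.0 * k_znzn          # Zn/Fb transfer, quoted as ~40x faster
tau_znfb_ps = 1e12 / k_znfb     # implied Zn/Fb time constant, in ps
print(f"{k_znzn:.2e} /s, Zn/Fb tau ~ {tau_znfb_ps:.0f} ps")  # 1.25e+09 /s, Zn/Fb tau ~ 20 ps
```

The implied ~20 ps Zn/Fb time constant is consistent with the (20 ps)^-1 upper limit quoted earlier in the abstract.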
Spatial traffic noise pollution assessment - A case study.
Monazzam, Mohammad Reza; Karimi, Elham; Abbaspour, Majid; Nassiri, Parvin; Taghavi, Lobat
2015-01-01
Spatial assessment of traffic noise pollution intensity provides urban planners with an approximate estimate of citizens' exposure to impermissible sound levels, allowing them to identify critical noise pollution areas wherein noise barriers should be installed. The present study aims at using the Geographic Information System (GIS) to assess spatial changes in traffic noise pollution in Tehran, the capital of Iran and the largest city in the Middle East. For this purpose, while measuring equivalent sound levels at different time periods of the day and different days of the week in District 14 of Tehran, which contains highways and busy streets, the geographic coordinates of the measurement points were recorded at the stations. The obtained results indicated that the equivalent sound level did not show a statistically significant difference between weekdays; between morning, afternoon and evening hours; or between time intervals of 10 min, 15 min and 30 min. Then, 91 stations were selected in the target area and the equivalent sound level was measured for each station on 3 occasions, morning (7:00-9:00 a.m.), afternoon (12:00-3:00 p.m.) and evening (5:00-8:00 p.m.), on Saturdays to Wednesdays. As the results suggest, the maximum equivalent sound level (Leq), 84.2 dB(A), was recorded on the Basij Highway, a very important connecting thoroughfare in the district, while the minimum, 59.9 dB(A), was measured at the Fajr Hospital. The average equivalent sound level was higher than the national standard limit at all stations. The use of sound walls on the Basij and Mahallati highways, as well as the widening of 17th Shahrivar, Pirouzi and Khavaran streets, benchmarked on a map, were recommended as the most effective mitigation measures.
Additionally, the findings confirm the strong applicability of the Geographic Information System for handling noise pollution data and depicting the intensity of traffic-generated noise pollution. This work is available under an Open Access model and licensed under a CC BY-NC 3.0 PL license.
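The equivalent sound level used throughout the study is an energy average rather than an arithmetic one. A minimal sketch of the standard Leq definition, with made-up sample levels rather than station data:

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level over N equal-duration samples:
    Leq = 10 * log10( (1/N) * sum 10^(L_i / 10) )."""
    n = len(levels_db)
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db) / n)

# Energy averaging is dominated by the loudest samples:
print(round(leq([59.9, 84.2]), 1))  # 81.2, far above the arithmetic mean 72.05
```

This is why a single busy highway segment can push a station's average above the national limit even when quiet intervals are frequent.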
Spherical-Earth Gravity and Magnetic Anomaly Modeling by Gauss-Legendre Quadrature Integration
NASA Technical Reports Server (NTRS)
Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)
1981-01-01
The anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical Earth for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles were calculated. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.
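The core idea, replacing a source volume by equivalent point sources at Gauss-Legendre nodes, can be sketched in a flat-earth toy version (the paper works on a spherical Earth with arbitrary source envelopes). The prism limits, density contrast and node count below are illustrative assumptions:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_prism_glq(x0, y0, z0, xlim, ylim, zlim, rho, n=8):
    """Vertical gravity of a homogeneous rectangular prism at observation
    point (x0, y0, z0), computed by summing equivalent point masses placed
    at Gauss-Legendre nodes (z positive down, SI units)."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    def scale(lim):
        a, b = lim  # map [-1, 1] nodes/weights onto [a, b]
        return 0.5 * (b - a) * nodes + 0.5 * (b + a), 0.5 * (b - a) * weights
    xs, wx = scale(xlim); ys, wy = scale(ylim); zs, wz = scale(zlim)
    gz = 0.0
    for xi, wxi in zip(xs, wx):
        for yi, wyi in zip(ys, wy):
            for zi, wzi in zip(zs, wz):
                dm = rho * wxi * wyi * wzi  # equivalent point-source mass
                r2 = (x0 - xi) ** 2 + (y0 - yi) ** 2 + (z0 - zi) ** 2
                gz += G * dm * (zi - z0) / r2 ** 1.5
    return gz

# 1 km cube of +300 kg/m^3 density contrast, top at 1 km depth, observer at origin
g = gz_prism_glq(0.0, 0.0, 0.0, (-500.0, 500.0), (-500.0, 500.0), (1000.0, 2000.0), 300.0)
```

For an observation point well outside the body, the quadrature sum approaches the single point-mass value G·M/r² at the prism centroid, which is a convenient sanity check on the node placement.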
Interpreting the handling qualities of aircraft with stability and control augmentation
NASA Technical Reports Server (NTRS)
Hodgkinson, J.; Potsdam, E. H.; Smith, R. E.
1990-01-01
The general process of designing an aircraft for good flying qualities is first discussed. Lessons learned are pointed out, with piloted evaluation emerging as a crucial element. Two sources of rating variability in performing these evaluations are then discussed. First, the finite endpoints of the Cooper-Harper scale do not bias parametric statistical analyses unduly. Second, the wording of the scale does introduce some scatter. Phase lags generated by augmentation systems, as represented by equivalent time delays, often cause poor flying qualities. An analysis is introduced which allows a designer to relate any level of time delay to a probability of loss of aircraft control. This view of time delays should, it is hoped, allow better visibility of the time delays in the design process.
Armstrong, Tatum; Wagner, Marika C; Cheema, Jagjit; Pang, Daniel Sj
2018-02-01
Objectives The primary study objective was to assess two injectable anesthetic protocols, given to facilitate castration surgery in cats, for equivalence in terms of postoperative analgesia. A secondary objective was to evaluate postoperative eating behavior. Methods Male cats presented to a local clinic were randomly assigned to receive either intramuscular ketamine (5 mg/kg, n = 26; KetHD) or alfaxalone (2 mg/kg, n = 24; AlfHD) in combination with dexmedetomidine (25 μg/kg) and hydromorphone (0.05 mg/kg). All cats received meloxicam (0.3 mg/kg SC) and intratesticular lidocaine (2 mg/kg). Species-specific pain and sedation scales were applied at baseline, 1, 2 and 4 h postoperatively. Time taken to achieve sternal recumbency and begin eating were also recorded postoperatively. Results Pain scale scores were low and showed equivalence between the treatment groups at all time points (1 h, P = 0.38, 95% confidence interval [CI] of the difference between group scores 0-0; 2 h, P = 0.71, 95% CI 0-0; 4 h, P = 0.97, 95% CI 0-0). Four cats crossed the threshold for rescue analgesia (KetHD, n = 1; AlfHD, n = 3). At 1 h, more cats in the KetHD (65%) group than in the AlfHD (42%) group were sedated, but statistical significance was not detected (P = 0.15, 95% CI -1 to 0). Most AlfHD cats (88%) began eating by 1 h vs 65% of KetHD cats (P = 0.039). Time to recover sternal recumbency did not differ between groups (P = 0.86, 95% CI -14.1 to 11.8). Conclusions and relevance These results show that AlfHD and KetHD provide equivalent analgesia as part of a multimodal injectable anesthetic protocol. Alfaxalone is associated with an earlier return to eating.
NASA Technical Reports Server (NTRS)
Leskovar, B.; Turko, B.
1977-01-01
The development of a high precision time interval digitizer is described. The time digitizer is a 10 psec resolution stopwatch covering a range of up to 340 msec. The measured time interval is determined as the separation between the leading edges of a pair of pulses applied externally to the start input and the stop input of the digitizer. Employing interpolation techniques and a 50 MHz high precision master oscillator, the equivalent of a 100 GHz clock frequency standard is achieved. Absolute accuracy and stability of the digitizer are determined by the external 50 MHz master oscillator, which serves as a standard time marker. The start and stop pulses are fast 1 nsec rise time signals conforming to the Nuclear Instrument Module standard, detected by means of tunnel diode discriminators. The firing levels of the discriminators define the start and stop points between which the time interval is digitized.
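The interpolation scheme can be sketched arithmetically. This assumes the standard coarse-counter-plus-interpolators arrangement that the 50 MHz oscillator and 10 ps resolution suggest; it is a sketch of the method, not the instrument's actual circuit:

```python
def time_interval(coarse_counts, start_frac, stop_frac, f_clock=50e6):
    """Interval = (N + f_start - f_stop) / f_clock, where N is the number
    of whole 20 ns clock periods counted between start and stop, and two
    interpolators measure each event's fractional offset within a clock
    period (0..1). The interpolators supply the sub-period resolution
    that the 50 MHz counter alone cannot."""
    return (coarse_counts + start_frac - stop_frac) / f_clock

# 5 whole periods; start 0.25 and stop 0.75 of a period past a clock edge:
print(time_interval(5, 0.25, 0.75))  # 9e-08, i.e. 90 ns
```

With interpolators resolving 1/2000 of the 20 ns period, the effective granularity is 10 ps, i.e. the "equivalent 100 GHz clock" of the abstract.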
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-08
... Domestic Liquefied Natural Gas to Non-Free Trade Agreement Nations AGENCY: Office of Fossil Energy, DOE... metric tons per year of domestically produced liquefied natural gas (LNG) (equivalent to approximately... equivalent to approximately 1 Bcf per day of natural gas. DATES: Protests, motions to intervene or notices of...
Effect of biometric characteristics on biomechanical properties of the cornea in cataract patients.
Song, Xue-Fei; Langenbucher, Achim; Gatzioufas, Zisis; Seitz, Berthold; El-Husseiny, Moatasem
2016-01-01
To determine the impact of biometric characteristics on the biomechanical properties of the human cornea using the ocular response analyzer (ORA) and standard comprehensive ophthalmic examinations before and after standard phacoemulsification. This study comprised 54 eyes with cataract and significant lens opacification in stages I or II that underwent phacoemulsification (2.8 mm incision). Corneal hysteresis (CH), corneal resistance factor (CRF), Goldmann-correlated intraocular pressure (IOPg), and corneal-compensated intraocular pressure (IOPcc) were measured by ORA preoperatively and at 1 month postoperatively. Biometric characteristics were derived from corneal topography [TMS-5, anterior equivalent (EQTMS) and cylindric (CYLTMS) power], corneal tomography [Casia, anterior and posterior equivalent (EQaCASIA, EQpCASIA) and cylindric (CYLaCASIA, CYLpCASIA) power], keratometry [IOLMaster, anterior equivalent (EQIOL) and cylindric (CYLIOL) power] and autorefractor [anterior equivalent (EQAR)]. Results from ORA were analyzed and correlated with those from all other examinations taken at the same time point. Preoperatively, CH correlated with EQpCASIA and CYLpCASIA only (P=0.001, P=0.002). Postoperatively, IOPg and IOPcc correlated with all equivalent powers (EQTMS, EQIOL, EQAR, EQaCASIA and EQpCASIA) (P=0.001, P=0.007, P=0.001, P=0.015, P=0.03 for IOPg and P<0.001, P=0.003, P<0.001, P=0.009, P=0.014 for IOPcc). CH correlated postoperatively with EQaCASIA and EQpCASIA only (P=0.021, P=0.022). Biometric characteristics may significantly affect biomechanical properties of the cornea in terms of CH, IOPcc and IOPg before, and even more so after, cataract surgery.
Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor
NASA Astrophysics Data System (ADS)
Hristov, Ivan; Goranov, Goran; Hristova, Radoslava
2018-02-01
We consider a standing-wave simulation that is interesting from a computational point of view, obtained by solving coupled 2D perturbed sine-Gordon equations. We present an OpenMP realization which exploits both thread- and SIMD-level parallelism. We test the OpenMP program on two energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named "Ivy Bridge-EP") in the Hybrilit cluster, and a Xeon Phi 7250 processor (code-named "Knights Landing", KNL). The results show 2 times better performance on the KNL processor.
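The stencil that the OpenMP code parallelizes can be sketched serially in numpy. The damping term and its coefficient are illustrative assumptions, since the abstract does not spell out the perturbation:

```python
import numpy as np

def step(u, u_prev, dt, dx, alpha=0.05):
    """One explicit leapfrog step of a 2D perturbed sine-Gordon equation,
    u_tt = u_xx + u_yy - sin(u) - alpha * u_t, with periodic boundaries.
    Every grid point updates independently of the others at the same time
    level, which is what makes thread/SIMD parallelization effective."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx ** 2
    ut = (u - u_prev) / dt  # backward-difference estimate of u_t
    return 2.0 * u - u_prev + dt ** 2 * (lap - np.sin(u) - alpha * ut)
```

In the OpenMP version, the outer loop over rows would carry a parallel-for directive and the contiguous inner loop would be vectorized, mirroring the two levels of parallelism tested in the paper.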
Symmetrical polyhedra (simple crystal forms) as orbits of noncrystallographic point symmetry groups
NASA Astrophysics Data System (ADS)
Ovsetsina, T. I.; Chuprunov, E. V.
2017-09-01
Simple crystal forms are analyzed as the orbits of noncrystallographic point symmetry groups on a set of smooth or structured ("hatched") planes of crystal space. Polyhedra with symmetrically equivalent faces, obtained using noncrystallographic point symmetry groups, are considered. All possible versions of simple forms for all noncrystallographic groups are listed in a unified table.
Ideal evolution of magnetohydrodynamic turbulence when imposing Taylor-Green symmetries.
Brachet, M E; Bustamante, M D; Krstulovic, G; Mininni, P D; Pouquet, A; Rosenberg, D
2013-01-01
We investigate the ideal and incompressible magnetohydrodynamic (MHD) equations in three space dimensions for the development of potentially singular structures. The methodology consists in implementing the fourfold symmetries of the Taylor-Green vortex generalized to MHD, leading to substantial computer time and memory savings at a given resolution; we also use a regridding method that allows for lower-resolution runs at early times, with no loss of spectral accuracy. One magnetic configuration is examined at an equivalent resolution of 6144^3 points and three different configurations on grids of 4096^3 points. At the highest resolution, two different current and vorticity sheet systems are found to collide, producing two successive accelerations in the development of small scales. At the latest time, a convergence of magnetic field lines to the location of maximum current is probably leading locally to a strong bending and directional variability of such lines. A novel analytical method, based on sharp analysis inequalities, is used to assess the validity of the finite-time singularity scenario. This method allows one to rule out spurious singularities by evaluating the rate at which the logarithmic decrement of the analyticity-strip method goes to zero. The result is that the finite-time singularity scenario cannot be ruled out, and the singularity time could be somewhere between t=2.33 and t=2.70. More robust conclusions will require higher resolution runs and grid-point interpolation measurements of maximum current and vorticity.
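The analyticity-strip diagnostic mentioned above fits the energy spectrum to the standard ansatz E(k) = C·k^(-n)·exp(-2δk) and tracks the logarithmic decrement δ(t); a finite-time singularity requires δ → 0. A synthetic-data sketch (the spectrum below is made up, not the paper's data):

```python
import numpy as np

def fit_decrement(k, E):
    """Analyticity-strip fit: assume E(k) = C * k^(-n) * exp(-2*delta*k)
    and recover delta by linear least squares on
    log E(k) = log C - n*log k - 2*delta*k."""
    A = np.column_stack([np.ones_like(k), np.log(k), k])
    coef, *_ = np.linalg.lstsq(A, np.log(E), rcond=None)
    return -0.5 * coef[2]  # delta

# Synthetic spectrum with known delta = 0.03
k = np.arange(1.0, 101.0)
E = 2.0 * k ** -4 * np.exp(-2 * 0.03 * k)
print(round(fit_decrement(k, E), 3))  # 0.03
```

Tracking how fast the fitted δ(t) decays toward the mesh scale is what lets the authors distinguish a genuine singularity from under-resolution.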
A drop in performance on a fluid intelligence test due to instructed-rule mindset.
ErEl, Hadas; Meiran, Nachshon
2017-09-01
A 'mindset' is a configuration of processing resources that are made available for the task at hand as well as their suitable tuning for carrying it out. Of special interest, remote-relation abstract mindsets are introduced by activities sharing only general control processes with the task. To test the effect of a remote-relation mindset on performance on a Fluid Intelligence test (Raven's Advanced Progressive Matrices, RAPM), we induced a mindset associated with little usage of executive processing by requiring participants to execute a well-defined classification rule 12 times, a manipulation known from previous work to drastically impair rule-generation performance and associated cognitive processes. In Experiment 1, this manipulation led to a drop in RAPM performance equivalent to 10.1 IQ points. No drop was observed in a General Knowledge task. In Experiment 2, a similar drop in RAPM performance was observed (equivalent to 7.9 and 9.2 IQ points) regardless of whether participants were pre-informed about the upcoming RAPM test. These results indicate strong (most likely, transient) adverse effects of a remote-relation mindset on test performance. They imply that although the trait of Fluid Intelligence has probably not changed, mindsets can severely distort estimates of this trait.
Tests of Gravity Using Lunar Laser Ranging.
Merkowitz, Stephen M
2010-01-01
Lunar laser ranging (LLR) has been a workhorse for testing general relativity over the past four decades. The three retroreflector arrays put on the Moon by the Apollo astronauts and the French built arrays on the Soviet Lunokhod rovers continue to be useful targets, and have provided the most stringent tests of the Strong Equivalence Principle and the time variation of Newton's gravitational constant. The relatively new ranging system at the Apache Point 3.5 meter telescope now routinely makes millimeter level range measurements. Incredibly, it has taken 40 years for ground station technology to advance to the point where characteristics of the lunar retroreflectors are limiting the precision of the range measurements. In this article, we review the gravitational science and technology of lunar laser ranging and discuss prospects for the future.
Tests of gravity Using Lunar Laser Ranging
NASA Technical Reports Server (NTRS)
Merkowitz, Stephen M.
2010-01-01
Lunar laser ranging (LLR) has been a workhorse for testing general relativity over the past four decades. The three retroreflector arrays put on the Moon by the Apollo astronauts and the French built array on the second Soviet Lunokhod rover continue to be useful targets, and have provided the most stringent tests of the Strong Equivalence Principle and the time variation of Newton's gravitational constant. The relatively new ranging system at the Apache Point 3.5 meter telescope now routinely makes millimeter level range measurements. Incredibly, it has taken 40 years for ground station technology to advance to the point where characteristics of the lunar retroreflectors are limiting the precision of the range measurements. In this article, we review the gravitational science and technology of lunar laser ranging and discuss prospects for the future.
Durango delta: Complications on San Juan basin Cretaceous linear strandline theme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zech, R.S.; Wright, R.
1989-09-01
The Upper Cretaceous Point Lookout Sandstone generally conforms to a predictable cyclic shoreface model in which prograding linear strandline lithosomes dominate formation architecture. Multiple transgressive-regressive cycles result in systematic repetition of lithologies deposited in beach to inner shelf environments. Deposits of approximately five cycles are locally grouped into bundles. Such bundles extend at least 20 km along depositional strike and change from foreshore sandstone to offshore, time-equivalent Mancos mud rock in a downdip distance of 17 to 20 km. Excellent hydrocarbon reservoirs exist where well-sorted shoreface sandstone bundles stack and the formation thickens. This depositional model breaks down in the vicinity of Durango, Colorado, where a fluvial-dominated delta front and associated large distributary channels characterize the Point Lookout Sandstone and overlying Menefee Formation.
NASA Astrophysics Data System (ADS)
Lüpke, Felix; Cuma, David; Korte, Stefan; Cherepanov, Vasily; Voigtländer, Bert
2018-02-01
We present a four-point probe resistance measurement technique which uses four equivalent current measuring units, resulting in minimal hardware requirements and corresponding sources of noise. Local sample potentials are measured by a software feedback loop which adjusts the corresponding tip voltage such that no current flows to the sample. The resulting tip voltage is then equivalent to the sample potential at the tip position. We implement this measurement method into a multi-tip scanning tunneling microscope setup such that potentials can also be measured in tunneling contact, allowing in principle truly non-invasive four-probe measurements. The resulting measurement capabilities are demonstrated for…
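The software feedback loop described above can be sketched with a toy ohmic sample. The proportional gain and the sample model are assumptions for illustration, not the paper's implementation:

```python
def measure_potential(sample_current, v0=0.0, gain=1e6, tol=1e-12, max_iter=1000):
    """Software feedback sketch: adjust the tip voltage until no current
    flows, at which point the tip voltage equals the local sample
    potential. sample_current(v) is the measured tip current at tip
    voltage v; gain (V per A) is an assumed proportional feedback step."""
    v = v0
    for _ in range(max_iter):
        i = sample_current(v)
        if abs(i) < tol:
            break
        v -= gain * i  # step the voltage against the residual current
    return v

# Toy sample: ohmic junction of 10 Mohm with local potential 0.125 V
pot = measure_potential(lambda v: (v - 0.125) / 10e6)
print(round(pot, 4))  # 0.125
```

Because the converged state carries zero current, the probe does not perturb the local potential it reports, which is the non-invasive property the abstract emphasizes.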
Physical scales in the Wigner-Boltzmann equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nedjalkov, M., E-mail: mixi@iue.tuwien.ac.at; Selberherr, S.; Ferry, D.K.
2013-01-15
The Wigner-Boltzmann equation provides the Wigner single particle theory with interactions with bosonic degrees of freedom associated with harmonic oscillators, such as phonons in solids. Quantum evolution is an interplay of two transport modes, corresponding to the common coherent particle-potential processes, or to the decoherence-causing scattering due to the oscillators. Which evolution mode will dominate depends on the scales of the involved physical quantities. A dimensionless formulation of the Wigner-Boltzmann equation is obtained, where these scales appear as dimensionless strength parameters. A notion called the scaling theorem is derived, linking the strength parameters to the coupling with the oscillators. It is shown that an increase of this coupling is equivalent to a reduction of both the strength of the electric potential and the coherence length. Secondly, the existence of classes of physically different, but mathematically equivalent, setups of the Wigner-Boltzmann evolution is demonstrated. Highlights: • Dimensionless parameters determine the ratio of quantum to classical WB evolution. • The scaling theorem evaluates the decoherence effect due to scattering. • Evolution processes are grouped into classes of equivalence.
Accelerated antioxidant bioavailability of OPC-3 bioflavonoids administered as isotonic solution.
Cesarone, Maria R; Grossi, Maria Giovanni; Di Renzo, Andrea; Errichi, Silvia; Schönlau, Frank; Wilmer, James L; Lange, Mark; Blumenfeld, Julian
2009-06-01
The degree of absorption of bioflavonoids, a diverse and complex group of plant-derived phytonutrients, has been a frequent subject of debate among scientists. Monomeric flavonoid species are known to be absorbed within 2 h. The kinetics of plasma reactive oxygen species, a reflection of bioactivity, was investigated for a commercial blend of flavonoids, OPC-3. OPC-3 was selected to compare absorption of an isotonic flavonoid solution vs the tablet form taken with an equivalent amount of fluid. In the case of isotonic OPC-3, plasma reactive oxygen species decreased significantly (p < 0.05), a reduction six times greater than with OPC-3 tablets by 10 min post-consumption. After 20 min the isotonic formulation was approximately four times more bioavailable and after 40 min twice as bioavailable as the tablet, respectively. At time points 1 h and later, both isotonic and tablet formulations lowered oxidative stress, although the isotonic formulation values remained significantly better throughout the investigation period of 4 h. These findings point to a dramatically accelerated bioavailability of flavonoids delivered in an isotonic formulation. (c) 2009 John Wiley & Sons, Ltd.
Framework for assessing causality in disease management programs: principles.
Wilson, Thomas; MacDowell, Martin
2003-01-01
To credibly state that a disease management (DM) program "caused" a specific outcome it is required that metrics observed in the DM population be compared with metrics that would have been expected in the absence of a DM intervention. That requirement can be very difficult to achieve, and epidemiologists and others have developed guiding principles of causality by which credible estimates of DM impact can be made. This paper introduces those key principles. First, DM program metrics must be compared with metrics from a "reference population." This population should be "equivalent" to the DM intervention population on all factors that could independently impact the outcome. In addition, the metrics used in both groups should use the same defining criteria (ie, they must be "comparable" to each other). The degree to which these populations fulfill the "equivalent" assumption and metrics fulfill the "comparability" assumption should be stated. Second, when "equivalence" or "comparability" is not achieved, the DM managers should acknowledge this fact and, where possible, "control" for those factors that may impact the outcome(s). Finally, it is highly unlikely that one study will provide definitive proof of any specific DM program value for all time; thus, we strongly recommend that studies be ongoing, at multiple points in time, and at multiple sites, and, when observational study designs are employed, that more than one type of study design be utilized. Methodologically sophisticated studies that follow these "principles of causality" will greatly enhance the reputation of the important and growing efforts in DM.
ERIC Educational Resources Information Center
Giudice, Nicholas A.; Betty, Maryann R.; Loomis, Jack M.
2011-01-01
This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the…
46 CFR 116.433 - Windows and air ports in fire control boundaries.
Code of Federal Regulations, 2011 CFR
2011-10-01
... fitted with frames of steel or equivalent material. Glazing beads or angles of steel or equivalent... event of a fire if: (1) Where a steel frame is used, it is not arranged to retain the glass in place; or (2) A frame of aluminum or other material with low melting point is used. (d) A window or air port...
Equivalence of Fluctuation Splitting and Finite Volume for One-Dimensional Gas Dynamics
NASA Technical Reports Server (NTRS)
Wood, William A.
1997-01-01
The equivalence of the discretized equations resulting from both fluctuation splitting and finite volume schemes is demonstrated in one dimension. Scalar equations are considered for advection, diffusion, and combined advection/diffusion. Analysis of systems is performed for the Euler and Navier-Stokes equations of gas dynamics. Non-uniform mesh-point distributions are included in the analyses.
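For scalar 1D advection the equivalence is easy to exhibit directly: the first-order upwind finite volume update and the fluctuation splitting update (sending each element's fluctuation to its downstream node, as in the N scheme for positive wave speed) produce identical states. A sketch under periodic boundaries; the paper's analysis also covers diffusion and the Euler/Navier-Stokes systems:

```python
import numpy as np

a, dt, dx = 1.0, 0.4, 1.0  # advection speed a > 0; CFL = 0.4

def finite_volume(u):
    """First-order upwind finite volume: the flux f = a*u at each
    interface is taken from the upwind (left) side."""
    flux_left = a * np.roll(u, 1)            # left-face flux of each cell
    return u - dt / dx * (a * u - flux_left)

def fluctuation_splitting(u):
    """Fluctuation splitting: the element fluctuation
    phi = -a * (u_i - u_{i-1}) is sent entirely to the downstream node."""
    phi = -a * (u - np.roll(u, 1))           # fluctuation of element [i-1, i]
    return u + dt / dx * phi

u0 = np.sin(2 * np.pi * np.arange(10) / 10)
print(np.allclose(finite_volume(u0), fluctuation_splitting(u0)))  # True
```

Expanding both updates gives u_i - (a·dt/dx)(u_i - u_{i-1}) term by term, which is the discrete equivalence the paper demonstrates in the scalar case.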
ERIC Educational Resources Information Center
Green, Francis; Vignoles, Anna
2012-01-01
We present a method to compare different qualifications for entry to higher education by studying students' subsequent performance. Using this method for students holding either the International Baccalaureate (IB) or A-levels gaining their degrees in 2010, we estimate an "empirical" equivalence scale between IB grade points and UCAS…
Detector-device-independent quantum key distribution: Security analysis and fast implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boaron, Alberto; Korzh, Boris; Houlmann, Raphael
One of the most pressing issues in quantum key distribution (QKD) is the problem of detector side-channel attacks. To overcome this problem, researchers proposed an elegant “time-reversal” QKD protocol called measurement-device-independent QKD (MDI-QKD), which is based on time-reversed entanglement swapping. But, MDI-QKD is more challenging to implement than standard point-to-point QKD. Recently, we proposed an intermediary QKD protocol called detector-device-independent QKD (DDI-QKD) in order to overcome the drawbacks of MDI-QKD, with the hope that it would eventually lead to a more efficient detector side-channel-free QKD system. We analyze the security of DDI-QKD and elucidate its security assumptions. We find that DDI-QKD is not equivalent to MDI-QKD, but its security can be demonstrated with reasonable assumptions. On the more practical side, we consider the feasibility of DDI-QKD and present a fast experimental demonstration (clocked at 625 MHz), capable of secret key exchange up to more than 90 km.
A high speed implementation of the random decrement algorithm
NASA Technical Reports Server (NTRS)
Kiraly, L. J.
1982-01-01
The algorithm is useful for measuring net system damping levels in stochastic processes and for the development of equivalent linearized system response models. The algorithm works by summing together all subrecords which occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contribution to the random decrement signature of each data point is calculated only once, and in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold crossing frequency of 5000 Hertz can be processed and a stably averaged signature presented in real time.
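A software version of the scanning process can be sketched as follows. The threshold value, the upward-crossing trigger, and the record length are generic choices; the hardware version additionally accumulates subrecords in acquisition order rather than after the fact:

```python
import numpy as np

def random_decrement(x, threshold, n=256):
    """Random decrement signature: average all length-n subrecords that
    start where the signal crosses the threshold upward. Fixed
    power-of-two record lengths mirror the hardware's 2^n sample records."""
    # indices s with x[s-1] < threshold <= x[s]  (upward crossings)
    starts = np.flatnonzero((x[:-1] < threshold) & (x[1:] >= threshold)) + 1
    starts = starts[starts + n <= len(x)]       # keep complete subrecords only
    if len(starts) == 0:
        return np.zeros(n)
    return np.mean([x[s:s + n] for s in starts], axis=0)
```

Averaging aligns the deterministic decay common to all triggered subrecords while the uncorrelated random content cancels, which is why the signature estimates net system damping.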
Detector-device-independent quantum key distribution: Security analysis and fast implementation
Boaron, Alberto; Korzh, Boris; Houlmann, Raphael; ...
2016-08-09
One of the most pressing issues in quantum key distribution (QKD) is the problem of detector side-channel attacks. To overcome this problem, researchers proposed an elegant “time-reversal” QKD protocol called measurement-device-independent QKD (MDI-QKD), which is based on time-reversed entanglement swapping. But, MDI-QKD is more challenging to implement than standard point-to-point QKD. Recently, we proposed an intermediary QKD protocol called detector-device-independent QKD (DDI-QKD) in order to overcome the drawbacks of MDI-QKD, with the hope that it would eventually lead to a more efficient detector side-channel-free QKD system. We analyze the security of DDI-QKD and elucidate its security assumptions. We find that DDI-QKD is not equivalent to MDI-QKD, but its security can be demonstrated with reasonable assumptions. On the more practical side, we consider the feasibility of DDI-QKD and present a fast experimental demonstration (clocked at 625 MHz), capable of secret key exchange up to more than 90 km.
Wang, Fangnian; Tian, Yu; Chen, Lingjun; Luo, Robert; Sickler, Joanna; Liesenfeld, Oliver; Chen, Shuqi
2017-10-01
The performance of a polymerase chain reaction-based point-of-care assay, the cobas Strep A Nucleic Acid Test for use on the cobas Liat System (cobas Liat Strep A assay), for the detection of group A Streptococcus bacteria was evaluated in primary care settings. Throat swab specimens from 427 patients were tested with the cobas Liat Strep A assay and a rapid antigen detection test (RADT) by existing medical staff at 5 primary care clinics, and results were compared with bacterial culture. The cobas Liat Strep A assay demonstrated equivalent sensitivity (97.7%) and specificity (93.3%) to reference culture with a 15-minute turnaround time. In comparison to RADTs, the cobas Liat Strep A assay showed improved sensitivity (97.7% Liat vs 84.5% RADT). The Clinical Laboratory Improvement Amendments-waived cobas Liat Strep A assay demonstrated the ease of use and improved turnaround time of RADTs along with the sensitivity of culture.
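The headline figures are the standard 2x2 diagnostic quantities. A sketch with hypothetical counts chosen only to reproduce the rounded percentages; the abstract does not give the actual contingency table:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    taking bacterial culture as the reference standard."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts (NOT from the study) that reproduce the rounded figures
sens, spec = sens_spec(tp=127, fn=3, tn=278, fp=20)
print(f"sensitivity {100 * sens:.1f}%, specificity {100 * spec:.1f}%")
```

Since sensitivity depends only on culture-positive patients, the 97.7% vs 84.5% comparison against the RADT means the PCR assay missed far fewer true infections in the same population.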
NASA Astrophysics Data System (ADS)
Tarpin, Malo; Canet, Léonie; Wschebor, Nicolás
2018-05-01
In this paper, we present theoretical results on the statistical properties of stationary, homogeneous, and isotropic turbulence in incompressible flows in three dimensions. Within the framework of the non-perturbative renormalization group, we derive a closed renormalization flow equation for a generic n-point correlation (and response) function for large wave-numbers with respect to the inverse integral scale. The closure is obtained from a controlled expansion and relies on extended symmetries of the Navier-Stokes field theory. It yields the exact leading behavior of the flow equation at large wave-numbers |p_i| and for arbitrary time differences t_i in the stationary state. Furthermore, we obtain the form of the general solution of the corresponding fixed point equation, which yields the analytical form of the leading wave-number and time dependence of n-point correlation functions, for large wave-numbers and both for small t_i and in the limit t_i → ∞. At small t_i, the leading contribution at large wave-numbers is logarithmically equivalent to -α (εL)^(2/3) |∑_i t_i p_i|^2, where α is a non-universal constant, L is the integral scale, and ε is the mean energy injection rate. For the 2-point function, the (tp)^2 dependence is known to originate from the sweeping effect. The derived formula embodies the generalization of the effect of sweeping to n-point correlation functions. At large wave-numbers and large t_i, we show that the t_i^2 dependence in the leading order contribution crosses over to a |t_i| dependence. The expression of the correlation functions in this regime was not derived before, even for the 2-point function. Both predictions can be tested in direct numerical simulations and in experiments.
A numerical analysis of phase-change problems including natural convection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Y.; Faghri, A.
1990-08-01
Fixed grid solutions for phase-change problems remove the need to satisfy conditions at the phase-change front and can be easily extended to multidimensional problems. The two most important and widely used methods are enthalpy methods and temperature-based equivalent heat capacity methods. Both methods have advantages and disadvantages. Enthalpy methods (Shamsundar and Sparrow, 1975; Voller and Prakash, 1987; Cao et al., 1989) are flexible and can handle phase-change problems occurring both at a single temperature and over a temperature range. The drawback of this method is that although the predicted temperature distributions and melting fronts are reasonable, the predicted time history of the temperature at a typical grid point may have some oscillations. The temperature-based fixed grid methods (Morgan, 1981; Hsiao and Chung, 1984) have no such time-history problems and are more convenient for conjugate problems involving an adjacent wall, but they must deal with the severe nonlinearity of the governing equations when the phase-change temperature range is small. In this paper, a new temperature-based fixed-grid formulation is proposed, and the reason that the original equivalent heat capacity model is subject to such restrictions on the time step, mesh size, and phase-change temperature range is also discussed.
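The trade-off between the two fixed-grid formulations can be sketched numerically. The following is a minimal illustration (with assumed material constants, not values from the paper) of the enthalpy-temperature relation for a phase change smeared over a mushy interval, and its inverse as used in an enthalpy-method update; the equivalent heat capacity C + L/ΔT in the mushy range diverges as the interval shrinks, which is the nonlinearity the temperature-based methods must contend with.

```python
# Illustrative sketch (not the paper's formulation): enthalpy-temperature
# relation with latent heat L released over an assumed mushy interval
# [TM - DT/2, TM + DT/2], and its inverse. All constants are assumed.
C = 2.0e3    # sensible heat capacity, J/(kg K)  (assumed)
L = 3.3e5    # latent heat, J/kg                 (assumed)
TM = 0.0     # nominal melting temperature, deg C
DT = 1.0     # assumed mushy-zone width, K

def enthalpy(T):
    """Enthalpy per unit mass for temperature T."""
    if T < TM - DT / 2:
        return C * T
    if T > TM + DT / 2:
        return C * T + L
    # mushy range: slope is the equivalent heat capacity C + L/DT
    return C * T + L * (T - (TM - DT / 2)) / DT

def temperature(H):
    """Inverse of enthalpy(): recover T from H (the enthalpy-method step)."""
    H_sol = C * (TM - DT / 2)       # enthalpy at onset of melting
    H_liq = C * (TM + DT / 2) + L   # enthalpy at completion of melting
    if H < H_sol:
        return H / C
    if H > H_liq:
        return (H - L) / C
    return (TM - DT / 2) + (H - H_sol) / (C + L / DT)
```

As DT is made small, the mushy-range slope C + L/DT grows without bound, reproducing the time-step and mesh-size restrictions of the original equivalent heat capacity model noted in the abstract.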
Ultra-wideband directional sampler
McEwan, Thomas E.
1996-01-01
The Ultra-Wideband (UWB) Directional Sampler is a four port device that combines the function of a directional coupler with a high speed sampler. Two of the four ports operate at a high sub-nanosecond speed, in "real time", and the other two ports operate at a slow millisecond-speed, in "equivalent time". A signal flowing inbound to either of the high speed ports is sampled and coupled, in equivalent time, to the adjacent equivalent time port while being isolated from the opposite equivalent time port. A primary application is for a time domain reflectometry (TDR) situation where the reflected pulse returns while the outbound pulse is still being transmitted, such as when the reflecting discontinuity is very close to the TDR apparatus.
Thompson, Deanne K; Omizzolo, Cristina; Adamson, Christopher; Lee, Katherine J; Stargatt, Robyn; Egan, Gary F; Doyle, Lex W; Inder, Terrie E; Anderson, Peter J
2014-08-01
The effects of prematurity on hippocampal development through early childhood are largely unknown. The aims of this study were to (1) compare the shape of the very preterm (VPT) hippocampus to that of full-term (FT) children at 7 years of age, and determine if hippocampal shape is associated with memory and learning impairment in VPT children, (2) compare change in shape and volume of the hippocampi from term-equivalent to 7 years of age between VPT and FT children, and determine if development of the hippocampi over time predicts memory and learning impairment in VPT children. T1 and T2 magnetic resonance images were acquired at both term equivalent and 7 years of age in 125 VPT and 25 FT children. Hippocampi were manually segmented and shape was characterized by boundary point distribution models at both time-points. Memory and learning outcomes were measured at 7 years of age. The VPT group demonstrated less hippocampal infolding than the FT group at 7 years. Hippocampal growth between infancy and 7 years was less in the VPT compared with the FT group, but the change in shape was similar between groups. There was little evidence that the measures of hippocampal development were related to memory and learning impairments in the VPT group. This study suggests that the developmental trajectory of the human hippocampus is altered in VPT children, but this does not predict memory and learning impairment. Further research is required to elucidate the mechanisms for memory and learning difficulties in VPT children. Copyright © 2014 Wiley Periodicals, Inc.
Using the Internet to promote physical activity: a randomized trial of intervention delivery modes.
Steele, Rebekah; Mummery, W Kerry; Dwyer, Trudy
2007-07-01
A growing proportion of the population uses the Internet for health information, including information on physical activity (PA). The aim of this study was to examine the effectiveness of delivery modes for a behavior change program targeting PA. A randomized trial was conducted with 192 subjects randomly allocated to either a face-to-face, Internet-mediated, or Internet-only arm of a 12-wk intervention. Subjects included inactive adults with Internet access. The primary outcome variable was self-reported PA, assessed at four time points. The results showed no group × time interaction for PA, F(6, 567) = 1.64, p > 0.05, and no main effect for group, F(2, 189) = 1.58, p > 0.05. However, a main effect for time, F(3, 567) = 75.7, p < 0.01, was observed for each group. All groups were statistically equivalent immediately post-intervention (p < 0.05), but not at the follow-up time points (p > 0.05). The Internet-mediated and Internet-only groups showed increases in PA similar to those of the face-to-face group immediately post-intervention. This study provides evidence in support of the Internet in the delivery of PA interventions and highlights avenues for future research.
Geometry in a dynamical system without space: Hyperbolic Geometry in Kuramoto Oscillator Systems
NASA Astrophysics Data System (ADS)
Engelbrecht, Jan; Chen, Bolun; Mirollo, Renato
Kuramoto oscillator networks have the special property that their time evolution is constrained to lie on 3D orbits of the Möbius group acting on the N-fold torus T^N, which explains the N − 3 constants of motion discovered by Watanabe and Strogatz. The dynamics for phase models can be further reduced to 2D invariant sets in T^(N-1) which have a natural geometry equivalent to the unit disk Δ with hyperbolic metric. We show that the classic Kuramoto model with order parameter Z_1 (the first moment of the oscillator configuration) is a gradient flow in this metric with a unique fixed point on each generic 2D invariant set, corresponding to the hyperbolic barycenter of an oscillator configuration. This gradient property makes the dynamics especially easy to analyze. We exhibit several new families of Kuramoto oscillator models which reduce to gradient flows in this metric; some of these have a richer fixed point structure, including non-hyperbolic fixed points associated with fixed point bifurcations. Work supported by NSF DMS 1413020.
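As a numerical aside, the gradient-flow picture can be checked in the simplest setting. This sketch (all parameters assumed for illustration, not taken from the abstract) integrates the classic Kuramoto model with identical natural frequencies and verifies that the order-parameter magnitude |Z_1| is driven toward the synchronized fixed point.

```python
import cmath
import math
import random

# Minimal sketch (assumed parameters): classic Kuramoto model with
# identical natural frequencies,
#   dtheta_i/dt = omega + K * Im(Z1 * exp(-1j * theta_i)),
# where Z1 = (1/N) * sum_j exp(1j * theta_j) is the first-moment order
# parameter. The gradient-flow result implies the configuration flows
# toward a synchronized fixed point on each generic invariant set.
def simulate(N=50, K=1.0, omega=0.0, dt=0.01, steps=2000, seed=1):
    rng = random.Random(seed)
    theta = [rng.uniform(-math.pi, math.pi) for _ in range(N)]
    for _ in range(steps):
        Z1 = sum(cmath.exp(1j * t) for t in theta) / N
        # forward-Euler step of the mean-field coupling
        theta = [t + dt * (omega + K * (Z1 * cmath.exp(-1j * t)).imag)
                 for t in theta]
    return abs(sum(cmath.exp(1j * t) for t in theta) / N)
```

Starting from random phases, |Z_1| grows monotonically toward 1, consistent with the hyperbolic-barycenter fixed point described above.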
Collocation and Galerkin Time-Stepping Methods
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2011-01-01
We study the numerical solutions of ordinary differential equations by one-step methods where the solution at t_n is known and that at t_{n+1} is to be calculated. The approaches employed are collocation, continuous Galerkin (CG), and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using the quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t_{n+1}), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of the integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
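The equivalence to implicit Runge-Kutta methods can be made concrete with the standard two-stage Radau IIA tableau (a textbook scheme, not reproduced from the report): c = (1/3, 1), A = [[5/12, -1/12], [3/4, 1/4]], b = (3/4, 1/4), which is third-order accurate. For the linear test problem y' = λy the stage equations are linear and solvable in closed form.

```python
# Sketch of the s = 2 Radau IIA collocation scheme written as an implicit
# Runge-Kutta method, applied to the linear test problem y' = lam * y.
def radau_iia_step(lam, y, h):
    a11, a12 = 5.0 / 12.0, -1.0 / 12.0
    a21, a22 = 3.0 / 4.0, 1.0 / 4.0
    # Stage equations k = lam * (y + h * A @ k)  =>  (I - h*lam*A) k = lam*y*[1,1]
    m11, m12 = 1.0 - h * lam * a11, -h * lam * a12
    m21, m22 = -h * lam * a21, 1.0 - h * lam * a22
    det = m11 * m22 - m12 * m21
    k1 = (m22 - m12) * lam * y / det
    k2 = (m11 - m21) * lam * y / det
    # b = (3/4, 1/4) coincides with the last row of A (stiffly accurate)
    return y + h * (a21 * k1 + a22 * k2)

def integrate(lam, y0, t_end, n):
    h, y = t_end / n, y0
    for _ in range(n):
        y = radau_iia_step(lam, y, h)
    return y
```

Halving the step size should reduce the global error by roughly a factor of eight, consistent with third-order accuracy.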
This paper focuses on trading schemes in which regulated point sources are allowed to avoid upgrading their pollution control technology to meet water-quality-based effluent limits if they pay for equivalent (or greater) reductions in nonpoint source pollution.
Simulations of a micro-PET system based on liquid xenon
NASA Astrophysics Data System (ADS)
Miceli, A.; Glister, J.; Andreyev, A.; Bryman, D.; Kurchaninov, L.; Lu, P.; Muennich, A.; Retiere, F.; Sossi, V.
2012-03-01
The imaging performance of a high-resolution preclinical micro-positron emission tomography (micro-PET) system employing liquid xenon (LXe) as the gamma-ray detection medium was simulated. The arrangement comprises a ring of detectors consisting of trapezoidal LXe time projection ionization chambers and two arrays of large area avalanche photodiodes for the measurement of ionization charge and scintillation light. A key feature of the LXePET system is the ability to identify individual photon interactions with high energy resolution and high spatial resolution in three dimensions and determine the correct interaction sequence using Compton reconstruction algorithms. The simulated LXePET imaging performance was evaluated by computing the noise equivalent count rate, the sensitivity and point spread function for a point source according to the NEMA-NU4 standard. The image quality was studied with a micro-Derenzo phantom. Results of these simulation studies included noise equivalent count rate peaking at 1326 kcps at 188 MBq (705 kcps at 184 MBq) for an energy window of 450-600 keV and a coincidence window of 1 ns for mouse (rat) phantoms. The absolute sensitivity at the center of the field of view was 12.6%. Radial, tangential and axial resolutions of 22Na point sources reconstructed with a list-mode maximum likelihood expectation maximization algorithm were ⩽0.8 mm (full-width at half-maximum) throughout the field of view. Hot-rod inserts of <0.8 mm diameter were resolvable in the transaxial image of a micro-Derenzo phantom. The simulations show that a LXe system would provide new capabilities for significantly enhancing PET images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan Qi; Saunders, Samuel E.; Bartelt-Hunt, Shannon L., E-mail: sbartelt2@unl.edu
Highlights:
• This study evaluates methane and carbon dioxide production after land burial of cattle carcasses.
• Disposal of animal mortalities is often overlooked in evaluating the environmental impacts of animal production.
• We quantify annual emissions from cattle carcass disposal in the United States as 1.6 Tg CO₂ equivalents.
Abstract: Approximately 2.2 million cattle carcasses require disposal annually in the United States. Land burial is a convenient disposal method that has been widely used in animal production for both daily mortalities and catastrophic mortality events. To date, greenhouse gas production after mortality burial has not been quantified, and this study represents the first attempt to quantify greenhouse gas emissions from land burial of animal carcasses. In this study, anaerobic decomposition of both homogenized and unhomogenized cattle carcass material was investigated using bench-scale reactors. Maximum yields of methane and carbon dioxide were 0.33 and 0.09 m³/kg dry material, respectively, a higher methane yield than that previously reported for municipal solid waste. Variability in methane production rates was observed over time and between reactors. Based on our laboratory data, annual methane emissions from burial of cattle mortalities in the United States could total 1.6 Tg CO₂ equivalents. Although this represents less than 1% of total emissions produced by the agricultural sector in 2009, greenhouse gas emissions from animal carcass burial may be significant if disposal of swine and poultry carcasses is also considered.
Ignition of lean fuel-air mixtures in a premixing-prevaporizing duct at temperatures up to 1000 K
NASA Technical Reports Server (NTRS)
Tacina, R. R.
1980-01-01
Conditions were determined in a premixing-prevaporizing fuel preparation duct at which ignition occurred. An air-blast type fuel injector with nineteen fuel injection points was used to provide a spatially uniform fuel-air mixture. The range of inlet conditions over which ignition occurred was: inlet air temperatures of 600 to 1000 K, air pressures of 180 to 660 kPa, equivalence ratios (fuel-air ratio divided by stoichiometric fuel-air ratio) from 0.12 to 1.05, and velocities from 3.5 to 30 m/s. The duct was insulated and its diameter was 12 cm. Mixing lengths were varied from 16.5 to 47.6 and residence times ranged from 4.6 to 107 ms. The fuel was no. 2 diesel. Results show a strong effect of equivalence ratio, pressure, and temperature on the conditions at which ignition occurred. The data did not fit the most commonly used model of autoignition. A correlation of the conditions at which ignition would occur, applicable to this test apparatus over the conditions tested, is (p/V) φ^1.3 = 0.62 e^(2804/T), where p is the pressure in kPa, V is the velocity in m/s, φ is the equivalence ratio, and T is the temperature in K. The data scatter was considerable, varying by up to a factor of 5 at a given temperature and equivalence ratio. There was wide scatter among the autoignition data contained in the references.
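The reported correlation can be rearranged to estimate the boundary pressure at which ignition would occur for a given temperature, velocity, and equivalence ratio. A minimal sketch (the function name is ours; per the abstract, the fit applies only to this apparatus over the tested range):

```python
import math

# Sketch evaluating the reported autoignition correlation
#   (p / V) * phi**1.3 = 0.62 * exp(2804 / T),
# solved for p, with p in kPa, V in m/s, phi the equivalence ratio, and
# T the inlet air temperature in K.
def ignition_pressure_kpa(T_kelvin, velocity_m_s, phi):
    return velocity_m_s * 0.62 * math.exp(2804.0 / T_kelvin) / phi ** 1.3
```

At 800 K, 10 m/s, and φ = 1, the estimated boundary pressure falls within the 180-660 kPa range tested; higher temperatures or richer mixtures lower the pressure required for ignition.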
NASA Technical Reports Server (NTRS)
Mickens, R. E.
1985-01-01
The classical method of equivalent linearization is extended to a particular class of nonlinear difference equations. It is shown that the method can be used to obtain an approximation of the periodic solutions of these equations. In particular, the parameters of the limit cycle and the limit points can be determined. Three examples illustrating the method are presented.
Equivalent Hamiltonian for the Lee model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, H. F.
2008-03-15
Using the techniques of quasi-Hermitian quantum mechanics and quantum field theory, we use a similarity transformation to construct an equivalent Hermitian Hamiltonian for the Lee model. In the field theory confined to the V/Nθ sector it effectively decouples V, replacing the three-point interaction of the original Lee model by an additional mass term for the V particle and a four-point interaction between N and θ. While the construction is originally motivated by the regime where the bare coupling becomes imaginary, leading to a ghost, it applies equally to the standard Hermitian regime where the bare coupling is real. In that case the similarity transformation becomes a unitary transformation.
Construction of monoenergetic neutron calibration fields using 45Sc(p, n)45Ti reaction at JAEA.
Tanimura, Y; Saegusa, J; Shikaze, Y; Tsutsumi, M; Shimizu, S; Yoshizawa, M
2007-01-01
The 8 and 27 keV monoenergetic neutron calibration fields have been developed by using the ⁴⁵Sc(p, n)⁴⁵Ti reaction. Protons from a 4-MV Pelletron accelerator are used to bombard a thin scandium target evaporated onto a platinum disc. The proton energies are finely adjusted to the resonance to generate the 8 and 27 keV neutrons by applying a high voltage to the target assemblies. The neutron energies were measured using the time-of-flight method with a lithium glass scintillation detector. The neutron fluences at a calibration point located 50 cm from the target were evaluated using Bonner spheres. A long counter was placed 2.2 m from the target and at 60 degrees to the direction of the proton beam in order to monitor the fluence at the calibration point. The fluence and dose equivalent rates at the calibration point are sufficient to calibrate many types of neutron survey meters.
Ultra-wideband directional sampler
McEwan, T.E.
1996-05-14
The Ultra-Wideband (UWB) Directional Sampler is a four port device that combines the function of a directional coupler with a high speed sampler. Two of the four ports operate at a high sub-nanosecond speed, in "real time", and the other two ports operate at a slow millisecond-speed, in "equivalent time". A signal flowing inbound to either of the high speed ports is sampled and coupled, in equivalent time, to the adjacent equivalent time port while being isolated from the opposite equivalent time port. A primary application is for a time domain reflectometry (TDR) situation where the reflected pulse returns while the outbound pulse is still being transmitted, such as when the reflecting discontinuity is very close to the TDR apparatus. 3 figs.
Bias-dependent hybrid PKI empirical-neural model of microwave FETs
NASA Astrophysics Data System (ADS)
Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera
2011-10-01
Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid in the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron mobility transistor device.
Symmetry investigations on the incompressible stationary axisymmetric Euler equations with swirl
NASA Astrophysics Data System (ADS)
Frewer, M.; Oberlack, M.; Guenther, S.
2007-08-01
We discuss the incompressible stationary axisymmetric Euler equations with swirl, for which we derive via a scalar stream function an equivalent representation, the Bragg-Hawthorne equation [Bragg, S.L., Hawthorne, W.R., 1950. Some exact solutions of the flow through annular cascade actuator discs. J. Aero. Sci. 17, 243]. Despite this obvious equivalence, we show that a local Lie point symmetry analysis reveals the Bragg-Hawthorne equation not to be fully equivalent to the original Euler equations: it possesses additional symmetries not admitted by its counterpart. In other words, a symmetry of the Bragg-Hawthorne equation is in general not a symmetry of the Euler equations. Not the differential Euler equations but rather a set of integro-differential equations attains full equivalence to the Bragg-Hawthorne equation. For these intermediate Euler equations, it is interesting to note that local symmetries of the Bragg-Hawthorne equation transform to local as well as to nonlocal symmetries. This behaviour, on the one hand, is in accordance with Zawistowski's result [Zawistowski, Z.J., 2001. Symmetries of integro-differential equations. Rep. Math. Phys. 48, 269; Zawistowski, Z.J., 2004. General criterion of invariance for integro-differential equations. Rep. Math. Phys. 54, 341] that it is possible for integro-differential equations to admit local Lie point symmetries. On the other hand, this transformation process yields symmetries which cannot be obtained by carrying out a usual local Lie point symmetry analysis. Finally, the symmetry classification of the Bragg-Hawthorne equation is used to find analytical solutions for the phenomenon of vortex breakdown.
Commonwealth Degrees from Class to Equivalence: Changing to Grade Point Averages in the Caribbean
ERIC Educational Resources Information Center
Bastick, Tony
2004-01-01
British Commonwealth universities inherited the class system for classifying degrees. However, increasing global marketization has brought with it increasing demands for student exchanges, particularly with universities in North America. Hence, Commonwealth universities are considering adopting grade point averages (GPAs) for degree classification…
Weigold, Arne; Weigold, Ingrid K; Russell, Elizabeth J
2013-03-01
Self-report survey-based data collection is increasingly carried out using the Internet, as opposed to the traditional paper-and-pencil method. However, previous research on the equivalence of these methods has yielded inconsistent findings. This may be due to methodological and statistical issues present in much of the literature, such as nonequivalent samples in different conditions due to recruitment, participant self-selection to conditions, and data collection procedures, as well as incomplete or inappropriate statistical procedures for examining equivalence. We conducted 2 studies examining the equivalence of paper-and-pencil and Internet data collection that accounted for these issues. In both studies, we used measures of personality, social desirability, and computer self-efficacy, and, in Study 2, we used personal growth initiative to assess quantitative equivalence (i.e., mean equivalence), qualitative equivalence (i.e., internal consistency and intercorrelations), and auxiliary equivalence (i.e., response rates, missing data, completion time, and comfort completing questionnaires using paper-and-pencil and the Internet). Study 1 investigated the effects of completing surveys via paper-and-pencil or the Internet in both traditional (i.e., lab) and natural (i.e., take-home) settings. Results indicated equivalence across conditions, except for auxiliary equivalence aspects of missing data and completion time. Study 2 examined mailed paper-and-pencil and Internet surveys without contact between experimenter and participants. Results indicated equivalence between conditions, except for auxiliary equivalence aspects of response rate for providing an address and completion time. Overall, the findings show that paper-and-pencil and Internet data collection methods are generally equivalent, particularly for quantitative and qualitative equivalence, with nonequivalence only for some aspects of auxiliary equivalence. 
PsycINFO Database Record (c) 2013 APA, all rights reserved.
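As an illustration of what a mean-equivalence check involves, the following is a generic confidence-interval sketch (not the authors' exact statistical procedure): the two modes are declared equivalent on a measure when the 90% CI for the mean difference lies entirely inside prespecified margins (−δ, +δ). The 1.645 normal quantile is a large-sample stand-in for the t quantile.

```python
import math
import statistics

# Illustrative sketch of mean-equivalence testing for two data-collection
# conditions (e.g., paper-and-pencil vs. Internet). Equivalence holds if
# the 90% CI for the mean difference lies within (-delta, +delta), which
# corresponds to two one-sided tests (TOST) at the 5% level.
def equivalent(sample_a, sample_b, delta):
    na, nb = len(sample_a), len(sample_b)
    diff = statistics.mean(sample_a) - statistics.mean(sample_b)
    se = math.sqrt(statistics.variance(sample_a) / na
                   + statistics.variance(sample_b) / nb)
    lo, hi = diff - 1.645 * se, diff + 1.645 * se
    return -delta < lo and hi < delta
```

Note that a nonsignificant difference test alone does not establish equivalence; the margin-based check above is what "quantitative equivalence" requires.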
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGuire, J. B.
2011-12-01
There is a body of conventional wisdom that holds that a solvable quantum problem, by virtue of its solvability, is pathological and thus irrelevant. It has been difficult to refute this view owing to the paucity of theoretical constructs and experimental results. Recent experiments involving equivalent ions trapped in a spatial conformation of extreme anisotropic confinement (longitudinal extension tens, hundreds or even thousands of times the transverse extension) have modified the view of relevancy, and it is now possible to consider systems previously thought pathological, in particular point Bosons that repel in one dimension. It has been difficult for the experimentalists to utilize existing theory, mainly due to a long-standing theoretical misunderstanding of the relevance of the permutation group, in particular the non-commutativity of translations (periodicity) and transpositions (permutation). This misunderstanding is most easily rectified in the case of repelling Bosons.
Magnetic properties of X-ray bright points. [in sun
NASA Technical Reports Server (NTRS)
Golub, L.; Krieger, A. S.; Harvey, J. W.; Vaiana, G. S.
1977-01-01
Using high-resolution Kitt Peak National Observatory magnetograms and sequences of simultaneous S-054 soft X-ray solar images, the properties of X-ray bright points (XBP) and ephemeral active regions (ER) are compared. All XBP appear on the magnetograms as bipolar features, except for very recently emerged or old and decayed XBP. The separation of the magnetic bipoles is found to increase with the age of the XBP, with an average emergence growth rate of 2.2 ± 0.4 km per sec. The total magnetic flux in a typical XBP living about 8 hr is found to be about 2 × 10^19 Mx. A proportionality is found between XBP lifetime and total magnetic flux, equivalent to about 10^20 Mx per day of lifetime.
Bargar, K.E.; Fournier, R.O.
1988-01-01
Heating and freezing data were obtained for liquid-rich secondary fluid inclusions in magmatic quartz, hydrothermal calcite and hydrothermal quartz crystals from 19 sampled depths in eight production drill holes (PGM-1, 2, 3, 5, 10, 11, 12 and 15) of the Miravalles geothermal field in northwestern Costa Rica. Homogenization temperatures for 386 fluid inclusions range from near the present measured temperatures to as much as 70 °C higher than the maximum measured well temperature of about 240 °C. Melting-point temperature measurements for 76 fluid inclusions suggest a calculated salinity range of about 0.2-1.9 wt% NaCl equivalent. Calculated salinities as high as 3.1-4.0 wt% NaCl equivalent for 20 fluid inclusions from the lower part of drill hole PGM-15 (the deepest drill hole) indicate that higher salinity water probably was present in the deeper part of the Miravalles geothermal field at the time these fluid inclusions were formed. © 1988.
Radio-frequency ring applicator: energy distributions measured in the CDRH phantom.
van Rhoon, G C; Raskmark, P; Hornsleth, S N; van den Berg, P M
1994-11-01
SAR distributions were measured in the CDRH phantom, a 1 cm fat-equivalent shell filled with an abdomen-equivalent liquid (σ = 0.4-1.0 S m⁻¹; dimensions 22 × 32 × 57 cm), to demonstrate the feasibility of the ring applicator to obtain deep heating. The ring electrodes were fixed in a PVC tube; diameter 48 cm, ring width 20 cm and gap width between both rings 31.6 cm. Radio-frequency energy was fed to the electrodes at eight points. The medium between the electrodes and the phantom was deionised water. The SAR distribution in the liquid tissue volume was obtained by a scanning E-field probe measuring the E-field in all three directions. With equal amplitude and phase applied to all feeding points, a uniform SAR distribution was measured in the central cross-section at 30 MHz. With RF energy supplied to only four adjacent feeding points (the others being connected to a 50 Ω load), the feasibility of amplitude steering was demonstrated; SAR values above 50% of the maximum SAR were measured in one quadrant only. SAR distributions obtained at 70 MHz showed an improved focusing ability; a maximum at the centre exists for an electric conductivity of the abdomen-equivalent tissue of 0.6 and 0.4 S m⁻¹.
Attentional awakening: gradual modulation of temporal attention in rapid serial visual presentation.
Ariga, Atsunori; Yokosawa, Kazuhiko
2008-03-01
Orienting attention to a point in time facilitates processing of an item within rapidly changing surroundings. We used a one-target RSVP task to look for differences in accuracy of target report as a function of when the target appeared in the sequence. The results show that observers correctly report a target appearing early in the sequence less frequently than one appearing later. Previous RSVP studies predicted equivalently accurate performance for one target wherever it appeared in the sequence. We named this new phenomenon attentional awakening; it reflects a gradual modulation of temporal attention in a rapid sequence.
Ayad, Sabry; Babazade, Rovnat; Elsharkawy, Hesham; Nadar, Vinayak; Lokhande, Chetan; Makarova, Natalya; Khanna, Rashi; Sessler, Daniel I; Turan, Alparslan
2016-01-01
Epidural analgesia is considered the standard of care but cannot be provided to all patients. Liposomal bupivacaine has been approved for field blocks such as transversus abdominis plane (TAP) blocks but has not been clinically compared against other modalities. In this retrospective propensity-matched cohort study we thus tested the primary hypothesis that TAP infiltration is noninferior (not worse) to continuous epidural analgesia and superior (better) to intravenous opioid analgesia in patients recovering from major lower abdominal surgery. 318 patients were propensity matched on 18 potential factors among three groups (106 per group): 1) TAP infiltration with liposomal bupivacaine; 2) continuous epidural analgesia with plain bupivacaine; and 3) intravenous patient-controlled analgesia (IV PCA). We claimed TAP noninferior (not worse) to epidural if TAP was noninferior on both total morphine-equivalent opioid consumption and time-weighted average pain score (10-point scale) within the first 72 hours after surgery, with noninferiority deltas of 1 point (10-point scale) for pain and an increase of less than 20% in mean morphine-equivalent opioid consumption. We claimed the TAP or epidural groups superior (better) to IV PCA if TAP or epidural was superior on opioid consumption and at least noninferior on the pain outcome. Multivariable linear regressions within the propensity-matched cohorts were used to model total morphine-equivalent opioid dose and time-weighted average pain score within the first 72 hours after surgery; a joint hypothesis framework was used for formal testing. TAP infiltration was noninferior to epidural on both primary outcomes (p<0.001). TAP infiltration was noninferior to IV PCA on pain scores (p = 0.001), but we did not find superiority on opioid consumption (p = 0.37). We did not find noninferiority of epidural over IV PCA on pain scores (p = 0.13), nor did we find superiority on opioid consumption (p = 0.98).
TAP infiltration with liposomal bupivacaine and continuous epidural analgesia were similar in terms of pain and opioid consumption, and not worse in pain compared with IV PCA. TAP infiltrations might be a reasonable alternative to epidural analgesia in abdominal surgical patients. A large randomized trial comparing these techniques is justified.
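The joint noninferiority rule described in this study amounts to a conjunction of per-outcome criteria. An illustrative sketch of the decision logic only (argument names and thresholds restate the abstract's deltas; this is not the study's multivariable models):

```python
# Sketch of the joint noninferiority decision rule: TAP is claimed
# noninferior to epidural only if it is noninferior on BOTH co-primary
# outcomes -- the one-sided CI bound for the pain-score difference must
# fall below 1 point (10-point scale), and the CI bound for the ratio of
# mean morphine-equivalent opioid consumption must fall below 1.20
# (i.e., an increase of less than 20%).
def claim_noninferior(pain_diff_upper_ci, opioid_ratio_upper_ci,
                      pain_delta=1.0, opioid_ratio_delta=1.20):
    return (pain_diff_upper_ci < pain_delta
            and opioid_ratio_upper_ci < opioid_ratio_delta)
```

Failing either margin blocks the claim, which is what makes the joint framework more conservative than testing each outcome in isolation.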
Numerical Homogenization of Jointed Rock Masses Using Wave Propagation Simulation
NASA Astrophysics Data System (ADS)
Gasmi, Hatem; Hamdi, Essaïeb; Bouden Romdhane, Nejla
2014-07-01
Homogenization in fractured rock analyses is essentially based on the calculation of equivalent elastic parameters. In this paper, a new numerical homogenization method that was programmed by means of a MATLAB code, called HLA-Dissim, is presented. The developed approach simulates a discontinuity network of real rock masses based on the International Society of Rock Mechanics (ISRM) scanline field mapping methodology. Then, it evaluates a series of classic joint parameters to characterize density (RQD, specific length of discontinuities). A pulse wave, characterized by its amplitude, central frequency, and duration, is propagated from a source point to a receiver point of the simulated jointed rock mass using a complex recursive method for evaluating the transmission and reflection coefficient for each simulated discontinuity. The seismic parameters, such as delay, velocity, and attenuation, are then calculated. Finally, the equivalent medium model parameters of the rock mass are computed numerically while taking into account the natural discontinuity distribution. This methodology was applied to 17 bench fronts from six aggregate quarries located in Tunisia, Spain, Austria, and Sweden. It allowed characterizing the rock mass discontinuity network, the resulting seismic performance, and the equivalent medium stiffness. The relationship between the equivalent Young's modulus and rock discontinuity parameters was also analyzed. For these different bench fronts, the proposed numerical approach was also compared to several empirical formulas, based on RQD and fracture density values, published in previous research studies, showing its usefulness and efficiency in estimating rapidly the Young's modulus of equivalent medium for wave propagation analysis.
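The recursive transmission/reflection idea at the core of such simulations can be illustrated at normal incidence with welded interfaces between acoustic impedance contrasts. This is a deliberately simplified stand-in for the HLA-Dissim computation (it ignores multiples, obliquity, and joint stiffness effects; function names are ours):

```python
# Simplified illustration: at normal incidence, a welded interface
# between media of acoustic impedances Z1 and Z2 has displacement-
# amplitude reflection and transmission coefficients
#   R = (Z1 - Z2) / (Z1 + Z2),   T = 2*Z1 / (Z1 + Z2),
# satisfying the continuity relation 1 + R = T.
def interface_coeffs(z1, z2):
    r = (z1 - z2) / (z1 + z2)
    t = 2.0 * z1 / (z1 + z2)
    return r, t

def through_transmission(impedances):
    """Product of transmission coefficients along a sequence of layers:
    a crude first-pass estimate of the amplitude reaching the receiver."""
    total = 1.0
    for z1, z2 in zip(impedances, impedances[1:]):
        total *= interface_coeffs(z1, z2)[1]
    return total
```

Each simulated discontinuity attenuates the transmitted pulse, so denser joint networks yield lower equivalent-medium amplitudes and velocities, in line with the bench-front results above.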
Benedict, Ralph H B; Smerbeck, Audrey; Parikh, Rajavi; Rodgers, Jonathan; Cadavid, Diego; Erlanger, David
2012-09-01
Cognitive impairment is common in multiple sclerosis (MS), but is seldom assessed in clinical trials investigating the effects of disease-modifying therapies. The Symbol Digit Modalities Test (SDMT) is a particularly promising tool due to its sensitivity and robust correlation with brain magnetic resonance imaging (MRI) and vocational disability. Unfortunately, there are no validated alternate SDMT forms, which are needed to mitigate practice effects. The aim of the study was to assess the reliability and equivalence of SDMT alternate forms. Twenty-five healthy participants completed each of five alternate versions of the SDMT - the standard form, two versions from the Rao Brief Repeatable Battery, and two forms specifically designed for this study. Order effects were controlled using a Latin-square research design. All five versions of the SDMT produced mean values within 3 raw score points of one another. Three forms were very consistent, and not different by conservative statistical tests. The SDMT test-retest reliability using these forms was good to excellent, with all r values exceeding 0.80. For the first time, we find good evidence that at least three alternate versions of the SDMT are of equivalent difficulty in healthy adults. The forms are reliable, and can be implemented in clinical trials emphasizing cognitive outcomes.
Structure and Stability of One-Dimensional Detonations in Ethylene-Air Mixtures
NASA Technical Reports Server (NTRS)
Yungster, S.; Radhakrishnan, K.; Perkins, Hugh D. (Technical Monitor)
2003-01-01
The propagation of one-dimensional detonations in ethylene-air mixtures is investigated numerically by solving the one-dimensional Euler equations with detailed finite-rate chemistry. The numerical method is based on a second-order spatially accurate total-variation-diminishing scheme and a point implicit, first-order-accurate, time marching algorithm. The ethylene-air combustion is modeled with a 20-species, 36-step reaction mechanism. A multi-level, dynamically adaptive grid is utilized in order to resolve the structure of the detonation. Parametric studies over an equivalence ratio range of 0.5 < φ < 3 for different initial pressures and degrees of detonation overdrive demonstrate that the detonation is unstable for low degrees of overdrive, but the dynamics of wave propagation varies with fuel-air equivalence ratio. For equivalence ratios less than approximately 1.2 the detonation exhibits a short-period oscillatory mode, characterized by high-frequency, low-amplitude waves. Richer mixtures (φ > 1.2) exhibit a low-frequency mode that includes large fluctuations in the detonation wave speed; that is, a galloping propagation mode is established. At high degrees of overdrive, stable detonation wave propagation is obtained. A modified McVey-Toong short-period wave-interaction theory is in excellent agreement with the numerical simulations.
Kutner, Jean S; Smith, Marlaine C; Corbin, Lisa; Hemphill, Linnea; Benton, Kathryn; Mellis, B Karen; Beaty, Brenda; Felton, Sue; Yamashita, Traci E; Bryant, Lucinda L; Fairclough, Diane L
2008-09-16
Small studies of variable quality suggest that massage therapy may relieve pain and other symptoms. To evaluate the efficacy of massage for decreasing pain and symptom distress and improving quality of life among persons with advanced cancer. Multisite, randomized clinical trial. Population-based Palliative Care Research Network. 380 adults with advanced cancer who were experiencing moderate-to-severe pain; 90% were enrolled in hospice. Six 30-minute massage or simple-touch sessions over 2 weeks. Primary outcomes were immediate (Memorial Pain Assessment Card, 0- to 10-point scale) and sustained (Brief Pain Inventory [BPI], 0- to 10-point scale) change in pain. Secondary outcomes were immediate change in mood (Memorial Pain Assessment Card) and 60-second heart and respiratory rates and sustained change in quality of life (McGill Quality of Life Questionnaire, 0- to 10-point scale), symptom distress (Memorial Symptom Assessment Scale, 0- to 4-point scale), and analgesic medication use (parenteral morphine equivalents [mg/d]). Immediate outcomes were obtained just before and after each treatment session. Sustained outcomes were obtained at baseline and weekly for 3 weeks. 298 persons were included in the immediate outcome analysis and 348 in the sustained outcome analysis. A total of 82 persons did not receive any allocated study treatments (37 massage patients, 45 control participants). Both groups demonstrated immediate improvement in pain (massage, -1.87 points [95% CI, -2.07 to -1.67 points]; control, -0.97 point [CI, -1.18 to -0.76 points]) and mood (massage, 1.58 points [CI, 1.40 to 1.76 points]; control, 0.97 point [CI, 0.78 to 1.16 points]). Massage was superior for both immediate pain and mood (mean difference, 0.90 and 0.61 points, respectively; P < 0.001). 
No between-group mean differences occurred over time in sustained pain (BPI mean pain, 0.07 point [CI, -0.23 to 0.37 points]; BPI worst pain, -0.14 point [CI, -0.59 to 0.31 points]), quality of life (McGill Quality of Life Questionnaire overall, 0.08 point [CI, -0.37 to 0.53 points]), symptom distress (Memorial Symptom Assessment Scale global distress index, -0.002 point [CI, -0.12 to 0.12 points]), or analgesic medication use (parenteral morphine equivalents, -0.10 mg/d [CI, -0.25 to 0.05 mg/d]). The immediate outcome measures were obtained by unblinded study therapists, possibly leading to reporting bias and the overestimation of a beneficial effect. The generalizability to all patients with advanced cancer is uncertain. The differential beneficial effect of massage therapy over simple touch is not conclusive without a usual care control group. Massage may have immediately beneficial effects on pain and mood among patients with advanced cancer. Given the lack of sustained effects and the observed improvements in both study groups, the potential benefits of attention and simple touch should also be considered in this patient population.
Turner, James D; Henshaw, Daryl S; Weller, Robert S; Jaffe, J Douglas; Edwards, Christopher J; Reynolds, J Wells; Russell, Gregory B; Dobson, Sean W
2018-05-08
To determine whether perineural dexamethasone prolongs peripheral nerve blockade (PNB) when measured objectively; and to determine if a 1 mg and 4 mg dose provide equivalent PNB prolongation compared to PNB without dexamethasone. Multiple studies have reported that perineural dexamethasone added to local anesthetics (LA) can prolong PNB. However, these studies have relied on subjective end-points to quantify PNB duration. The optimal dose remains unknown. We hypothesized that 1 mg of perineural dexamethasone would be equivalent in prolonging an adductor canal block (ACB) when compared to 4 mg of dexamethasone, and that both doses would be superior to an ACB performed without dexamethasone. This was a prospective, randomized, double-blind, placebo-controlled equivalency trial involving 85 patients undergoing a unicompartmental knee arthroplasty. All patients received an ACB with 20 ml of 0.25% bupivacaine with 1:400,000 epinephrine. Twelve patients had 0 mg of dexamethasone (placebo) added to the LA mixture; 36 patients had 1 mg of dexamethasone in the LA; and 37 patients had 4 mg of dexamethasone in the LA. The primary outcome was block duration determined by serial neurologic pinprick examinations. Secondary outcomes included time to first analgesic, serial pain scores, and cumulative opioid consumption. The 1 mg (31.8 ± 10.5 h) and 4 mg (37.9 ± 10 h) groups were not equivalent by TOST [mean difference (95% CI): -6.1 (-10.5, -2.3) h]. Also, the 4 mg group was superior to both the 1 mg group (p = 0.035) and the placebo group (29.7 ± 6.8 h, p = 0.011). There were no differences in opioid consumption or time to analgesic request; however, some pain scores were significantly lower in the dexamethasone groups when compared to placebo. Dexamethasone 4 mg, but not 1 mg, prolonged the duration of an ACB when measured by serial neurologic pinprick exams. NCT02462148. Copyright © 2018 Elsevier Inc. All rights reserved.
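The TOST (two one-sided tests) procedure used in this trial can be sketched with a generic normal-approximation version: equivalence holds when the (1 − 2α) confidence interval for the difference in means lies entirely inside the equivalence margin. The ±4 h margin below is hypothetical, not the trial's prespecified margin.

```python
import math
from statistics import NormalDist

def tost_equivalence(m1, s1, n1, m2, s2, n2, margin, alpha=0.05):
    """TOST for equivalence of two means (normal approximation):
    declare equivalence if the (1 - 2*alpha) CI for m1 - m2 lies
    entirely within (-margin, +margin)."""
    diff = m1 - m2
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    z = NormalDist().inv_cdf(1.0 - alpha)  # one-sided critical value
    lo, hi = diff - z * se, diff + z * se
    return (-margin < lo) and (hi < margin), (lo, hi)
```

Plugging in the reported group summaries (31.8 ± 10.5 h, n = 36 vs 37.9 ± 10 h, n = 37) yields an interval close to the published (-10.5, -2.3) h, and equivalence fails for any plausible margin, matching the authors' conclusion.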
Time-resolved spectrophotometry of the AM Herculis system E2003 + 225
NASA Technical Reports Server (NTRS)
Mccarthy, Patrick; Bowyer, Stuart; Clarke, John T.
1986-01-01
Time-resolved, medium-resolution photometry is reported for the binary system E2003 + 225 over a complete orbital period in 1984. The object was 1.5-2 mag fainter than when viewed earlier in 1984. The fluxes, equivalent widths and full widths at half-maximum (FWHM) for the dominant lines are presented for four points in the cycle. A coincidence of emission lines and a 4860 A continuum line was observed for the faster component, which had a 500 km/sec velocity amplitude that was symmetric around the zero line. An aberrant emission line component, i.e., stationary narrow emission lines displaced about 9 A from the rest wavelengths, is modeled as Zeeman splitting of emission from material close to the primary.
Inflation, quintessence, and the origin of mass
NASA Astrophysics Data System (ADS)
Wetterich, C.
2015-08-01
In a unified picture both inflation and present dynamical dark energy arise from the same scalar field. The history of the Universe describes a crossover from a scale invariant "past fixed point" where all particles are massless, to a "future fixed point" for which spontaneous breaking of the exact scale symmetry generates the particle masses. The cosmological solution can be extrapolated to the infinite past in physical time - the universe has no beginning. This is seen most easily in a frame where particle masses and the Planck mass are field-dependent and increase with time. In this "freeze frame" the Universe shrinks and heats up during radiation and matter domination. In the equivalent, but singular Einstein frame cosmic history finds the familiar big bang description. The vicinity of the past fixed point corresponds to inflation. It ends at a first stage of the crossover. A simple model with no more free parameters than ΛCDM predicts for the primordial fluctuations a relation between the tensor amplitude r and the spectral index n, r = 8.19 (1 - n) - 0.137. The crossover is completed by a second stage where the beyond-standard-model sector undergoes the transition to the future fixed point. The resulting increase of neutrino masses stops a cosmological scaling solution, relating the present dark energy density to the present neutrino mass. At present our simple model seems compatible with all observational tests. We discuss how the fixed points can be rooted within quantum gravity in a crossover between ultraviolet and infrared fixed points. Then quantum properties of gravity could be tested both by very early and late cosmology.
Neutron scattered dose equivalent to a fetus from proton radiotherapy of the mother.
Mesoloras, Geraldine; Sandison, George A; Stewart, Robert D; Farr, Jonathan B; Hsi, Wen C
2006-07-01
Scattered neutron dose equivalent to a representative point for a fetus is evaluated in an anthropomorphic phantom of the mother undergoing proton radiotherapy. The effect on scattered neutron dose equivalent to the fetus of changing the incident proton beam energy, aperture size, beam location, and air gap between the beam delivery snout and skin was studied for both a small field snout and a large field snout. Measurements of the fetus scattered neutron dose equivalent were made by placing a neutron bubble detector 10 cm below the umbilicus of an anthropomorphic Rando phantom enhanced by a wax bolus to simulate a second trimester pregnancy. The neutron dose equivalent in milliSieverts (mSv) per proton treatment Gray increased with incident proton energy and decreased with aperture size, distance of the fetus representative point from the field edge, and increasing air gap. Neutron dose equivalent to the fetus varied from 0.025 to 0.450 mSv per proton Gray for the small field snout and from 0.097 to 0.871 mSv per proton Gray for the large field snout. There is likely to be no excess risk to the fetus of severe mental retardation for a typical proton treatment of 80 Gray to the mother since the scattered neutron dose to the fetus of 69.7 mSv is well below the lower confidence limit for the threshold of 300 mGy observed for the occurrence of severe mental retardation in prenatally exposed Japanese atomic bomb survivors. However, based on the linear no threshold hypothesis, and this same typical treatment for the mother, the excess risk to the fetus of radiation induced cancer death in the first 10 years of life is 17.4 per 10,000 children.
NASA Astrophysics Data System (ADS)
Krajíček, Zdeněk; Bergoglio, Mercede; Jousten, Karl; Otal, Pierre; Sabuga, Wladimir; Saxholm, Sari; Pražák, Dominik; Vičar, Martin
2014-01-01
This report describes a EURAMET comparison of five European National Metrology Institutes in low gauge and absolute pressure in gas (nitrogen), denoted as EURAMET.M.P-K4.2010. Its main intention is to state the equivalence of the pressure standards, in particular those based on the technology of force-balanced piston gauges such as the FRS by Furness Controls, UK, and the FPG8601 by DHI-Fluke, USA. It covers the range from 1 Pa to 15 kPa, both gauge and absolute. The comparison in absolute mode serves as a EURAMET Key Comparison which can be linked to CCM.P-K4 and CCM.P-K2 via PTB. The comparison in gauge mode is a supplementary comparison. The comparison was carried out from September 2008 until October 2012. The participating laboratories were the following: CMI, INRIM, LNE, MIKES, PTB-Berlin (absolute pressure 1 kPa and below) and PTB-Braunschweig (absolute pressure 1 kPa and above, and gauge pressure). CMI was the pilot laboratory and provided a transfer standard for the comparison. This transfer standard was also the laboratory standard of CMI at the same time, which resulted in a unique and logistically difficult star comparison. In both gauge and absolute pressures all the participating institutes successfully proved their equivalence with respect to the reference value, and all also proved mutual bilateral equivalences at all the points. All the participating laboratories are also equivalent with the reference values of CCM.P-K4 and CCM.P-K2 at the relevant points. The comparison also proved the ability of the FPG8601 to serve as a transfer standard. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Hecksel, D; Anferov, V; Fitzek, M; Shahnazi, K
2010-06-01
Conventional proton therapy facilities use double scattering nozzles, which are optimized for delivery of a few fixed field sizes. Similarly, uniform scanning nozzles are commissioned for a limited number of field sizes. However, cases invariably occur where the treatment field is significantly different from these fixed field sizes. The purpose of this work was to determine the impact of the radiation field conformity to the patient-specific collimator on the secondary neutron dose equivalent. Using a WENDI-II neutron detector, the authors experimentally investigated how the neutron dose equivalent at a particular point of interest varied with different collimator sizes, while the beam spreading was kept constant. The measurements were performed for different modes of dose delivery in proton therapy, all of which are available at the Midwest Proton Radiotherapy Institute (MPRI): Double scattering, uniform scanning delivering rectangular fields, and uniform scanning delivering circular fields. The authors also studied how the neutron dose equivalent changes when one changes the amplitudes of the scanned field for a fixed collimator size. The secondary neutron dose equivalent was found to decrease linearly with the collimator area for all methods of dose delivery. The relative values of the neutron dose equivalent for a collimator with a 5 cm diameter opening using 88 MeV protons were 1.0 for the double scattering field, 0.76 for rectangular uniform field, and 0.6 for the circular uniform field. Furthermore, when a single circle wobbling was optimized for delivery of a uniform field 5 cm in diameter, the secondary neutron dose equivalent was reduced by a factor of 6 compared to the double scattering nozzle. Additionally, when the collimator size was kept constant, the neutron dose equivalent at the given point of interest increased linearly with the area of the scanned proton beam. 
The results of these experiments suggest that the patient-specific collimator is a significant contributor to the secondary neutron dose equivalent to a distant organ at risk. Improving conformity of the radiation field to the patient-specific collimator can significantly reduce secondary neutron dose equivalent to the patient. Therefore, it is important to increase the number of available generic field sizes in double scattering systems as well as in uniform scanning nozzles.
20 CFR 655.736 - What are H-1B-dependent employers and willful violators?
Code of Federal Regulations, 2014 CFR
2014-04-01
.... workers and H-1B nonimmigrants, and measured according to full-time equivalent employees) and the employer...)— (i)(A) The employer has 25 or fewer full-time equivalent employees who are employed in the U.S.; and... than 50 full-time equivalent employees who are employed in the U.S.; and (B) Employs more than 12 H-1B...
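The dependency thresholds excerpted above are commonly summarized as a three-tier test; this sketch encodes that summary. The 7-worker and 15% figures do not appear in the snippet and are recalled from the regulation, so verify against the full rule text before relying on them.

```python
def is_h1b_dependent(fte_employees, h1b_workers):
    """Sketch of the H-1B-dependency test in 20 CFR 655.736 as commonly
    summarized (figures should be verified against the full rule text):
      - 25 or fewer FTEs:  dependent if more than 7 H-1B workers
      - 26 to 50 FTEs:     dependent if more than 12 H-1B workers
      - more than 50 FTEs: dependent if H-1B workers are >= 15% of FTEs
    """
    if fte_employees <= 25:
        return h1b_workers > 7
    if fte_employees <= 50:
        return h1b_workers > 12
    return h1b_workers >= 0.15 * fte_employees
```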
EURAMET.T-K7 Key Comparison of Water Triple-Point Cells
NASA Astrophysics Data System (ADS)
Peruzzi, A.; Bosma, R.; Kerkhof, O.; Rosenkranz, P.; Del Campo Maldonado, M. D.; Strnad, R.; Nielsen, J.; Anagnostou, M.; Veliki, T.; Zvizdic, D.; Grudnewicz, E.; Nedea, M.; Neagu, D. M.; Steur, P.; Filipe, E.; Lobo, I.; Antonsen, I.; Renaot, E.; Heinonen, M.; Weckstrom, T.; Bojkovski, J.; Turzo-Andras, E.; Nemeth, S.; White, M.; Tegeler, E.; Dobre, M.; Duris, S.; Kartal Dogan, A.; Uytun, A.; Augevicius, V.; Pauzha, A.; Pokhodun, A.; Simic, S.
2011-12-01
The results of a EURAMET key comparison of water triple-point cells (EURAMET.T-K7) are reported. The equipment used, the measuring conditions applied, and the procedures adopted for the water triple-point measurement at the participating laboratories are synthetically presented. The definitions of the national reference for the water triple-point temperature adopted by each laboratory are disclosed. The multiplicity of degrees of equivalence arising for the linking laboratories with respect to the "mother" comparison CCT-K7 is discussed in detail.
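The "degrees of equivalence" reported in key comparisons such as EURAMET.T-K7 are conventionally the differences D_i = x_i − x_ref with an expanded uncertainty U(D_i). The sketch below assumes uncorrelated uncertainties, which ignores the correlations a real linking analysis (e.g. to CCT-K7) must handle; names and numbers are illustrative.

```python
import math

def degrees_of_equivalence(values, uncertainties, ref, u_ref, k=2.0):
    """Unilateral degrees of equivalence D_i = x_i - x_ref with expanded
    uncertainty U(D_i) = k * sqrt(u_i^2 + u_ref^2) (uncorrelated approximation,
    coverage factor k ~ 2 for ~95% confidence)."""
    return [(x - ref, k * math.sqrt(u ** 2 + u_ref ** 2))
            for x, u in zip(values, uncertainties)]

def consistent_with_reference(deg):
    """A laboratory agrees with the reference value if |D_i| <= U(D_i)."""
    return [abs(d) <= big_u for d, big_u in deg]
```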
COSMOS-e'-soft Higgsotic attractors
NASA Astrophysics Data System (ADS)
Choudhury, Sayantan
2017-07-01
In this work, we have developed an elegant algorithm to study the cosmological consequences from a huge class of quantum field theories (i.e. superstring theory, supergravity, extra dimensional theory, modified gravity, etc.), which are equivalently described by soft attractors in the effective field theory framework. In this description we have restricted our analysis to two scalar fields - dilaton and Higgsotic fields minimally coupled with Einstein gravity, which can be generalized for any arbitrary number of scalar field contents with generalized non-canonical and non-minimal interactions. We have explicitly used R^2 gravity, from which we have studied the attractor and non-attractor phases by exactly computing two point, three point and four point correlation functions from scalar fluctuations using the In-In (Schwinger-Keldysh) and the δN formalisms. We have also presented theoretical bounds on the amplitude, tilt and running of the primordial power spectrum, and on the various shapes (equilateral, squeezed, folded kite or counter-collinear) of the amplitude as obtained from three and four point scalar functions, which are consistent with observed data. Also the results from two point tensor fluctuations and the field excursion formula are explicitly presented for the attractor and non-attractor phases. Further, reheating constraints, the scale dependent behavior of the couplings and the dynamical solution for the dilaton and Higgsotic fields are also presented. New sets of consistency relations between two, three and four point observables are also presented, which show significant deviation from canonical slow-roll models. Additionally, three possible theoretical proposals have been presented to overcome the tachyonic instability at the time of late time acceleration. Finally, we have also provided the bulk interpretation of the three and four point scalar correlation functions for completeness.
Noninvasive stress testing - Methodology for elimination of the phonocardiogram
NASA Technical Reports Server (NTRS)
Spodick, D. H.; Lance, V. Q.
1976-01-01
Measurement of cardiac responses by systolic time intervals (STI) requires extremely careful recording during actual stress-test performance. Previous work indicated no significant changes in the pulse transmission time (PTT) during exercise and other challenges. Since external STIs depend on the carotid pulse offset by the PTT as an aortic curve equivalent, a stable PTT implies that the timing of the carotid upstroke and the carotid incisura would respectively track the pre-ejection period and the aortic incisura. In ten subjects, STIs were recorded at supine rest, sitting, standing, during prompt and sustained squatting, and during isometric and dynamic exercise. The results demonstrated the tracking of both points. Coefficients of correlation and of determination were uniformly high for all challenges except isometric handgrip (IHG). Since left ventricular ejection time is obtained directly from the pulse curve, STI responses during stress testing can, with the exception of IHG, be measured without a phonocardiogram.
NASA Astrophysics Data System (ADS)
Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.
2012-04-01
We present a novel method for the parameter oriented analysis of mutual correlation between independent time series or between equivalent structures such as ordered data sets. The proposed method is based on the sliding window technique, defines a new type of correlation measure and can be applied to time series from all domains of science and technology, experimental or simulated. A specific parameter that can characterize the time series is computed for each window and a cross correlation analysis is carried out on the set of values obtained for the time series under investigation. We apply this method to the study of some currency daily exchange rates from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed and a tentative crisis prediction is presented.
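The sliding-window procedure described above can be sketched minimally: compute a characteristic parameter on each window of two series, then cross-correlate the two parameter sequences. The per-window standard deviation below is a stand-in for the Hurst exponent or intermittency parameter (true Hurst estimation is more involved); all names are illustrative.

```python
from statistics import mean, pstdev

def window_parameter(series, width, step, stat=pstdev):
    """Evaluate a characteristic statistic on sliding windows of a series."""
    return [stat(series[i:i + width])
            for i in range(0, len(series) - width + 1, step)]

def pearson(a, b):
    """Plain Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def windowed_cross_correlation(x, y, width=32, step=8):
    """Correlate the windowed parameter sequences of two time series."""
    return pearson(window_parameter(x, width, step),
                   window_parameter(y, width, step))
```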
Real-Time Model and Simulation Architecture for Half- and Full-Bridge Modular Multilevel Converters
NASA Astrophysics Data System (ADS)
Ashourloo, Mojtaba
This work presents an equivalent model and simulation architecture for real-time electromagnetic transient analysis of either half-bridge or full-bridge modular multilevel converters (MMCs) with 400 sub-modules (SMs) per arm. The proposed CPU/FPGA-based architecture is optimized for the parallel implementation of the presented MMC model on the FPGA and benefits from a high-throughput floating-point computational engine. The developed real-time simulation architecture is capable of simulating MMCs with 400 SMs per arm at a time step of 825 nanoseconds. To address the difficulties of implementing the sorting process, a modified odd-even bubble sort is presented in this work. Comparison of the results under various test scenarios reveals that the proposed real-time simulator reproduces the system responses of its corresponding off-line counterpart obtained from the PSCAD/EMTDC program.
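The abstract does not specify the authors' modification, but the textbook odd-even transposition sort it presumably builds on looks like this. It is hardware-friendly because the compare-and-swap operations within each phase are independent and can run as parallel comparator stages on an FPGA.

```python
def odd_even_sort(values):
    """Odd-even transposition sort: n alternating phases of independent
    compare-and-swap operations over even-indexed then odd-indexed pairs.
    After n phases the list is guaranteed sorted."""
    a = list(values)
    n = len(a)
    for phase in range(n):
        start = phase % 2  # 0: compare even pairs (0,1),(2,3)...; 1: odd pairs
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

In an MMC controller this kind of sort ranks SM capacitor voltages so the modulator can pick which sub-modules to insert or bypass each control cycle.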
Soil-moisture constants and their variation
Walter M. Broadfoot; Hubert D. Burke
1958-01-01
"Constants" like field capacity, liquid limit, moisture equivalent, and wilting point are used by most students and workers in soil moisture. These constants may be equilibrium points or other values that describe soil moisture. Their values under specific soil and cover conditions have been discussed at length in the literature, but few general analyses and...
40 CFR 63.653 - Monitoring, recordkeeping, and implementation plan for emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) For each emission point included in an emissions average, the owner or operator shall perform testing, monitoring, recordkeeping, and reporting equivalent to that required for Group 1 emission points complying... internal floating roof, external roof, or a closed vent system with a control device, as appropriate to the...
Derivation and evaluation of a labeled hedonic scale.
Lim, Juyun; Wood, Alison; Green, Barry G
2009-11-01
The objective of this study was to develop a semantically labeled hedonic scale (LHS) that would yield ratio-level data on the magnitude of liking/disliking of sensation equivalent to that produced by magnitude estimation (ME). The LHS was constructed by having 49 subjects who were trained in ME rate the semantic magnitudes of 10 common hedonic descriptors within a broad context of imagined hedonic experiences that included tastes and flavors. The resulting bipolar scale is statistically symmetrical around neutral and has a unique semantic structure. The LHS was evaluated quantitatively by comparing it with ME and the 9-point hedonic scale. The LHS yielded nearly identical ratings to those obtained using ME, which implies that its semantic labels are valid and that it produces ratio-level data equivalent to ME. Analyses of variance conducted on the hedonic ratings from the LHS and the 9-point scale gave similar results, but the LHS showed much greater resistance to ceiling effects and yielded normally distributed data, whereas the 9-point scale did not. These results indicate that the LHS has significant semantic, quantitative, and statistical advantages over the 9-point hedonic scale.
Stable plume rise in a shear layer.
Overcamp, Thomas J
2007-03-01
Solutions are given for plume rise assuming a power-law wind speed profile in a stably stratified layer for point and finite sources with initial vertical momentum and buoyancy. For a constant wind speed, these solutions simplify to the conventional plume rise equations in a stable atmosphere. In a shear layer, the point of maximum rise occurs further downwind and is slightly lower compared with the plume rise with a constant wind speed equal to the wind speed at the top of the stack. If the predictions with shear are compared with predictions for an equivalent average wind speed over the depth of the plume, the plume rise with shear is higher than plume rise with an equivalent average wind speed.
ETARA PC version 3.3 user's guide: Reliability, availability, maintainability simulation model
NASA Technical Reports Server (NTRS)
Hoffman, David J.; Viterna, Larry A.
1991-01-01
A user's manual describing an interactive, menu-driven, personal computer based Monte Carlo reliability, availability, and maintainability simulation program called event time availability reliability (ETARA) is discussed. Given a reliability block diagram representation of a system, ETARA simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair intervals as a function of exponential and/or Weibull distributions. Availability parameters such as equivalent availability, state availability (percentage of time as a particular output state capability), continuous state duration and number of state occurrences can be calculated. Initial spares allotment and spares replenishment on a resupply cycle can be simulated. The number of block failures are tabulated both individually and by block type, as well as total downtime, repair time, and time waiting for spares. Also, maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can be calculated over a cumulative period of time or at specific points in time.
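ETARA generates block failure and repair intervals from exponential and/or Weibull distributions and tabulates availability over a mission. A minimal sketch of that idea for a single repairable block with exponential times follows; this is not the ETARA program, and all names and parameter values are illustrative.

```python
import random

def simulate_availability(mtbf, mttr, mission_time, n_runs=2000, seed=1):
    """Monte Carlo availability of one repairable block that alternates
    between exponentially distributed up (mean mtbf) and down (mean mttr)
    intervals; returns the mean fraction of mission time spent up."""
    rng = random.Random(seed)
    total_up = 0.0
    for _ in range(n_runs):
        t, up = 0.0, 0.0
        while t < mission_time:
            ttf = rng.expovariate(1.0 / mtbf)        # time to next failure
            up += min(ttf, mission_time - t)         # credit up-time in mission
            t += ttf
            if t >= mission_time:
                break
            t += rng.expovariate(1.0 / mttr)         # repair (down) interval
        total_up += up / mission_time
    return total_up / n_runs
```

For long missions the estimate should approach the steady-state value MTBF / (MTBF + MTTR), which gives a quick sanity check on the simulation.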
Localised burst reconstruction from space-time PODs in a turbulent channel
NASA Astrophysics Data System (ADS)
Garcia-Gutierrez, Adrian; Jimenez, Javier
2017-11-01
The traditional proper orthogonal decomposition of the turbulent velocity fluctuations in a channel is extended to time under the assumption that the attractor is statistically stationary and can be treated as periodic for long-enough times. The objective is to extract space- and time-localised eddies that optimally represent the kinetic energy (and two-event correlation) of the flow. Using time-resolved data of a small-box simulation at Reτ = 1880, minimal for y/h ≲ 0.25, PODs are computed from the two-point spectral-density tensor Φ(kx, kz, y, y′, ω). They are Fourier components in x, z and time, and depend on y and on the temporal frequency ω, or, equivalently, on the convection velocity c = ω/kx. Although the latter depends on y, a spatially and temporally localised `burst' can be synthesised by adding a range of PODs with specific phases. The results are localised bursts that are amplified and tilted, in a time-periodic version of Orr-like behaviour. Funded by the ERC COTURB project.
Mathematical embryology: the fluid mechanics of nodal cilia
NASA Astrophysics Data System (ADS)
Smith, D. J.; Smith, A. A.; Blake, J. R.
2011-07-01
Left-right symmetry breaking is critical to vertebrate embryonic development; in many species this process begins with cilia-driven flow in a structure termed the `node'. Primary `whirling' cilia, tilted towards the posterior, transport morphogen-containing vesicles towards the left, initiating left-right asymmetric development. We review recent theoretical models based on the point-force stokeslet and point-torque rotlet singularities, explaining how rotation and surface-tilt produce directional flow. Analysis of image singularity systems enforcing the no-slip condition shows how tilted rotation produces a far-field `stresslet' directional flow, and how time-dependent point-force and time-independent point-torque models are in this respect equivalent. Associated slender body theory analysis is reviewed; this approach enables efficient and accurate simulation of three-dimensional time-dependent flow, time-dependence being essential in predicting features of the flow such as chaotic advection, which have subsequently been determined experimentally. A new model for the nodal flow utilising the regularized stokeslet method is developed, to model the effect of the overlying Reichert's membrane. Velocity fields and particle paths within the enclosed domain are computed and compared with the flow profiles predicted by previous `membrane-less' models. Computations confirm that the presence of the membrane produces flow-reversal in the upper region, but no continuous region of reverse flow close to the epithelium. The stresslet far-field is no longer evident in the membrane model, due to the depth of the cavity being of similar magnitude to the cilium length. Simulations predict that vesicles released within one cilium length of the epithelium are generally transported to the left via a `loopy drift' motion, sometimes involving highly unpredictable detours around leftward cilia [truncated]
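The point-force stokeslet mentioned above has a simple closed form. The sketch below evaluates only the free-space singularity, u = (1/(8πμ)) (I/r + r̂r̂/r)·F, with no image system or regularization, so it is a building block rather than the paper's full Reichert's-membrane model.

```python
import math

def stokeslet_velocity(x, x0, force, mu=1.0):
    """Velocity at point x due to a point force at x0 in unbounded Stokes
    flow: u_i = (1/(8*pi*mu)) * (delta_ij / r + r_i * r_j / r^3) * F_j."""
    r = [xi - x0i for xi, x0i in zip(x, x0)]
    rn = math.sqrt(sum(c * c for c in r))
    pref = 1.0 / (8.0 * math.pi * mu)
    rdotf = sum(ri * fi for ri, fi in zip(r, force))
    return [pref * (fi / rn + ri * rdotf / rn ** 3)
            for fi, ri in zip(force, r)]
```

The 1/r decay of this field is what makes the far-field behaviour of tilted-cilium models dominated by the slower-decaying image-system (stresslet) terms near a no-slip boundary.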
New equivalent lumped electrical circuit for piezoelectric transformers.
Gonnard, Paul; Schmitt, P M; Brissaud, Michel
2006-04-01
A new equivalent circuit is proposed for a contour-vibration-mode piezoelectric transformer (PT). It is shown that the usual lumped equivalent circuit derived from the conventional Mason approach is not accurate. The proposed circuit, built on experimental measurements, makes an explicit distinction between the elastic energies stored respectively in the primary and secondary parts. The experimental and theoretical resonance frequencies with the secondary in open or short circuit are in good agreement, as are the output "voltage-current" characteristic and the optimum-efficiency working point. This circuit can be extended to various PT configurations and appears to be a useful tool for modeling electronic devices that integrate piezoelectric transformers.
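In any lumped equivalent circuit of this kind, each motional branch resonates at the familiar series LC frequency. A minimal sketch, with hypothetical motional parameters (the paper reports no numerical L or C values):

```python
import math

def series_resonance_hz(L, C):
    """Series-branch resonance of a lumped equivalent circuit: f = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical motional inductance and capacitance for one branch
f_r = series_resonance_hz(L=10e-3, C=1e-9)  # on the order of 50 kHz
```

Fitting such branch parameters separately for the primary and secondary sides is what lets the circuit keep the two stored elastic energies distinct.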
Facts about Hospital Worker Safety
... Transferred FTE full-time employee (or full-time equivalent) HIPAA Health Insurance Portability and Accountability Act MSD ... injury and illness rates per 100 full-time equivalent employees (FTEs)—also known as the Total Case ...
A mission-based productivity compensation model for an academic anesthesiology department.
Reich, David L; Galati, Maria; Krol, Marina; Bodian, Carol A; Kahn, Ronald A
2008-12-01
We replaced a nearly fixed-salary academic physician compensation model with a mission-based productivity model with the goal of improving attending anesthesiologist productivity. The base salary system was stratified according to rank and clinical experience. The supplemental pay structure was linked to electronic patient records and a scheduling database to award points for clinical activity; educational, research, and administrative points systems were constructed in parallel. We analyzed monthly American Society of Anesthesiologist (ASA) unit data for operating room activity and physician compensation from 2000 through mid-2007, excluding the 1-yr implementation period (July 2004-June 2005) for the new model. Comparing 2005-2006 with 2000-2004, quarterly ASA units increased by 14% (P = 0.0001) and quarterly ASA units per full-time equivalent increased by 31% (P < 0.0001), while quarterly ASA units per anesthetizing location decreased by 10% (P = 0.046). Compared with a baseline year (2001), Instructor and Assistant Professor faculty compensation increased more than Associate Professor and Professor faculty (P < 0.001) in both pre- and postimplementation periods. There were larger compensation increases for the postimplementation period compared with preimplementation across faculty rank groupings (P < 0.0001). Academic and educational output was stable. Implementing a productivity-based faculty compensation model in an academic department was associated with increased mean supplemental pay with relatively fewer faculty. ASA units per month and ASA units per operating room full-time equivalent increased, and these metrics are the most likely drivers of the increased compensation. This occurred despite a slight decrease in clinical productivity as measured by ASA units per anesthetizing location. Academic and educational output was stable.
Lin, Yu-Ching; Wu, Wei-Ting; Hsu, Yu-Chun; Han, Der-Sheng; Chang, Ke-Vin
2018-02-01
To explore the effectiveness of botulinum toxin compared with non-surgical treatments in patients with lateral epicondylitis. Data sources including PubMed, Scopus, Embase and Airity Library were searched from the earliest record to February 2017. Study design, patients' characteristics, dosage/brand of botulinum toxin, injection techniques, and measurements of pain and hand grip strength were retrieved. The standardized mean differences (SMDs) in pain relief and grip strength reduction were calculated at the following time points: 2-4, 8-12, and 16 weeks or more after injection. Six randomized controlled trials (321 participants) comparing botulinum toxin with placebo or corticosteroid injections were included. Compared with placebo, botulinum toxin injection significantly reduced pain at all three time points (SMD, -0.729, 95% confidence interval [CI], -1.286 to -0.171; SMD, -0.446, 95% CI, -0.740 to -0.152; SMD, -0.543, 95% CI, -0.978 to -0.107, respectively). Botulinum toxin was less effective than corticosteroid at 2-4 weeks (SMD, 1.153; 95% CI, 0.568-1.737), and the two treatments appeared similar in efficacy after 8 weeks. Injection site and dosage/brand did not affect effectiveness. Botulinum toxin decreased grip strength 2-4 weeks after injection, and a high equivalent dose could extend its paralytic effects to 8-12 weeks. When treating lateral epicondylitis, botulinum toxin was superior to placebo and its effect could last for 16 weeks. Corticosteroid and botulinum toxin injections were largely equivalent, except that corticosteroid injections provided better pain relief in the early stages and were associated with less grip weakness in the first 12 weeks.
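The SMDs pooled above are computed, trial by trial, from group means and standard deviations. A minimal sketch of one common recipe (Cohen's d with a large-sample CI; the group summaries below are hypothetical, not taken from the included trials):

```python
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d (treatment minus control) with an approximate 95% CI."""
    # Pooled standard deviation across the two arms
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Large-sample standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical pain scores (lower = better) for one toxin-vs-placebo trial
d, (lo, hi) = smd_with_ci(3.1, 1.8, 25, 4.4, 1.9, 26)
```

A meta-analysis would then combine such per-trial estimates with inverse-variance (fixed- or random-effects) weights; a pooled CI excluding zero, as for the pain outcomes above, indicates a significant effect.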
Kaur, Primal; Chow, Vincent; Zhang, Nan; Moxness, Michael; Kaliyaperumal, Arunan; Markus, Richard
2017-01-01
Objective To demonstrate pharmacokinetic (PK) similarity of biosimilar candidate ABP 501 relative to adalimumab reference product from the USA and European Union (EU) and evaluate safety, tolerability and immunogenicity of ABP 501. Methods Randomised, single-blind, single-dose, three-arm, parallel-group study; healthy subjects were randomised to receive ABP 501 (n=67), adalimumab (USA) (n=69) or adalimumab (EU) (n=67) 40 mg subcutaneously. Primary end points were area under the serum concentration-time curve from time 0 extrapolated to infinity (AUCinf) and the maximum observed concentration (Cmax). Secondary end points included safety and immunogenicity. Results AUCinf and Cmax were similar across the three groups. The geometric mean ratio (GMR) of AUCinf was 1.11 between ABP 501 and adalimumab (USA), and 1.04 between ABP 501 and adalimumab (EU). The GMR of Cmax was 1.04 between ABP 501 and adalimumab (USA) and 0.96 between ABP 501 and adalimumab (EU). The 90% CIs for the GMRs of AUCinf and Cmax were within the prespecified standard PK equivalence criteria of 0.80 to 1.25. Treatment-related adverse events were mild to moderate and were reported for 35.8%, 24.6% and 41.8% of subjects in the ABP 501, adalimumab (USA) and adalimumab (EU) groups, respectively; the incidence of antidrug antibodies (ADAbs) was similar among the study groups. Conclusions Results of this study demonstrated PK similarity of ABP 501 with adalimumab (USA) and adalimumab (EU) after a single 40-mg subcutaneous injection. No new safety signals with ABP 501 were identified. The safety and tolerability of ABP 501 were similar to the reference products, and similar ADAb rates were observed across the three groups. Trial registration number EudraCT number 2012-000785-37; Results. PMID:27466231
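The 0.80-1.25 criterion is applied to a 90% CI for the geometric mean ratio computed on the log scale. A minimal sketch with hypothetical paired log-ratios and a normal-approximation CI (the actual study was parallel-group and would compare group means via ANOVA on log-transformed AUCinf and Cmax, with a t quantile):

```python
import math
import statistics

def gmr_90ci(test_auc, ref_auc):
    """Geometric mean ratio (test/reference) with a 90% CI on the log scale."""
    diffs = [math.log(t) - math.log(r) for t, r in zip(test_auc, ref_auc)]
    mean, sd = statistics.mean(diffs), statistics.stdev(diffs)
    # z = 1.645 gives a two-sided 90% normal-approximation interval
    half = 1.645 * sd / math.sqrt(len(diffs))
    return tuple(math.exp(v) for v in (mean - half, mean, mean + half))

# Hypothetical AUC values (units arbitrary)
lo, gmr, hi = gmr_90ci([105.0, 98.0, 110.0, 101.0], [100.0, 95.0, 104.0, 99.0])
equivalent = 0.80 <= lo and hi <= 1.25  # prespecified PK equivalence bounds
```

Equivalence is declared only when the whole interval, not just the point estimate, sits inside 0.80-1.25.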
Olsho, Lauren Ew; Klerman, Jacob A; Wilde, Parke E; Bartlett, Susan
2016-08-01
US fruit and vegetable (FV) intake remains below recommendations, particularly for low-income populations. Evidence on effectiveness of rebates in addressing this shortfall is limited. This study evaluated the USDA Healthy Incentives Pilot (HIP), which offered rebates to Supplemental Nutrition Assistance Program (SNAP) participants for purchasing targeted FVs (TFVs). As part of a randomized controlled trial in Hampden County, Massachusetts, 7500 randomly selected SNAP households received a 30% rebate on TFVs purchased with SNAP benefits. The remaining 47,595 SNAP households in the county received usual benefits. Adults in 5076 HIP and non-HIP households were randomly sampled for telephone surveys, including 24-h dietary recall interviews. Surveys were conducted at baseline (1-3 mo before implementation) and in 2 follow-up rounds (4-6 mo and 9-11 mo after implementation). 2784 adults (1388 HIP, 1396 non-HIP) completed baseline interviews; data were analyzed for 2009 adults (72%) who also completed ≥1 follow-up interview. Regression-adjusted mean TFV intake at follow-up was 0.24 cup-equivalents/d (95% CI: 0.13, 0.34 cup-equivalents/d) higher among HIP participants. Across all fruit and vegetables (AFVs), regression-adjusted mean intake was 0.32 cup-equivalents/d (95% CI: 0.17, 0.48 cup-equivalents/d) higher among HIP participants. The AFV-TFV difference was explained by greater intake of 100% fruit juice (0.10 cup-equivalents/d; 95% CI: 0.02, 0.17 cup-equivalents/d); juice purchases did not earn the HIP rebate. Refined grain intake was 0.43 ounce-equivalents/d lower (95% CI: -0.69, -0.16 ounce-equivalents/d) among HIP participants, possibly indicating substitution effects. Increased AFV intake and decreased refined grain intake contributed to higher Healthy Eating Index-2010 scores among HIP participants (4.7 points; 95% CI: 2.4, 7.1 points). 
The HIP significantly increased FV intake among SNAP participants, closing ∼20% of the gap relative to recommendations and increasing dietary quality. More research on mechanisms of action is warranted. The HIP trial was registered at clinicaltrials.gov as NCT02651064. © 2016 American Society for Nutrition.
Moore, J.N.; Christenson, B.W.; Allis, R.G.; Browne, P.R.L.; Lutz, S.J.
2004-01-01
Acidic steam condensates in volcanic systems or shallow, oxygenated geothermal environments are typically enriched in SO4 and poor in Cl. These fluids produce distinctive alteration-induced assemblages as they descend. At Karaha-Telaga Bodas, located on the flank of Galunggung Volcano, Indonesia, neutralization of descending acid waters has resulted in the successive appearance of 1) advanced argillic alteration characterized by alunite, clay minerals and pyrite, 2) anhydrite, pyrite and interlayered sheet silicates, and 3) carbonates. Minor tourmaline, fluorite and native sulfur also are present locally, reflecting interactions with discharging magmatic gases. Water-rock interactions were modeled at temperatures up to 250 °C using the composition of acidic lake water from Telaga Bodas and that of a typical andesite as reactants. The simulations predict mineral distributions consistent with the observed assemblages and a decrease in the freezing-point depression of the fluid with increasing temperature. Fluids trapped in anhydrite, calcite and fluorite display a similar decrease in their freezing-point depressions, from 2.8 to 1.5 °C, as homogenization temperatures increase from 160 to 205 °C. The simulations indicate that the progressive change in fluid composition is due mainly to the incorporation of SO4 into the newly formed hydrothermal minerals. The salinities of fluid inclusions containing Cl-deficient steam condensates are better expressed in terms of H2SO4 equivalents than the commonly used NaCl equivalents. At solute concentrations >1.5 molal, freezing-point depressions represented as NaCl equivalents overestimate the salinity of Cl-poor waters. At lower concentrations, differences between apparent salinities calculated as NaCl and H2SO4 equivalents are negligible.
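NaCl-equivalent salinities are conventionally read off from the measured freezing-point depression. A minimal sketch using the widely cited Bodnar (1993) polynomial fit (an assumption here; the paper itself argues that H2SO4 equivalents are the better representation for these Cl-poor fluids):

```python
def wtpct_nacl_equiv(theta):
    """NaCl-equivalent salinity (wt%) from freezing-point depression theta (°C).
    Coefficients follow the commonly used Bodnar (1993) fit."""
    return 1.78 * theta - 0.0442 * theta**2 + 0.000557 * theta**3

# Freezing-point depressions reported for the trapped fluids
for theta in (2.8, 1.5):
    print(f"theta = {theta:.1f} °C -> {wtpct_nacl_equiv(theta):.2f} wt% NaCl equiv.")
```

The same depression converted through an H2SO4 model yields a different apparent salinity above ~1.5 molal, which is the overestimate the abstract warns about.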
Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang
2015-05-01
Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium, however, the reconstructed field still contains the convection effect that might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiations in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method on the source identification and sound radiation modeling.
Fluctuation and reproducibility of exposure and effect of insulin glargine in healthy subjects.
Becker, R H A; Frick, A D; Teichert, L; Nosek, L; Heinemann, L; Heise, T; Rave, K
2008-11-01
Low diurnal fluctuation and high day-to-day reproducibility in exposure and effect characterize beneficial basal insulin products. Two insulin glargine (LANTUS) formulations, without (R) or with (T) polysorbate-20 (an excipient added to minimize unfolding of proteins and subsequent formation of fibril structures), were assessed for equivalence in exposure and effect, and for aspects of fluctuation and reproducibility in time-concentration and time-action profiles. A dose of 0.4 U/kg was subcutaneously administered to 24 healthy subjects in a two-sequence (R-T-R-T or T-R-T-R), randomized, four-way crossover trial utilizing 30-h Biostator-based euglycaemic glucose clamps. Identical serum insulin glargine concentration and time-action profiles established average, individual and population equivalence in insulin exposure and effect. Point estimates for the 24-h area under the curve for insulin (INS-AUC(0-24 h)) and glucose infusion rates (GIR-AUC(0-24 h)) were 97% [90% confidence interval (CI): 91-103%] and 100% (88-114%), respectively. Within-subject variability (coefficient of variation) for INS-AUC(0-24 h) and GIR-AUC(0-24 h) was 19% (95% CI: 14-25%) and 34% (24-43%), respectively. The diurnal relative fluctuation of the serum insulin glargine concentration was 20% (95% CI: 19-21%). Insulin glargine in either formulation presents with high day-to-day reproducibility of a uniform release after injection, enabling effective basal insulin supplementation.
Behar, Vera; Adam, Dan
2005-12-01
An effective aperture approach is used for optimization of a sparse synthetic transmit aperture (STA) imaging system with coded excitation and frequency division. A new two-stage algorithm is proposed for optimization of both the positions of the transmit elements and the weights of the receive elements. In order to increase the signal-to-noise ratio in a synthetic aperture system, temporal encoding of the excitation signals is employed. When comparing the excitation by linear frequency modulation (LFM) signals and phase shift key modulation (PSKM) signals, the analysis shows that chirps are better for excitation, since at the output of a compression filter the sidelobes generated are much smaller than those produced by the binary PSKM signals. Here, an implementation of a fast STA imaging is studied by spatial encoding with frequency division of the LFM signals. The proposed system employs a 64-element array with only four active elements used during transmit. The two-dimensional point spread function (PSF) produced by such a sparse STA system is compared to the PSF produced by an equivalent phased array system, using the Field II simulation program. The analysis demonstrates the superiority of the new sparse STA imaging system while using coded excitation and frequency division. Compared to a conventional phased array imaging system, this system acquires images of equivalent quality 60 times faster, when the transmit elements are fired in pairs consecutively and the power level used during transmit is very low. The fastest acquisition time is achieved when all transmit elements are fired simultaneously, which improves detectability, but at the cost of a slight degradation of the axial resolution. In real-time implementation, however, it must be borne in mind that the frame rate of a STA imaging system depends not only on the acquisition time of the data but also on the processing time needed for image reconstruction. 
Compared with phased-array imaging, a significant increase in the frame rate of a STA imaging system is possible only if an equally time-efficient algorithm is used for image reconstruction.
Jiang, Jifa; Niu, Lei
2017-04-01
We study the asymptotic behavior of the competitive Leslie/Gower model (map) [Formula: see text]It is shown that T unconditionally admits a globally attracting 1-codimensional invariant hypersurface [Formula: see text], called carrying simplex, such that every nontrivial orbit is asymptotic to one in [Formula: see text]. More general and easily checked conditions to guarantee the existence of carrying simplex for competitive maps are provided. An equivalence relation is defined relative to local stability of fixed points on [Formula: see text] (the boundary of [Formula: see text]) on the space of all three-dimensional Leslie/Gower models. Using a formula on the sum of the indices of all fixed points on the carrying simplex for three-dimensional maps, we list the 33 stable equivalence classes in terms of simple inequalities on the parameters [Formula: see text] and [Formula: see text] and draw their orbits on [Formula: see text]. In classes 1-18, every nontrivial orbit tends to a fixed point on [Formula: see text]. In classes 19-25, each map possesses a unique positive fixed point which is a saddle on [Formula: see text], and hence Neimark-Sacker bifurcations do not occur. Neimark-Sacker bifurcation does occur within each of classes 26-31, while it does not occur in class 32. Each map from class 27 admits a heteroclinic cycle, which forms the boundary of [Formula: see text]. The criteria on the stability of heteroclinic cycles are also given. This classification makes it possible to further investigate various dynamical properties in respective class.
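The competitive Leslie/Gower map (whose formula is elided above) is commonly written \(x_i \mapsto \lambda_i x_i / (1 + \sum_j a_{ij} x_j)\); that standard form is assumed here. A minimal sketch iterating a hypothetical symmetric three-species case whose interior fixed point is stable, so every positive orbit settles onto it, the behaviour of classes 1-18:

```python
def leslie_gower_step(x, lam, A):
    """One step of the competitive Leslie/Gower map
    x_i -> lam_i * x_i / (1 + (A x)_i), written without numpy for clarity."""
    n = len(x)
    return [lam[i] * x[i] / (1.0 + sum(A[i][j] * x[j] for j in range(n)))
            for i in range(n)]

# Hypothetical symmetric parameters (lam_i > 1, weak off-diagonal competition)
lam = [2.0, 2.0, 2.0]
A = [[1.0, 0.5, 0.5],
     [0.5, 1.0, 0.5],
     [0.5, 0.5, 1.0]]

x = [0.1, 0.2, 0.3]
for _ in range(500):
    x = leslie_gower_step(x, lam, A)
# For these parameters the interior fixed point is x_i = 0.5 for all i
```

For this symmetric choice the interior fixed point solves \(2/(1 + 2x^*) = 1\), i.e. \(x^* = 0.5\), and the linearization there has spectral radius below 1, so the orbit converges; other parameter choices land in the other classes (saddles, Neimark-Sacker bifurcations, heteroclinic cycles) described above.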
NASA Astrophysics Data System (ADS)
Li, Wang; Takahashi, C.; Hussain, F.; Hong, Yi; Nham, H. S.; Chan, K. H.; Lee, L. T.; Chahine, K.
2007-01-01
This APMP key comparison of humidity measurements using a dew point meter as a transfer standard was carried out among eight national metrology institutes from February 1999 to January 2001. NMC/SPRING (Singapore) was the pilot laboratory, and a chilled-mirror dew point meter provided by NMIJ was used as the transfer standard. The transfer standard was calibrated by each participating institute against local humidity standards in terms of frost and dew point temperature. Each institute selected its frost/dew point temperature calibration points within the range from -70 °C to 20 °C frost/dew point in 5 °C steps. The majority of participating institutes measured from -60 °C to 20 °C frost/dew point, and a simple mean evaluation was performed in this range. The differences between the institute values and the simple means are within two standard deviations of the mean values for all participating institutes. Bilateral equivalence was analysed in terms of pair difference and the single-parameter Quantified Demonstrated Equivalence. The results are presented in the report, which appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
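The simple-mean screening described above reduces to one check per calibration point: does each institute's value lie within two standard deviations of the unweighted mean? A minimal sketch with hypothetical institute results (the actual comparison also propagated each laboratory's stated uncertainty):

```python
import statistics

def within_two_sd(values):
    """Flag each laboratory value lying within two standard deviations
    of the simple (unweighted) mean of all values."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v, abs(v - mean) <= 2 * sd) for v in values]

# Hypothetical institute frost-point results (°C) at one calibration point
results = within_two_sd([-60.02, -59.98, -60.05, -60.01, -59.97])
```

Bilateral (pairwise) equivalence then compares each pair difference against its combined uncertainty rather than against the spread of the whole group.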
40 CFR 421.91 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Metallurgical Acid Plants Subcategory § 421... percent equivalent sulfuric acid, H2 SO4 capacity. [50 FR 38342, Sept. 20, 1985] ...
Laughlin, Walter A; Fleisig, Glenn S; Aune, Kyle T; Diffendaffer, Alek Z
2016-01-01
Swing trajectory and ground reaction forces (GRF) of 30 collegiate baseball batters hitting a pitched ball were compared between a standard bat, a bat with extra weight about its barrel, and a bat with extra weight in its handle. It was hypothesised that when compared to a standard bat, only a handle-weighted bat would produce equivalent bat kinematics. It was also hypothesised that hitters would not produce equivalent GRFs for each weighted bat, but would maintain equivalent timing when compared to a standard bat. Data were collected utilising a 500 Hz motion capture system and 1,000 Hz force plate system. Data between bats were considered equivalent when the 95% confidence interval of the difference was contained entirely within ±5% of the standard bat mean value. The handle-weighted bat had equivalent kinematics, whereas the barrel-weighted bat did not. Both weighted bats had equivalent peak GRF variables. Neither weighted bat maintained equivalence in the timing of bat kinematics and some peak GRFs. The ability to maintain swing kinematics with a handle-weighted bat may have implications for swing training and warm-up. However, altered timings of kinematics and kinetics require further research to understand the implications on returning to a conventionally weighted bat.
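The equivalence rule used above is easy to state computationally: the 95% CI of the between-bat difference must fall entirely inside ±5% of the standard-bat mean. A minimal sketch (the CI values and the bat-speed units are hypothetical, for illustration only):

```python
def equivalent_to_standard(ci_low, ci_high, standard_mean, margin=0.05):
    """Equivalence as defined in the study: the 95% CI of the difference must
    lie entirely within ±(margin * |standard-bat mean|)."""
    bound = margin * abs(standard_mean)
    return -bound <= ci_low and ci_high <= bound

# Hypothetical bat-speed difference CI (m/s) vs. a standard-bat mean of 30 m/s
print(equivalent_to_standard(-0.8, 1.2, 30.0))   # CI within ±1.5
print(equivalent_to_standard(-2.0, 1.0, 30.0))   # lower limit outside ±1.5
```

Note the asymmetry of the logic: a wide CI fails equivalence even when the point estimate is near zero, which is why "not significantly different" and "equivalent" are not the same claim.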
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berezhiani, Lasha; Khoury, Justin; Wang, Junpu, E-mail: lashaber@gmail.com, E-mail: jkhoury@sas.upenn.edu, E-mail: jwang217@jhu.edu
Single-field perturbations satisfy an infinite number of consistency relations constraining the squeezed limit of correlation functions at each order in the soft momentum. These can be understood as Ward identities for an infinite set of residual global symmetries, or equivalently as Slavnov-Taylor identities for spatial diffeomorphisms. In this paper, we perform a number of novel, non-trivial checks of the identities in the context of single field inflationary models with arbitrary sound speed. We focus for concreteness on identities involving 3-point functions with a soft external mode, and consider all possible scalar and tensor combinations for the hard-momentum modes. In all these cases, we check the consistency relations up to and including cubic order in the soft momentum. For this purpose, we compute for the first time the 3-point functions involving 2 scalars and 1 tensor, as well as 2 tensors and 1 scalar, for arbitrary sound speed.
Multipole correction of atomic monopole models of molecular charge distribution. I. Peptides
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Keller, D. A.; Ornstein, R. L.; Rein, R.
1993-01-01
The defects in atomic monopole models of molecular charge distribution have been analyzed for several model-blocked peptides and compared with accurate quantum chemical values. The results indicate that the angular characteristics of the molecular electrostatic potential around functional groups capable of forming hydrogen bonds can be considerably distorted within various models relying upon isotropic atomic charges only. It is shown that these defects can be corrected by augmenting the atomic point charge models by cumulative atomic multipole moments (CAMMs). Alternatively, sets of off-center atomic point charges could be automatically derived from respective multipoles, providing approximately equivalent corrections. For the first time, correlated atomic multipoles have been calculated for N-acetyl, N'-methylamide-blocked derivatives of glycine, alanine, cysteine, threonine, leucine, lysine, and serine using the MP2 method. The role of the correlation effects in the peptide molecular charge distribution are discussed.
Multi-Point Interferometric Rayleigh Scattering using Dual-Pass Light Recirculation
NASA Technical Reports Server (NTRS)
Bivolaru, Daniel; Danehy, Paul M.; Cutler, Andrew D.
2008-01-01
This paper describes for the first time an interferometric Rayleigh scattering system using dual-pass light recirculation (IRS-LR) capable of simultaneously measuring, at multiple points, two orthogonal components of flow velocity in combustion flows using single-shot laser probing. An additional optical path containing the interferometer input mirror, a quarter-wave plate, a polarization-dependent beam combiner, and a high-reflectivity mirror partially recirculates the light that is rejected by the interferometer. Temporally and spatially resolved acquisitions of Rayleigh spectra in a large-scale combustion-heated supersonic axisymmetric jet were performed to demonstrate the technique. Recirculation of Rayleigh-scattered light increases the number of photons analyzed by the system by up to a factor of 1.8 compared with previous configurations. This is equivalent to performing measurements with less laser energy, or performing measurements with the previous system in gas flows at higher temperatures.
Analysis of ICESat Data Using Kalman Filter and Kriging to Study Height Changes in East Antarctica
NASA Technical Reports Server (NTRS)
Herring, Thomas A.
2005-01-01
We analyze ICESat-derived heights collected between Feb. 03 and Nov. 04 using a kriging/Kalman filtering approach to investigate height changes in East Antarctica. The model's parameters are the height change relative to an a priori static digital height model, a seasonal signal expressed as an amplitude β and phase θ, and a height-change rate dh/dt for each (100 km)^2 block. From the Kalman filter results, dh/dt has a mean of -0.06 m/yr in the flat interior of East Antarctica. Spatially correlated pointing errors in the current data releases give uncertainties in the range of 0.06 m/yr, making height change detection unreliable at this time. Our test shows that when using all available data with pointing knowledge equivalent to that of Laser 2a, height change detection with an accuracy level of 0.02 m/yr can be achieved over flat terrains in East Antarctica.
NASA Technical Reports Server (NTRS)
Tacina, K. M.; Chang, C. T.; Lee, P.; Mongia, H.; Podboy, D. P.; Dam, B.
2015-01-01
Dynamic pressure measurements were taken during flame-tube emissions testing of three second-generation swirl-venturi lean direct injection (SV-LDI) combustor configurations. These measurements show that combustion dynamics were typically small. However, a small number of points showed high combustion dynamics, with peak-to-peak dynamic pressure fluctuations above 0.5 psi. High combustion dynamics occurred at low inlet temperatures in all three SV-LDI configurations, so combustion dynamics were explored further at low-temperature conditions. A point with greater than 1.5 psi peak-to-peak dynamic pressure fluctuations was identified at an inlet temperature of 450 °F, a pressure of 100 psia, an air pressure drop of 3%, and an overall equivalence ratio of 0.35. This is an off-design condition: the temperature and pressure are typical of 7% power conditions, but the equivalence ratio is high. At this condition, the combustion dynamics depended strongly on the fuel staging. Combustion dynamics could be reduced significantly without changing the overall equivalence ratio by shifting the fuel distribution between stages. Shifting the fuel distribution also decreased NOx emissions.
NASA Astrophysics Data System (ADS)
Chen, Xi; Zhong, Jiaqi; Song, Hongwei; Zhu, Lei; Wang, Jin; Zhan, Mingsheng
2014-08-01
Vibrational noise is one of the most important noise sources limiting the performance of non-isotope atom-interferometer (AI)-based weak-equivalence-principle (WEP) test experiments. By analyzing the vibration-induced phases, we find that, although the induced phases are not completely common, their ratio is always a constant at every experimental data point, a fact not fully utilized in the traditional elliptic curve-fitting method. From this observation, we propose a strategy that can greatly suppress the vibration-induced phase noise by stabilizing the Raman laser frequencies at high precision and controlling the scanning-phase ratio. The noise rejection ratio can be as high as 10^15 with arbitrary dual-species AIs. Our method provides a Lissajous curve, and the shape of the curve indicates the breakdown of the weak-equivalence-principle signal. We then derive an estimator for the differential phase of the Lissajous curve. This strategy could be helpful in extending the candidate atomic species for high-precision AI-based WEP-test experiments.
NASA Astrophysics Data System (ADS)
Young, Frederic; Siegel, Edward
Cook-Levin theorem theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!
Hawking radiation, Unruh radiation, and the equivalence principle.
Singleton, Douglas; Wilburn, Steve
2011-08-19
We compare the response function of an Unruh-DeWitt detector for different space-times and different vacua and show that there is a detailed violation of the equivalence principle. In particular comparing the response of an accelerating detector to a detector at rest in a Schwarzschild space-time we find that both detectors register thermal radiation, but for a given, equivalent acceleration the fixed detector in the Schwarzschild space-time measures a higher temperature. This allows one to locally distinguish the two cases. As one approaches the horizon the two temperatures have the same limit so that the equivalence principle is restored at the horizon. © 2011 American Physical Society
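The comparison can be summarized with the standard textbook formulas, stated here as background in geometric units \(G = c = \hbar = k_B = 1\) (not reproduced from the paper itself). A uniformly accelerated Rindler detector sees the Unruh temperature, while a detector held fixed at radius \(r\) in Schwarzschild has proper acceleration \(a(r)\) and measures the Tolman-shifted local Hawking temperature:

\[
T_U = \frac{a}{2\pi}, \qquad
a(r) = \frac{M}{r^2\sqrt{1 - 2M/r}}, \qquad
T_{\mathrm{loc}}(r) = \frac{1}{8\pi M \sqrt{1 - 2M/r}},
\]

so that

\[
\frac{T_{\mathrm{loc}}(r)}{T_U\!\big(a(r)\big)} \;=\; \frac{r^2}{4M^2} \;\ge\; 1,
\]

with equality only as \(r \to 2M\). The fixed Schwarzschild detector is thus locally hotter than an equally accelerated Rindler detector, allowing the two situations to be distinguished, and the two temperatures coincide at the horizon, where the equivalence principle is restored.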
Detector-device-independent quantum key distribution: Security analysis and fast implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boaron, Alberto; Korzh, Boris; Boso, Gianluca
One of the most pressing issues in quantum key distribution (QKD) is the problem of detector side-channel attacks. To overcome this problem, researchers proposed an elegant “time-reversal” QKD protocol called measurement-device-independent QKD (MDI-QKD), which is based on time-reversed entanglement swapping. However, MDI-QKD is more challenging to implement than standard point-to-point QKD. Recently, an intermediary QKD protocol called detector-device-independent QKD (DDI-QKD) has been proposed to overcome the drawbacks of MDI-QKD, with the hope that it would eventually lead to a more efficient detector side-channel-free QKD system. Here, we analyze the security of DDI-QKD and elucidate its security assumptions. We find that DDI-QKD is not equivalent to MDI-QKD, but its security can be demonstrated with reasonable assumptions. On the more practical side, we consider the feasibility of DDI-QKD and present a fast experimental demonstration (clocked at 625 MHz), capable of secret key exchange up to more than 90 km.
Knapp, Herschel; Chan, Kee; Anaya, Henry D; Goetz, Matthew B
2011-06-01
We successfully created and implemented an effective HIV rapid testing training and certification curriculum using traditional in-person training at multiple sites within the U.S. Department of Veterans Affairs (VA) Healthcare System. Considering the multitude of geographically remote facilities in the nationwide VA system, coupled with the expansion of HIV diagnostics, we developed an alternate training method that is affordable, efficient, and effective. Using materials initially developed for in-person HIV rapid test in-services, we used a distance learning model to offer this training via live audiovisual online technology to educate clinicians at a remote outpatient primary care VA facility. Participants' evaluation metrics showed that this form of remote education is equivalent to in-person training; additionally, HIV testing rates increased considerably in the months following this intervention. Although there is a one-time setup cost associated with this remote training protocol, there is potential cost savings associated with the point-of-care nurse manager's time productivity by using the Internet in-service learning module for teaching HIV rapid testing. If additional in-service training modules are developed into Internet-based format, there is the potential for additional cost savings. Our cost analysis demonstrates that the remote in-service method provides a more affordable and efficient alternative compared with in-person training. The online in-service provided training that was equivalent to in-person sessions based on first-hand supervisor observation, participant satisfaction surveys, and follow-up results. This method saves time and money, requires fewer personnel, and affords access to expert trainers regardless of geographic location. Further, it is generalizable to training beyond HIV rapid testing. 
Based on these consistent implementation successes, we plan to expand use of online training to include remote VA satellite facilities spanning several states for a variety of diagnostic devices. Ultimately, Internet-based training has the potential to provide "big city" quality of care to patients at remote (rural) clinics.
Domain switching kinetics in ferroelectric-resistive BiFeO3 thin film memories
NASA Astrophysics Data System (ADS)
Meng, Jianwei; Jiang, Jun; Geng, Wenping; Chen, Zhihui; Zhang, Wei; Jiang, Anquan
2015-02-01
We fabricated (00l) BiFeO3 (BFO) thin films in different growth modes on SrRuO3/SrTiO3 substrates using a pulsed laser deposition technique. X-ray diffraction patterns show an out-of-plane lattice constant of 4.03 Å and ferroelectric polarization of 82 µC/cm2 for the BFO thin film grown in a layer-by-layer mode (2D-BFO), larger than the 3.96 Å and 51 µC/cm2 of the thin film grown in the 3D-island formation mode (3D-BFO). The 2D-BFO thin film at 300 K shows switchable on/off diode currents upon polarization flipping near a negative coercive voltage, which are absent from the 3D-BFO thin film. Using a positive-up-negative-down pulse characterization technique, we measured domain switching current transients as well as polarization-voltage (Pf-Vf) hysteresis loops in both semiconducting thin films. Pf-Vf hysteresis loops after a 1 µs retention time show the preferred domain orientation pointing to the bottom electrodes in the 3D-BFO thin film. The poor retention of the domains pointing to the top electrodes is improved considerably in the 2D-BFO thin film. From these measurements, we extracted the dependence of domain switching time on coercive voltage at temperatures of 78-300 K. From these dependences, we found that coercive voltages in semiconducting ferroelectric thin films are much higher than those in insulating thin films, disobeying the traditional Merz equation. Finally, an equivalent resistance model describing free-carrier compensation of the front domain-boundary charge is developed to interpret this difference. This equivalent resistance can be extracted consistently either from the dependence of domain switching time on coercive voltage or from the dependence of domain switching current on applied voltage, and it drops almost linearly with temperature, reaching zero in a ferroelectric insulator at 78 K.
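The Merz-law behaviour against which these coercive voltages are compared can be sketched numerically: under the classical Merz equation the switching time is exponential in the inverse field, so ln(t_sw) is linear in 1/E and the activation field is the slope. The parameters below are illustrative assumptions, not values fitted to the BFO data.

```python
import numpy as np

# Classical Merz law: t_sw = t0 * exp(Ea / E), so ln(t_sw) is linear in 1/E.
# t0 and Ea are illustrative assumptions, not values fitted to the BFO films.
t0 = 1e-9                              # attempt time, s
Ea = 5e6                               # activation field, V/m
E = np.linspace(1e6, 5e6, 20)          # applied fields, V/m
t_sw = t0 * np.exp(Ea / E)             # Merz-law switching times

# A straight-line fit of ln(t_sw) against 1/E recovers Ea (slope) and t0.
slope, intercept = np.polyfit(1.0 / E, np.log(t_sw), 1)
```

A semiconducting film whose measured switching times deviate systematically from this straight line is, in the paper's terms, disobeying the Merz equation.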
Electrical circuit modeling and analysis of microwave acoustic interaction with biological tissues.
Gao, Fei; Zheng, Qian; Zheng, Yuanjin
2014-05-01
Numerical studies of microwave imaging and microwave-induced thermoacoustic imaging rely on finite-difference time-domain (FDTD) analysis to simulate the interaction of microwaves and acoustics with biological tissues. FDTD is time consuming, owing to complex grid segmentation and numerous calculations; it offers no analytical solution or direct physical explanation; and it is incompatible with hardware development, which requires a circuit simulator such as SPICE. In this paper, instead of conventional FDTD numerical simulation, an equivalent electrical circuit model is proposed to model the microwave-acoustic interaction with biological tissues for fast simulation and quantitative analysis in both one and two dimensions (2D). The equivalent circuit of an ideal point-like tissue for microwave-acoustic interaction is proposed, including a transmission line, a voltage-controlled current source, an envelope detector, and a resistor-inductor-capacitor (RLC) network, to model microwave scattering, thermal expansion, and acoustic generation. Based on this circuit, a two-port network of the point-like tissue is built and characterized using pseudo S-parameters and transducer gain. A two-dimensional circuit network, including an acoustic scatterer and an acoustic channel, is also constructed to model 2D spatial information and acoustic scattering effects in a heterogeneous medium. FDTD simulations, circuit simulations, and experimental measurements are performed and compared in terms of time-domain, frequency-domain, and pseudo S-parameter characterization. 2D circuit network simulations are also performed under different scenarios, including different tumor sizes and the effect of an acoustic scatterer. The proposed circuit model of microwave-acoustic interaction with biological tissue agrees well with FDTD-simulated and experimentally measured results. The pseudo S-parameters and characteristic gain can globally evaluate the performance of tumor detection.
The 2D circuit network opens the potential to combine quasi-numerical simulation and circuit simulation in a unified simulator for codesign and simulation of a microwave acoustic imaging system, bridging bioeffect studies and hardware development seamlessly.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-15
... Lightering Operations. Since there will be no new VOC controls for point sources, non-point source sector... equivalent to 1.52 x 1.74 = 2.64 tpd NO X reduction shortfall. Delaware has implemented numerous controls... achieved ``as expeditious as practicable.'' Control measures under RACT constitute a major group of RACM...
ERIC Educational Resources Information Center
Rowland, D. R.
2007-01-01
The physical analysis of a uniformly accelerating point charge provides a rich problem to explore in advanced courses in electrodynamics and relativity since it brings together fundamental concepts in relation to electromagnetic radiation, Einstein's equivalence principle and the inertial mass of field energy in ways that reveal subtleties in each…
... decline in general knowledge and in verbal ability (equivalent to 4 IQ points) between the preteen years ... Cone EJ, Bigelow GE, Herrmann ES, et al. Non-smoker exposure to secondhand cannabis smoke. I. Urine ...
Wilson, P W; Heneghan, A F; Haymet, A D J
2003-02-01
In biological systems, nucleation of ice from a supercooled aqueous solution is a stochastic process and always heterogeneous. The average time any solution may remain supercooled is determined only by the degree of supercooling and the heterogeneous nucleation sites it encounters. Here we summarize the many and varied definitions of the so-called "supercooling point," also called the "temperature of crystallization" and the "nucleation temperature," and exhibit the natural, inherent width associated with this quantity. We describe a new method for accurate determination of the supercooling point, which takes into account the inherent statistical fluctuations of the value. We show further that many measurements on a single unchanging sample are required to make a statistically valid measure of the supercooling point. This poses a challenge in circumstances where such repeat measurements are inconvenient or impossible, for example in experiments on live organisms. We also discuss the effect of solutes on this temperature of nucleation. Existing data appear to show that various solute species decrease the nucleation temperature somewhat more than the equivalent melting point depression. For non-ionic solutes the species appears not to be a significant factor, whereas for ions the species does affect the level of decrease of the nucleation temperature.
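The need for many repeat measurements can be illustrated with a toy simulation: treating heterogeneous nucleation as a Poisson process whose rate grows exponentially with supercooling, a sample cooled at constant rate freezes at a random temperature with an inherent, irreducible width. All rate and ramp parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def freeze_temps(n_trials, rate0=1e-4, b=1.5, ramp=0.05, dt=1.0):
    """Cool a sample at `ramp` K/s below its melting point; nucleation is a
    Poisson process with rate rate0*exp(b*dT) at supercooling dT (toy model).
    Returns the supercooling (K below melting) at which each run freezes."""
    out = np.empty(n_trials)
    for i in range(n_trials):
        t = 0.0
        while True:
            t += dt
            dT = ramp * t
            # probability of nucleating during this time step
            if rng.random() < 1.0 - np.exp(-rate0 * np.exp(b * dT) * dt):
                out[i] = dT
                break
    return out

dT = freeze_temps(300)
median_sc = np.median(dT)   # a statistically valid "supercooling point"
width = dT.std()            # the inherent width of the quantity
```

A single freezing run draws one value from this distribution, which is why one measurement, or a few, cannot characterise the supercooling point of an unchanging sample.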
Influence of warning information changes on emergency response
NASA Astrophysics Data System (ADS)
Heisterkamp, Tobias; Ulbrich, Uwe; Glade, Thomas; Tetzlaff, Gerd
2014-05-01
Mitigation and risk reduction of natural hazards depend significantly on the possibility of predicting the actual event. Some hazards can already be forecast several days in advance, and for these, early warning systems have been developed, installed and improved over the years. The formation of winter storms, for example, can be recognized up to one week before they pass through Central Europe. This relatively long early warning time has the advantage that forecasters can make the warnings more concrete over time. Warnings can thus be adapted to changing conditions in the process itself, in its observation, or in its modelling. Emergency managers are one group of warning recipients in the civil protection sector. They have to prepare or initiate prevention or response measures at a specific point in time, depending on the lead time the respective actions require. Already at this point in time, the forecast and its corresponding warning have to be treated as a stage of reality, since the decision-makers must come to a conclusion. These decisions are based on spatial and temporal knowledge of the forecasted event and the resulting risk situation. With incoming warning updates, the detailed state of information is continually changing. Consequently, decisions can be influenced by the development of the warning situation and its inherent tendency before a certain point in time, and they can also be adapted to later updates, according to the changing 'decision reality'. The influence of these dynamic hazard situations on operational planning and response by emergency managers is investigated in case studies on winter storms for Berlin, Germany. To this end, the warnings issued by the weather service and operational data of the Berlin Fire Brigade are analysed and compared. This presentation shows and discusses first results.
The a(4) Scheme-A High Order Neutrally Stable CESE Solver
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2009-01-01
The CESE development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a nondissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To provide a solid foundation for a systematic CESE development of high order schemes, in this paper we describe a new high order (4-5th order) and neutrally stable CESE solver of a 1D advection equation with a constant advection speed a. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and two points at the lower time level. Because it is associated with four independent mesh variables (the numerical analogues of the dependent variable and its first, second, and third-order spatial derivatives) and four equations per mesh point, the new scheme is referred to as the a(4) scheme. As in the case of other similar CESE neutrally stable solvers, the a(4) scheme enforces conservation laws in space-time locally and globally, and it has the basic, forward marching, and backward marching forms. Except for a singular case, these forms are equivalent and satisfy a space-time inversion (STI) invariant property which is shared by the advection equation. Based on the concept of STI invariance, a set of algebraic relations is developed and used to prove that the a(4) scheme must be neutrally stable whenever it is stable. Numerically, it has been established that the scheme is stable if the value of the Courant number is less than 1/3.
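As a much simpler analogue of this kind of stability statement, the sketch below runs a von Neumann analysis of the first-order upwind scheme for u_t + a u_x = 0, scanning the amplification factor over Fourier modes. It is illustrative only and does not reproduce the a(4) scheme's amplification matrix.

```python
import numpy as np

# von Neumann stability scan for first-order upwind on u_t + a u_x = 0.
# An illustrative analogue of a CESE stability analysis, not the a(4) scheme.
def max_amplification(nu, n_modes=720):
    """Largest |g(theta)| over Fourier modes; nu = a*dt/dx (Courant number)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_modes, endpoint=False)
    g = 1.0 - nu + nu * np.exp(-1j * theta)   # upwind amplification factor
    return np.abs(g).max()

stable = max_amplification(0.5)      # <= 1: no Fourier mode grows
unstable = max_amplification(1.3)    # > 1: Courant limit exceeded
```

A neutrally stable scheme is the borderline case in which |g| = 1 for every mode, which is what allows numerical dissipation to be added in a controlled way.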
Data Characterization Using Artificial-Star Tests: Performance Evaluation
NASA Astrophysics Data System (ADS)
Hu, Yi; Deng, Licai; de Grijs, Richard; Liu, Qiang
2011-01-01
Traditional artificial-star tests are widely applied to photometry in crowded stellar fields. However, to obtain reliable binary fractions (and their uncertainties) of remote, dense, and rich star clusters, one needs to recover huge numbers of artificial stars. Hence, this will consume much computation time for data reduction of the images to which the artificial stars must be added. In this article, we present a new method applicable to data sets characterized by stable, well-defined, point-spread functions, in which we add artificial stars to the retrieved-data catalog instead of to the raw images. Taking the young Large Magellanic Cloud cluster NGC 1818 as an example, we compare results from both methods and show that they are equivalent, while our new method saves significant computational time.
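The catalogue-level idea can be sketched as follows: instead of re-reducing images, artificial stars are perturbed by a magnitude-dependent photometric scatter and culled by a completeness function, both of which would in practice be measured from the real reduction. The functional forms and constants below are illustrative assumptions, not NGC 1818 calibrations.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma_mag(m):
    """Photometric scatter versus magnitude (illustrative model)."""
    return 0.01 + 0.05 * np.exp(0.8 * (m - 24.0))

def completeness(m):
    """Detection probability versus magnitude (illustrative model)."""
    return 1.0 / (1.0 + np.exp(2.5 * (m - 25.0)))

# Catalogue-level artificial-star test: perturb and cull, no image reduction.
m_in = rng.uniform(20.0, 26.0, 100_000)                # injected magnitudes
detected = rng.random(m_in.size) < completeness(m_in)  # completeness cut
m_out = np.where(detected, m_in + rng.normal(0.0, sigma_mag(m_in)), np.nan)

recovery_bright = detected[m_in < 22.0].mean()   # near-total at bright mags
```

Because each artificial star costs a few array operations rather than a full photometric re-reduction, the huge input catalogues needed for binary-fraction work become computationally cheap.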
Time-dependent Calculations of an Impact-triggered Runaway Greenhouse Atmosphere on Mars
NASA Technical Reports Server (NTRS)
Segura, T. L.; Toon, O. B.; Colaprete, A.
2003-01-01
Large asteroid and comet impacts result in the production of thick (greater than tens of meters) global debris layers at 1500+ K and the release, through precipitation of impact-injected steam and melting of ground ice, of large amounts (greater than tens of meters global equivalent thickness) of water on the surface of Mars. Modeling shows that the surface of Mars is still above the freezing point of water after the rainout of the impact-injected steam and the melting of subsurface ice. The energy remaining in the hot debris layer will allow evaporation of this water back into the atmosphere, where it may rain out at a later time. Given a sufficiently rapid supply of this water to the atmosphere, it will initiate a temporary "runaway" greenhouse state.
Robustness analysis of multirate and periodically time varying systems
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1991-01-01
A new method for analyzing the stability and robustness of multirate and periodically time varying systems is presented. It is shown that a multirate or periodically time varying system can be transformed into an equivalent time invariant system. For a SISO system, traditional gain and phase margins can be found by direct application of the Nyquist criterion to this equivalent time invariant system. For a MIMO system, structured and unstructured singular values can be used to determine the system's robustness. The limitations and implications of utilizing this equivalent time invariant system for calculating gain and phase margins, and for estimating robustness via singular value analysis are discussed.
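In discrete time, the transformation to an equivalent time-invariant system can be sketched by lifting a periodic system over one full period: the product of the per-step matrices (the monodromy matrix) is the equivalent LTI map, and its spectral radius decides stability. The matrices below are arbitrary illustrative data.

```python
import numpy as np

# Periodically time-varying system x[k+1] = A[k mod N] x[k] (illustrative data)
A = [np.array([[0.5, 1.0],
               [0.0, 0.8]]),
     np.array([[0.9, 0.0],
               [0.2, 0.4]])]

# Lift over one period: the equivalent time-invariant map is the monodromy
# matrix Phi = A[N-1] @ ... @ A[0].
Phi = np.eye(2)
for Ak in A:
    Phi = Ak @ Phi

# The periodic system is asymptotically stable iff rho(Phi) < 1.
spectral_radius = np.abs(np.linalg.eigvals(Phi)).max()
```

Gain and phase margins (SISO) or singular-value robustness tests (MIMO) are then applied to this lifted time-invariant system, in the spirit the abstract describes.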
Rath, Chandra; Kluckow, Martin
2016-03-01
To compare the oxygen saturation profiles before discharge of neonates born extremely preterm (<28 weeks), now at term equivalent age, with those of healthy term neonates, and to assess the impact of feeding on this profile in each group. We prospectively evaluated and compared the oxygen saturation profiles of 15 very low birthweight infants at term equivalent age, ready to be discharged home without any oxygen, and of 15 term newborns after 48 hours of life. We also evaluated and compared the saturations of these two groups during a one-hour period during and after feeding. Term equivalent preterm and term infants spent a median of 3% and 0%, respectively, of the time below 90% saturation over a 12-hour recording period. Term infants spent a median of 0.26% and 0.65% of the time below 90% saturation during feeding and non-feeding time, respectively. In contrast, preterm infants spent significantly more time below 90% saturation (3.47% and 3.5% during feeding and non-feeding time, respectively). Term equivalent preterm infants spent significantly more time in a saturation range <90% compared to term infants. Feeding had little effect on the saturation profile overall within each group. ©2015 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
von Kármán-Howarth equation for three-dimensional two-fluid plasmas.
Andrés, N; Mininni, P D; Dmitruk, P; Gómez, D O
2016-06-01
We derive the von Kármán-Howarth equation for a full three-dimensional incompressible two-fluid plasma. In the long-time limit and for very large Reynolds numbers we obtain the equivalent of the hydrodynamic "four-fifths" law. This exact law predicts the scaling of the third-order two-point correlation functions, and puts a strong constraint on the plasma turbulent dynamics. Finally, we derive a simple expression for the 4/5 law in terms of third-order structure functions, which is appropriate for comparison with in situ measurements in the solar wind at different spatial ranges.
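The quantity constrained by the law can be estimated from a 1-D velocity record through the third-order longitudinal structure function; in the hydrodynamic inertial range S3(r) = -(4/5) ε r, so the dissipation rate follows from the slope. The sketch below shows only the estimator, with a laminar ramp (for which S3 is exact) as a sanity check.

```python
import numpy as np

def s3(u, r):
    """Third-order longitudinal structure function <(u(x+r) - u(x))**3>
    for a uniformly sampled 1-D velocity record; r is in grid points."""
    du = u[r:] - u[:-r]
    return np.mean(du**3)

def eps_from_s3(u, r, dx):
    """Dissipation-rate estimate from the 4/5 law, S3 = -(4/5)*eps*r."""
    return -5.0 * s3(u, r) / (4.0 * r * dx)

# Sanity check on a linear ramp u = c*x, for which S3(r) = (c*r*dx)**3 exactly.
c, dx = 0.1, 1.0
u = c * np.arange(200) * dx
val = s3(u, 5)
```

For solar-wind data the same estimator is applied to in-situ velocity (and, in the two-fluid generalisation, magnetic-field) increments over a range of lags r.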
NASA Astrophysics Data System (ADS)
Nezir, Veysel; Mustafa, Nizami
2017-04-01
In 2008, P.K. Lin provided the first example of a nonreflexive space that can be renormed to have the fixed point property for nonexpansive mappings. This space was l1, the Banach space of absolutely summable sequences, and researchers have aimed to generalize this result to c0, the Banach space of null sequences. Before P.K. Lin's intriguing result, in 1979, Goebel and Kuczumow showed that there is a large class of non-weak* compact, closed, bounded, convex subsets of l1 with the fixed point property for nonexpansive mappings. P.K. Lin was inspired by Goebel and Kuczumow's ideas in obtaining his result. Similarly to P.K. Lin's study, Hernández-Linares worked on L1 and, in his Ph.D. thesis supervised by Maria Japón, showed that L1 can be renormed to have the fixed point property for affine nonexpansive mappings. Related questions for c0 have since been considered by researchers. Recently, Nezir constructed several equivalent norms on c0 and showed that there are non-weakly compact, closed, bounded, convex subsets of c0 with the fixed point property for affine nonexpansive mappings. In this study, we construct a family of equivalent norms containing those developed by Nezir and show that there exists a large class of non-weakly compact, closed, bounded, convex subsets of c0 with the fixed point property for affine nonexpansive mappings.
XFEM with equivalent eigenstrain for matrix-inclusion interfaces
NASA Astrophysics Data System (ADS)
Benvenuti, Elena
2014-05-01
Several engineering applications rely on particulate composite materials, and numerical modelling of the matrix-inclusion interface is therefore a crucial part of the design process. The focus of this work is on an original use of the equivalent eigenstrain concept in the development of a simplified eXtended Finite Element Method. Key points are: the replacement of the matrix-inclusion interface by a coating layer with small but finite thickness, and its simulation as an inclusion with an equivalent eigenstrain. For vanishing thickness, the model is consistent with a spring-like interface model. The problem of a spherical inclusion within a cylinder is solved. The results show that the proposed approach is effective and accurate.
Eley, John; Newhauser, Wayne; Homann, Kenneth; Howell, Rebecca; Schneider, Christopher; Durante, Marco; Bert, Christoph
2015-01-01
Equivalent dose from neutrons produced during proton radiotherapy increases the predicted risk of radiogenic late effects. However, out-of-field neutron dose is not taken into account by commercial proton radiotherapy treatment planning systems. The purpose of this study was to demonstrate the feasibility of implementing an analytical model to calculate leakage neutron equivalent dose in a treatment planning system. Passive scattering proton treatment plans were created for a water phantom and for a patient. For both the phantom and patient, the neutron equivalent doses were small but non-negligible and extended far beyond the therapeutic field. The time required for neutron equivalent dose calculation was 1.6 times longer than that required for proton dose calculation, with a total calculation time of less than 1 h on one processor for both treatment plans. Our results demonstrate that it is feasible to predict neutron equivalent dose distributions using an analytical dose algorithm for individual patients with irregular surfaces and internal tissue heterogeneities. Eventually, personalized estimates of neutron equivalent dose to organs far from the treatment field may guide clinicians to create treatment plans that reduce the risk of late effects. PMID:25768061
McEwan, Thomas E.
1998-01-01
A "laser tape measure" for measuring distance which includes a transmitter such as a laser diode which transmits a sequence of electromagnetic pulses in response to a transmit timing signal. A receiver samples reflections from objects within the field of the sequence of visible electromagnetic pulses with controlled timing, in response to a receive timing signal. The receiver generates a sample signal in response to the samples which indicates distance to the object causing the reflections. The timing circuit supplies the transmit timing signal to the transmitter and supplies the receive timing signal to the receiver. The receive timing signal causes the receiver to sample the reflection such that the time between transmission of pulses in the sequence in sampling by the receiver sweeps over a range of delays. The transmit timing signal causes the transmitter to transmit the sequence of electromagnetic pulses at a pulse repetition rate, and the received timing signal sweeps over the range of delays in a sweep cycle such that reflections are sampled at the pulse repetition rate and with different delays in the range of delays, such that the sample signal represents received reflections in equivalent time. The receiver according to one aspect of the invention includes an avalanche photodiode and a sampling gate coupled to the photodiode which is responsive to the received timing signal. The transmitter includes a laser diode which supplies a sequence of visible electromagnetic pulses. A bright spot projected on to the target clearly indicates the point that is being measured, and the user can read the range to that point with precision of better than 0.1%.
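The equivalent-time idea in the receiver can be sketched numerically: one sample is taken per transmitted pulse, at a receive delay swept slowly across the repetition period, so a nanosecond-scale echo is traced out on a much slower time base. The pulse shape, repetition rate, and delay step below are illustrative assumptions.

```python
import numpy as np

# Equivalent-time sampling sketch: sample a fast repetitive echo once per
# pulse, at a receive delay swept across the pulse repetition period.
prf = 1.0e6                            # pulse repetition frequency (assumed)
period = 1.0 / prf
n = 1000                               # samples per sweep -> 1 ns delay step

def echo(t):
    """Received echo: a 10 ns Gaussian pulse centred 300 ns after emission."""
    return np.exp(-0.5 * ((t - 300e-9) / 10e-9) ** 2)

delays = np.arange(n) * (period / n)   # swept delay within each period
samples = echo(delays)                 # one sample per transmitted pulse
t_flight = delays[np.argmax(samples)]  # time of flight, read off slow record
range_m = 3.0e8 * t_flight / 2.0       # two-way propagation -> range
```

Sweeping the delay by one step per pulse stretches the nanosecond echo onto a millisecond record, which is what lets slow electronics digitize a picosecond-class timing measurement.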
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to a failure threshold, as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M −0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
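The BPT distribution is the inverse Gaussian with mean μ and shape parameter μ/α², so the quoted hazard behaviour can be checked numerically. The mapping onto scipy's `invgauss` parameterisation is shown below; the evaluation points are illustrative.

```python
import numpy as np
from scipy import integrate, stats

def bpt(mu, alpha):
    """Brownian passage time = inverse Gaussian with mean mu, shape mu/alpha**2.
    scipy's invgauss(m, scale=s) has mean m*s and shape parameter s."""
    lam = mu / alpha**2
    return stats.invgauss(mu / lam, scale=lam)

mu, alpha = 1.0, 0.5
d = bpt(mu, alpha)

def hazard(t):
    """Instantaneous failure rate of survivors, pdf(t)/survival(t); the
    survival is obtained by quadrature, which stays accurate far in the tail."""
    surv, _ = integrate.quad(d.pdf, t, np.inf, epsabs=0.0, epsrel=1e-10)
    return d.pdf(t) / surv

h_early = hazard(0.6)   # already above the mean rate 1/mu for t > mu/2
h_late = hazard(8.0)    # tends to 1/(2*alpha**2*mu) = 2/mu for alpha = 0.5
```

For α = 0.5 the asymptotic hazard is exactly 2/μ, matching the abstract's statement that long-time survivors fail at roughly twice the mean rate.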
An investigation of turbulent transport in the extreme lower atmosphere
NASA Technical Reports Server (NTRS)
Koper, C. A., Jr.; Sadeh, W. Z.
1975-01-01
A model in which the Lagrangian autocorrelation is expressed by a domain integral over a set of usual Eulerian autocorrelations acquired concurrently at all points within a turbulence box is proposed along with a method for ascertaining the statistical stationarity of turbulent velocity by creating an equivalent ensemble to investigate the flow in the extreme lower atmosphere. Simultaneous measurements of turbulent velocity on a turbulence line along the wake axis were carried out utilizing a longitudinal array of five hot-wire anemometers remotely operated. The stationarity test revealed that the turbulent velocity is approximated as a realization of a weakly self-stationary random process. Based on the Lagrangian autocorrelation it is found that: (1) large diffusion time predominated; (2) ratios of Lagrangian to Eulerian time and spatial scales were smaller than unity; and, (3) short and long diffusion time scales and diffusion spatial scales were constrained within their Eulerian counterparts.
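The Eulerian time scales entering these ratios can be estimated from a single anemometer record by integrating the normalized autocorrelation up to its first zero crossing. The sketch below shows only the estimator; the deterministic cosine record (frequency an arbitrary choice) serves as a check for which the integral scale is known.

```python
import numpy as np

def integral_time_scale(u, dt):
    """Eulerian integral time scale: integrate the normalized autocorrelation
    R(tau) up to its first zero crossing (trapezoidal rule)."""
    u = u - u.mean()
    r = np.correlate(u, u, mode="full")[u.size - 1:]
    r = r / r[0]
    zero = int(np.argmax(r <= 0.0)) if np.any(r <= 0.0) else r.size
    seg = r[:zero]
    return dt * (seg.sum() - 0.5 * (seg[0] + seg[-1]))

# Check: for u = cos(omega*t), R(tau) ~ cos(omega*tau), and the integral up
# to the first zero crossing (tau = pi/(2*omega)) equals 1/omega.
omega, dt = 0.1, 1.0
u = np.cos(omega * np.arange(4_000) * dt)
T_E = integral_time_scale(u, dt)
```

Applied to the five hot-wire records of the turbulence line, estimates like this give the Eulerian scales against which the Lagrangian scales are compared.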
Convergence of discrete Aubry–Mather model in the continuous limit
NASA Astrophysics Data System (ADS)
Su, Xifeng; Thieullen, Philippe
2018-05-01
We develop two approximation schemes for solving the cell equation and the discounted cell equation using Aubry–Mather–Fathi theory. The Hamiltonian is supposed to be Tonelli, time-independent and periodic in space. By Legendre transform it is equivalent to find a fixed point of some nonlinear operator, called Lax-Oleinik operator, which may be discounted or not. By discretizing in time, we are led to solve an additive eigenvalue problem involving a discrete Lax–Oleinik operator. We show how to approximate the effective Hamiltonian and some weak KAM solutions by letting the time step in the discrete model tend to zero. We also obtain a selected discrete weak KAM solution as in Davini et al (2016 Invent. Math. 206 29–55), and show that it converges to a particular solution of the cell equation. In order to unify the two settings, continuous and discrete, we develop a more general formalism of the short-range interactions.
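A minimal sketch of the discounted discrete scheme: on a periodic grid, the discounted Lax-Oleinik operator is a contraction, so iterating it from any starting point converges to its unique fixed point (a discrete discounted weak KAM solution). The cost function and all parameters below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Discounted discrete Lax-Oleinik operator on a periodic 1-D grid.
n, dt, lam = 64, 0.1, 0.95           # grid size, time step, discount factor
x = np.linspace(0.0, 1.0, n, endpoint=False)
V = np.cos(2.0 * np.pi * x)          # a periodic potential (illustrative)

# One-step cost of moving from y to x: kinetic part plus potential at x.
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 1.0 - d)           # periodic distance on the circle
c = d**2 / (2.0 * dt) + dt * V[None, :]

def T(u):
    """(T u)(x) = min over y of [ lam * u(y) + c(y, x) ]."""
    return np.min(lam * u[:, None] + c, axis=0)

u = np.zeros(n)
for _ in range(500):                 # lam-contraction: geometric convergence
    u = T(u)
residual = np.max(np.abs(T(u) - u))  # ~0 at the fixed point
```

Letting the discount factor tend to 1 and the time step tend to 0 is the regime in which the discrete model recovers the cell equation and its weak KAM solutions.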
Longitudinal Study of White Matter Development and Outcomes in Children Born Very Preterm.
Young, Julia M; Morgan, Benjamin R; Whyte, Hilary E A; Lee, Wayne; Smith, Mary Lou; Raybaud, Charles; Shroff, Manohar M; Sled, John G; Taylor, Margot J
2017-08-01
Identifying trajectories of early white matter development is important for understanding atypical brain development and impaired functional outcomes in children born very preterm (<32 weeks gestational age [GA]). In this study, 161 diffusion images were acquired in children born very preterm (median GA: 29 weeks) shortly following birth (n = 75), at term-equivalent age (n = 39), at 2 years (n = 18), and at 4 years of age (n = 29). Diffusion tensors were computed to obtain measures of fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD), which were aligned and averaged. A paediatric atlas was applied to obtain diffusion metrics within 12 white matter tracts. Developmental trajectories across time points demonstrated age-related changes which plateaued between term-equivalent age and 2 years in the majority of posterior tracts and between 2 and 4 years of age in anterior tracts. Between preterm and term-equivalent scans, FA rates of change were slower in anterior than in posterior tracts. Partial least squares analyses revealed associations between slower MD and RD rates of change within the external and internal capsule and lower intelligence quotients and language scores at 4 years of age. These results uniquely demonstrate early white matter development and its linkage to cognitive functions. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Engelbrecht, Nicolaas; Chiuta, Steven; Bessarabov, Dmitri G.
2018-05-01
The experimental evaluation of an autothermal microchannel reactor for H2 production from NH3 decomposition is described. The reactor design incorporates an autothermal approach, with added NH3 oxidation, for coupled heat supply to the endothermic decomposition reaction. An alternating catalytic plate arrangement is used to accomplish this thermal coupling in a cocurrent flow strategy. Detailed analysis of the transient operating regime associated with reactor start-up and steady-state results is presented. The effects of operating parameters on reactor performance are investigated, specifically, the NH3 decomposition flow rate, NH3 oxidation flow rate, and fuel-oxygen equivalence ratio. Overall, the reactor exhibits rapid response time during start-up; within 60 min, H2 production is approximately 95% of steady-state values. The recommended operating point for steady-state H2 production corresponds to an NH3 decomposition flow rate of 6 NL min-1, NH3 oxidation flow rate of 4 NL min-1, and fuel-oxygen equivalence ratio of 1.4. Under these flows, NH3 conversion of 99.8% and H2 equivalent fuel cell power output of 0.71 kWe is achieved. The reactor shows good heat utilization with a thermal efficiency of 75.9%. An efficient autothermal reactor design is therefore demonstrated, which may be upscaled to a multi-kW H2 production system for commercial implementation.
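The fuel-oxygen equivalence ratio quoted at the operating point can be unpacked with the NH3 oxidation stoichiometry 4 NH3 + 3 O2 -> 2 N2 + 6 H2O. The sketch below back-calculates the implied O2 feed, purely as an illustration (the paper reports the ratio, not this calculation).

```python
# Fuel-oxygen equivalence ratio for NH3 oxidation (illustrative calculation).
# Stoichiometry 4 NH3 + 3 O2 -> 2 N2 + 6 H2O gives (fuel/O2)_stoich = 4/3.
FO_STOICH = 4.0 / 3.0

def o2_flow_for(phi, nh3_flow):
    """O2 flow (same volumetric units as nh3_flow) giving equivalence ratio
    phi = (fuel/O2) / (fuel/O2)_stoich."""
    return nh3_flow / (phi * FO_STOICH)

# Reported operating point: phi = 1.4 with 4 NL/min NH3 to the oxidation side.
o2_nl_min = o2_flow_for(1.4, 4.0)    # about 2.14 NL/min of O2
```

A ratio above 1 (fuel-rich) means less O2 is supplied than stoichiometry requires, leaving unburned NH3 and H2 while still releasing the heat the decomposition channels need.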
5 CFR 531.407 - Equivalent increase determinations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Equivalent increase determinations. 531... PAY UNDER THE GENERAL SCHEDULE Within-Grade Increases § 531.407 Equivalent increase determinations. (a) GS employees. For a GS employee, an equivalent increase is considered to occur at the time of any of...
5 CFR 531.407 - Equivalent increase determinations.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Equivalent increase determinations. 531... PAY UNDER THE GENERAL SCHEDULE Within-Grade Increases § 531.407 Equivalent increase determinations. (a) GS employees. For a GS employee, an equivalent increase is considered to occur at the time of any of...
2016-10-01
reductions reported in average strength; number of reductions reported in full-time equivalents. Note: DOD cost savings provided for the prior FY are...comparing costs from FY 2012 to FY 2017, and not each year in between. Further, officials stated that DOD did not include full-time equivalents ...Application; FTE: Full-time Equivalent; NDAA: National Defense Authorization Act. This is a work of the U.S. government and is not subject to copyright
Stability and chaos in Kustaanheimo-Stiefel space induced by the Hopf fibration
NASA Astrophysics Data System (ADS)
Roa, Javier; Urrutxua, Hodei; Peláez, Jesús
2016-07-01
The need for the extra dimension in Kustaanheimo-Stiefel (KS) regularization is explained by the topology of the Hopf fibration, which defines the geometry and structure of KS space. A trajectory in Cartesian space is represented by a four-dimensional manifold called the fundamental manifold. Based on geometric and topological aspects classical concepts of stability are translated to KS language. The separation between manifolds of solutions generalizes the concept of Lyapunov stability. The dimension-raising nature of the fibration transforms fixed points, limit cycles, attractive sets, and Poincaré sections to higher dimensional subspaces. From these concepts chaotic systems are studied. In strongly perturbed problems, the numerical error can break the topological structure of KS space: points in a fibre are no longer transformed to the same point in Cartesian space. An observer in three dimensions will see orbits departing from the same initial conditions but diverging in time. This apparent randomness of the integration can only be understood in four dimensions. The concept of topological stability results in a simple method for estimating the time-scale in which numerical simulations can be trusted. Ideally, all trajectories departing from the same fibre should be KS transformed to a unique trajectory in three-dimensional space, because the fundamental manifold that they constitute is unique. By monitoring how trajectories departing from one fibre separate from the fundamental manifold a critical time, equivalent to the Lyapunov time, is estimated. These concepts are tested on N-body examples: the Pythagorean problem, and an example of field stars interacting with a binary.
An Increase of Intelligence in Saudi Arabia, 1977-2010
ERIC Educational Resources Information Center
Batterjee, Adel A.; Khaleefa, Omar; Ali, Khalil; Lynn, Richard
2013-01-01
Normative data for 8-15 year olds for the Standard Progressive Matrices in Saudi Arabia were obtained in 1977 and 2010. The 2010 sample obtained higher average scores than the 1977 sample by 0.78d, equivalent to 11.7 IQ points. This represents a gain of 3.55 IQ points a decade over the 33 year period. (Contains 1 table.)
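The effect-size-to-IQ conversion reported above is simple arithmetic; a sketch, assuming the conventional IQ standard deviation of 15 (not stated in the record):

```python
# Converting a standardized mean difference (d) into IQ points,
# assuming an IQ standard deviation of 15 (a convention, not from the record).
IQ_SD = 15
d = 0.78                      # gain reported between 1977 and 2010
years = 2010 - 1977           # 33-year interval

iq_gain = d * IQ_SD           # standardized gain scaled to IQ points
gain_per_decade = iq_gain / years * 10

print(round(iq_gain, 1))          # 11.7
print(round(gain_per_decade, 2))  # 3.55
```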
NASA Astrophysics Data System (ADS)
Bell, S.; Stevens, M.; Abe, H.; Benyon, R.; Bosma, R.; Fernicola, V.; Heinonen, M.; Huang, P.; Kitano, H.; Li, Z.; Nielsen, J.; Ochi, N.; Podmurnaya, O. A.; Scace, G.; Smorgon, D.; Vicente, T.; Vinge, A. F.; Wang, L.; Yi, H.
2015-01-01
A key comparison in dew-point temperature was carried out among the national standards held by NPL (pilot), NMIJ, INTA, VSL, INRIM, MIKES, NIST, NIM, VNIIFTRI-ESB and NMC. A pair of condensation-principle dew-point hygrometers was circulated and used to compare the local realisations of dew point for participant humidity generators in the range -50 °C to +20 °C. The duration of the comparison was prolonged by numerous problems with the hygrometers, requiring some repairs, and several additional check measurements by the pilot. Despite the problems and the extended timescale, the comparison was effective in providing evidence of equivalence. Agreement with the key comparison reference value was achieved in the majority of cases, and bilateral degrees of equivalence are also reported. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
NASA Astrophysics Data System (ADS)
Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong
2017-06-01
In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to self-adaptively decide which hierarchy of the structure should be equivalenced according to the critical buckling mode rapidly predicted by NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum, by contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are demonstrated by comparison with the single equivalent strategy.
Ding, Ming; Zhu, Qianlong
2016-01-01
Hardware protection and control action are two kinds of low voltage ride-through technical proposals widely used in a permanent magnet synchronous generator (PMSG). This paper proposes an innovative clustering concept for the equivalent modeling of a PMSG-based wind power plant (WPP), in which the impacts of both the chopper protection and the coordinated control of active and reactive powers are taken into account. First, the post-fault DC link voltage is selected as a concentrated expression of unit parameters, incoming wind and electrical distance to a fault point to reflect the transient characteristics of PMSGs. Next, we provide an effective method for calculating the post-fault DC link voltage based on the pre-fault wind energy and the terminal voltage dip. Third, PMSGs are divided into groups by analyzing the calculated DC link voltages, without any clustering algorithm. Finally, PMSGs of the same group are aggregated into one equivalent rescaled PMSG to realize the transient equivalent modeling of the PMSG-based WPP. Using the DIgSILENT PowerFactory simulation platform, the efficiency and accuracy of the proposed equivalent model are tested against the traditional equivalent WPP and the detailed WPP. The simulation results show the proposed equivalent model can be used to analyze offline electromechanical transients in power systems.
Azevedo, Bruna M; Ferreira, Janaína M M; Luccas, Valdecir; Bolini, Helena M A
2016-12-01
The consumption of diet products has increased greatly in recent years. The objective of the study was to develop bittersweet chocolates with added inulin and stevia sweeteners of different rebaudioside A contents (60%, 80%, and 97%). Five chocolate samples were formulated with different sucrose concentrations to determine the ideal sucrose concentration for bittersweet chocolate. The use of a just-about-right scale identified an ideal sucrose concentration of 47.5% (w/w). The sweetness equivalence in sugar-free bittersweet chocolates was determined by the time-intensity method with 14 selected and trained judges. The data collected during each session of sensory evaluation furnished the following parameters in relation to the sweet stimulus: Imax (maximum intensity recorded), Timax (time at which the maximum intensity was recorded), Area (area of the time × intensity curve), and Ttot (total duration time of the stimulus). The time-intensity analysis indicated that the percentage of rebaudioside A did not interfere with the sweetness intensity of the sweetener stevia in bittersweet chocolate, and there was no significant difference among the concentrations tested (0.16%, 0.22%, 0.27%) of each stevia in relation to the parameters evaluated. In addition, the reduction in fat content did not alter the perception of the sweetness intensity of the samples. These results provide useful information for the research and development of chocolate products. Therefore, the lowest stevia concentration tested (0.16%) is the most indicated for use, since this quantity was sufficient to reach the ideal sweetness of the product, so there was no point in adding more. © 2016 Institute of Food Technologists®.
PMU-Aided Voltage Security Assessment for a Wind Power Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen; Zhang, Jun Jason
2015-10-05
Because wind power penetration levels in electric power systems are continuously increasing, voltage stability is a critical issue for maintaining power system security and operation. The traditional methods to analyze voltage stability can be classified into two categories: dynamic and steady-state. Dynamic analysis relies on time-domain simulations of faults at different locations; however, this method needs to exhaust faults at all locations to find the security region for voltage at a single bus. With the widely located phasor measurement units (PMUs), the Thevenin equivalent matrix can be calculated by the voltage and current information collected by the PMUs. This paper proposes a method based on a Thevenin equivalent matrix to identify system locations that will have the greatest impact on the voltage at the wind power plant's point of interconnection. The number of dynamic voltage stability analysis runs is greatly reduced by using the proposed method. The numerical results demonstrate the feasibility, effectiveness, and robustness of the proposed approach for voltage security assessment for a wind power plant.
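The Thevenin-equivalent idea in the abstract above can be illustrated in miniature: with two PMU snapshots of complex voltage and current at a bus, the equivalent source E and impedance Z satisfy E = V + Z·I. A minimal sketch with hypothetical per-unit phasors (not the paper's matrix formulation):

```python
import numpy as np

# Estimating a Thevenin equivalent seen from a bus using two PMU snapshots
# of complex voltage and current: solve E - Z*I_k = V_k for (E, Z).
# The phasor values below are hypothetical.
V = np.array([1.00 + 0.00j, 0.97 - 0.02j])   # bus voltage, two snapshots (p.u.)
I = np.array([0.50 - 0.10j, 0.80 - 0.15j])   # injected current, two snapshots

# Two linear equations in the two unknowns E and Z.
A = np.array([[1, -I[0]],
              [1, -I[1]]])
E, Z = np.linalg.solve(A, V)
print(E, Z)
```

With more than two snapshots, the same system would be solved in a least-squares sense to average out measurement noise.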
Synchronization Analysis of Master-Slave Probabilistic Boolean Networks.
Lu, Jianquan; Zhong, Jie; Li, Lulu; Ho, Daniel W C; Cao, Jinde
2015-08-28
In this paper, we analyze the synchronization problem of master-slave probabilistic Boolean networks (PBNs). The master Boolean network (BN) is a deterministic BN, while the slave BN is determined by a series of possible logical functions with certain probability at each discrete time point. In this paper, we first define the synchronization of master-slave PBNs with probability one, and then we investigate synchronization with probability one. By resorting to a new approach called the semi-tensor product (STP), the master-slave PBNs are expressed in equivalent algebraic forms. Based on the algebraic form, some necessary and sufficient criteria are derived to guarantee synchronization with probability one. Further, we study the synchronization of master-slave PBNs in probability. Synchronization in probability implies that for any initial states, the master BN can be synchronized by the slave BN with certain probability, while synchronization with probability one implies that the master BN can be synchronized by the slave BN with probability one. Based on the equivalent algebraic form, some efficient conditions are derived to guarantee synchronization in probability. Finally, several numerical examples are presented to show the effectiveness of the main results.
Raman q-plates for Singular Atom Optics
NASA Astrophysics Data System (ADS)
Schultz, Justin T.; Hansen, Azure; Murphree, Joseph D.; Jayaseelan, Maitreyi; Bigelow, Nicholas P.
2016-05-01
We use a coherent two-photon Raman interaction as the atom-optic equivalent of a birefringent optical q-plate to facilitate spin-to-orbital angular momentum conversion in a pseudo-spin-1/2 BEC. A q-plate is a waveplate with a fixed retardance but a spatially varying fast axis orientation angle. We derive the time evolution operator for the system and compare it to a Jones matrix for an optical waveplate to show that in our Raman q-plate, the equivalent orientation of the fast axis is described by the relative phase of the Raman beams and the retardance is determined by the pulse area. The charge of the Raman q-plate is determined by the orbital angular momentum of the Raman beams, and the beams contain umbilic C-point polarization singularities which are imprinted into the condensate as spin singularities: lemons, stars, spirals, and saddles. By tuning the optical beam parameters, we can create a full-Bloch BEC, which is a coreless vortex that contains every possible superposition of two spin states, that is, it covers the Bloch sphere.
The Prediction of Scattered Broadband Shock-Associated Noise
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2015-01-01
A mathematical model is developed for the prediction of scattered broadband shock-associated noise. Model arguments are dependent on the vector Green's function of the linearized Euler equations, steady Reynolds-averaged Navier-Stokes solutions, and the two-point cross-correlation of the equivalent source. The equivalent source is dependent on steady Reynolds-averaged Navier-Stokes solutions of the jet flow, that capture the nozzle geometry and airframe surface. Contours of the time-averaged streamwise velocity component and turbulent kinetic energy are examined with varying airframe position relative to the nozzle exit. Propagation effects are incorporated by approximating the vector Green's function of the linearized Euler equations. This approximation involves the use of ray theory and an assumption that broadband shock-associated noise is relatively unaffected by the refraction of the jet shear layer. A non-dimensional parameter is proposed that quantifies the changes of the broadband shock-associated noise source with varying jet operating condition and airframe position. Scattered broadband shock-associated noise possesses a second set of broadband lobes that are due to the effect of scattering. Presented predictions demonstrate relatively good agreement compared to a wide variety of measurements.
Gulart, Aline Almeida; Munari, Anelise Bauer; Klein, Suelen Roberta; Santos da Silveira, Lucas; Mayer, Anamaria Fleig
2018-02-01
The study objective was to determine a cut-off point for the Glittre activities of daily living (ADL) test (TGlittre) to discriminate patients with normal and abnormal functional capacity. Fifty-nine patients with moderate to very severe COPD (45 males; 65 ± 8.84 years; BMI: 26 ± 4.78 kg/m2; FEV1: 35.3 ± 13.4% pred) were evaluated for spirometry, TGlittre, 6-minute walk test (6MWT), physical ADL, modified Medical Research Council scale (mMRC), BODE index, Saint George's Respiratory Questionnaire (SGRQ), and COPD Assessment Test (CAT). The receiver operating characteristic (ROC) curve was used to determine the cut-off point for TGlittre in order to discriminate patients with 6MWT < 82% pred. The ROC curve indicated a cut-off point of 3.5 minutes for the TGlittre (sensitivity = 92%, specificity = 83%, and area under the ROC curve = 0.95 [95% CI: 0.89-0.99]). Patients with abnormal functional capacity had higher mMRC (median difference: 1 point), CAT (mean difference: 4.5 points), SGRQ (mean difference: 12.1 points), and BODE (1.37 points) scores, longer time of physical activity <1.5 metabolic equivalents of task (mean difference: 47.9 minutes) and in sitting position (mean difference: 59.4 minutes), and a smaller number of steps (mean difference: 1,549 steps); p < 0.05 for all. In conclusion, the cut-off point of 3.5 minutes in the TGlittre is sensitive and specific to distinguish COPD patients with abnormal and normal functional capacity.
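The cut-off selection described above amounts to scanning candidate thresholds for sensitivity and specificity. A minimal sketch with hypothetical test times and labels (not the study's patient data):

```python
# Sensitivity/specificity of a candidate cut-off for a timed test, where a
# longer time flags "abnormal". Data below are hypothetical.
def sens_spec(times, abnormal, cutoff):
    """Return (sensitivity, specificity) when time > cutoff flags abnormal."""
    tp = sum(t > cutoff for t, a in zip(times, abnormal) if a)
    fn = sum(t <= cutoff for t, a in zip(times, abnormal) if a)
    tn = sum(t <= cutoff for t, a in zip(times, abnormal) if not a)
    fp = sum(t > cutoff for t, a in zip(times, abnormal) if not a)
    return tp / (tp + fn), tn / (tn + fp)

times    = [2.8, 3.1, 3.4, 3.9, 4.2, 5.0, 3.6, 2.9]   # minutes (hypothetical)
abnormal = [False, False, False, True, True, True, True, False]

sens, spec = sens_spec(times, abnormal, cutoff=3.4)
print(sens, spec)
```

Sweeping the cut-off over all observed times and plotting (1 - specificity, sensitivity) traces the ROC curve from which a threshold like the study's 3.5 minutes is chosen.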
NASA Technical Reports Server (NTRS)
VanBaalen, Mary; Bahadon, Amir; Shavers, Mark; Semones, Edward
2011-01-01
The purpose of this study is to use NASA radiation transport codes to compare astronaut organ dose equivalents resulting from solar particle events (SPE), geomagnetically trapped protons, and free-space galactic cosmic rays (GCR) using phantom models representing Earth-based and microgravity-based anthropometry and positioning. Methods: The University of Florida hybrid adult phantoms were scaled to represent male and female astronauts with 5th, 50th, and 95th percentile heights and weights as measured on Earth. Another set of scaled phantoms, incorporating microgravity-induced changes, such as spinal lengthening, leg volume loss, and the assumption of the neutral body position, was also created. A ray-tracer was created and used to generate body self-shielding distributions for dose points within a voxelized phantom under isotropic irradiation conditions, which closely approximates the free-space radiation environment. Simplified external shielding consisting of an aluminum spherical shell was used to consider the influence of a spacesuit or shielding of a hull. These distributions were combined with depth dose distributions generated from the NASA radiation transport codes BRYNTRN (SPE and trapped protons) and HZETRN (GCR) to yield dose equivalent. Many points were sampled per organ. Results: The organ dose equivalent rates were on the order of 1.5-2.5 mSv per day for GCR (1977 solar minimum) and 0.4-0.8 mSv per day for trapped proton irradiation with shielding of 2 g cm-2 aluminum equivalent. The organ dose equivalents for SPE irradiation varied considerably, with the skin and eye lens having the highest organ dose equivalents and deep-seated organs, such as the bladder, liver, and stomach, having the lowest. Conclusions: The greatest differences between the Earth-based and microgravity-based phantoms are observed for smaller ray thicknesses, since the most drastic changes involved limb repositioning and not overall phantom size.
Improved self-shielding models reduce the overall uncertainty in organ dosimetry for mission-risk projections and assessments for astronauts
NASA Technical Reports Server (NTRS)
Schum, Harold J.; Davison, Elmer H.; Petrash, Donald A.
1955-01-01
The over-all component performance characteristics of the J71 Type IIA three-stage turbine were experimentally determined over a range of speed and over-all turbine total-pressure ratio at inlet-air conditions of 35 inches of mercury absolute and 700 deg. R. The results are compared with those obtained for the J71 Type IIF turbine, which was previously investigated, the two turbines being designed for the same engine application. Geometrically the two turbines were much alike, having the same variation of annular flow area and the same number of blades for corresponding stator and rotor rows. However, the blade throat areas downstream of the first stator of the IIA turbine were smaller than those of the IIF; and the IIA blade profiles were curve-backed, whereas those of the IIF were straight-backed. The IIA turbine passed the equivalent design weight flow and had a brake internal efficiency of 0.880 at design equivalent speed and work output. A maximum efficiency of 0.896 occurred at 130 percent of design equivalent speed and a pressure ratio of 4.0. The turbine had a wide range of efficient operation. The IIA turbine had slightly higher efficiencies than the IIF turbine at comparable operating conditions. The fact that the IIA turbine obtained the design equivalent weight flow at the design equivalent operating point was probably a result of the decrease in the blading throat areas downstream of the first stator from those of the IIF turbine, which passed 105 percent of design weight flow at the corresponding operating point. The third stator row of blades of the IIA turbine choked at the design equivalent speed and at an over-all pressure ratio of 4.2; the third rotor choked at a pressure ratio of approximately 4.9.
A method for determining the weak statistical stationarity of a random process
NASA Technical Reports Server (NTRS)
Sadeh, W. Z.; Koper, C. A., Jr.
1978-01-01
A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
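The equivalent-ensemble construction described above can be sketched as follows; this is an illustrative segmentation-and-averaging example, not the authors' specific variance tests:

```python
import numpy as np

# Equivalent-ensemble sketch: segment one long record into equal,
# non-overlapping sample records, then compare ensemble averages (across
# records, at fixed time index) with the time average over a single record.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)          # stand-in for a long turbulence record

n_records, record_len = 100, 1000
ensemble = x[: n_records * record_len].reshape(n_records, record_len)

ens_mean = ensemble.mean(axis=0)          # ensemble average at each time index
time_mean = ensemble[0].mean()            # time average over one sample record

# Weak stationarity: ensemble averages roughly invariant with time index;
# ergodicity: ensemble averages close to the single-record time average.
print(ens_mean.std(), abs(ens_mean.mean() - time_mean))
```

For a real test, the segments must also be checked for statistical independence (e.g. via the autocorrelation at the segment length), as the authors do with their variance tests.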
Eswaran, Hari; Wilson, James D; Murphy, Pam; Siegel, Eric R; Lowery, Curtis L
2016-03-01
The goal was to test a newly developed pneumatic tocodynamometer (pTOCO) that is disposable and lightweight, and evaluate its equivalence to the standard strain gauge-based tocodynamometer (TOCO). The equivalence between the devices was determined by both mechanical testing and recording of contractile events on women. The data were recorded simultaneously from a pTOCO prototype and standard TOCO that were in place on women who were undergoing routine contraction monitoring in the Labor and Delivery unit at the University of Arkansas for Medical Sciences. In this prospective equivalence study, the output from 31 recordings on 28 pregnant women that had 171 measureable contractions simultaneously in both types of TOCO were analyzed. The traces were scored for contraction start, peak and end times, and the duration of the event was computed from these times. The response curve to loaded weights and applied pressure were similar for both devices, indicating their mechanical equivalence. The paired differences in times and duration between devices were subjected to mixed-models analysis to test the pTOCO for equivalence with standard TOCOs using the two-one-sided tests procedure. The event times and duration analyzed simultaneously from both TOCO types were all found to be significantly equivalent to within ±10 s (all p-values ≤0.0001). pTOCO is equivalent to the standard TOCO in the detection of the timing and duration of uterine contractions. pTOCO would provide a lightweight, disposable alternative to commercially available standard TOCOs. © 2015 Nordic Federation of Societies of Obstetrics and Gynecology.
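The two-one-sided-tests (TOST) logic used above for the ±10 s equivalence margin can be sketched with a normal approximation; the summary numbers here are hypothetical, and the authors actually used a mixed-models analysis:

```python
import math

# TOST sketch: equivalence within +/-margin is declared when BOTH one-sided
# null hypotheses (diff <= -margin, diff >= +margin) are rejected, i.e. the
# larger of the two one-sided p-values falls below alpha.
def tost_p(mean_diff, se, margin=10.0):
    """Larger of the two one-sided p-values (normal approximation)."""
    z_low = (mean_diff + margin) / se     # against H0: diff <= -margin
    z_high = (mean_diff - margin) / se    # against H0: diff >= +margin
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    return max(1 - phi(z_low), phi(z_high))

# Hypothetical summary: mean paired difference 1.2 s, standard error 2.0 s.
print(tost_p(1.2, 2.0))   # far below 0.05 -> equivalent within +/-10 s
```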
The evaluation of the neutron dose equivalent in the two-bend maze.
Tóth, Á Á; Petrović, B; Jovančević, N; Krmar, M; Rutonjski, L; Čudić, O
2017-04-01
The purpose of this study was to explore the effect of the second bend of the maze on the neutron dose equivalent in a 15 MV linear accelerator vault with a two-bend maze. The two bends of the maze were covered by 32 points at which the neutron dose equivalent was measured. One available method for estimating the neutron dose equivalent at the entrance door of a two-bend maze was tested using the results of the measurements. The results of this study show that the neutron dose equivalent at the door of the two-bend maze was reduced by almost three orders of magnitude. The measured tenth-value distance (TVD) in the first bend (closer to the inner maze entrance) is about 5 m. This measured TVD is close to the TVD values usually used in the proposed models for estimating the neutron dose equivalent at the entrance door of a single-bend maze. The results also show that the TVD in the second bend (next to the maze entrance door) is significantly lower than the TVD values found in the first maze bend. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction to improve efficiency, as in random equivalent sampling (RES). However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose ADC cores are time interleaved. A prototype realization of this proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while sampling at 1 GHz physically. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
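The block measurement matrix described above can be sketched as follows; this is an illustrative construction from the Whittaker-Shannon interpolation kernel, with assumed rates and an assumed trigger offset, not the authors' hardware implementation:

```python
import numpy as np

# One RES acquisition contributes a block of rows: each physical sample at
# time t_k is modeled as a sinc-weighted combination of the N samples of the
# signal on the fine equivalent-rate grid n*T (Whittaker-Shannon kernel).
def block_matrix(sample_times, N, T):
    """Rows: one per physical sample; columns: equivalent-grid points n*T."""
    n = np.arange(N)
    return np.array([np.sinc((t - n * T) / T) for t in sample_times])

T = 1.0 / 40e9                 # equivalent-rate grid spacing (40 GHz)
fs_phys = 1e9                  # physical ADC rate (1 GHz)
offset = 7 * T                 # assumed random trigger-to-sample delay
times = offset + np.arange(8) / fs_phys   # 8-sample sequence in this run
Phi = block_matrix(times, N=400, T=T)
print(Phi.shape)               # (8, 400)
```

Stacking the blocks from all acquisition runs (each with its own random offset) gives the combined measurement matrix used in the CS reconstruction.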
The manual control of vehicles undergoing slow transitions in dynamic characteristics
NASA Technical Reports Server (NTRS)
Moriarty, T. E.
1974-01-01
The manual control of a vehicle with slowly time-varying dynamics was studied to develop the analytic and computer techniques necessary for the study of time-varying systems. The human operator is considered as he controls a time-varying plant in which the changes are neither abrupt nor so slow that the time variations are unimportant. An experiment in which pilots controlled the longitudinal mode of a simulated time-varying aircraft is described. The vehicle changed from a pure double integrator to a damped second order system, either instantaneously or smoothly over time intervals of 30, 75, or 120 seconds. The regulator task consisted of trying to null the error term resulting from injected random disturbances with bandwidths of 0.8, 1.4, and 2.0 radians per second. Each of the twelve experimental conditions was replicated ten times. It is shown that the pilot's performance in the time-varying task is essentially equivalent to his performance in stationary tasks which correspond to various points in the transition. A rudimentary model for the pilot-vehicle-regulator is presented.
Comparison of Sigma-Point and Extended Kalman Filters on a Realistic Orbit Determination Scenario
NASA Technical Reports Server (NTRS)
Gaebler, John; Hur-Diaz, Sun; Carpenter, Russell
2010-01-01
Sigma-point filters have received a lot of attention in recent years as a better alternative to extended Kalman filters for highly nonlinear problems. In this paper, we compare the performance of the additive divided difference sigma-point filter to the extended Kalman filter when applied to orbit determination of a realistic operational scenario based on the Interstellar Boundary Explorer mission. For the scenario studied, both filters provided equivalent results. The performance of each is discussed in detail.
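The sigma-point idea referenced above can be sketched in a few lines; this is an unscented-style construction (the paper uses the additive divided difference variant, whose weights and point spread differ):

```python
import numpy as np

# Sigma-point sketch: propagate a mean/covariance through a nonlinearity via
# 2n+1 deterministically chosen points instead of an EKF's Jacobian.
def sigma_points(m, P, kappa=1.0):
    n = len(m)
    S = np.linalg.cholesky((n + kappa) * P)   # matrix square-root scaling
    pts = [m] + [m + S[:, i] for i in range(n)] + [m - S[:, i] for i in range(n)]
    w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
    return np.array(pts), w

def propagate(f, m, P):
    pts, w = sigma_points(m, P)
    y = np.array([f(p) for p in pts])
    mean = w @ y                              # weighted posterior mean
    cov = (w[:, None] * (y - mean)).T @ (y - mean)
    return mean, cov

f = lambda x: np.array([x[0] ** 2 + x[1], x[1]])   # toy nonlinear map
m, P = np.array([1.0, 0.0]), np.eye(2) * 0.1
mean, cov = propagate(f, m, P)
print(mean)
```

Note the propagated mean picks up the curvature term (1.0² + P₀₀ = 1.1), which a first-order EKF linearization would miss.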
14 CFR 171.267 - Glide path automatic monitor system.
Code of Federal Regulations, 2011 CFR
2011-01-01
... control points when any of the following occurs: (1) A shift of the mean ISMLS glide path angle equivalent... TRANSPORTATION (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave...
14 CFR 171.267 - Glide path automatic monitor system.
Code of Federal Regulations, 2014 CFR
2014-01-01
... control points when any of the following occurs: (1) A shift of the mean ISMLS glide path angle equivalent... TRANSPORTATION (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave...
14 CFR 171.267 - Glide path automatic monitor system.
Code of Federal Regulations, 2010 CFR
2010-01-01
... control points when any of the following occurs: (1) A shift of the mean ISMLS glide path angle equivalent... TRANSPORTATION (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave...
Cosmological perturbations in inflation and in de Sitter space
NASA Astrophysics Data System (ADS)
Pimentel, Guilherme Leite
This thesis focuses on various aspects of inflationary fluctuations. First, we study gravitational wave fluctuations in de Sitter space. The isometries of the spacetime constrain to a few parameters the Wheeler-DeWitt wavefunctional of the universe, to cubic order in fluctuations. At cubic order, there are three independent terms in the wavefunctional. From the point of view of the bulk action, one term corresponds to Einstein gravity, and a new term comes from a cubic term in the curvature tensor. The third term is a pure phase and does not give rise to a new shape for expectation values of graviton fluctuations. These results can be seen as the leading order non-gaussian contributions in a slow-roll expansion for inflationary observables. We also use the wavefunctional approach to explain a universal consistency condition of n-point expectation values in single field inflation. This consistency condition relates a soft limit of an n-point expectation value to (n-1)-point expectation values. We show how these conditions can be easily derived from the wavefunctional point of view. Namely, they follow from the momentum constraint of general relativity, which is equivalent to the constraint of spatial diffeomorphism invariance. We also study expectation values beyond tree level. We show that subhorizon fluctuations in loop diagrams do not generate a mass term for superhorizon fluctuations. Such a mass term could spoil the predictivity of inflation, which is based on the existence of properly defined field variables that become constant once their wavelength is bigger than the size of the horizon. Such a mass term would be seen in the two-point expectation value as a contribution that grows linearly with time at late times. The absence of this mass term is closely related to the soft limits studied in previous chapters. It is analogous to the absence of a mass term for the photon in quantum electrodynamics, due to gauge symmetry.
Finally, we use the tools of holography and entanglement entropy to study superhorizon correlations in quantum field theories in de Sitter space. The entropy has interesting terms that have no equivalent in flat space field theories. These new terms are due to particle creation in an expanding universe. The entropy is calculated directly for free massive scalar theories. For theories with holographic duals, it is determined by the area of some extremal surface in the bulk geometry. We calculate the entropy for different classes of holographic duals. For one of these classes, the holographic dual geometry is an asymptotically Anti-de Sitter space that decays into a crunching cosmology, an open Friedmann-Robertson-Walker universe. The extremal surface used in the calculation of the entropy lies almost entirely on the slice of maximal scale factor of the crunching cosmology.
Equivalent Mass versus Life Cycle Cost for Life Support Technology Selection
NASA Technical Reports Server (NTRS)
Jones, Harry
2003-01-01
The decision to develop a particular life support technology or to select it for flight usually depends on the cost to develop and fly it. Other criteria such as performance, safety, reliability, crew time, and technical and schedule risk are considered, but cost is always an important factor. Because launch cost would account for much of the cost of a future planetary mission, and because launch cost is directly proportional to the mass launched, equivalent mass has been used instead of cost to select advanced life support technology. The equivalent mass of a life support system includes the estimated mass of the hardware and of the spacecraft pressurized volume, power supply, and cooling system that the hardware requires. The equivalent mass of a system is defined as the total payload launch mass needed to provide and support the system. An extension of equivalent mass, Equivalent System Mass (ESM), has been established for use in the Advanced Life Support project. ESM adds a mass-equivalent of crew time and possibly other cost factors to equivalent mass. Traditional equivalent mass is strictly based on flown mass and reflects only the launch cost. ESM includes other important cost factors, but it complicates the simple flown mass definition of equivalent mass by adding a non-physical mass penalty for crew time that may exceed the actual flown mass. Equivalent mass is used only in life support analysis. Life Cycle Cost (LCC) is much more commonly used. LCC includes design, development, test, and evaluation (DDT&E), launch, and operations costs. For Earth orbit rather than planetary missions, the launch cost is less than the cost of DDT&E. LCC is a more inclusive cost estimator than equivalent mass. The relative costs of development, launch, and operations vary depending on the mission destination and duration. Since DDT&E or operations may cost more than launch, LCC gives a more accurate relative cost ranking than equivalent mass.
To select the lowest-cost technology for a particular application, we should use LCC rather than equivalent mass.
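The ESM bookkeeping described above can be sketched as a simple tally: hardware mass plus mass-equivalents for the volume, power, cooling, and crew time the hardware requires. All equivalency factors and inputs below are illustrative placeholders, not values from the paper:

```python
# Sketch of an Equivalent System Mass (ESM) tally: hardware mass plus
# mass-equivalents for volume, power, cooling, and crew time.
# Every equivalency factor below is an assumed illustrative value.

def equivalent_system_mass(hardware_kg, volume_m3, power_kw, cooling_kw,
                           crewtime_hr_per_yr, duration_yr,
                           eq_volume=66.7,    # kg per m^3 pressurized volume (assumed)
                           eq_power=237.0,    # kg per kW of power supply (assumed)
                           eq_cooling=60.0,   # kg per kW of heat rejection (assumed)
                           eq_crewtime=1.0):  # kg per crew-hour (assumed)
    """Total launch-mass-equivalent of a life support subsystem."""
    return (hardware_kg
            + volume_m3 * eq_volume
            + power_kw * eq_power
            + cooling_kw * eq_cooling
            + crewtime_hr_per_yr * duration_yr * eq_crewtime)

# Hypothetical subsystem on a 3-year mission:
esm = equivalent_system_mass(hardware_kg=500, volume_m3=2.0, power_kw=1.5,
                             cooling_kw=1.5, crewtime_hr_per_yr=50, duration_yr=3)
print(round(esm, 1))  # prints 1228.9
```

Note how the crew-time term scales with mission duration while the others do not; this is exactly the "non-physical mass penalty" the abstract says can come to dominate the flown mass.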
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
2005-09-06
This Tempel 1 image was built up by scaling images from NASA's Deep Impact spacecraft to 5 meters/pixel and aligning them to fixed points. Each image at closer range replaced equivalent locations observed at a greater distance.
Deep grey matter growth predicts neurodevelopmental outcomes in very preterm children.
Young, Julia M; Powell, Tamara L; Morgan, Benjamin R; Card, Dallas; Lee, Wayne; Smith, Mary Lou; Sled, John G; Taylor, Margot J
2015-05-01
We evaluated whether the volume and growth rate of critical brain structures measured by MRI in the first weeks of life following very preterm (<32/40 weeks) birth could predict subsequent neurodevelopmental outcomes at 4 years of age. A significant proportion of children born very prematurely have cognitive deficits, but these problems are often only detected at early school age. Structural T2-weighted magnetic resonance images were acquired in 96 very preterm neonates scanned within 2 weeks of birth and 70 of these at term-equivalent age. An automated 3D image analysis procedure was used to measure the volume of selected brain structures across all scans and time points. At 4 years of age, 53 children returned for neuropsychological assessments evaluating IQ, language and visual motor integration. Associations with maternal education and perinatal measures were also explored. Multiple regression analyses revealed that growth of the caudate and globus pallidus between preterm birth and term-equivalent age predicted visual motor integration scores after controlling for sex and gestational age. Further associations were found between caudate and putamen growth with IQ and language scores. Analyses at either preterm or term-equivalent age only found associations between normalized deep grey matter growth and visual motor integration scores at term-equivalent age. Maternal education levels were associated with measures of IQ and language, but not visual motor integration. Thalamic growth was additionally linked with perinatal measures and presence of white matter lesions. These results highlight deep grey matter growth rates as promising biomarkers of long-term outcomes following very preterm birth, and contribute to our understanding of the brain-behaviour relations in these children. Copyright © 2015 Elsevier Inc. All rights reserved.
Meuldijk, D; Carlier, I V E; van Vliet, I M; van Veen, T; Wolterbeek, R; van Hemert, A M; Zitman, F G
2016-03-01
Depressive and anxiety disorders contribute to a high disease burden. This paper investigates whether concise formats of cognitive behavioral- and/or pharmacotherapy are equivalent to longer standard care in the treatment of depressive and/or anxiety disorders in secondary mental health care. A pragmatic randomized controlled equivalence trial was conducted at five Dutch outpatient Mental Healthcare Centers (MHCs) of the Regional Mental Health Provider (RMHP) 'Rivierduinen'. Patients (aged 18-65 years) with a mild to moderate anxiety and/or depressive disorder were randomly allocated to concise or standard care. Data were collected at baseline, 3, 6 and 12 months by Routine Outcome Monitoring (ROM). Primary outcomes were the Brief Symptom Inventory (BSI) and the Web Screening Questionnaire (WSQ). We used Generalized Estimating Equations (GEE) to assess outcomes. Between March 2010 and December 2012, 182 patients were enrolled (n=89 standard care; n=93 concise care). Both intention-to-treat and per-protocol analyses demonstrated equivalence of concise care and standard care at all time points. Severity of illness reduced, and both treatments improved patients' general health status and subdomains of quality of life. Moreover, in concise care, the beneficial effects started earlier. Concise care has the potential to be a feasible and promising alternative to longer standard secondary mental health care in the treatment of outpatients with a mild to moderate depressive and/or anxiety disorder. For future research, we recommend adhering more strictly to the concise treatment protocols to further explore the beneficial effects of the concise treatment. The study is registered in the Netherlands Trial Register, number NTR2590. Clinicaltrials.gov identifier: NCT01643642. Copyright © 2015 Elsevier Inc. All rights reserved.
Vanderborght, Jan; Vereecken, Harry
2002-01-01
The local scale dispersion tensor, D_d, is a controlling parameter for the dilution of concentrations in a solute plume that is displaced by groundwater flow in a heterogeneous aquifer. In this paper, we estimate the local scale dispersion from time series or breakthrough curves, BTCs, of Br concentrations that were measured at several points in a fluvial aquifer during a natural gradient tracer test at Krauthausen. Locally measured BTCs were characterized by equivalent convection-dispersion parameters: equivalent velocity, v_eq(x), and expected equivalent dispersivity, ⟨λ_eq(x)⟩. A Lagrangian framework was used to approximately predict these equivalent parameters in terms of the spatial covariance of log_e-transformed conductivity and the local scale dispersion coefficient. The approximate Lagrangian theory illustrates that ⟨λ_eq(x)⟩ increases with increasing travel distance and is much larger than the local scale dispersivity, λ_d. A sensitivity analysis indicates that ⟨λ_eq(x)⟩ is predominantly determined by the transverse component of the local scale dispersion and by the correlation scale of the hydraulic conductivity in the transverse-to-flow direction, whereas it is relatively insensitive to the longitudinal component of the local scale dispersion. By comparing predicted ⟨λ_eq(x)⟩ for a range of D_d values with ⟨λ_eq(x)⟩ obtained from locally measured BTCs, the transverse component of D_d, D_dT, was estimated. The estimated transverse local scale dispersivity, λ_dT = D_dT/U_1 (U_1 = mean advection velocity), is on the order of 10¹-10² mm, which is relatively large but realistic for the fluvial gravel sediments at Krauthausen.
Plum, Leona; Ahmed, Leaque; Febres, Gerardo; Bessler, Marc; Inabnet, William; Kunreuther, Elizabeth; McMahon, Donald J; Korner, Judith
2011-11-01
Weight-loss independent mechanisms may play an important role in the improvement of glucose homeostasis after Roux-en-Y gastric bypass (RYGB). The objective of this analysis was to determine whether RYGB causes greater improvement in glucostatic parameters as compared with laparoscopic adjustable gastric banding (LAGB) or low calorie diet (LCD) after equivalent weight loss and independent of enteral nutrient passage. Study 1 recruited participants without type 2 diabetes mellitus (T2DM) who underwent LAGB (n = 8) or RYGB (n = 9). Study 2 recruited subjects with T2DM who underwent LCD (n = 7) or RYGB (n = 7). Insulin-supplemented frequently-sampled intravenous glucose tolerance test (fsIVGTT) was performed before and after equivalent weight reduction. MINMOD analysis of insulin sensitivity (Si), acute insulin response to glucose (AIRg) and C-peptide (ACPRg) response to glucose, and insulin secretion normalized to the degree of insulin resistance (disposition index (DI)) were analyzed. Weight loss was comparable in all groups (7.8 ± 0.4%). In Study 1, significant improvement of Si, ACPRg, and DI were observed only after LAGB. In Study 2, Si, ACPRg, and plasma adiponectin increased significantly in the RYGB-DM group but not in LCD. DI improved in both T2DM groups, but the absolute increase was greater after RYGB (258.2 ± 86.6 vs. 55.9 ± 19.9; P < 0.05). Antidiabetic medications were discontinued after RYGB contrasting with 55% reduction in the number of medications after LCD. No intervention affected fasting glucagon-like peptide (GLP)-1, peptide YY (PYY) or ghrelin levels. In conclusion, RYGB produced greater improvement in Si and DI compared with diet at equivalent weight loss in T2DM subjects. Such a beneficial effect was not observed in nondiabetic subjects at this early time-point.
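The disposition index reported above is conventionally computed from the minimal-model outputs as insulin sensitivity times the acute insulin response to glucose; a minimal sketch using that standard convention (the study's own MINMOD settings are not given, and the numbers below are illustrative, not data from the trial):

```python
def disposition_index(si, airg):
    """Disposition index: insulin secretion normalized to the degree of
    insulin resistance. Conventionally DI = Si * AIRg (minimal-model
    convention, assumed here)."""
    return si * airg

# Illustrative inputs only (not values from the study):
print(disposition_index(si=2.5, airg=300.0))  # prints 750.0
```

The product form explains why DI can improve even when Si alone does not: a rise in the insulin response can compensate for unchanged sensitivity, and vice versa.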
Unbalanced and Minimal Point Equivalent Estimation Second-Order Split-Plot Designs
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey
2007-01-01
Restricting the randomization of hard-to-change factors in industrial experiments is often performed by employing a split-plot design structure. From an economic perspective, these designs minimize the experimental cost by reducing the number of resets of the hard-to-change factors. In this paper, unbalanced designs are considered for cases where the subplots are relatively expensive and the experimental apparatus accommodates an unequal number of runs per whole-plot. We provide construction methods for unbalanced second-order split-plot designs that possess the equivalence estimation optimality property, providing best linear unbiased estimates of the parameters, independent of the variance components. Unbalanced versions of the central composite and Box-Behnken designs are developed. For cases where the subplot cost approaches the whole-plot cost, minimal point designs are proposed and illustrated with a split-plot Notz design.
Smith, Aaron Douglas; Lockman, Nur Ain; Holtzapple, Mark T
2011-06-01
Nutrients are essential for microbial growth and metabolism in mixed-culture acid fermentations. Understanding the influence of nutrient feeding strategies on fermentation performance is necessary for optimization. For a four-bottle fermentation train, five nutrient contacting patterns (single-point nutrient addition to fermentors F1, F2, F3, and F4 and multi-point parallel addition) were investigated. Compared to the traditional nutrient contacting method (all nutrients fed to F1), the near-optimal feeding strategies improved exit yield, culture yield, process yield, exit acetate-equivalent yield, conversion, and total acid productivity by approximately 31%, 39%, 46%, 31%, 100%, and 19%, respectively. There was no statistical improvement in total acid concentration. The traditional nutrient feeding strategy had the highest selectivity and acetate-equivalent selectivity. Total acid productivity depends on carbon-nitrogen ratio.
Wang, Xiao-Lan; Zhan, Ting-Ting; Zhan, Xian-Cheng; Tan, Xiao-Ying; Qu, Xiao-You; Wang, Xin-Yue; Li, Cheng-Rong
2014-01-01
The osmotic pressure of ammonium sulfate solutions has been measured by well-established freezing point osmometry in dilute solutions and by our recently reported air humidity osmometry over a much wider concentration range. Air humidity osmometry cross-validated the theoretical calculations of osmotic pressure based on the Pitzer model at high concentrations by two one-sided tests (TOST) of equivalence with multiple-testing corrections, where no other experimental method could serve as a reference for comparison. Although stricter equivalence criteria were established between the measurements of freezing point osmometry and the calculations based on the Pitzer model at low concentration, air humidity osmometry is the only currently available osmometry applicable to high concentration and serves as an economical addition to standard osmometry.
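The two one-sided test named above declares equivalence when the mean difference between method and model lies inside a prespecified margin, with both one-sided nulls rejected. A minimal sketch follows; for simplicity a normal (z) approximation stands in for the t distribution, and the data and margin are invented, not the paper's:

```python
# Minimal sketch of a two one-sided test (TOST) for equivalence on paired
# differences (e.g., measured minus model-predicted osmotic pressure).
# A normal approximation replaces the t distribution; data are synthetic.
from statistics import NormalDist, mean, stdev
from math import sqrt

def tost_equivalence(diffs, margin, alpha=0.05):
    """Return (p_lower, p_upper, equivalent) for H1: -margin < mean < +margin."""
    n = len(diffs)
    m, se = mean(diffs), stdev(diffs) / sqrt(n)
    z_lower = (m + margin) / se          # test H0: mean <= -margin
    z_upper = (m - margin) / se          # test H0: mean >= +margin
    p_lower = 1.0 - NormalDist().cdf(z_lower)
    p_upper = NormalDist().cdf(z_upper)
    return p_lower, p_upper, max(p_lower, p_upper) < alpha

# Synthetic differences tightly scattered around zero, margin 0.05:
diffs = [0.01, -0.02, 0.00, 0.03, -0.01, 0.02, -0.03, 0.01, 0.00, -0.01]
p_lo, p_hi, ok = tost_equivalence(diffs, margin=0.05)
print(ok)  # prints True
```

Note the asymmetry with ordinary significance testing: here failing to reject either one-sided null means equivalence is *not* shown, which is why the abstract treats passing TOST as a positive validation.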
Relativistic Transverse Gravitational Redshift
NASA Astrophysics Data System (ADS)
Mayer, A. F.
2012-12-01
The parametrized post-Newtonian (PPN) formalism is a tool for quantitative analysis of the weak gravitational field based on the field equations of general relativity. This formalism and its ten parameters provide the practical theoretical foundation for the evaluation of empirical data produced by space-based missions designed to map and better understand the gravitational field (e.g., GRAIL, GRACE, GOCE). Accordingly, mission data is interpreted in the context of the canonical PPN formalism; unexpected, anomalous data are explained as similarly unexpected but apparently real physical phenomena, which may be characterized as "gravitational anomalies," or by various sources contributing to the total error budget. Another possibility, which is typically not considered, is a small modeling error in canonical general relativity. The concept of the idealized point-mass spherical equipotential surface, which originates with Newton's law of gravity, is preserved in Einstein's synthesis of special relativity with accelerated reference frames in the form of the field equations. It was not previously realized that the fundamental principles of relativity invalidate this concept and with it the idea that the gravitational field is conservative (i.e., zero net work is done on any closed path). The ideal radial free fall of a material body from arbitrarily-large range to a point on such an equipotential surface (S) determines a unique escape-velocity vector of magnitude v collinear to the acceleration vector of magnitude g at this point. For two such points on S separated by angle dφ, the Equivalence Principle implies distinct reference frames experiencing inertial acceleration of identical magnitude g in different directions in space. The complete equivalence of these inertially-accelerated frames to their analogous frames at rest on S requires evaluation at instantaneous velocity v relative to a local inertial observer.
Because these velocity vectors are not parallel, a symmetric energy potential exists between the frames that is quantified by the instantaneous Δv = v·dφ between them; in order for either frame to become indistinguishable from the other, such that their respective velocity and acceleration vectors are parallel, a change in velocity is required. While the qualitative features of general relativity imply this phenomenon (i.e., a symmetric potential difference between two points on a Newtonian 'equipotential surface' that is similar to a friction effect), it is not predicted by the field equations due to a modeling error concerning time. This is an error of omission; time has fundamental geometric properties implied by the principles of relativity that are not reflected in the field equations. Where b is the radius and g is the gravitational acceleration characterizing a spherical geoid S of an ideal point-source gravitational field, an elegant derivation that rests on first principles shows that for two points at rest on S separated by a distance d << b, a symmetric relativistic redshift exists between these points of magnitude z = gd²/(bc²), which over 1 km at Earth sea level yields z ≈ 10⁻¹⁷. It can be tested with a variety of methods, in particular laser interferometry. A more sophisticated derivation yields a considerably more complex predictive formula for any two points in a gravitational field.
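The quoted order of magnitude can be checked directly from the formula z = gd²/(bc²) with standard Earth-surface values; only the formula itself is taken from the text, the constants are textbook numbers:

```python
# Order-of-magnitude check of the claimed transverse redshift z = g*d^2/(b*c^2)
# over 1 km at Earth sea level, using standard physical constants.
g = 9.81          # m/s^2, surface gravitational acceleration
b = 6.371e6       # m, Earth radius
c = 2.998e8       # m/s, speed of light
d = 1.0e3         # m, 1 km separation on the geoid

z = g * d**2 / (b * c**2)
print(f"{z:.2e}")  # prints 1.71e-17
```

This confirms the abstract's stated magnitude of ~10⁻¹⁷ for a 1 km baseline, near the edge of current optical-clock and interferometric sensitivity.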
Effective time-independent analysis for quantum kicked systems.
Bandyopadhyay, Jayendra N; Guha Sarkar, Tapomoy
2015-03-01
We present a mapping of potentially chaotic time-dependent quantum kicked systems to an equivalent approximate effective time-independent scenario, whereby the system is rendered integrable. The time evolution is factorized into an initial kick, followed by an evolution dictated by a time-independent Hamiltonian and a final kick. This method is applied to the kicked top model. The effective time-independent Hamiltonian thus obtained does not suffer from spurious divergences encountered if the traditional Baker-Campbell-Hausdorff treatment is used. The quasienergy spectrum of the Floquet operator is found to be in excellent agreement with the energy levels of the effective Hamiltonian for a wide range of system parameters. The density of states for the effective system exhibits sharp peaklike features, pointing towards quantum criticality. The dynamics in the classical limit of the integrable effective Hamiltonian shows remarkable agreement with the nonintegrable map corresponding to the actual time-dependent system in the nonchaotic regime. This suggests that the effective Hamiltonian serves as a substitute for the actual system in the nonchaotic regime at both the quantum and classical level.
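The factorization described — an initial kick, a time-independent evolution, and a final kick — can be written schematically as follows; the symbols (Floquet operator U, kick generator G, effective Hamiltonian H_eff, driving period τ) are generic notation assumed here rather than the paper's own:

```latex
U(\tau) \;=\; e^{-iG}\, e^{-i H_{\mathrm{eff}} \tau}\, e^{+iG}
```

Because the outer factors are a similarity transformation, the quasienergies of U(τ) coincide with the eigenvalues of H_eff up to whatever approximation is made in truncating the expansion, which is why the quasienergy spectrum can be compared directly against the effective energy levels.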
Equivalent Quantum Equations in a System Inspired by Bouncing Droplets Experiments
NASA Astrophysics Data System (ADS)
Borghesi, Christian
2017-07-01
In this paper we study a classical and theoretical system consisting of an elastic medium carrying transverse waves and one point-like region of high elastic medium density, called a concretion. We compute the equation of motion for the concretion as well as the wave equation of this system. We then restrict attention to the case where the concretion is no longer the wave source. The concretion then obeys a general and covariant guidance formula, which in the low-velocity approximation leads to an equivalent de Broglie-Bohm guidance formula. The concretion then moves as if an equivalent quantum potential existed. A strictly equivalent free Schrödinger equation is retrieved, as well as the quantum stationary states in a linear or spherical cavity. We compute the energy (and momentum) of the concretion, naturally defined from the energy (and momentum) density of the vibrating elastic medium. Provided one condition on the amplitude of oscillation is fulfilled, it strikingly appears that the energy and momentum of the concretion are not only written in the same form as in quantum mechanics but also encapsulate equivalent relativistic formulas.
When Is All Understood and Done? The Psychological Reality of the Recognition Point
ERIC Educational Resources Information Center
Bolte, Jens; Uhe, Mechtild
2004-01-01
Using lexical decision, the effects of primes of different length on spoken word recognition were evaluated in three partial repetition priming experiments. Prime length was determined via gating (Experiments 1a and 2a). It was shorter than, equivalent to, or longer than the recognition point (RP), or a complete word. In Experiments 1b and 1c,…
Reversal time of jump-noise magnetization dynamics in nanomagnets via Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Parthasarathy, Arun; Rakheja, Shaloo
2018-06-01
The jump-noise is a nonhomogeneous Poisson process which models thermal effects in magnetization dynamics, with special applications in low temperature escape rate phenomena. In this work, we develop improved numerical methods for Monte Carlo simulation of the jump-noise dynamics and validate the method by comparing the stationary distribution obtained empirically against the Boltzmann distribution. In accordance with the Néel-Brown theory, the jump-noise dynamics display an exponential relaxation toward equilibrium with a characteristic reversal time, which we extract for nanomagnets with uniaxial and cubic anisotropy. We relate the jump-noise dynamics to the equivalent Landau-Lifshitz dynamics up to second order correction for a general energy landscape and obtain the analogous Néel-Brown theory's solution of the reversal time. We find that the reversal time of jump-noise dynamics is characterized by Néel-Brown theory's solution at the energy saddle point for small noise. For large noise, the magnetization reversal due to jump-noise dynamics phenomenologically represents macroscopic tunneling of magnetization.
Spontaneous ignition delay characteristics of hydrocarbon fuel-air mixtures
NASA Technical Reports Server (NTRS)
Lefebvre, A. H.; Freeman, W. G.; Cowell, L. H.
1986-01-01
The influence of pressure on the autoignition characteristics of homogeneous mixtures of hydrocarbon fuels in air is examined. Autoignition delay times are measured for propane, ethylene, methane, and acetylene in a continuous flow apparatus featuring a multi-point fuel injector. Results are presented for mixture temperatures from 670K to 1020K, pressures from 1 to 10 atmospheres, equivalence ratios from 0.2 to 0.7, and velocities from 5 to 30 m/s. Delay time is related to pressure, temperature, and fuel concentration by global reaction theory. The results show variations in global activation energy from 25 to 38 kcal/kg-mol, pressure exponents from 0.66 to 1.21, and fuel concentration exponents from 0.19 to 0.75 for the fuels studied. These results are generally in good agreement with previous studies carried out under similar conditions.
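Global reaction theory correlates delay time with pressure, temperature, and fuel concentration through an Arrhenius-type expression; a sketch with the functional form assumed (τ ∝ P⁻ⁿ φ⁻ᵐ exp(E/RT)) and purely illustrative constants, not the fitted values from the study:

```python
# Illustrative Arrhenius-type global correlation for autoignition delay:
#   tau = A * P**(-n) * phi**(-m) * exp(E / (R * T))
# The functional form follows global reaction theory as described in the
# abstract; A, n, m, and E below are made-up values, not the fitted ones.
from math import exp

R = 1.987e-3  # kcal/(mol*K), gas constant

def ignition_delay_ms(T_K, P_atm, phi, A=1.0e-6, n=1.0, m=0.5, E=30.0):
    """Delay time in ms. E is an activation energy in kcal/mol (illustrative)."""
    return A * P_atm**(-n) * phi**(-m) * exp(E / (R * T_K))

# Delay should shorten with rising temperature at fixed pressure and phi:
t_low = ignition_delay_ms(T_K=800.0, P_atm=5.0, phi=0.5)
t_high = ignition_delay_ms(T_K=1000.0, P_atm=5.0, phi=0.5)
print(t_high < t_low)  # prints True
```

The exponents n and m play the role of the pressure and fuel-concentration exponents reported in the abstract (0.66-1.21 and 0.19-0.75 for the fuels studied), and E the global activation energy.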
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
NASA Astrophysics Data System (ADS)
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
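The underlying idea — that randomizing the sampling instants suppresses aliasing, so a sparse tone above the mean-rate Nyquist limit can still be located — can be sketched generically. This is not the authors' reconstruction algorithm; the frequencies, rates, and jitter model below are illustrative:

```python
# Generic sketch of additive random sampling for a sparse (single-tone) signal.
# A 3.3 kHz tone is sampled at a ~2 kHz *average* rate (mean-rate Nyquist limit
# 1 kHz) with randomized intervals, then located by scanning a frequency grid
# with a direct nonuniform Fourier correlation. All numbers are illustrative.
import cmath
import math
import random

random.seed(1)

f0 = 3300.0                  # Hz, true tone frequency
mean_dt = 1.0 / 2000.0       # s, average sampling interval (2 kHz mean rate)

# Additive random sampling: each interval = mean_dt * U(0.5, 1.5)
t, times = 0.0, []
for _ in range(2000):
    t += mean_dt * random.uniform(0.5, 1.5)
    times.append(t)
x = [math.sin(2 * math.pi * f0 * ti) for ti in times]

def power(f):
    """Magnitude of the nonuniform Fourier sum at frequency f."""
    return abs(sum(xi * cmath.exp(-2j * math.pi * f * ti)
                   for xi, ti in zip(x, times)))

grid = [100.0 * k for k in range(1, 50)]   # 100 Hz .. 4.9 kHz
f_est = max(grid, key=power)
print(f_est)  # prints 3300.0
```

With uniform sampling at the same mean rate the tone would alias onto a lower frequency; the randomized intervals smear the aliases into a noise floor while the true component still accumulates coherently.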
Keene, David J; Mistry, Dipesh; Nam, Julian; Tutton, Elizabeth; Handley, Robert; Morgan, Lesley; Roberts, Emma; Gray, Bridget; Briggs, Andrew; Lall, Ranjit; Chesser, Tim Js; Pallister, Ian; Lamb, Sarah E; Willett, Keith
2016-10-01
Close contact casting (CCC) may offer an alternative to open reduction and internal fixation (ORIF) surgery for unstable ankle fractures in older adults. We aimed to (1) determine if CCC for unstable ankle fractures in adults aged over 60 years resulted in equivalent clinical outcome compared with ORIF, (2) estimate cost-effectiveness to the NHS and society and (3) explore participant experiences. A pragmatic, multicentre, equivalence randomised controlled trial incorporating health economic evaluation and qualitative study. Trauma and orthopaedic departments of 24 NHS hospitals. Adults aged over 60 years with unstable ankle fracture. Those with serious limb or concomitant disease or substantial cognitive impairment were excluded. CCC was conducted under anaesthetic in theatre by surgeons who attended training. ORIF was as per local practice. Participants were randomised in 1 : 1 allocation via remote telephone randomisation. Sequence generation was by random block size, with stratification by centre and fracture pattern. Follow-up was conducted at 6 weeks and, by blinded outcome assessors, at 6 months after randomisation. The primary outcome was the Olerud-Molander Ankle Score (OMAS), a patient-reported assessment of ankle function, at 6 months. Secondary outcomes were quality of life (as measured by the European Quality of Life 5-Dimensions, Short Form questionnaire-12 items), pain, ankle range of motion and mobility (as measured by the timed up and go test), patient satisfaction and radiological measures. In accordance with equivalence trial US Food and Drug Administration guidance, primary analysis was per protocol. We recruited 620 participants, 95 from the pilot and 525 from the multicentre phase, between June 2010 and November 2013. The majority of participants, 579 out of 620 (93%), received the allocated treatment; 52 out of 275 (19%) who received CCC later converted to ORIF because of loss of fracture reduction. 
CCC resulted in equivalent ankle function compared with ORIF at 6 months {OMAS 64.5 points [standard deviation (SD) 22.4 points] vs. OMAS 66.0 points (SD 21.1 points); mean difference -0.65 points, 95% confidence interval (CI) -3.98 to 2.68 points; standardised effect size -0.04, 95% CI -0.23 to 0.15}. There were no differences in quality of life, ankle motion, pain, mobility and patient satisfaction. Infection and/or wound problems were more common with ORIF [29/298 (10%) vs. 4/275 (1%)], as were additional operating theatre procedures [17/298 (6%) vs. 3/275 (1%)]. Malunion was more common with CCC [38/249 (15%) vs. 8/274 (3%); p < 0.001]. Malleolar non-union was lower in the ORIF group [lateral: 0/274 (0%) vs. 8/248 (3%); p = 0.002; medial: 3/274 (1%) vs. 18/248 (7%); p < 0.001]. During the trial, CCC showed modest mean cost savings [NHS mean difference -£644 (95% CI -£1390 to £76); society mean difference -£683 (95% CI -£1851 to £536)]. Estimates showed some imprecision. Incremental quality-adjusted life-years following CCC were no different from ORIF. Over common willingness-to-pay thresholds, the probability that CCC was cost-effective was very high (> 95% from NHS perspective and 85% from societal perspective). Experiences of treatments were similar; both groups endured the impact of fracture, uncertainty regarding future function and the need for further interventions. Assessors at 6 weeks were necessarily not blinded. The learning-effect analysis was inconclusive because of limited CCC applications per surgeon. CCC provides a clinically equivalent outcome to ORIF at reduced cost to the NHS and to society at 6 months. Longer-term follow-up of trial participants is under way to address concerns over potential later complications or additional procedures and their potential to impact on ankle function. 
Further study of the patient factors, radiological fracture patterns and outcomes, treatment responses and prognosis would also contribute to understanding the treatment pathway. This trial is registered as Current Controlled Trials ISRCTN04180738. This project was funded by the National Institute for Health Research Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 20, No. 75. See the NIHR Journals Library website for further project information. This report was developed in association with the National Institute for Health Research Oxford Biomedical Research Unit funding scheme. The pilot phase was funded by the AO Research Foundation.
Thomas, Evan M; Popple, Richard A; Wu, Xingen; Clark, Grant M; Markert, James M; Guthrie, Barton L; Yuan, Yu; Dobelbower, Michael C; Spencer, Sharon A; Fiveash, John B
2014-10-01
Volumetric modulated arc therapy (VMAT) has been shown to be feasible for radiosurgical treatment of multiple cranial lesions with a single isocenter. To investigate whether equivalent radiosurgical plan quality and reduced delivery time could be achieved in VMAT for patients with multiple intracranial targets previously treated with Gamma Knife (GK) radiosurgery. We identified 28 GK treatments of multiple metastases. These were replanned for multiarc and single-arc, single-isocenter VMAT (RapidArc) in Eclipse. The prescription for all targets was standardized to 18 Gy. Each plan was normalized for 100% prescription dose to 99% to 100% of target volume. Plan quality was analyzed by target conformity (Radiation Therapy Oncology Group and Paddick conformity indices [CIs]), dose falloff (area under the dose-volume histogram curve), as well as the V4.5, V9, V12, and V18 isodose volumes. Other end points included beam-on and treatment time. Compared with GK, multiarc VMAT improved median plan conformity (CIVMAT = 1.14, CIGK = 1.65; P < .001) with no significant difference in median dose falloff (P = .269), 12 Gy isodose volume (P = .500), or low isodose spill (P = .49). Multiarc VMAT plans were associated with markedly reduced treatment time. A predictive model of the 12 Gy isodose volume as a function of tumor number and volume was also developed. For multiple target stereotactic radiosurgery, 4-arc VMAT produced clinically equivalent conformity, dose falloff, 12 Gy isodose volume, and low isodose spill, and reduced treatment time compared with GK. Because of its similar plan quality and increased delivery efficiency, single-isocenter VMAT radiosurgery may constitute an attractive alternative to multi-isocenter radiosurgery for some patients.
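The two conformity indices named in the abstract have standard definitions (RTOG CI = prescription isodose volume / target volume; Paddick CI = TV_PIV² / (TV × PIV)); a minimal sketch using those standard formulas, with made-up plan volumes rather than data from the study:

```python
# Standard conformity index definitions used in radiosurgery plan evaluation.
# TV: target volume; PIV: prescription isodose volume; TV_PIV: their overlap.
# The formulas are the standard RTOG and Paddick definitions; the example
# volumes are invented for illustration.

def rtog_ci(piv_cc, tv_cc):
    """RTOG conformity index: PIV / TV (1.0 ideal; >1 indicates dose spill)."""
    return piv_cc / tv_cc

def paddick_ci(tv_piv_cc, tv_cc, piv_cc):
    """Paddick conformity index: TV_PIV^2 / (TV * PIV) (1.0 ideal)."""
    return tv_piv_cc**2 / (tv_cc * piv_cc)

# Hypothetical plan: 2.0 cc target, 2.4 cc prescription volume, 1.9 cc overlap.
print(round(rtog_ci(2.4, 2.0), 3))            # prints 1.2
print(round(paddick_ci(1.9, 2.0, 2.4), 3))    # prints 0.752
```

The Paddick index penalizes both target under-coverage and normal-tissue spill, which is why it is the stricter of the two when plans are compared, as in the abstract's CI_VMAT vs. CI_GK comparison.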
NASA Astrophysics Data System (ADS)
Couceiro, Miguel; Crespo, Paulo; Marques, Rui F.; Fonte, Paulo
2014-06-01
Scatter Fraction (SF) and Noise Equivalent Count Rate (NECR) of a 2400 mm wide axial field-of-view Positron Emission Tomography (PET) system based on Resistive Plate Chamber (RPC) detectors with 300 ps Time Of Flight (TOF) resolution were studied by simulation using Geant4. The study followed the NEMA NU2-2001 standards, using the standard 700 mm long phantom and an axially extended one with 1800 mm, modeling the foreseeable use of this PET system. Data were processed based on the actual RPC readout, which requires a 0.2 μs non-paralyzable dead time for timing signals and a paralyzable dead time (τps) for position signals. For NECR, the best coincidence trigger consisted of a multiple time window coincidence sorter retaining single coincidence pairs (involving only two photons) and all possible coincidence pairs obtained from multiple coincidences, keeping only those for which the direct TOF-reconstructed point falls inside a tight region surrounding the phantom. For the 700 mm phantom, the SF was 51.8% and, with τps = 3.0 μs, the peak NECR was 167 kcps at 7.6 kBq/cm3. Using τps = 1.0 μs the NECR was 349 kcps at 7.6 kBq/cm3, and no peak was found. For the 1800 mm phantom, the SF was slightly higher, and the NECR curves were identical to those obtained with the standard phantom, but shifted to lower activity concentrations. Despite the higher SF, the NECR values obtained indicate that the proposed scanner is expected to outperform current commercial PET systems.
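The NEMA count-rate quantities used here follow standard definitions: SF = S/(S + T) and NECR = T²/(T + S + R), with non-paralyzable and paralyzable dead-time models for the timing and position channels. A minimal sketch; the rates passed in are assumed example inputs, not values from the study:

```python
import math

def scatter_fraction(scattered, trues):
    # NEMA NU 2 scatter fraction: S / (S + T)
    return scattered / (scattered + trues)

def necr(trues, scattered, randoms):
    # noise equivalent count rate: T^2 / (T + S + R)
    return trues ** 2 / (trues + scattered + randoms)

def nonparalyzable_observed(true_rate, tau):
    # timing channel: non-paralyzable dead time tau (seconds)
    return true_rate / (1.0 + true_rate * tau)

def paralyzable_observed(true_rate, tau):
    # position channel: paralyzable dead time tau (seconds)
    return true_rate * math.exp(-true_rate * tau)

print(necr(100.0, 50.0, 50.0))               # 50.0
print(round(scatter_fraction(51.8, 48.2), 3))  # 0.518
```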
Electrowetting on dielectric: experimental and model study of oil conductivity on rupture voltage
NASA Astrophysics Data System (ADS)
Zhao, Qing; Tang, Biao; Dong, Baoqin; Li, Hui; Zhou, Rui; Guo, Yuanyuan; Dou, Yingying; Deng, Yong; Groenewold, Jan; Henzen, Alexander Victor; Zhou, Guofu
2018-05-01
Electrowetting on dielectric devices use conducting (water) and insulating (oil) liquid phases in conjunction on a dielectric layer. In these devices, the wetting properties of the liquid phases can be manipulated by applying an electric field. The electric field can rupture the initially flat oil film and promotes further dewetting of the oil. Here, we investigate a problem in the operation of electrowetting on dielectric caused by a finite conductivity of the oil. In particular, we find that the voltage at which the oil film ruptures is sensitive to the application of relatively low DC voltages prior to switching. Here, we systematically investigate this dependence using controlled driving schemes. The mechanism behind these history effects points to charge transport processes in the dielectric and the oil, which can be modeled and characterized by a decay time. To quantify the effects, the typical response timescales have been measured with a high-speed video camera. The results have been reproduced in simulations. In addition, a simplified yet accurate equivalent circuit model is developed to analyze larger data sets more conveniently. The experimental data support the hypothesis that each pixel can be characterized by a single decay time. We studied an ensemble of pixels and found that they showed a rather broad distribution of decay times with an average value of about 440 ms. This decay time can be interpreted as a discharge timescale of the oil, not to be confused with discharge of the entire system which is generally much faster (<1 ms). Through the equivalent circuit model, we also found that variations in the fluoropolymer (FP) conductivity cannot explain the distribution of decay times, while variations in oil conductivity can.
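The single-decay-time hypothesis amounts to fitting an RC-type exponential discharge to each pixel's response. A toy sketch: the 0.44 s decay time below is taken from the abstract's reported average, but the voltage samples are invented for illustration:

```python
import math

def discharge_voltage(v0, t, tau):
    # RC-type discharge of the oil/dielectric stack: V(t) = V0 * exp(-t/tau)
    return v0 * math.exp(-t / tau)

def estimate_decay_time(times, voltages):
    # log-linear least-squares fit of V(t) = V0 * exp(-t / tau)
    n = len(times)
    ys = [math.log(v) for v in voltages]
    tm = sum(times) / n
    ym = sum(ys) / n
    slope = (sum((t - tm) * (y - ym) for t, y in zip(times, ys))
             / sum((t - tm) ** 2 for t in times))
    return -1.0 / slope

# invented noiseless samples decaying with the 440 ms average reported above
ts = [i * 0.1 for i in range(10)]
vs = [discharge_voltage(5.0, t, 0.44) for t in ts]
print(round(estimate_decay_time(ts, vs), 6))  # 0.44
```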
29 CFR 1915.56 - Arc welding and cutting.
Code of Federal Regulations, 2011 CFR
2011-07-01
... or other equivalent insulation. (c) Ground returns and machine grounding. (1) A ground return cable... generation of an arc, sparks or heat at any point shall cause rejection of the structure as a ground circuit...
NASA Astrophysics Data System (ADS)
Bellver-Cebreros, Consuelo; Rodriguez-Danta, Marcelo
2009-03-01
An apparently unnoticed analogy between the torque-free motion of a rotating rigid body about a fixed point and the propagation of light in anisotropic media is stated. First, a new plane construction for visualizing this torque-free motion is proposed. This method uses an intrinsic representation alternative to angular momentum and independent of the modulus of angular velocity ω. The equivalence between this plane construction and the well-known Poinsot's three-dimensional graphical procedure is also shown. From this equivalence, analogies have been found between the general plane wave equation (relation of dispersion) in anisotropic media and basic equations of torque-free motion of a rigid body about a fixed point. These analogies allow reciprocal transfer of results between optics and mechanics and, as an example, reinterpretation of the internal conical refraction phenomenon in biaxial media is carried out. This paper is intended as an interdisciplinary application of analogies for students and teachers in the context of intermediate physics courses at university level.
A formal and data-based comparison of measures of motor-equivalent covariation.
Verrel, Julius
2011-09-15
Different analysis methods have been developed for assessing motor-equivalent organization of movement variability. In the uncontrolled manifold (UCM) method, the structure of variability is analyzed by comparing goal-equivalent and non-goal-equivalent variability components at the level of elemental variables (e.g., joint angles). In contrast, in the covariation by randomization (CR) approach, motor-equivalent organization is assessed by comparing variability at the task level between empirical and decorrelated surrogate data. UCM effects can be due to both covariation among elemental variables and selective channeling of variability to elemental variables with low task sensitivity ("individual variation"), suggesting a link between the UCM and CR method. However, the precise relationship between the notion of covariation in the two approaches has not been analyzed in detail yet. Analysis of empirical and simulated data from a study on manual pointing shows that in general the two approaches are not equivalent, but the respective covariation measures are highly correlated (ρ > 0.7) for two proposed definitions of covariation in the UCM context. For one-dimensional task spaces, a formal comparison is possible and in fact the two notions of covariation are equivalent. In situations in which individual variation does not contribute to UCM effects, for which necessary and sufficient conditions are derived, this entails the equivalence of the UCM and CR analysis. Implications for the interpretation of UCM effects are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
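The UCM decomposition described above projects joint-level deviations onto the null space and row space of the task Jacobian, yielding the goal-equivalent and non-goal-equivalent variance components. A minimal numerical sketch; the Jacobian and deviation data are contrived for illustration:

```python
import numpy as np

def ucm_decompose(joint_devs, jacobian):
    """Split joint-space deviations into goal-equivalent variability
    (GEV, null space of the task Jacobian) and non-goal-equivalent
    variability (NGEV, row space), normalized per dimension and trial."""
    _, s, vt = np.linalg.svd(jacobian)
    rank = int(np.sum(s > 1e-10))
    null_basis = vt[rank:].T       # spans task-irrelevant directions
    range_basis = vt[:rank].T      # spans task-relevant directions
    n_trials = len(joint_devs)
    gev = np.sum((joint_devs @ null_basis) ** 2) / (null_basis.shape[1] * n_trials)
    ngev = np.sum((joint_devs @ range_basis) ** 2) / (range_basis.shape[1] * n_trials)
    return gev, ngev

# contrived 2-joint example: the task depends only on the first joint,
# and all variability lies along the second joint
J = np.array([[1.0, 0.0]])
devs = np.array([[0.0, 1.0], [0.0, -1.0]])
gev, ngev = ucm_decompose(devs, J)
print(gev, ngev)  # all variance is goal-equivalent
```

A synergy index such as (GEV − NGEV)/(GEV + NGEV) then summarizes how strongly variance is channeled into task-irrelevant directions.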
Linear decomposition approach for a class of nonconvex programming problems.
Shen, Peiping; Wang, Chunfeng
2017-01-01
This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it offers a distinct approach to solving the problem with a reduced running time.
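The decomposition idea can be illustrated on a toy bilinear problem: fixing the complicating variable at each grid point leaves a one-dimensional LP, whose optimum lies at an interval endpoint. This sketch is illustrative only and does not reproduce the paper's algorithm:

```python
def solve_1d_lp(c, lo, hi):
    # linear objective c*x over [lo, hi]: the optimum sits at an endpoint
    return (lo, c * lo) if c * lo <= c * hi else (hi, c * hi)

def grid_decompose_min(f_coeff, x_bounds, t_grid):
    """For each grid value of the complicating variable t, the objective
    becomes linear in x; solve each trivial LP and keep the best result
    (near-optimal for a fine enough grid)."""
    best = None
    for t in t_grid:
        x, val = solve_1d_lp(f_coeff(t), *x_bounds)
        if best is None or val < best[0]:
            best = (val, x, t)
    return best

# hypothetical bilinear objective f(x, t) = x * t on [-1, 1] x [-1, 1]
print(grid_decompose_min(lambda t: t, (-1.0, 1.0),
                         [-1.0, -0.5, 0.0, 0.5, 1.0]))  # (-1.0, 1.0, -1.0)
```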
gallon equivalent of natural gas at the time fuel is dispensed or delivered into the tank of a motor vehicle. A gasoline gallon equivalent is equal to 5.66 lbs. of CNG and a diesel gallon equivalent is equal
Space based optical staring sensor LOS determination and calibration using GCPs observation
NASA Astrophysics Data System (ADS)
Chen, Jun; An, Wei; Deng, Xinpu; Yang, Jungang; Sha, Zhichao
2016-10-01
Line of sight (LOS) attitude determination and calibration is a key prerequisite for tracking and locating targets in space based infrared (IR) surveillance systems (SBIRS), and the LOS determination and calibration of a staring sensor is one of the main difficulties. This paper provides a novel methodology for removing staring sensor bias through the use of Ground Control Points (GCPs) detected in the background field of the sensor. Based on study of the imaging model and characteristics of the staring sensor of the SBIRS geostationary earth orbit (GEO) part, a real time LOS attitude determination and calibration algorithm using landmark control points is proposed. The factors contributing to staring sensor LOS attitude error (including thermal distortion, assembly error, and so on) are modeled as an equivalent bias angle of the LOS attitude. By establishing the observation equation of the GCPs and the state transition equation of the bias angle, and using an extended Kalman filter (EKF), real time estimation of the bias angle and high precision sensor LOS attitude determination and calibration are achieved. The simulation results show that the precision and timeliness of the proposed algorithm meet the requirements of target tracking and location in space based infrared surveillance systems.
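In the simplest scalar case, the bias-angle estimation reduces to a Kalman filter with a random-walk bias state observed through GCP residuals. The following is a 1-D stand-in for the paper's EKF; the noise covariances q and r are invented for illustration:

```python
def kalman_bias_estimate(measurements, q=1e-8, r=1e-2):
    """Scalar Kalman filter with a random-walk bias state:
    x_k = x_{k-1} + w_k (process noise q), observed through GCP
    residuals z_k = x_k + v_k (measurement noise r)."""
    x, p = 0.0, 1.0          # initial bias estimate and variance
    for z in measurements:
        p += q               # predict: the bias may drift slowly
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with the GCP residual
        p *= 1.0 - k
    return x

# converges to the constant bias hidden in the residual stream
print(round(kalman_bias_estimate([0.5] * 200), 4))  # 0.5
```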
Predicting the Lifetime of Dynamic Networks Experiencing Persistent Random Attacks.
Podobnik, Boris; Lipic, Tomislav; Horvatic, Davor; Majdandzic, Antonio; Bishop, Steven R; Eugene Stanley, H
2015-09-21
Estimating the critical points at which complex systems abruptly flip from one state to another is one of the remaining challenges in network science. Due to lack of knowledge about the underlying stochastic processes controlling critical transitions, it is widely considered difficult to determine the location of critical points for real-world networks, and it is even more difficult to predict the time at which these potentially catastrophic failures occur. We analyse a class of decaying dynamic networks experiencing persistent failures in which the magnitude of the overall failure is quantified by the probability that a potentially permanent internal failure will occur. When the fraction of active neighbours is reduced to a critical threshold, cascading failures can trigger a total network failure. For this class of network we find that the time to network failure, which is equivalent to network lifetime, is inversely dependent upon the magnitude of the failure and logarithmically dependent on the threshold. We analyse how permanent failures affect network robustness using network lifetime as a measure. These findings provide new methodological insight into system dynamics and, in particular, into the dynamic processes of networks. We illustrate the network model by selected examples from biology and social science.
Equivalent Mass versus Life Cycle Cost for Life Support Technology Selection
NASA Technical Reports Server (NTRS)
Jones, Harry
2003-01-01
The decision to develop a particular life support technology or to select it for flight usually depends on the cost to develop and fly it. Other criteria - performance, safety, reliability, crew time, and risk - are considered, but cost is always an important factor. Because launch cost accounts for most of the cost of planetary missions, and because launch cost is directly proportional to the mass launched, equivalent mass has been used instead of cost to select life support technology. The equivalent mass of a life support system includes the estimated masses of the hardware and of the pressurized volume, power supply, and cooling system that the hardware requires. The equivalent mass is defined as the total payload launch mass needed to provide and support the system. An extension of equivalent mass, Equivalent System Mass (ESM), has been established for use in Advanced Life Support. A crew time mass-equivalent and sometimes other non-mass factors are added to equivalent mass to create ESM. Equivalent mass is an estimate of the launch cost only. For earth orbit rather than planetary missions, the launch cost is usually exceeded by the cost of Design, Development, Test, and Evaluation (DDT&E). Equivalent mass is used only in life support analysis. Life Cycle Cost (LCC) is much more commonly used. LCC includes DDT&E, launch, and operations costs. Since LCC includes launch cost, it is always a more accurate cost estimator than equivalent mass. The relative costs of development, launch, and operations vary depending on the mission design, destination, and duration. Since DDT&E or operations may cost more than launch, LCC may give a more accurate cost ranking than equivalent mass. To be sure of identifying the lowest cost technology for a particular mission, we should use LCC rather than equivalent mass.
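The two cost estimators compared above can be sketched as simple formulas. The equivalency factors and cost figures below are placeholders, not mission-specific values:

```python
def equivalent_system_mass(hardware_kg, volume_m3, power_kw,
                           cooling_kw, crew_hr,
                           v_eq=66.7, p_eq=87.0, c_eq=60.0, ct_eq=1.0):
    """ESM: hardware mass plus mass-equivalents of pressurized volume,
    power, cooling and crew time. The equivalency factors (kg per m^3,
    per kW, per kW and per hour) are illustrative placeholders."""
    return (hardware_kg + volume_m3 * v_eq + power_kw * p_eq
            + cooling_kw * c_eq + crew_hr * ct_eq)

def life_cycle_cost(ddte, launch_cost_per_kg, launched_mass_kg, operations):
    # LCC = development (DDT&E) + launch + operations
    return ddte + launch_cost_per_kg * launched_mass_kg + operations

print(equivalent_system_mass(100.0, 1.0, 1.0, 1.0, 10.0))  # 323.7
```

The contrast in the abstract shows up directly: ESM prices only the launched mass, while LCC also carries the DDT&E and operations terms, which can dominate for earth-orbit missions.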
Gill, James R.; Cobban, William Aubrey
1973-01-01
During Late Cretaceous time a broad north-trending epicontinental sea covered much of the western interior of North America and extended from the Gulf of Mexico to the Arctic Ocean. The sea was bounded on the west by a narrow, unstable, and constantly rising cordillera which extended from Central America to Alaska and which separated the sea from Pacific oceanic waters. The east margin of the sea was bounded by the low-lying stable platform of the central part of the United States. Rocks of the type Montana Group in Montana and equivalent rocks in adjacent States, which consist of eastward-pointing wedges of shallow-water marine and nonmarine strata that enclose westward-pointing wedges of fine-grained marine strata, were deposited in and marginal to this sea. These rocks range in age from middle Santonian to early Maestrichtian and represent a time span of about 14 million years. Twenty-nine distinctive ammonite zones, each with a time span of about half a million years, characterize the marine strata. Persistent beds of bentonite in the transgressive part of the Claggett and Bearpaw Shales of Montana and equivalent rocks elsewhere represent periods of explosive volcanism and perhaps concurrent subsidence along the west shore in the vicinity of the Elkhorn Mountains and the Deer Creek volcanic fields in Montana. Seaward retreat of strandlines, marked by deposition of the Telegraph Creek, Eagle, Judith River, and Fox Hills Formations in Montana and the Mesaverde Formation in Wyoming, may be attributed to uplift in near-coastal areas and to an increase in volcaniclastic rocks delivered to the sea. Rates of transgression and regression determined for the Montana Group in central Montana reveal that the strandline movement was more rapid during times of transgression.
The regression of the Telegraph Creek and Eagle strandlines averaged about 50 miles per million years compared with a rate of about 95 miles per million years for the advance of the strandline during Claggett time. The Judith River regression averaged about 60 miles per million years compared with movement of the strandline during the Bearpaw advance of about 70 miles per million years. The final retreat of marine waters from Montana, marked by the Fox Hills regression, was about 35 miles per million years at first, but near the end of the regression it accelerated to a rate of about 500 miles per million years. Rates of sedimentation range from less than 50 feet per million years in the eastern parts of North and South Dakota to at least 2,500 feet in western Wyoming. The low rates in the Dakotas correspond well with modern rates in the open ocean, and the rates in western Wyoming approach the rate of present coastal sedimentation.
NULL Convention Floating Point Multiplier
Albert, Anitha Juliette; Ramachandran, Seshasayanan
2015-01-01
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation. PMID:25879069
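A truncating (round-toward-zero) IEEE 754 single-precision multiply, mirroring the paper's no-rounding design choice, can be sketched in software as follows. This handles only normalized, finite inputs and is of course not the NULL-convention-logic hardware itself:

```python
import struct

def f32_bits(x):
    # raw binary32 encoding of a Python float
    return struct.unpack('>I', struct.pack('>f', x))[0]

def f32_from_bits(b):
    return struct.unpack('>f', struct.pack('>I', b))[0]

def fp32_multiply(a, b):
    """Truncating IEEE 754 binary32 multiply for normalized finite
    inputs: XOR signs, add biased exponents, multiply 24-bit
    significands, normalize, then drop (not round) the low bits."""
    ba, bb = f32_bits(a), f32_bits(b)
    sign = (ba >> 31) ^ (bb >> 31)
    ea, eb = (ba >> 23) & 0xFF, (bb >> 23) & 0xFF
    ma = (ba & 0x7FFFFF) | 0x800000      # restore the implicit leading 1
    mb = (bb & 0x7FFFFF) | 0x800000
    prod = ma * mb                       # 48-bit significand product
    exp = ea + eb - 127                  # re-bias the exponent sum
    if prod & (1 << 47):                 # product in [2, 4): renormalize
        prod >>= 1
        exp += 1
    mant = (prod >> 23) & 0x7FFFFF       # truncate low bits (no rounding)
    return f32_from_bits((sign << 31) | (exp << 23) | mant)

print(fp32_multiply(1.5, 2.0))  # 3.0
```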
Araújo Dos Santos Júnior, José; Dos Santos Amaral, Romilton; Simões Cezar Menezes, Rômulo; Reinaldo Estevez Álvarez, Juan; Marques do Nascimento Santos, Josineide; Herrero Fernández, Zahily; Dias Bezerra, Jairo; Antônio da Silva, Alberto; Francys Rodrigues Damascena, Kennedy; de Almeida Maciel Neto, José
2017-07-01
One of the main natural uranium deposits in Brazil is located in the municipality of Espinharas, in the State of Paraíba. This area may present high levels of natural radioactivity due to the presence of these radionuclides. Since this is a populated area, there is a need for a radioecological dosimetry assessment to investigate the possible risks to the population. Based on this problem, the objective of this study was to estimate the outdoor environmental effective dose in inhabited areas influenced by the uranium deposit, using the specific activities of equivalent uranium, equivalent thorium and 40K, together with conversion factors. The environmental assessment was carried out using gamma spectroscopy at sixty-two points within the municipality, with a high-resolution gamma spectrometer with an HPGe semiconductor detector and a Be window. The results obtained ranged from 0.01 to 19.11 mSv y-1, with an average of 2.64 mSv y-1. These levels are, on average, 23 times higher than UNSCEAR reference levels and up to 273 times the reference value of the earth's crust for primordial radionuclides. Therefore, given the high radioactivity levels found, we conclude that there is a need for further investigation to evaluate the levels of radioactivity in indoor environments, which will reflect more closely the risks to the local population. Copyright © 2017 Elsevier Inc. All rights reserved.
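The dose estimate described follows the usual UNSCEAR recipe: an absorbed dose rate computed from activity concentrations via conversion coefficients, then scaled by exposure hours, outdoor occupancy and a Sv/Gy factor. A sketch assuming the UNSCEAR 2000 outdoor coefficients (0.462, 0.604 and 0.0417 nGy/h per Bq/kg for eqU, eqTh and 40K; occupancy 0.2; 0.7 Sv/Gy); the activity inputs are invented:

```python
def absorbed_dose_rate(a_equ, a_eqth, a_k40):
    # outdoor absorbed dose rate in air (nGy/h) from activity
    # concentrations in Bq/kg, UNSCEAR 2000 coefficients
    return 0.462 * a_equ + 0.604 * a_eqth + 0.0417 * a_k40

def annual_effective_dose(dose_rate_ngy_h, occupancy=0.2, sv_per_gy=0.7):
    # mSv/y: dose rate x 8760 h/y x outdoor occupancy x Sv/Gy factor
    return dose_rate_ngy_h * 8760 * occupancy * sv_per_gy * 1e-6

# invented activity concentrations, roughly world-average magnitudes
rate = absorbed_dose_rate(35.0, 30.0, 400.0)
print(round(rate, 2))                         # 50.97
print(round(annual_effective_dose(rate), 4))  # 0.0625
```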
2017-12-19
information is accumulated (drift rate). Note that decision time is not equivalent to reaction time because reaction time includes non-decision time...countermeasures are not used. The magnitude of the performance loss is nearly equivalent to that measured using the psychomotor vigilance test, which is...model non-decision time parameter for Modafinil and No Modafinil groups as a function of measurement time for the 3D task
NASA Technical Reports Server (NTRS)
Kofskey, M. G.; Nusbaum, W. J.
1978-01-01
A cold air experimental investigation of a free power turbine designed for a 112-kW automotive gas-turbine engine was made over a range of speeds from 0 to 130 percent of design equivalent speed and over a range of pressure ratios from 1.11 to 2.45. Results are presented in terms of equivalent power, torque, mass flow, and efficiency for the design power point setting of the variable stator.
Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian
2017-01-01
Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work. PMID:28718788
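Reduced to its essentials, the stacking pipeline registers each short-exposure frame to the first and averages. The toy sketch below uses a brute-force integer translation search in place of the paper's FAST/IMU-assisted homography estimation, which it does not reproduce:

```python
import numpy as np

def best_shift(ref, img, max_shift=2):
    """Brute-force integer-pixel registration: the (dy, dx) translation
    minimizing the sum of squared differences over a cropped interior."""
    best = (0, 0, float('inf'))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            err = np.sum((ref[2:-2, 2:-2] - shifted[2:-2, 2:-2]) ** 2)
            if err < best[2]:
                best = (dy, dx, err)
    return best[:2]

def stack(images):
    """Register each frame to the first and average, approximating one
    long exposure from several short ones (translation-only toy model)."""
    ref = images[0].astype(float)
    acc = ref.copy()
    for img in images[1:]:
        dy, dx = best_shift(ref, img.astype(float))
        acc += np.roll(np.roll(img.astype(float), dy, axis=0), dx, axis=1)
    return acc / len(images)
```

In the real system the registration transform is a homography induced by the UAV's 3D rotation between poses, which is what makes the gyrometer recalibration by-product possible.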
Exposure of the surgeon's hands to radiation during hand surgery procedures.
Żyluk, Andrzej; Puchalski, Piotr; Szlosser, Zbigniew; Dec, Paweł; Chrąchol, Joanna
2014-01-01
The objective of the study was to assess the time of exposure of the surgeon's hands to radiation and to calculate the equivalent dose absorbed during surgery of hand and wrist fractures under C-arm fluoroscope guidance. The necessary data specified by the objective of the study were acquired from operations on 287 patients with fractures of the fingers, metacarpals, wrist bones and distal radius. 218 operations (78%) were percutaneous procedures and 60 (22%) were performed by the open method. Data on the time of exposure and dose of radiation were acquired from the display of the fluoroscope, where they were automatically generated. These data were assigned to the individual patient, type of fracture, method of surgery and the operating surgeon. Fixations of distal radial fractures required longer times of radiation exposure (mean 61 sec.) than fractures of the wrist/metacarpals and fingers (38 and 32 sec., respectively), which was associated with absorption of significantly higher equivalent doses. Fixations of distal radial fractures by the open method were associated with statistically significantly higher equivalent doses (0.41 mSv) than percutaneous procedures (0.3 mSv). Fixations of wrist and metacarpal bone fractures by the open method were associated with lower equivalent doses (0.34 mSv) than percutaneous procedures (0.37 mSv), but the difference was not significant. Fixations of finger fractures by the open method were associated with lower equivalent doses (0.13 mSv) than percutaneous procedures (0.24 mSv), the difference being statistically non-significant. Statistically significant differences in exposure time and equivalent doses were noted between the 4 surgeons participating in the study, but no definitive relationship was found between these parameters and the surgeons' employment time. 1. Hand surgery procedures under fluoroscopic guidance are associated with mild exposure of the surgeons' hands to radiation. 2. The equivalent dose was related to the type of fracture, the operative technique and, to some degree, the time of employment of the surgeon.
Cuntz-Krieger algebras representations from orbits of interval maps
NASA Astrophysics Data System (ADS)
Correia Ramos, C.; Martins, Nuno; Pinto, Paulo R.; Sousa Ramos, J.
2008-05-01
Let f be an expansive Markov interval map with finite transition matrix Af. Then for every point we obtain an irreducible representation of the associated Cuntz-Krieger algebra and show that two such representations are unitarily equivalent if and only if the points belong to the same generalized orbit. The restriction of each representation to the gauge part of the algebra is decomposed into irreducible representations, according to the decomposition of the orbit.
29 CFR 1926.351 - Arc welding and cutting.
Code of Federal Regulations, 2011 CFR
2011-07-01
... equivalent insulation. (c) Ground returns and machine grounding. (1) A ground return cable shall have a safe... electrical contact exists at all joints. The generation of an arc, sparks, or heat at any point shall cause...
On the generalized geometry origin of noncommutative gauge theory
NASA Astrophysics Data System (ADS)
Jurčo, Branislav; Schupp, Peter; Vysoký, Jan
2013-07-01
We discuss noncommutative gauge theory from the generalized geometry point of view. We argue that the equivalence between the commutative and semiclassically noncommutative DBI actions is naturally encoded in the generalized geometry of D-branes.
Skyshine at neutron energies less than or equal to 400 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.
1980-10-01
The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2 and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.
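The double integral described can be approximated by a double sum over energy and cosine bins. A minimal sketch with placeholder source and importance functions (the binning and test inputs are invented, not taken from the report):

```python
def dose_equivalent(source, importance, energies, cosines):
    """Skyshine dose estimate: a double sum of the source strength
    S(E, mu) weighted by the adjoint importance I(E, mu) over
    energy bins (ascending) and polar-angle cosine bins (descending)."""
    total = 0.0
    for i in range(len(energies) - 1):
        de = energies[i + 1] - energies[i]
        for j in range(len(cosines) - 1):
            dmu = cosines[j] - cosines[j + 1]   # descending cosine bins
            total += (source(energies[i], cosines[j])
                      * importance(energies[i], cosines[j]) * de * dmu)
    return total

# unit source and unit importance over 2 energy and 2 cosine bins
print(dose_equivalent(lambda e, m: 1.0, lambda e, m: 1.0,
                      [0.0, 1.0, 2.0], [1.0, 0.5, 0.0]))  # 2.0
```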
From random microstructures to representative volume elements
NASA Astrophysics Data System (ADS)
Zeman, J.; Šejnoha, M.
2007-06-01
A unified treatment of random microstructures proposed in this contribution opens the way to efficient solutions of large-scale real world problems. The paper introduces a notion of statistically equivalent periodic unit cell (SEPUC) that replaces in a computational step the actual complex geometries on an arbitrary scale. A SEPUC is constructed such that its morphology conforms with images of real microstructures. Here, the two-point probability function and the lineal path function are employed to classify, from the statistical point of view, the geometrical arrangement of various material systems. Examples of statistically equivalent unit cells constructed for a unidirectional fibre tow, a plain weave textile composite and an irregular-coursed masonry wall are given. A specific result promoting the applicability of the SEPUC as a tool for the derivation of homogenized effective properties that are subsequently used in an independent macroscopic analysis is also presented.
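The two-point probability function S2(r) used to match the SEPUC to real micrographs is, for a binary medium, the probability that two points separated by r both fall in the same phase. A one-dimensional sketch for a periodic binary row (the sample data are invented):

```python
def two_point_probability(row, max_r):
    # S2(r) along a periodic binary row: probability that cell x and
    # cell x+r both belong to phase 1, averaged over all positions x
    n = len(row)
    return [sum(row[i] * row[(i + r) % n] for i in range(n)) / n
            for r in range(max_r + 1)]

# alternating two-phase medium: S2 oscillates with the period
print(two_point_probability([1, 0, 1, 0], 2))  # [0.5, 0.0, 0.5]
```

S2(0) is the phase-1 volume fraction, and for a statistically homogeneous medium S2(r) decays toward the square of that fraction, which is one of the statistics a SEPUC is optimized to reproduce.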
Integrability from point symmetries in a family of cosmological Horndeski Lagrangians
NASA Astrophysics Data System (ADS)
Dimakis, N.; Giacomini, Alex; Paliathanasis, Andronikos
2017-07-01
For a family of Horndeski theories, formulated in terms of a generalized Galileon model, we study the integrability of the field equations in a Friedmann-Lemaître-Robertson-Walker space-time. We are interested in point transformations that leave the field equations invariant. Noether's theorem is applied to determine the conservation laws for a family of models that belong to the same general class. The cosmological scenarios with and without an extra perfect fluid with constant equation-of-state parameter are the two main cases of our study. The de Sitter universe and ideal gas solutions are derived by using the invariant functions of the symmetry generators as a demonstration of our result. Furthermore, we discuss the connection between the different models under conformal transformations, and we show that when the Horndeski theory reduces to a canonical field the same holds for the conformally equivalent theory. Finally, we discuss how singular solutions provide nonsingular universes in a different frame, and vice versa.
D'Orazio, Alessia; Dragonetti, Antonella; Finiguerra, Ivana; Simone, Paola
2015-01-01
The measurement of nursing workload in a sub-intensive unit with the Nine Equivalents of Nursing Manpower use Score. The need to match nursing manpower to patient complexity requires a careful assessment of the nursing workload. The aims were to measure the nursing workload in a sub-intensive care unit and to assess the impact on the nursing workload of patients isolated for multidrug-resistant (MDR) microorganisms and of patients with delirium. From December 1 2014 to March 31 2015 the nursing workload of patients admitted to a sub-intensive unit of a Turin hospital was measured with the Nine Equivalents of Nursing Manpower use Score (NEMS), both original and modified, the latter adding 1 point for isolated patients and for patients with delirium (Richmond Agitation Sedation Scale). Admission and discharge times, and the activities performed in and out of the unit, were also recorded. Two hundred and thirty patients were assessed daily, and no differences were observed between mean NEMS scores on the original and modified scales (December 17.3 vs 18.5; January 19.4 vs 20.2; February 19.9 vs 20.6; March 19.5 vs 20.1). Mean scores did not change across shifts, although on average 8 days a month the scores exceeded 21, identifying an excess workload and the need for a 2:1 patient/nurse ratio. The maximum workload was concentrated between 12:00 and 18:00. The NEMS scale allows the nursing workload to be measured. Isolated patients and patients with delirium did not appear to significantly increase the nursing workload.
Effects of major depression on moment-in-time work performance.
Wang, Philip S; Beck, Arne L; Berglund, Pat; McKenas, David K; Pronk, Nicolaas P; Simon, Gregory E; Kessler, Ronald C
2004-10-01
Although major depression is thought to have substantial negative effects on work performance, the possibility of recall bias limits self-report studies of these effects. The authors used the experience sampling method to address this problem by collecting comparative data on moment-in-time work performance among service workers who were depressed and those who were not depressed. The group studied included 105 airline reservation agents and 181 telephone customer service representatives selected from a larger baseline sample; depressed workers were deliberately oversampled. Respondents were given pagers and experience sampling method diaries for each day of the study. A computerized autodialer paged respondents at random time points. When paged, respondents reported on their work performance in the diary. Moment-in-time work performance was assessed at five random times each day over a 7-day data collection period (35 data points for each respondent). Seven conditions (allergies, arthritis, back pain, headaches, high blood pressure, asthma, and major depression) occurred often enough in this group of respondents to be studied. Major depression was the only condition significantly related to decrements in both of the dimensions of work performance assessed in the diaries: task focus and productivity. These effects were equivalent to approximately 2.3 days absent because of sickness per depressed worker per month of being depressed. Previous studies based on days missed from work significantly underestimate the adverse economic effects associated with depression. Productivity losses related to depression appear to exceed the costs of effective treatment.
Gorczyca, Anna M; Eaton, Charles B; LaMonte, Michael J; Manson, JoAnn E; Johnston, Jeanne D; Bidulescu, Aurelian; Waring, Molly E; Manini, Todd; Martin, Lisa W; Stefanick, Marcia L; He, Ka; Chomistek, Andrea K
2017-05-15
How physical activity (PA) and sitting time may change after a first myocardial infarction (MI), and how such changes are associated with mortality in postmenopausal women, is unknown. Participants included postmenopausal women in the Women's Health Initiative Observational Study, aged 50 to 79 years, who experienced a clinical MI during the study. This analysis included 856 women with adequate data on PA exposure and 533 women for sitting time exposures. Sitting time was self-reported at baseline, year 3, and year 6. PA was self-reported at baseline through year 8. Change in PA and sitting time was calculated as the difference between the cumulative average immediately following MI and the cumulative average immediately preceding MI. The 4 categories of change were: maintained low, decreased, increased, and maintained high. The cut points were ≥7.5 versus <7.5 metabolic equivalent of task hours/week for PA and ≥8 versus <8 h/day for sitting time. Cox proportional hazards models estimated hazard ratios and 95% CIs for all-cause, coronary heart disease, and cardiovascular disease mortality. Compared with women who maintained low PA (referent), the hazard ratio for all-cause mortality was 0.54 (0.34-0.86) for increased PA and 0.52 (0.36-0.73) for maintained high PA. Among women with pre-MI sitting time <8 h/day, every 1 h/day increase in sitting time was associated with a 9% increased risk (hazard ratio=1.09, 95% CI: 1.01, 1.19) of all-cause mortality. Meeting the recommended PA guidelines pre- and post-MI may have a protective role against mortality in postmenopausal women. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
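The four-category classification of change described above can be sketched directly from the stated cut point. A minimal sketch, with a hypothetical helper name; inputs are cumulative-average PA (MET-hours/week) before and after MI:

```python
MET_CUTOFF = 7.5  # MET-hours/week threshold used in the study

def classify_change(pre_mi, post_mi, cutoff=MET_CUTOFF):
    """Assign one of the four change categories used above: maintained low,
    decreased, increased, maintained high."""
    pre_high = pre_mi >= cutoff
    post_high = post_mi >= cutoff
    if pre_high and post_high:
        return "maintained high"
    if pre_high and not post_high:
        return "decreased"
    if not pre_high and post_high:
        return "increased"
    return "maintained low"
```

The same logic, with an 8 h/day cutoff, applies to the sitting-time exposure.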
NASA Astrophysics Data System (ADS)
Bielik, M.; Vozar, J.; Hegedus, E.; Celebration Working Group
2003-04-01
The contribution reports preliminary results of first-arrival P-wave seismic tomographic processing of data measured along the profiles CEL01, CEL04, CEL05, CEL06, CEL09 and CEL11. These profiles were measured in the framework of the seismic project CELEBRATION 2000. Data acquisition and geometric parameters of the processed profiles, the principle of the tomographic processing, the particular processing steps and the program parameters are described. Characteristic data of the observation profiles (shot points, geophone points, total profile lengths, sampling, sensors and record lengths) are given. The fast program package developed by C. Zelt was applied for the tomographic velocity inversion. The process consists of several steps. The first step is the creation of a starting velocity field, for which the calculated arrival times are modelled by the method of finite differences. The next step is the minimization of the differences between the measured and modelled arrival times until the deviations are small. The equivalence problem was mitigated by including a priori information in the starting velocity field, such as the depth to the pre-Tertiary basement and estimates of the overlying sedimentary velocities from well logging and other seismic velocity data. After checking the reciprocal times, the picks were corrected. The final result of the processing is a reliable travel-time curve set consistent with the reciprocal times. Picking of the travel-time curves and enhancement of the signal-to-noise ratio of the seismograms were carried out with the PROMAX program system. The tomographic inversion was carried out by a so-called 3D/2D procedure taking 3D wave propagation into account: a corridor along the profile, containing the off-line shot points and geophone points, was defined, and 3D processing was carried out within this corridor.
The preliminary results indicate seismically anomalous zones within the crust and the uppermost part of the upper mantle in the area comprising the Western Carpathians, the North European Platform, the Pannonian Basin and the Bohemian Massif.
Modeling and calculation of impact friction caused by corner contact in gear transmission
NASA Astrophysics Data System (ADS)
Zhou, Changjiang; Chen, Siyu
2014-09-01
Corner contact in a gear pair causes vibration and noise, which has attracted much attention. However, tooth errors and deformation make it difficult to determine the point at which corner contact occurs and to study the mechanism of tooth impact friction. Based on the mechanism of corner contact, the process is divided into two stages, impact and scratch, and a calculation model including gear equivalent error and combined deformation is established along the line of action. According to the distributive law, the gear equivalent error is synthesized from the base pitch error, normal backlash and tooth profile modification on the line of action. The combined tooth compliance of the first point lying in corner contact before the normal path is inverted along the line of action, on the basis of the theory of engagement and the curve of tooth synthetic compliance and load history. Combining the equivalent error with the combined deflection, a criterion for locating the point of corner contact is derived. The impact positions and forces, from the beginning to the end of corner contact before the normal path, are then calculated accurately. From these results, a lash model during corner contact is founded, and the impact force and frictional coefficient are quantified. A numerical example is given, and the averaged impact friction coefficient based on the presented calculation method is validated. The results can be used to understand the complex mechanism of tooth impact friction, to quantify the friction force and coefficient, and to support precise gear design for tribology.
Output-Sensitive Construction of Reeb Graphs.
Doraiswamy, H; Natarajan, V
2012-01-01
The Reeb graph of a scalar function represents the evolution of the topology of its level sets. This paper describes a near-optimal output-sensitive algorithm for computing the Reeb graph of scalar functions defined over manifolds or non-manifolds in any dimension. Key to the simplicity and efficiency of the algorithm is an alternate definition of the Reeb graph that considers equivalence classes of level sets instead of individual level sets. The algorithm works in two steps. The first step locates all critical points of the function in the domain. Critical points correspond to nodes in the Reeb graph. Arcs connecting the nodes are computed in the second step by a simple search procedure that works on a small subset of the domain that corresponds to a pair of critical points. The paper also describes a scheme for controlled simplification of the Reeb graph and two different graph layout schemes that help in the effective presentation of Reeb graphs for visual analysis of scalar fields. Finally, the Reeb graph is employed in four different applications: surface segmentation, spatially aware transfer function design, visualization of interval volumes, and interactive exploration of time-varying data.
2006-01-01
There is accumulating evidence that animations aid learning of dynamic concepts in cell biology. However, existing animation packages are expensive and difficult to learn, and the subsequent production of even short animations can take weeks to months. Here I outline the principles and sequence of steps for producing high-quality PowerPoint animations in less than a day that are suitable for teaching in high school through college/university. After developing the animation it can be easily converted to any appropriate movie file format using Camtasia Studio for Internet or classroom presentations. Thus anyone who can use PowerPoint has the potential to make animations. Students who viewed the approximately 3-min PowerPoint/Camtasia Studio animation “Calcium and the Dual Signalling Pathway” over 15 min scored significantly higher marks on a subsequent quiz than those who had viewed still graphics with text for an equivalent time. In addition, results from student evaluations provided some data validating the use of such animations in cell biology teaching with some interesting caveats. Information is also provided on how such animations can be modified or updated easily or shared with others who can modify them to fit their own needs. PMID:17012217
Ion beam probing of electrostatic fields
NASA Technical Reports Server (NTRS)
Persson, H.
1979-01-01
The determination of a cylindrically symmetric, time-independent electrostatic potential V in a magnetic field B with the same symmetry by measurements of the deflection of a primary beam of ions is analyzed and substantiated by examples. Special attention is given to the requirements on canonical angular momentum and total energy set by an arbitrary, nonmonotone V, to scaling laws obtained by normalization, and to the analogy with ionospheric sounding. The inversion procedure with the Abel analysis of an equivalent problem with a one-dimensional fictitious potential is used in a numerical experiment with application to the NASA Lewis Modified Penning Discharge. The determination of V from a study of secondary beams of ions with increased charge produced by hot plasma electrons is also analyzed, both from a general point of view and with application to the NASA Lewis SUMMA experiment. Simple formulas and geometrical constructions are given for the minimum energy necessary to reach the axis, the whole plasma, and any point in the magnetic field. The common, simplifying assumption that V is a small perturbation is critically and constructively analyzed; an iteration scheme for successively correcting the orbits and points of ionization for the electrostatic potential is suggested.
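The Abel analysis mentioned above can be illustrated with a minimal numerical inversion. The profile below is the analytic projection of a uniform disc, used as a stand-in "measurement"; the substitution y = sqrt(r^2 + t^2) removes the inverse-square-root singularity of the Abel integral. This is a generic Abel inversion sketch, not the paper's fictitious-potential procedure.

```python
import math

R = 1.0  # assumed outer radius of the cylindrically symmetric region

def measured_profile(y):
    """Line-of-sight projection P(y). Here, the analytic projection of a
    uniform disc f(r) = 1 for r < R serves as a stand-in measurement."""
    return 2.0 * math.sqrt(max(R * R - y * y, 0.0))

def abel_invert(P, r, n=4000):
    """Recover f(r) = -(1/pi) * integral_r^R P'(y) / sqrt(y^2 - r^2) dy
    by midpoint quadrature after the substitution y = sqrt(r^2 + t^2),
    which cancels the singular factor."""
    T = math.sqrt(R * R - r * r)
    h = T / n
    dy = 1e-5
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        y = math.sqrt(r * r + t * t)
        dP = (P(y + dy) - P(y - dy)) / (2.0 * dy)  # numerical P'(y)
        total += dP / y * h
    return -total / math.pi

# For the uniform disc the inversion should recover f(r) close to 1 inside R.
```

Replacing `measured_profile` with tabulated beam-deflection data (suitably interpolated) gives the same inversion structure.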
Chung, Byunghoon; Lee, Hun; Choi, Bong Joon; Seo, Kyung Ryul; Kim, Eung Kwon; Kim, Dae Yune; Kim, Tae-Im
2017-02-01
The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery.
Kitazawa, Y; Smith, P; Sasaki, N; Kotake, S; Bae, K; Iwamoto, Y
2011-01-01
Purpose The purpose of this study is to compare the safety and intraocular pressure (IOP)-lowering efficacy of travoprost/timolol in a benzalkonium chloride (BAK)-free fixed combination preserved with polyquaternium-1 (TRA/TIM BAK-free), with travoprost/timolol-fixed combination preserved with BAK (TRA/TIM), in patients with open-angle glaucoma or ocular hypertension. Methods In this prospective randomized controlled trial, subjects with IOP of at least 22 mm Hg in one or both eyes at 0900 h, and IOP of at least 21 mm Hg in one or both eyes at 1100 h and 1600 h at two eligibility visits were randomly assigned to receive either TRA/TIM BAK-free (n=195) or TRA/TIM (n=193), dosed once daily in the morning (0900 h) for 6 weeks. IOP was assessed at 0900 h, 1100 h, and 1600 h at each scheduled visit (baseline, 2 and 6 weeks after randomization). Results Mean IOP reduction across all visits and time points was 8.0 mm Hg in the TRA/TIM BAK-free group and 8.4 mm Hg in the TRA/TIM group (P=0.0943). The difference in mean IOP between groups ranged from 0.2 to 0.7 mm Hg across visits and time points, with a mean pooled difference of 0.4 mm Hg (95% CI: −0.1 to 0.8), demonstrating equivalence of the two formulations. The most common drug-related adverse event was hyperemia of the eye (ocular hyperemia and conjunctival hyperemia combined), occurring in 11.8% of the TRA/TIM BAK-free group and 13.0% of the TRA/TIM group. Conclusion Travoprost/timolol BAK-free demonstrated equivalence to travoprost/timolol preserved with BAK in efficacy. No clinically relevant differences in the safety profiles of travoprost/timolol BAK-free and travoprost/timolol preserved with BAK were identified. PMID:21701528
Kaur, Primal; Chow, Vincent; Zhang, Nan; Moxness, Michael; Kaliyaperumal, Arunan; Markus, Richard
2017-03-01
To demonstrate pharmacokinetic (PK) similarity of the biosimilar candidate ABP 501 relative to the adalimumab reference product from the USA and European Union (EU), and to evaluate the safety, tolerability and immunogenicity of ABP 501. Randomised, single-blind, single-dose, three-arm, parallel-group study; healthy subjects were randomised to receive ABP 501 (n=67), adalimumab (USA) (n=69) or adalimumab (EU) (n=67) 40 mg subcutaneously. Primary end points were the area under the serum concentration-time curve from time 0 extrapolated to infinity (AUCinf) and the maximum observed concentration (Cmax). Secondary end points included safety and immunogenicity. AUCinf and Cmax were similar across the three groups. The geometric mean ratio (GMR) of AUCinf was 1.11 between ABP 501 and adalimumab (USA), and 1.04 between ABP 501 and adalimumab (EU). The GMR of Cmax was 1.04 between ABP 501 and adalimumab (USA) and 0.96 between ABP 501 and adalimumab (EU). The 90% CIs for the GMRs of AUCinf and Cmax were within the prespecified standard PK equivalence criteria of 0.80 to 1.25. Treatment-related adverse events were mild to moderate and were reported for 35.8%, 24.6% and 41.8% of subjects in the ABP 501, adalimumab (USA) and adalimumab (EU) groups; the incidence of antidrug antibodies (ADAbs) was similar among the study groups. Results of this study demonstrated PK similarity of ABP 501 with adalimumab (USA) and adalimumab (EU) after a single 40-mg subcutaneous injection. No new safety signals with ABP 501 were identified. The safety and tolerability of ABP 501 were similar to the reference products, and similar ADAb rates were observed across the three groups. EudraCT number 2012-000785-37. Published by the BMJ Publishing Group Limited.
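The GMR-and-90%-CI equivalence check described above can be sketched on the log scale. The sketch below uses a simple two-sample normal approximation on log-transformed values; a real bioequivalence analysis would use an ANOVA model, so this is illustrative only.

```python
import math

def geometric_mean_ratio(test_vals, ref_vals, z=1.645):
    """Geometric mean ratio of a PK parameter (e.g. AUCinf or Cmax) between
    test and reference products, with an approximate two-sided 90% CI
    (z = 1.645) computed on the log scale. Returns (GMR, (lo, hi), within
    the 0.80-1.25 equivalence bounds)."""
    log_t = [math.log(v) for v in test_vals]
    log_r = [math.log(v) for v in ref_vals]
    mean_t = sum(log_t) / len(log_t)
    mean_r = sum(log_r) / len(log_r)
    var_t = sum((x - mean_t) ** 2 for x in log_t) / (len(log_t) - 1)
    var_r = sum((x - mean_r) ** 2 for x in log_r) / (len(log_r) - 1)
    diff = mean_t - mean_r
    se = math.sqrt(var_t / len(log_t) + var_r / len(log_r))
    gmr = math.exp(diff)
    lo, hi = math.exp(diff - z * se), math.exp(diff + z * se)
    equivalent = lo >= 0.80 and hi <= 1.25  # prespecified PK equivalence bounds
    return gmr, (lo, hi), equivalent
```

Equivalence is declared only when the whole 90% CI, not just the point estimate, falls inside 0.80 to 1.25.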
Research on the time-temperature-damage superposition principle of NEPE propellant
NASA Astrophysics Data System (ADS)
Han, Long; Chen, Xiong; Xu, Jin-sheng; Zhou, Chang-sheng; Yu, Jia-quan
2015-11-01
To describe the relaxation behavior of NEPE (Nitrate Ester Plasticized Polyether) propellant, we analyzed the equivalent relationships between time, temperature, and damage. We conducted a series of uniaxial tensile tests and employed a cumulative damage model to calculate the damage values for relaxation tests at different strain levels. The damage evolution curve of the tensile test at 100 mm/min was obtained through numerical analysis. Relaxation tests were conducted over a range of temperature and strain levels, and the equivalent relationship between time, temperature, and damage was deduced based on free volume theory. The equivalent relationship was then used to generate predictions of the long-term relaxation behavior of the NEPE propellant. Subsequently, the equivalent relationship between time and damage was introduced into the linear viscoelastic model to establish a nonlinear model which is capable of describing the mechanical behavior of composite propellants under a uniaxial tensile load. The comparison between model prediction and experimental data shows that the presented model provides a reliable forecast of the mechanical behavior of propellants.
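The abstract does not give the paper's time-temperature-damage equivalence in closed form; as a hedged illustration of the time-temperature half of such a relationship, the classical WLF shift used to build viscoelastic master curves can be sketched as follows (the constants are the "universal" WLF values, not fitted NEPE parameters):

```python
import math

def wlf_shift(T, T_ref, C1=17.44, C2=51.6):
    """Classical WLF shift factor log10(a_T). C1, C2 are the 'universal'
    constants for T_ref near the glass transition; a specific propellant
    would need fitted constants. This is standard time-temperature
    superposition, not the paper's time-temperature-damage model."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

def shift_time(t, T, T_ref):
    """Map a time observed at temperature T to reduced time at T_ref."""
    a_T = 10.0 ** wlf_shift(T, T_ref)
    return t / a_T
```

Relaxation curves measured at several temperatures, shifted by their a_T, superpose into a single master curve spanning times far beyond the experimental window.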
14 CFR 25.115 - Takeoff flight path.
Code of Federal Regulations, 2010 CFR
2010-01-01
... each point by a gradient of climb equal to— (1) 0.8 percent for two-engine airplanes; (2) 0.9 percent... reduction in climb gradient may be applied as an equivalent reduction in acceleration along that part of the...
Is there a trade-off between longevity and quality of life in Grossman's pure investment model?
Eisenring, C
2000-12-01
The question is posed whether an individual maximizes lifetime or trades off longevity for quality of life in Grossman's pure investment (PI) model. It is shown that the answer hinges critically on the assumed production function for healthy time. If the production function for healthy time produces a trade-off between life-span and quality of life, one has to solve a sequence of fixed-time problems; the one offering maximal intertemporal utility determines optimal longevity. Comparative static results for optimal longevity in a simplified version of the PI model are derived. The results predict that higher initial endowments of wealth and health, a rise in the wage rate, or improvements in the technology of producing healthy time all increase the optimal length of life. On the other hand, optimal longevity is decreasing in the depreciation and interest rates. From a technical point of view, the paper illustrates that a discrete-time equivalent to the transversality condition for optimal longevity employed in continuous optimal control models does not exist. Copyright 2000 John Wiley & Sons, Ltd.
Alternative nuclear technologies
NASA Astrophysics Data System (ADS)
Schubert, E.
1981-10-01
The lead times required to develop a select group of nuclear fission reactor types and fuel cycles to the point of readiness for full commercialization are compared. Along with lead times, fuel material requirements and comparative costs of producing electric power were estimated. A conservative approach and consistent criteria for all systems were used in estimates of the steps required and the times involved in developing each technology. The impact of the inevitable exhaustion of the low- or reasonable-cost uranium reserves in the United States on the desirability of completing the breeder reactor program, with its favorable long-term effect on fission fuel supplies, is discussed. The long times projected to bring the most advanced alternative converter reactor technologies, the heavy water reactor and the high-temperature gas-cooled reactor, into commercial deployment, compared with the time projected to bring the breeder reactor into equivalent status, suggest that the country's best choice is to develop the breeder. The perceived diversion and proliferation problems with the uranium-plutonium fuel cycle have workable solutions that can be developed, enabling the use of those materials at substantially reduced levels of diversion risk.
The morphology and electrical geometry of rat jaw-elevator motoneurones.
Moore, J A; Appenteng, K
1991-01-01
1. The aim of this work was to quantify both the morphology and the electrical geometry of the dendritic trees of jaw-elevator motoneurones. To do this we made intracellular recordings from identified motoneurones in anaesthetized rats, determined their membrane properties and then filled them with horseradish peroxidase by ionophoretic ejection. Four neurones were subsequently fully reconstructed and the lengths and diameters of all dendritic segments measured. 2. The mean soma diameter was 25 microns and mean dendritic lengths for individual cells ranged from 514 to 773 microns. Dendrites branched on average 9.1 times to produce 10.2 end-terminations. Dendritic segments could be represented as constant-diameter cylinders between branch points. Dendritic surface areas ranged from 1.08 to 2.52 × 10^5 µm^2, and the ratio of dendritic to total surface area ranged from 98 to 99%. 3. At branch points, the ratio of the summed 3/2-power diameters of the daughter dendrites to the 3/2-power diameter of the parent dendrite was exactly 1.0. Therefore individual branch points could be collapsed into a single cylinder. Furthermore, for an individual dendrite the diameter of this cylinder remained constant with increasing electrical distance from the soma. Thus individual dendrites can be represented electrically as cylinders of constant diameter. 4. However, the dendrites of a given neurone terminated at different electrical distances from the soma. The equivalent-cylinder diameter of the combined dendritic tree remained constant over the proximal half and then showed a pronounced reduction over the distal half. This reduction in equivalent diameter could be ascribed to the termination of dendrites at differing electrical distances from the soma. Therefore the complete dendritic tree of these motoneurones is best represented as a cylinder over the proximal half of its electrical length but as a cone over the distal half. PMID:1804966
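The 3/2-power branch-point condition described above (Rall's rule) is easy to check numerically; a minimal sketch:

```python
def rall_ratio(parent_diam, daughter_diams):
    """Ratio of summed daughter diameters^(3/2) to the parent diameter^(3/2).
    A value of 1.0 at every branch point is the condition under which a
    dendritic tree can be collapsed into a single equivalent cylinder
    (Rall's 3/2 power rule)."""
    return sum(d ** 1.5 for d in daughter_diams) / parent_diam ** 1.5
```

For example, a 4 µm parent splitting into two equal daughters satisfies the rule when each daughter is 4^(2/3), roughly 2.52 µm, since 2 × (4^(2/3))^(3/2) = 2 × 4 = 4^(3/2).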
The Voronoi volume and molecular representation of molar volume: equilibrium simple fluids.
Hunjan, Jagtar Singh; Eu, Byung Chan
2010-04-07
The Voronoi volume of simple fluids was previously made use of in connection with volume transport phenomena in nonequilibrium simple fluids. To investigate volume transport phenomena, it is important to develop a method to compute the Voronoi volume of fluids in nonequilibrium. In this work, as a first step toward this goal, we investigate the equilibrium limit of the nonequilibrium Voronoi volume together with its attendant related molar (molal) and specific volumes. It is proved that the equilibrium Voronoi volume is equivalent to the molar (molal) volume. The latter, in turn, is proved equivalent to the specific volume. This chain of equivalences provides an alternative procedure for computing the equilibrium Voronoi volume from the molar volume/specific volume. We also show approximate methods of computing the Voronoi and molar volumes from the information on the pair correlation function. These methods may be employed for their quick estimation, but also provide some aspects of the fluid structure and its relation to the Voronoi volume. The Voronoi volume obtained from computer simulations is fitted to a function of temperature and pressure in the region above the triple point but below the critical point. Since the fitting function is given in terms of reduced variables for the Lennard-Jones (LJ) model and the kindred volumes (i.e., specific and molar volumes) are in essence equivalent to the equation of state, the formula obtained is a reduced equation of state for simple fluids obeying the LJ model potential in the range of temperature and pressure examined and hence can be used for other simple fluids.
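The equivalence between the mean Voronoi volume and the volume per particle can be illustrated with a minimal Monte Carlo sketch: random positions in a periodic box are assigned to their nearest particle, so the per-particle volumes sum exactly to the box volume and their mean is the specific volume per particle. This is a didactic estimator, not the paper's computational method.

```python
import random

def voronoi_volumes_mc(points, box, n_samples=50_000, seed=0):
    """Estimate each particle's Voronoi volume in a cubic periodic box by
    sampling random positions and assigning each to its nearest particle
    under the minimum-image convention. The volumes sum to box**3, so
    their mean equals the volume per particle."""
    rng = random.Random(seed)
    counts = [0] * len(points)
    for _ in range(n_samples):
        x, y, z = (rng.uniform(0, box) for _ in range(3))
        best, best_d2 = 0, float("inf")
        for i, (px, py, pz) in enumerate(points):
            d2 = 0.0
            for a, b in ((x, px), (y, py), (z, pz)):
                d = abs(a - b)
                d = min(d, box - d)  # minimum-image distance component
                d2 += d * d
            if d2 < best_d2:
                best, best_d2 = i, d2
        counts[best] += 1
    cell = box ** 3 / n_samples
    return [c * cell for c in counts]
```

For production work an exact tessellation (e.g. a Voronoi library) would replace the sampling, but the sum rule above is the same.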
Thermal actuation of extinguishing systems
NASA Astrophysics Data System (ADS)
Evans, D. D.
1984-03-01
A brief review of the Response Time Index (RTI) method of characterizing the thermal response of commercial sprinklers and heat detectors is presented. Measured ceiling layer flow temperature and velocity histories from a bedroom fire test are used to illustrate the use of RTI in calculating sprinkler operation times. In small enclosure fires, a quiescent warm gas layer confined by the room walls may accumulate below the ceiling before sprinkler operation. The effects of this warm gas layer on the fire plume and ceiling jet flows are accounted for by substitution of an equivalent point source fire. Encouraging agreement was found between measured ceiling jet temperatures from steady fires in a laboratory scale cylindrical enclosure put into dimensionless form based on parameters of the substitute fire source, and existing empirical correlations from fire tests in large enclosures in which a quiescent warm upper gas layer does not accumulate.
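The RTI model referred to above treats the sprinkler link as a first-order lag: dT_link/dt = (sqrt(u)/RTI)(T_gas - T_link). A minimal sketch, integrating this with constant ceiling-jet conditions (the fire-test calculations described above use measured, time-varying temperature and velocity histories):

```python
def sprinkler_activation_time(rti, activation_temp, ambient_temp,
                              gas_temp, gas_speed, dt=0.1):
    """Forward-Euler integration of the standard RTI link-heating model
        dT_link/dt = sqrt(u) / RTI * (T_gas - T_link)
    with constant gas temperature (deg C) and speed (m/s); rti is in
    (m*s)^0.5. Returns the activation time in seconds, or None if the
    link never reaches activation_temp within 1 hour."""
    t_link = ambient_temp
    t = 0.0
    while t < 3600.0:
        if t_link >= activation_temp:
            return t
        t_link += dt * (gas_speed ** 0.5 / rti) * (gas_temp - t_link)
        t += dt
    return None
```

With constant conditions the model also has a closed form, t = (RTI/sqrt(u)) ln((T_gas - T_0)/(T_gas - T_act)), which the numerical integration should reproduce closely.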
Wood, Colleen L; Clements, Scott A; McFann, Kim; Slover, Robert; Thomas, John F; Wadwa, R Paul
2016-01-01
The American Diabetes Association (ADA) recommends that children with type 1 diabetes (T1D) see a multidisciplinary team and have hemoglobin A1c (A1C) levels measured every 3 months. Patients in rural areas may not follow guidelines because of limited specialty care access. We hypothesized that videoconferencing would result in equivalent A1C compared with in-person visits and increased compliance with ADA recommendations. The Barbara Davis Center (BDC) (Aurora, CO) telemedicine program provides diabetes care to pediatric patients in Casper and Cheyenne, WY, via remote consultation with annual in-person visits. Over 27 months, 70 patients were consented, and 54 patients completed 1 year in the study. Patients were 70% male, with a mean age of 12.1 ± 4.1 years and T1D duration of 5.4 ± 4.1 years. There was no significant change between baseline and 1-year A1C levels for patients with data at both time points. Patients saw diabetes specialists an average of 2.0 ± 1.3 times per year in the year prior to starting telemedicine and 2.9 ± 1.3 times (P < 0.0001) in the year after starting telemedicine. Patients and families missed significantly less school and work time to attend appointments. Our study suggests telemedicine is equivalent to in-person visits to maintain A1C, whereas families increase the number of visits in line with ADA recommendations. Patients and families miss less school and work. Decreased financial burden and increased access may improve overall diabetes care and compliance for rural patients. Further study is needed to detect long-term differences in complications screenings and the financial impact of telemedicine on pediatric diabetes care.
Twedt, D.J.; Smith, W.P.; Cooper, R.J.; Ford, R.P.; Hamel, P.B.; Wiedenfeld, D.A.; Smith, Winston Paul
1993-01-01
Within each of 4 forest stands on Delta Experimental Forest (DEF), 25 points were visited 5 to 7 times from 8 May to 21 May 1991, and 6 times from 30 May to 12 June 1992. During each visit to a point, all birds detected, visually or aurally, at any distance were recorded during a 4-minute interval. Using these data, our objectives were to recommend the number of point counts and the number of visits to a point that provide the greatest efficiency for estimating the cumulative number of species in bottomland hardwood forest stands within the Mississippi Alluvial Valley, and to ascertain whether increasing the number of visits to points is equivalent to adding more points. Because the total numbers of species detected in DEF differed between years, 39 species in 1991 and 55 species in 1992, we considered each year independently. Within each stand, we obtained bootstrap estimates of the mean cumulative number of species obtained from all possible combinations of six points and six visits (i.e., 36 means/stand). These bootstrap estimates were subjected to ANOVA; we modelled cumulative number of species as a function of the number of points visited, the number of visits to each point, and their interaction. As part of the same ANOVA we made an a priori, simultaneous comparison of the 15 possible reciprocal treatments (i.e., 1 point-2 visits vs. 2 points-1 visit, etc.). Results of analyses for each year were similar. Although no interaction was detected between the number of points and the number of visits, when reciprocals were compared, visiting more points yielded a significantly greater cumulative number of species than making more visits to each point. Significant differences were detected both among the number of points visited and among the number of visits to a point.
Scheffe's test of differences among means indicated that the cumulative number of species increased significantly with each added point through five points, but six points did not differ from five points in 1991. Similarly, the cumulative number of species increased significantly with each revisit up to four visits, but four visits did not differ significantly from five visits. Starting with one point, which yielded about 33 percent of the total species pool when averaged among one through six points, each subsequent point resulted in an increase of about 9 percent, 5 percent, 3 percent, and 3 percent, respectively. Each sequential increase in the number of visits, however, resulted in increases of only 7 percent, 4 percent, 2 percent, and 2 percent of the total species pool.
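The bootstrap procedure described above can be sketched in a few lines: resample points and visits with replacement, take the union of species detected, and average over replicates. The detection data below are simulated with an arbitrary per-visit detection probability, not the Delta Experimental Forest counts.

```python
# Minimal sketch of bootstrapping the mean cumulative number of species for a
# given number of points and visits. All data here are simulated.
import random

random.seed(1)
SPECIES = [f"sp{i}" for i in range(40)]

# detections[point][visit] = set of species detected on that visit
detections = [[{s for s in SPECIES if random.random() < 0.15} for _ in range(6)]
              for _ in range(6)]

def boot_mean_cumulative(n_points, n_visits, n_boot=200):
    """Bootstrap estimate of the mean cumulative species count."""
    totals = []
    for _ in range(n_boot):
        seen = set()
        for p in random.choices(range(6), k=n_points):        # resample points
            for v in random.choices(range(6), k=n_visits):    # resample visits
                seen |= detections[p][v]
        totals.append(len(seen))
    return sum(totals) / n_boot

# Reciprocal comparison, as in the abstract: 2 points x 1 visit vs 1 point x 2 visits.
more_points = boot_mean_cumulative(2, 1)
more_visits = boot_mean_cumulative(1, 2)
```

In this simplified simulation the two reciprocal designs are statistically interchangeable; the field data differ because real detections are correlated within a point, which is exactly what the ANOVA comparison above detects.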
34 CFR 607.4 - What are low educational and general expenditures?
Code of Federal Regulations, 2010 CFR
2010-07-01
... general expenditures per full-time equivalent undergraduate student in the base year must be less than the average educational and general expenditures per full-time equivalent undergraduate student of comparable... student for institutions with graduate students that do not differentiate between graduate and...
Measurement of cardiopulmonary performance during acute exposure to a 2440-m equivalent atmosphere
NASA Technical Reports Server (NTRS)
Levitan, B. M.; Bungo, M. W.
1982-01-01
Each of 20 subjects (ranging in age from 18 to 38 years, 15 being male, five female) was given two Bruce Protocol symptom-limited maximum treadmill stress tests, breathing sea-level compressed air (20.9% O2) for one test and a 2440-m equivalent (15.5% O2) for the other. A significant difference was found to exist between measured VO2 max (p less than 0.0002) and exercise time (p less than 0.0004) for the two conditions. No significant differences were observed in heart rate or the recovery time to a respiratory quotient of less than 1. Hemoglobin saturation, as measured by an ear oximeter, averaged 95% for sea-level and 91% for the 2440-m equivalent gases. These results support a 2440-m equivalent contingency atmosphere in the Space Shuttle prior to donning a low-pressure suit for the purpose of reducing nitrogen washout times.
Narayanaswamy's 1971 aging theory and material time
NASA Astrophysics Data System (ADS)
Dyre, Jeppe C.
2015-09-01
The Bochkov-Kuzovlev nonlinear fluctuation-dissipation theorem is used to derive Narayanaswamy's phenomenological theory of physical aging, in which this highly nonlinear phenomenon is described by a linear material-time convolution integral. A characteristic property of the Narayanaswamy aging description is material-time translational invariance, which is here taken as the basic assumption of the derivation. It is shown that only one possible definition of the material time obeys this invariance, namely, the square of the distance travelled from a configuration of the system far back in time. The paper concludes with suggestions for computer simulations that test for consequences of material-time translational invariance. One of these is the "unique-triangles property" according to which any three points on the system's path form a triangle such that two side lengths determine the third; this is equivalent to the well-known triangular relation for time-autocorrelation functions of aging spin glasses [L. F. Cugliandolo and J. Kurchan, J. Phys. A: Math. Gen. 27, 5749 (1994)]. The unique-triangles property implies a simple geometric interpretation of out-of-equilibrium time-autocorrelation functions, which extends to aging a previously proposed framework for such functions in equilibrium [J. C. Dyre, e-print arXiv:cond-mat/9712222 (1997)].
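The triangular relation referred to above (Cugliandolo and Kurchan, 1994) can be written schematically as follows; this is its standard generic form for ordered times, not the specific function derived in the paper:

```latex
% Triangular relation for aging time-autocorrelation functions:
% for any ordered triple of times, the outer correlation is a fixed
% function f of the two inner ones, mirroring the unique-triangles property.
C(t_1, t_3) \;=\; f\!\bigl(C(t_1, t_2),\, C(t_2, t_3)\bigr),
\qquad t_1 \le t_2 \le t_3 .
```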
Chen, Rui; Hyrien, Ollivier
2011-01-01
This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356
Olender, Gavin; Pfeifer, Ronny; Müller, Christian W; Gösling, Thomas; Barcikowski, Stephan; Hurschler, Christof
2011-05-01
Nitinol is a promising biomaterial based on its remarkable shape changing capacity, biocompatibility, and resilient mechanical properties. Until now, very limited applications have been tested for the use of Nitinol plates for fracture fixation in orthopaedics. Newly designed fracture-fixation plates are tested by four-point bending to examine the change in equivalent bending stiffness before and after shape transformation. The goal of stiffness-alterable bone plates is to optimize the healing process during osteosynthesis in situ, customized in time of onset and percent change, as well as being performed non-invasively for the patient. The equivalent bending stiffness in plates of varying thicknesses changed before and after shape transformation in the range of 24-73% (p values <0.05 for all tests). A Nitinol plate of 3.0 mm thickness increased in stiffness from 0.81 to 0.98 Nm² (corresponding standard deviations 0.08 and 0.05) and correlated well with results from numerical calculation. The stiffness of the tested fracture-fixation plates can be altered in a consistent manner that can be predicted by determining the change in the cross-sectional area moment of inertia.
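The equivalent bending stiffness EI can be extracted from a four-point bending test with the standard mid-span deflection formula for two inner loads placed a distance a from each support on a span L; the geometry and load below are illustrative assumptions, not the Nitinol plate data.

```python
# Sketch of extracting equivalent bending stiffness EI (N*m^2) from a
# four-point bending test, assuming the standard mid-span deflection formula
#   delta = F * a * (3*L**2 - 4*a**2) / (48 * EI)
# where F is the total load shared by the two inner rollers.
def equivalent_EI(force, deflection, span, a):
    """Back out EI from measured total load and mid-span deflection."""
    return force * a * (3 * span ** 2 - 4 * a ** 2) / (48 * deflection)

# Illustrative example: 100 N producing 2 mm deflection, 100 mm span, a = 30 mm.
EI = equivalent_EI(100.0, 0.002, 0.100, 0.030)   # -> 0.825 N*m^2
```

Comparing EI computed before and after shape transformation gives the percent stiffness change reported in the abstract.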
Eeles, Abbey L; Olsen, Joy E; Walsh, Jennifer M; McInnes, Emma K; Molesworth, Charlotte M L; Cheong, Jeanie L Y; Doyle, Lex W; Spittle, Alicia J
2017-02-01
Neurobehavioral assessments provide insight into the functional integrity of the developing brain and help guide early intervention for preterm (<37 weeks' gestation) infants. In the context of shorter hospital stays, clinicians often need to assess preterm infants prior to term equivalent age. Few neurobehavioral assessments used in the preterm period have established interrater reliability. To evaluate the interrater reliability of the Hammersmith Neonatal Neurological Examination (HNNE) and the NICU Network Neurobehavioral Scale (NNNS), when used both preterm and at term (>36 weeks). Thirty-five preterm infants and 11 term controls were recruited. Five assessors double-scored the HNNE and NNNS administered either preterm or at term. A one-way random effects, absolute, single-measures intraclass correlation coefficient (ICC) was calculated to determine interrater reliability. Interrater reliability for the HNNE was excellent (ICC > 0.74) for optimality scores, and good (ICC 0.60-0.74) to excellent for subtotal scores, except for 'Tone Patterns' (ICC 0.54). On the NNNS, interrater reliability was predominantly excellent for all items. Interrater agreement was generally excellent at both time points. Overall, the HNNE and NNNS neurobehavioral assessments demonstrated mostly excellent interrater reliability when used prior to term and at term.
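The one-way random-effects, absolute, single-measures ICC described above (ICC(1,1) in Shrout-Fleiss notation) can be computed directly from a rater-by-infant score table; the scores below are invented for illustration, not HNNE/NNNS data.

```python
# Sketch of ICC(1,1): one-way random effects, absolute agreement, single
# measures. Rows are targets (infants), columns are raters.
def icc_1_1(scores):
    n = len(scores)        # number of targets
    k = len(scores[0])     # ratings per target
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    # between-targets and within-target mean squares from one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(scores, row_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical double-scored ratings for 8 infants by 2 assessors.
ratings = [[9, 9], [7, 8], [5, 5], [10, 9], [6, 6], [8, 8], [4, 5], [7, 7]]
icc = icc_1_1(ratings)   # > 0.74 would count as "excellent" under the cutoffs above
```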
A Prospective Evaluation of Helical Tomotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauman, Glenn; Yartsev, Slav; Rodrigues, George
2007-06-01
Purpose: To report results from two clinical trials evaluating helical tomotherapy (HT). Methods and Materials: Patients were enrolled in one of two prospective trials of HT (one for palliative and one for radical treatment). Both an HT plan and a companion three-dimensional conformal radiotherapy (3D-CRT) plan were generated. Pretreatment megavoltage computed tomography was used for daily image guidance. Results: From September 2004 to January 2006, a total of 61 sites in 60 patients were treated. In all but one case, a clinically acceptable tomotherapy plan for treatment was generated. Helical tomotherapy plans were subjectively equivalent or superior to 3D-CRT in 95% of plans. Helical tomotherapy was deemed equivalent or superior in two thirds of dose-volume point comparisons. In cases of inferiority, differences were either clinically insignificant and/or reflected deliberate tradeoffs to optimize the HT plan. Overall imaging and treatment time (median) was 27 min (range, 16-91 min). According to a patient questionnaire, 78% of patients were satisfied to very satisfied with the treatment process. Conclusions: Helical tomotherapy demonstrated clear advantages over conventional 3D-CRT in this diverse patient group. The prospective trials were helpful in deploying this technology in a busy clinical setting.
Simplicity constraints: A 3D toy model for loop quantum gravity
NASA Astrophysics Data System (ADS)
Charles, Christoph
2018-05-01
In loop quantum gravity, tremendous progress has been made using the Ashtekar-Barbero variables. These variables, defined in a gauge fixing of the theory, correspond to a parametrization of the solutions of the so-called simplicity constraints. Their geometrical interpretation is, however, unsatisfactory, as they do not constitute a space-time connection. It would be possible to resolve this point by using a full Lorentz connection or, equivalently, the self-dual Ashtekar variables. This leads, however, to simplicity constraints or reality conditions which are notoriously difficult to implement in the quantum theory. We explore in this paper the possibility of using completely degenerate actions to impose such constraints at the quantum level in the context of canonical quantization. To do so, we define a simpler model, in 3D, with similar constraints, by extending the phase space to include an independent vielbein. We define the classical model and show that a precise quantum theory can be defined from it by gauge unfixing, completely equivalent to standard 3D Euclidean quantum gravity. We discuss possible future explorations around this model, as it could serve as a stepping stone toward defining full-fledged covariant loop quantum gravity.
Testing of the BepiColombo Antenna Pointing Mechanism
NASA Astrophysics Data System (ADS)
Campo, Pablo; Barrio, Aingeru; Martin, Fernando
2015-09-01
BepiColombo is an ESA mission to Mercury; its planetary orbiter (MPO) carries two antenna pointing mechanisms. The high gain antenna (HGA) pointing mechanism steers and points a large reflector, which is integrated at system level by TAS-I Rome, while the medium gain antenna (MGA) APM points a 1.5 m boom with a horn antenna. Both radiating elements are exposed to sun fluxes as high as 10 solar constants without protection. A previous paper [1] described the design and development process to solve the challenges of operating in this harsh environment. The current paper focuses on the testing of the qualification units. Verifying the performance of an antenna pointing mechanism under its specific environmental conditions required special set-ups and techniques, and the process provided valuable feedback on the design and the testing methods, which has been incorporated in the PFM design and tests. Some of the technologies and components were developed on dedicated items prior to the EQM, but once integrated, test behaviour showed relevant differences. The major concerns for the APM testing were: (1) creating, during thermal vacuum testing, the qualification temperature map with gradients along the APM, from 200 °C to 70 °C; (2) testing, under those conditions, the radio frequency and pointing performance, also adding high RF power to check the power handling and self-heating of the rotary joint; (3) life testing up to 12000 equivalent APM revolutions, that is 14.3 million motor revolutions, in different thermal conditions; (4) measuring the low thermal distortion of the mechanical chain (55 arcsec pointing error) while insulated from the external environment and interfaces; (5) performing deployment of large items while guaranteeing low humidity, below 5%, to protect the dry lubrication; and (6) verifying stability with the representative inertia of a large boom or reflector (20 kg·m²).
Shanafelt, Tait D; Mungo, Michelle; Schmitgen, Jaime; Storz, Kristin A; Reeves, David; Hayes, Sharonne N; Sloan, Jeff A; Swensen, Stephen J; Buskirk, Steven J
2016-04-01
To longitudinally evaluate the relationship between burnout and professional satisfaction with changes in physicians' professional effort. Administrative/payroll records were used to longitudinally evaluate the professional work effort of faculty physicians working for Mayo Clinic from October 1, 2008, to October 1, 2014. Professional effort was measured in full-time equivalent (FTE) units. Physicians were longitudinally surveyed in October 2011 and October 2013 with standardized tools to assess burnout and satisfaction. Between 2008 and 2014, the proportion of physicians working less than full-time at our organization increased from 13.5% to 16.0% (P=.05). Of the 2663 physicians surveyed in 2011 and 2776 physicians surveyed in 2013, 1856 (69.7%) and 2132 (76.9%), respectively, returned surveys. Burnout and satisfaction scores in 2011 correlated with actual reductions in FTE over the following 24 months as independently measured by administrative/payroll records. After controlling for age, sex, site, and specialty, each 1-point increase in the 7-point emotional exhaustion scale was associated with a greater likelihood of reducing FTE (odds ratio [OR], 1.43; 95% CI, 1.23-1.67; P<.001) over the following 24 months, and each 1-point decrease in the 5-point satisfaction score was associated with greater likelihood of reducing FTE (OR, 1.34; 95% CI, 1.03-1.74; P=.03). On longitudinal analysis at the individual physician level, each 1-point increase in emotional exhaustion (OR, 1.28; 95% CI, 1.05-1.55; P=.01) or 1-point decrease in satisfaction (OR, 1.67; 95% CI, 1.19-2.35; P=.003) between 2011 and 2013 was associated with a greater likelihood of reducing FTE over the following 12 months. Among physicians in a large health care organization, burnout and declining satisfaction were strongly associated with actual reductions in professional work effort over the following 24 months. Copyright © 2016 Mayo Foundation for Medical Education and Research. 
Published by Elsevier Inc. All rights reserved.
Unifying Temporal and Structural Credit Assignment Problems
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2004-01-01
Single-agent reinforcement learners in time-extended domains and multi-agent systems share a common dilemma known as the credit assignment problem. Multi-agent systems have the structural credit assignment problem of determining the contribution of a particular agent to a common task. In contrast, time-extended single-agent systems have the temporal credit assignment problem of determining the contribution of a particular action to the quality of the full sequence of actions. Traditionally these two problems are considered different and are handled in separate ways. In this article we show how these two forms of the credit assignment problem are equivalent. In this unified framework, a single-agent Markov decision process can be broken down into a single-time-step multi-agent process. Furthermore, we show that Monte-Carlo estimation and Q-learning (depending on whether the values of resulting actions in the episode are known at the time of learning) are equivalent to different agent utility functions in a multi-agent system. This equivalence shows how an often neglected issue in multi-agent systems is equivalent to a well-known deficiency in multi-time-step learning, and lays the basis for solving time-extended multi-agent problems, where both credit assignment problems are present.
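The decomposition described above can be illustrated on a toy episode, treating each time step as a separate "agent" whose utility is either the full observed return (Monte-Carlo) or a one-step bootstrapped target (Q-learning). The rewards and value table are invented; the two targets coincide exactly when the value estimates equal the true returns, which is the intuition behind the equivalence.

```python
# Toy sketch: one "agent" per time step of a 3-step episode.
GAMMA = 1.0
rewards = [1.0, 0.0, 2.0]        # r_t for t = 0, 1, 2 (invented)
Q = {1: 2.0, 2: 2.0}             # assumed value estimates of successor states

def mc_target(t):
    """Monte-Carlo credit for step t: the observed discounted return G_t."""
    g, disc = 0.0, 1.0
    for r in rewards[t:]:
        g += disc * r
        disc *= GAMMA
    return g

def q_target(t):
    """Q-learning credit for step t: one-step reward plus bootstrapped value."""
    return rewards[t] + (GAMMA * Q[t + 1] if t + 1 < len(rewards) else 0.0)

mc = [mc_target(t) for t in range(3)]
q = [q_target(t) for t in range(3)]
```

Here the Q estimates match the true returns, so both per-step "utility functions" assign identical credit; when Q is inaccurate, the two differ exactly as Monte-Carlo and bootstrapped learning do.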
NASA Astrophysics Data System (ADS)
Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.
2017-12-01
Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To obtain fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids). The gravity fields of tesseroids are generally calculated numerically; one of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast, high-accuracy 3D GLQ integration based on the equivalence of kernel matrices, adaptive discretization, and parallelization using OpenMP. The kernel-matrix equivalence strategy increases efficiency and reduces memory consumption by calculating and storing identical matrix elements in each kernel matrix only once. The adaptive discretization strategy is used to improve accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimized strategies. High accuracy is also guaranteed no matter how close the computation points are to the source region. In addition, the algorithm dramatically reduces the memory requirement, by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
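The tensor-product GLQ evaluation of the Newton integral can be sketched as follows. For brevity the sketch integrates a Cartesian prism rather than a spherical tesseroid (the node/weight structure is the same), with hardcoded order-3 Gauss-Legendre nodes and toy values for density and geometry; none of these numbers come from the abstract.

```python
# Minimal 3D Gauss-Legendre quadrature sketch for the Newton volume integral
# V(obs) = G * rho * \int dV / r, over a rectangular prism (tesseroid stand-in).
import math

NODES = [-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)]   # order-3 nodes on [-1, 1]
WEIGHTS = [5 / 9, 8 / 9, 5 / 9]
G = 6.674e-11                                        # m^3 kg^-1 s^-2

def glq_potential(obs, bounds, rho=2670.0):
    """Potential at obs=(x,y,z) of a prism with bounds ((x1,x2),(y1,y2),(z1,z2))."""
    (x1, x2), (y1, y2), (z1, z2) = bounds
    jac = (x2 - x1) * (y2 - y1) * (z2 - z1) / 8.0    # map [-1,1]^3 -> prism
    total = 0.0
    for wi, ui in zip(WEIGHTS, NODES):
        x = 0.5 * ((x2 - x1) * ui + x2 + x1)
        for wj, uj in zip(WEIGHTS, NODES):
            y = 0.5 * ((y2 - y1) * uj + y2 + y1)
            for wk, uk in zip(WEIGHTS, NODES):
                z = 0.5 * ((z2 - z1) * uk + z2 + z1)
                total += wi * wj * wk / math.dist(obs, (x, y, z))
    return G * rho * jac * total

# Far from the source, the prism potential should approach the point-mass value GM/r.
bounds = ((-500.0, 500.0), (-500.0, 500.0), (-500.0, 500.0))
v = glq_potential((0.0, 0.0, 100000.0), bounds)
point_mass = G * 2670.0 * 1000.0 ** 3 / 100000.0
```

The accuracy problem the abstract addresses appears when the observation point approaches the prism: fixed-order GLQ then degrades, which is what adaptive subdivision of the source region repairs.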
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franklin, M.L.; Kittelson, D.B.; Leuer, R.H.
1996-10-01
A two-dimensional optimization process, which simultaneously adjusts the spark timing and equivalence ratio of a lean-burn, natural gas, Hercules G1600 engine, has been demonstrated. First, the three-dimensional surface of thermal efficiency was mapped versus spark timing and equivalence ratio at a single speed and load combination. Then the ability of the control system to find and hold the combination of timing and equivalence ratio that gives the highest thermal efficiency was explored. NOx, CO, and HC maps were also constructed from the experimental data to determine the tradeoffs between efficiency and emissions. The optimization process adds small synchronous disturbances to the spark timing and air flow while the fuel injected per cycle is held constant for four cycles. The engine speed response to these disturbances is used to determine the corrections for spark timing and equivalence ratio. The control process, in effect, uses the engine itself as the primary sensor. The control system can adapt to changes in fuel composition, operating conditions, engine wear, or other factors that may not be easily measured. Although this strategy was previously demonstrated in a Volkswagen 1.7 liter light duty engine (Franklin et al., 1994b), until now it has not been demonstrated in a heavy-duty engine. This paper covers the application of the approach to a Hercules G1600 engine.
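The perturb-and-observe idea described above can be sketched against a toy efficiency surface: apply small disturbances to the two inputs, estimate the local response, and step toward the optimum. The quadratic surface, gains, and disturbance sizes below are invented stand-ins for the real engine response, not values from the paper.

```python
# Toy two-input extremum-seeking sketch: disturb spark timing and equivalence
# ratio (phi), use the measured response as a gradient estimate, and climb.
def efficiency(timing, phi):
    # invented quadratic surface with its optimum at timing=25, phi=0.65
    return 0.40 - 0.01 * (timing - 25.0) ** 2 - 2.0 * (phi - 0.65) ** 2

def optimize(timing, phi, d_t=0.5, d_phi=0.005, gain_t=2.0, gain_phi=0.02, steps=300):
    for _ in range(steps):
        # response to small synchronized disturbances (central differences)
        g_t = (efficiency(timing + d_t, phi) - efficiency(timing - d_t, phi)) / (2 * d_t)
        g_p = (efficiency(timing, phi + d_phi) - efficiency(timing, phi - d_phi)) / (2 * d_phi)
        timing += gain_t * g_t
        phi += gain_phi * g_p
    return timing, phi

timing, phi = optimize(15.0, 0.70)   # converges toward the surface's optimum
```

Because only the measured response is used, the same loop keeps tracking the optimum if the surface drifts with fuel composition or wear, which is the adaptivity the abstract emphasizes.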
FORESHOCKS AND TIME-DEPENDENT EARTHQUAKE HAZARD ASSESSMENT IN SOUTHERN CALIFORNIA.
Jones, Lucile M.
1985-01-01
The probability that an earthquake in southern California (M ≥ 3.0) will be followed by an earthquake of larger magnitude within 5 days and 10 km (i.e., will be a foreshock) is 6 ± 0.5 per cent (1 S.D.), and is not significantly dependent on the magnitude of the possible foreshock between M = 3 and M = 5. The probability that an earthquake will be followed by an M ≥ 5.0 main shock, however, increases with the magnitude of the foreshock, from less than 1 per cent at M ≥ 3 to 6.5 ± 2.5 per cent (1 S.D.) at M ≥ 5. The main shock will most likely occur in the first hour after the foreshock, and the probability that a main shock will occur in the first hour decreases with elapsed time from the occurrence of the possible foreshock by approximately the inverse of time. Thus, the occurrence of an earthquake of M ≥ 3.0 in southern California increases the earthquake hazard within a small space-time window several orders of magnitude above the normal background level.
Mauck, C; Callahan, M; Weiner, D H; Dominik, R
1999-08-01
The FemCap is a new silicone rubber barrier contraceptive shaped like a sailor's hat, with a dome that covers the cervix, a rim that fits into the fornices, and a brim that conforms to the vaginal walls around the cervix. It was designed to result in fewer dislodgments and less pressure on the urethra than the cervical cap and diaphragm, respectively, and to require less clinician time for fitting. This was a phase II/III, multicenter, randomized, open-label, parallel group study of 841 women at risk for pregnancy. A subset of 42 women at one site underwent colposcopy. Women were randomized to use the FemCap or Ortho All-Flex contraceptive diaphragm, both with 2% nonoxynol-9 spermicide, for 28 weeks. The objectives were to compare the two devices with regard to their safety and acceptability and to determine whether the probability of pregnancy among FemCap users was no worse than that of the diaphragm (meaning not more than 6 percentage points higher). The 6-month Kaplan-Meier cumulative unadjusted typical use pregnancy probabilities were 13.5% among FemCap users and 7.9% among diaphragm users. The adjusted risk of pregnancy among FemCap users was 1.96 times that among diaphragm users, with an upper 95% confidence limit of 3.01. Clinical equivalence (noninferiority) of the FemCap compared with the diaphragm, as defined in this study, would mean that the true risk of pregnancy among FemCap users was no more than 1.73 times the pregnancy risk of diaphragm users. Because the observed upper 95% confidence limit (and even the point estimate) exceeded 1.73, the probability of pregnancy among FemCap users, compared with that among diaphragm users, did not meet the definition of clinical equivalence used in this study. The FemCap was believed to be safe and was associated with significantly fewer urinary tract infections. 
More women reported problems with the FemCap with regard to insertion, dislodgement, and especially removal, although their general assessments were positive. The two devices were comparable with regard to safety and acceptability, but a 6-point difference in the true 6-month pregnancy probabilities of the two devices could not be ruled out. Further studies are needed to determine whether design modifications can simplify insertion and removal.
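The noninferiority rule described above reduces to comparing the upper confidence limit of the adjusted relative pregnancy risk against the prespecified margin of 1.73; a minimal sketch using the numbers reported in the abstract:

```python
# Noninferiority check for the FemCap vs diaphragm comparison, using the
# adjusted relative risk and 95% CI upper limit reported in the abstract.
point_estimate = 1.96   # adjusted relative risk of pregnancy (FemCap vs diaphragm)
upper_95_ci = 3.01      # upper 95% confidence limit
margin = 1.73           # prespecified clinical-equivalence (noninferiority) margin

noninferior = upper_95_ci <= margin            # False: the CI limit exceeds the margin
even_point_exceeds = point_estimate > margin   # True: even the estimate exceeds it
```

Because even the point estimate exceeds the margin, the conclusion does not hinge on sampling uncertainty: the trial fails its own definition of clinical equivalence.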
24 CFR 880.602 - Replacement reserve.
Code of Federal Regulations, 2012 CFR
2012-04-01
... equivalent to .006 of the cost of total structures, including main buildings, accessory buildings, garages... projects, an amount equivalent to at least .006 of the cost of total structures, including main buildings, accessory buildings, garages and other buildings, or any higher rate as required from time to time by: (A...
24 CFR 880.602 - Replacement reserve.
Code of Federal Regulations, 2010 CFR
2010-04-01
... equivalent to .006 of the cost of total structures, including main buildings, accessory buildings, garages... projects, an amount equivalent to at least .006 of the cost of total structures, including main buildings, accessory buildings, garages and other buildings, or any higher rate as required from time to time by: (A...
24 CFR 880.602 - Replacement reserve.
Code of Federal Regulations, 2014 CFR
2014-04-01
... equivalent to .006 of the cost of total structures, including main buildings, accessory buildings, garages... projects, an amount equivalent to at least .006 of the cost of total structures, including main buildings, accessory buildings, garages and other buildings, or any higher rate as required from time to time by: (A...
24 CFR 880.602 - Replacement reserve.
Code of Federal Regulations, 2013 CFR
2013-04-01
... equivalent to .006 of the cost of total structures, including main buildings, accessory buildings, garages... projects, an amount equivalent to at least .006 of the cost of total structures, including main buildings, accessory buildings, garages and other buildings, or any higher rate as required from time to time by: (A...
24 CFR 880.602 - Replacement reserve.
Code of Federal Regulations, 2011 CFR
2011-04-01
... equivalent to .006 of the cost of total structures, including main buildings, accessory buildings, garages... projects, an amount equivalent to at least .006 of the cost of total structures, including main buildings, accessory buildings, garages and other buildings, or any higher rate as required from time to time by: (A...
USDA-ARS?s Scientific Manuscript database
Peanuts in North America and Europe are primarily consumed after dry roasting. Standard industry practice is to roast peanuts to a specific surface color (Hunter L-value) for a given application; however, equivalent surface colors can be attained using different roast temperature/time combinations,...
NASA Astrophysics Data System (ADS)
Ploc, Ondrej; Uchihori, Yukio; Kitamura, H.; Kodaira, S.; Dachev, Tsvetan; Spurny, Frantisek; Jadrnickova, Iva; Mrazova, Zlata; Kubancak, Jan
Liulin-type detectors are currently used in a wide range of cosmic radiation measurements, e.g. at alpine observatories and onboard aircraft and spacecraft. They provide energy deposition spectra up to 21 MeV; higher energy deposition events are stored in the last (overflow) channel. Their main advantages are portability (about the size of a pack of cigarettes) and the ability to record spectra as a function of time, so they can be used as personal dosimeters. Their well-known limitations are: (i) they are not tissue equivalent, (ii) they can be used as LET spectrometers only under specific conditions (e.g. a broad parallel beam), and (iii) energy deposition events from particles of LET(H2O) ≥ 35 keV/µm are stored in the overflow bin only, so the spectral information is missing. The tissue equivalent proportional counter (TEPC) Hawk has none of these limitations, but on the other hand it cannot be used as a personal dosimeter because of its size (a cylinder 16 cm in diameter and 34 cm long). An important fraction of the dose equivalent onboard spacecraft is caused by heavy ions. This contribution presents results from intercomparison measurements with the Liulin and Hawk at Heavy Ion Medical Accelerator in Chiba (HIMAC) and cyclotron beams, and related calculations with PHITS (Particle and Heavy-ion Transport code System). The following particles/ions and energies were used: protons 70 MeV, He 150 MeV, Ne 400 MeV, C 135 MeV, C 290 MeV, and Fe 500 MeV. Calculations of LET spectra by PHITS were performed for both the Liulin and the Hawk. In the case of the Liulin, the dose equivalent was calculated using simulations in which several tissue equivalent materials were used as the active volume instead of the silicon diode. Dose equivalents calculated in this way were compared with those measured with the Hawk. LET spectra measured with the Liulin and Hawk were compared for each ion at several points behind binary filters along the Bragg curve. Good agreement was observed for some configurations; for the other configurations, the difference was reasonably explained (e.g. by the thickness of the stainless steel TEPC wall and the size of the Hawk's active volume).
Frequency Response of an Aircraft Wing with Discrete Source Damage Using Equivalent Plate Analysis
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Eldred, Lloyd B.
2007-01-01
An equivalent plate procedure is developed to provide a computationally efficient means of matching the stiffness and frequencies of flight vehicle wing structures for prescribed loading conditions. Several new approaches are proposed and studied to match the stiffness and first five natural frequencies of the two reference models with and without damage. One approach divides the candidate reference plate into multiple zones in which stiffness and mass can be varied using a variety of materials including aluminum, graphite-epoxy, and foam-core graphite-epoxy sandwiches. Another approach places point masses along the edge of the stiffness-matched plate to tune the natural frequencies. Both approaches are successful at matching the stiffness and natural frequencies of the reference plates and provide useful insight into determination of crucial features in equivalent plate models of aircraft wing structures.
On the wavelet optimized finite difference method
NASA Technical Reports Server (NTRS)
Jameson, Leland
1994-01-01
When one considers the effect in the physical space, Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet optimized finite difference method which is equivalent to a wavelet method in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. With this method one can obtain an arbitrarily good approximation to a conservative difference method for solving nonlinear conservation laws.
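The refinement criterion described above can be sketched with the simplest (Haar) wavelet in place of a higher-order Daubechies basis: large detail coefficients flag locations of small-scale structure where grid points should be added. The test function and threshold below are illustrative assumptions, not from the paper.

```python
# Haar-wavelet sketch of the grid-refinement criterion: refine where the
# detail (wavelet) coefficients are large.
import math

def haar_details(samples):
    """One level of Haar detail coefficients for an even-length sample list."""
    return [(samples[2 * i] - samples[2 * i + 1]) / math.sqrt(2)
            for i in range(len(samples) // 2)]

def refine_flags(samples, threshold):
    """True where |detail| exceeds threshold, i.e. where to add grid points."""
    return [abs(d) > threshold for d in haar_details(samples)]

# A function smooth everywhere except a sharp tanh front near x = 0.75.
xs = [i / 64 for i in range(64)]
f = [math.tanh(40 * (x - 0.75)) for x in xs]
flags = refine_flags(f, threshold=0.05)   # True only near the front
```

Only the few pairs straddling the front are flagged, so the equivalent finite difference grid is refined locally, exactly the behavior the multiresolution argument above describes.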
Sato, K; Yuan, X-F; Kawakatsu, T
2010-02-01
A large body of numerical and experimental evidence suggests that shear banding behaves like a first-order phase transition. In this paper, we demonstrate that this correspondence is actually established in the so-called non-local diffusive Johnson-Segalman model (the DJS model), a typical mechanical constitutive model that has been widely used for describing shear banding phenomena. In the neighborhood of the critical point, we apply the reduction procedure based on the center manifold theory to the governing equations of the DJS model. As a result, we obtain a time evolution equation of the flow field that is equivalent to the time-dependent Ginzburg-Landau (TDGL) equations for modeling thermodynamic first-order phase transitions. This result, for the first time, provides a mathematical proof that there is an analogy between the mechanical instability and the thermodynamic phase transition, at least in the vicinity of the critical point of the shear banding of the DJS model. Within this framework, we can clearly distinguish the metastable branch in the stress-strain rate curve around the shear banding region from the globally stable branch. A simple extension of this analysis to a class of more general constitutive models is also discussed. Numerical simulations of the original DJS model and the reduced TDGL equation are performed to confirm the range of validity of our reduction theory.
Physical and clinical performance of the mCT time-of-flight PET/CT scanner.
Jakoby, B W; Bercier, Y; Conti, M; Casey, M E; Bendriem, B; Townsend, D W
2011-04-21
Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.
Physical and clinical performance of the mCT time-of-flight PET/CT scanner
NASA Astrophysics Data System (ADS)
Jakoby, B. W.; Bercier, Y.; Conti, M.; Casey, M. E.; Bendriem, B.; Townsend, D. W.
2011-04-01
Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.
Patient doses and occupational exposure in a hybrid operating room.
Andrés, C; Pérez-García, H; Agulla, M; Torres, R; Miguel, D; Del Castillo, A; Flota, C M; Alonso, D; de Frutos, J; Vaquero, C
2017-05-01
This study aimed to characterize the radiation exposure of patients and workers in a new vascular hybrid operating room (HOR) during X-ray-guided procedures. Over one year, data from 260 interventions performed in a hybrid operating room equipped with a Siemens Artis Zeego angiography system were monitored. Patient doses were analysed using the following parameters: radiation time, kerma-area product, patient entrance reference point dose and peak skin dose. Staff radiation exposure and ambient dose equivalent were also measured using direct-reading dosimeters and thermoluminescent dosimeters. The radiation time, kerma-area product, patient entrance reference point dose and peak skin dose were, on average, 19:15 min, 67 Gy·cm², 0.41 Gy and 0.23 Gy, respectively. Although the contribution of the acquisition mode was smaller than 5% in terms of radiation time, this mode accounted for more than 60% of the effective dose per patient. All of the worker dose measurements remained below the limits established by law. The working conditions in the HOR are safe in terms of patient and staff radiation protection. Nevertheless, doses are highly dependent on the workload; thus, further research is necessary to evaluate any possible radiological deviation of the daily working conditions in the HOR. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
The MUSE-Wide survey: a measurement of the Ly α emitting fraction among z > 3 galaxies
NASA Astrophysics Data System (ADS)
Caruana, Joseph; Wisotzki, Lutz; Herenz, Edmund Christian; Kerutt, Josephine; Urrutia, Tanya; Schmidt, Kasper Borello; Bouwens, Rychard; Brinchmann, Jarle; Cantalupo, Sebastiano; Carollo, Marcella; Diener, Catrina; Drake, Alyssa; Garel, Thibault; Marino, Raffaella Anna; Richard, Johan; Saust, Rikke; Schaye, Joop; Verhamme, Anne
2018-01-01
We present a measurement of the fraction of Lyman α (Ly α) emitters (X_Lyα) amongst HST continuum-selected galaxies at 3 < z < 6 with the Multi-Unit Spectroscopic Explorer (MUSE) on the VLT. Making use of the first 24 MUSE-Wide pointings in GOODS-South, each having an integration time of 1 h, we detect 100 Ly α emitters and find X_Lyα ≳ 0.5 for most of the redshift range covered, with 29 per cent of the Ly α sample exhibiting rest equivalent widths (rest-EWs) ≤ 15 Å. Adopting a range of rest-EW cuts (0-75 Å), we find no evidence of a dependence of X_Lyα on either redshift or ultraviolet luminosity.
Safety and efficacy of generic drugs with respect to brand formulation.
Gallelli, Luca; Palleria, Caterina; De Vuono, Antonio; Mumoli, Laura; Vasapollo, Piero; Piro, Brunella; Russo, Emilio
2013-12-01
Generic drugs are equivalent to the brand formulation if they have the same active substance, the same pharmaceutical form and the same therapeutic indications, and demonstrate bioequivalence with respect to the reference medicinal product. The use of generic drugs is encouraged in many countries in order to reduce medication prices. However, some points, such as bioequivalence and the role of excipients, remain to be clarified regarding clinical efficacy and safety during the switch from brand to generic formulations. In conclusion, the use of generic drugs could be associated with an increased number of days of disease (time to relapse) or might lead to therapeutic failure; on the other hand, a higher drug concentration might expose patients to an increased risk of dose-dependent side effects.
NASA Technical Reports Server (NTRS)
Lee, F. C. Y.; Wilson, T. G.
1982-01-01
The present investigation is concerned with an important class of power conditioning networks, taking into account self-oscillating dc-to-square-wave transistor inverters. The considered circuits are widely used both as the principal power converting and processing means in many systems and as low-power analog-to-discrete-time converters for controlling the switching of the output-stage semiconductors in a variety of power conditioning systems. Aspects of piecewise-linear modeling are discussed, taking into consideration component models, and an equivalent-circuit model. Questions of singular point analysis and state plane representation are also investigated, giving attention to limit cycles, starting circuits, the region of attraction, a hard oscillator, and a soft oscillator.
A model of distributed phase aberration for deblurring phase estimated from scattering.
Tillett, Jason C; Astheimer, Jeffrey P; Waag, Robert C
2010-01-01
Correction of aberration in ultrasound imaging uses the response of a point reflector or its equivalent to characterize the aberration. Because a point reflector is usually unavailable, its equivalent is obtained using statistical methods, such as processing reflections from multiple focal regions in a random medium. However, the validity of methods that use reflections from multiple points is limited to isoplanatic patches for which the aberration is essentially the same. In this study, aberration is modeled by an offset phase screen to relax the isoplanatic restriction. Methods are developed to determine the depth and phase of the screen and to use the model for compensation of aberration as the beam is steered. Use of the model to enhance the performance of the noted statistical estimation procedure is also described. Experimental results obtained with tissue-mimicking phantoms that implement different models and produce different amounts of aberration are presented to show the efficacy of these methods. The improvement in B-scan resolution realized with the model is illustrated. The results show that the isoplanatic patch assumption for estimation of aberration can be relaxed and that propagation-path characteristics and aberration estimation are closely related.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jy-An John; Wang, Hong; Jiang, Hao
The first portion of this report provides a detailed description of fiscal year (FY) 2015 test result corrections and analysis updates based on FY 2016 updates to the Cyclic Integrated Reversible-Bending Fatigue Tester (CIRFT) program methodology, which is used to evaluate the vibration integrity of spent nuclear fuel (SNF) under normal conditions of transport (NCT). The CIRFT consists of a U-frame test setup and a real-time curvature measurement method. The three-component U-frame setup of the CIRFT has two rigid arms and linkages connecting to a universal testing machine. The curvature of SNF rod bending is obtained through a three-point deflection measurement method. Three linear variable differential transformers (LVDTs) are clamped to the side connecting plates of the U-frame and used to capture deformation of the rod. The second portion of this report provides the latest CIRFT data, including data for the hydride reorientation test. The variations in fatigue life are provided in terms of moment, equivalent stress, curvature, and equivalent strain for the tested SNFs. The equivalent stress plot collapsed the data points from all of the SNF samples into a single zone. A detailed examination revealed that, at the same stress level, fatigue lives display a descending order as follows: H. B. Robinson Nuclear Power Station (HBR), LMK, and mixed uranium-plutonium oxide (MOX). Looking at strain alone, LMK fuel has a slightly longer fatigue life than HBR fuel, but the difference is subtle. The third portion of this report provides a finite element analysis (FEA) dynamic deformation simulation of SNF assemblies. In a horizontal layout under NCT, the fuel assembly's skeleton, which is formed by guide tubes and spacer grids, is the primary load-bearing apparatus carrying and transferring vibration loads within an SNF assembly. These vibration loads include interaction forces between the SNF assembly and the canister basket walls.
Therefore, the integrity of the guide tubes and spacer grids critically affects the vibration intensity of the fuel assembly during transport and must be considered when developing the multipurpose canister (MPC) design for safe SNF transport.
Farkas, Eniko; Szekacs, Andras; Kovacs, Boglarka; Olah, Marianna; Horvath, Robert; Szekacs, Inna
2018-06-05
Rapid and inexpensive biosensor technologies allowing real-time analysis of biomolecular and cellular events have become the basis of next-generation cell-based screening techniques. Our work opens up novel opportunities in the application of the high-throughput label-free Epic BenchTop optical biosensor in cell toxicity studies. The Epic technology records integrated cellular responses about changes in cell morphology and dynamic mass redistribution of cellular contents at the 100-150 nm layer above the sensor surface. The aim of the present study was to apply this novel technology to identify the effect of the herbicide Roundup Classic, its co-formulant polyethoxylated tallow amine (POEA), and its active ingredient glyphosate, on MC3T3-E1 cells adhered on the biosensor surface. The half maximal inhibitory concentrations of Roundup Classic, POEA and glyphosate upon 1 h of exposure were found to be 0.024%, 0.021% and 0.163% in serum-containing medium and 0.028%, 0.019% and 0.538% in serum-free conditions, respectively (at concentrations equivalent to the diluted Roundup solution). These results showed a good correlation with parallel end-point assays, demonstrating the outstanding utility of the Epic technique in cytotoxicity screening, allowing not only high-throughput, real-time detection, but also reduced assay run time and cytotoxicity assessment at end-points far before cell death would occur. Copyright © 2018 Elsevier B.V. All rights reserved.
An approximate method for solution to variable moment of inertia problems
NASA Technical Reports Server (NTRS)
Beans, E. W.
1981-01-01
An approximation method is presented for reducing a nonlinear differential equation (for the 'weather vaning' motion of a wind turbine) to an equivalent constant-moment-of-inertia problem. The equivalent moment of inertia is taken as the integrated average of the actual moment of inertia. The cycle time was found to match that of the equivalent system if the rotating speed is 4 times greater than the system's minimum natural frequency.
Efficient techniques for wave-based sound propagation in interactive applications
NASA Astrophysics Data System (ADS)
Mehra, Ravish
Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. 
This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating, or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to match the parallel processing capabilities of the graphics processors, significant improvement in performance can be achieved compared to the CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in the virtual environment. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.
A model for jet-noise analysis using pressure-gradient correlations on an imaginary cone
NASA Technical Reports Server (NTRS)
Norum, T. D.
1974-01-01
The technique for determining the near and far acoustic field of a jet through measurements of pressure-gradient correlations on an imaginary conical surface surrounding the jet is discussed. The necessary analytical developments are presented, and their feasibility is checked by using a point source as the sound generator. The distribution of the apparent sources on the cone, equivalent to the point source, is determined in terms of the pressure-gradient correlations.
Modular operads and the quantum open-closed homotopy algebra
NASA Astrophysics Data System (ADS)
Doubek, Martin; Jurčo, Branislav; Münster, Korbinian
2015-12-01
We verify that certain algebras appearing in string field theory are algebras over the Feynman transform of modular operads, which we describe explicitly. An equivalent description in terms of solutions of generalized BV master equations is explained from the operadic point of view.
Noncommutative Line Bundles and Gerbes
NASA Astrophysics Data System (ADS)
Jurčo, B.
We introduce noncommutative line bundles and gerbes within the framework of deformation quantization. The Seiberg-Witten map is used to construct the corresponding noncommutative Čech cocycles. Morita equivalence of star products and quantization of twisted Poisson structures are discussed from this point of view.
Kovalchik, Stephanie A; Reid, Machar
2017-12-01
Differences in the competitive performance characteristics of junior and professional tennis players are not well understood. The present study provides a comprehensive comparative analysis of junior and professional matchplay. The study utilized multiple large-scale datasets covering match, point, and shot outcomes over multiple years of competition. Regression analysis was used to identify differences between junior and professional matchplay. Top professional men and women were found to play significantly more matches, sets, and games compared to junior players of an equivalent ranking. Professional players had a greater serve advantage, men winning 4 and women winning 2 additional percentage points on serve compared to juniors. Clutch ability in break point conversion was 6 to 8 percentage points greater for junior players. In general, shots were more powerful and more accurate at the professional level with the largest differences observed for male players on serve. Serving to the center of the court was more than two times more common for junior players on first serve. While male professionals performed 50% more total work in a Grand Slam match than juniors, junior girls performed 50% more work than professional women. Understanding how competitiveness, play demands, and the physical characteristics of shots differ between junior and professional tennis players can help set realistic expectations and developmentally appropriate training for transitioning players.
Mass, Nathaniel J
2005-04-01
Most executives would say that adding a point of growth and gaining a point of operating-profit margin contribute about equally to shareholder value. Margin improvements hit the bottom line immediately, while growth compounds value over time. But the reality is that the two are rarely equivalent. Growth often is far more valuable than managers think. For some companies, convincing the market that they can grow by just one additional percentage point can be worth six, seven, or even ten points of margin improvement. This article presents a new strategic metric, called the relative value of growth (RVG), which gives managers a clear picture of how growth projects and margin improvement initiatives affect shareholder value. Using basic balance sheet and income sheet data, managers can determine their companies' RVGs, as well as those of their competitors. Calculating RVGs gives managers insights into which corporate strategies are working to deliver value and whether their companies are pulling the most powerful value-creation levers. The author examines a number of well-known companies and explains what their RVG numbers say about their strategies. He reviews the unspoken assumption that growth and profits are incompatible over the long term and shows that a fair number of companies are effective at delivering both. Finally, he explains how managers can use the RVG framework to help them define strategies that balance growth and profitability at both the corporate and business unit levels.
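The abstract does not state the RVG formula itself, so the sketch below substitutes a stylized Gordon-growth (perpetuity) valuation to illustrate the claim that one point of growth can be worth several points of margin; the formula, variable names, and numbers are assumptions for illustration only, not the article's metric.

```python
def firm_value(margin, revenue, growth, discount):
    """Stylized perpetuity (Gordon growth) value of margin * revenue
    cash flows growing at `growth`, discounted at `discount`."""
    assert discount > growth, "model requires discount rate above growth"
    return margin * revenue * (1 + growth) / (discount - growth)

R, r = 100.0, 0.09               # revenue and discount rate (illustrative)
m, g = 0.10, 0.05                # 10% operating margin, 5% growth
base = firm_value(m, R, g, r)
gain_growth = firm_value(m, R, g + 0.01, r) - base   # +1 point of growth
gain_margin = firm_value(m + 0.01, R, g, r) - base   # +1 point of margin
rvg = gain_growth / gain_margin  # a "relative value of growth" in this toy model
print(round(rvg, 2))             # prints 3.46: growth worth ~3.5 margin points here
```

In this toy setting the ratio rises sharply as growth approaches the discount rate, which mirrors the article's observation that the two levers are rarely equivalent.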
Surface field theories of point group symmetry protected topological phases
NASA Astrophysics Data System (ADS)
Huang, Sheng-Jie; Hermele, Michael
2018-02-01
We identify field theories that describe the surfaces of three-dimensional bosonic point group symmetry protected topological (pgSPT) phases. The anomalous nature of the surface field theories is revealed via a dimensional reduction argument. Specifically, we study three different surface field theories. The first field theory is quantum electrodynamics in three space-time dimensions (QED3) with four flavors of fermions. We show this theory can describe the surfaces of a majority of bosonic pgSPT phases protected by a single mirror reflection, or by Cnv point group symmetry for n = 2, 3, 4, 6. The second field theory is a variant of QED3 with charge-1 and charge-3 Dirac fermions. This field theory can describe the surface of a reflection symmetric pgSPT phase built by placing an E8 state on the mirror plane. The third field theory is an O(4) nonlinear sigma model with a topological theta term at θ = π, or, equivalently, a noncompact CP^1 model. Using a coupled wire construction, we show this is a surface theory for bosonic pgSPT phases with U(1) × Z2^P symmetry. For the latter two field theories, we discuss the connection to gapped surfaces with topological order. Moreover, we conjecture that the latter two field theories can describe surfaces of more general bosonic pgSPT phases with Cnv point group symmetry.
Organ doses from radionuclides on the ground. Part I. Simple time dependences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, P.; Paretzke, H.G.; Rosenbaum, H.
1988-06-01
Organ dose equivalents of the mathematical, anthropomorphic phantoms ADAM and EVA for photon exposures from plane sources on the ground have been calculated by Monte Carlo photon transport codes and tabulated in this article. The calculation takes into account the air-ground interface and a typical surface roughness, the energy and angular dependence of the photon fluence impinging on the phantom, and the time dependence of the contributions from daughter nuclides. Results are up to 35% higher than data reported in the literature for important radionuclides. This manuscript deals with radionuclides for which the time dependence of dose equivalent rates and dose equivalents may be approximated by a simple exponential. A companion manuscript treats radionuclides with non-trivial time dependences.
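For the "simple exponential" case the abstract refers to, the dose-equivalent rate decays as rate0 · e^(−λt), so the dose accumulated up to time T has the closed form rate0 · (1 − e^(−λT)) / λ. A minimal sketch of that integral, with purely illustrative nuclide constants (not values from the article):

```python
import math

def integrated_dose(rate0, lam, T):
    """Dose equivalent accumulated up to time T for a rate decaying as
    rate0 * exp(-lam * t); closed form of the time integral."""
    return rate0 * (1.0 - math.exp(-lam * T)) / lam

# Illustrative numbers only (not from the article):
half_life = 30.0                  # years, e.g. a Cs-137-like nuclide
lam = math.log(2) / half_life     # decay constant per year
rate0 = 1.0                       # initial dose-equivalent rate (mSv/yr)
first_year = integrated_dose(rate0, lam, 1.0)    # slightly below 1 mSv
lifetime = integrated_dose(rate0, lam, 1e6)      # approaches rate0 / lam
print(first_year, lifetime)
```

The long-time limit rate0 / λ is the familiar "committed dose" of an exponentially decaying source.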
Heavy-Load Lifting: Acute Response in Breast Cancer Survivors at Risk for Lymphedema
BLOOMQUIST, KIRA; OTURAI, PETER; STEELE, MEGAN L.; ADAMSEN, LIS; MØLLER, TOM; CHRISTENSEN, KARL BANG; EJLERTSEN, BENT; HAYES, SANDRA C.
2018-01-01
ABSTRACT Purpose Despite a paucity of evidence, prevention guidelines typically advise avoidance of heavy lifting in an effort to protect against breast cancer–related lymphedema. This study compared acute responses in arm swelling and related symptoms after low- and heavy-load resistance exercise among women at risk for lymphedema while receiving adjuvant taxane-based chemotherapy. Methods This is a randomized, crossover equivalence trial. Women receiving adjuvant taxane-based chemotherapy for breast cancer who had undergone axillary lymph node dissection (n = 21) participated in low-load (60%–65% 1-repetition maximum, two sets of 15–20 repetitions) and heavy-load (85%–90% 1-repetition maximum, three sets of 5–8 repetitions) upper-extremity resistance exercise separated by a 1-wk wash-out period. Swelling was determined by bioimpedance spectroscopy and dual-energy x-ray absorptiometry, with breast cancer–related lymphedema symptoms (heaviness, swelling, pain, tightness) reported using a numeric rating scale (0–10). Order of low- versus heavy-load was randomized. All outcomes were assessed before, immediately after, and 24 and 72 h after exercise. Generalized estimating equations were used to evaluate changes over time between groups, with equivalence between resistance exercise loads determined using the principle of confidence interval inclusion. Results The acute response to resistance exercise was equivalent for all outcomes at all time points irrespective of loads lifted, with the exception of extracellular fluid at 72 h after exercise with less swelling after heavy loads (estimated mean difference, −1.00; 95% confidence interval, −3.17 to 1.17). Conclusions Low- and heavy-load resistance exercise elicited similar acute responses in arm swelling and breast cancer–related lymphedema symptoms in women at risk for lymphedema receiving adjuvant taxane-based chemotherapy. 
These represent important preliminary findings, which can be used to inform future prospective evaluation of the long-term effects of repeated exposure to heavy-load resistance exercise. PMID:28991039
NASA Astrophysics Data System (ADS)
Mercaldo, M. T.; Rabuffo, I.; De Cesare, L.; Caramico D'Auria, A.
2016-04-01
In this work we study the quantum phase transition, the phase diagram and the quantum criticality induced by the easy-plane single-ion anisotropy in a d-dimensional quantum spin-1 XY model in the absence of an external longitudinal magnetic field. We employ the two-time Green function method by avoiding the Anderson-Callen decoupling of spin operators at the same sites, which is of doubtful accuracy. Following the original Devlin procedure we treat exactly the higher order single-site anisotropy Green functions and use Tyablikov-like decouplings for the exchange higher order ones. The related self-consistent equations appear suitable for an analysis of the thermodynamic properties at and around second order phase transition points. Remarkably, the equivalence between the microscopic spin model and the continuous O(2)-vector model with transverse-Ising model (TIM)-like dynamics, characterized by a dynamic critical exponent z=1, emerges at low temperatures close to the quantum critical point with the single-ion anisotropy parameter D as the non-thermal control parameter. The zero-temperature critical anisotropy parameter Dc is obtained for dimensionalities d > 1 as a function of the microscopic exchange coupling parameter and the related numerical data for different lattices are found to be in reasonable agreement with those obtained by means of alternative analytical and numerical methods. For d > 2, and in particular for d=3, we determine the finite-temperature critical line ending in the quantum critical point and the related TIM-like shift exponent, consistently with recent renormalization group predictions. The main crossover lines between different asymptotic regimes around the quantum critical point are also estimated, providing a global phase diagram and a quantum criticality very similar to the conventional ones.
40 CFR 60.2110 - What operating limits must I meet and by when?
Code of Federal Regulations, 2014 CFR
2014-07-01
... for Commercial and Industrial Solid Waste Incineration Units Emission Limitations and Operating Limits... zero to a level equivalent to at least two times your allowable emission limit. If your PM CPMS is an... of reading PM concentration from zero to a level equivalent to two times your allowable emission...
76 FR 14032 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-15
... to the level of Medicare GME support received by other, non-children's hospitals. The legislation... equivalent residents in applicant children's hospitals' training programs to determine the amount of direct... data on the number of full-time equivalent residents a second time during the Federal fiscal year to...
Twin-singleton differences in cognitive abilities in a sample of Africans in Nigeria.
Hur, Yoon-Mi; Lynn, Richard
2013-08-01
Recent studies comparing cognitive abilities between contemporary twins and singletons in developed countries have suggested that twin deficits in cognitive abilities no longer exist. We examined cognitive abilities in a sample of twins and singletons born recently in Nigeria to determine whether recent findings can be replicated in developing countries. Our sample consisted of 413 pairs of twins and 280 singletons collected from over 45 public schools in Abuja and its neighboring states in Nigeria. The ages of twins and singletons ranged from 9 to 20 years with a mean (SD) of 14.6 years (2.2 years) for twins and 16.1 years (1.8 years) for singletons. Zygosity of the same-sex twins was determined by analysis of 16 deoxyribonucleic acid markers. We asked participants to complete a questionnaire booklet that included Standard Progressive Matrices-Plus Version (SPM+), Mill-Hill Vocabulary Scale (MHV), Family Assets Questionnaire, and demographic questions. The data were corrected for sex and age and then analyzed using maximum likelihood model-fitting analysis. Although twins and singletons were comparable in family social class indicators, singletons did better than twins across all the tests (d = 0.10 to 0.35). The average of d for SPM+ total [0.32; equivalent to 4.8 Intelligence Quotient (IQ) points] and d for MHV (0.24; equivalent to 3.6 IQ points) was 0.28 (equivalent to 4.2 IQ points), similar to the twin-singleton gap found in old cohorts in developed countries. We speculate that malnutrition, poor health, and educational systems in Nigeria may explain the persistence of twin deficits in cognitive abilities found in our sample.
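The abstract's conversion from effect sizes to IQ points assumes the conventional IQ standard deviation of 15, so the gap is simply d × 15. A quick check reproducing the reported figures:

```python
IQ_SD = 15  # conventional standard deviation of the IQ scale

def d_to_iq_points(d):
    """IQ-point gap implied by a standardized mean difference (Cohen's d)."""
    return d * IQ_SD

# Effect sizes reported in the abstract:
for d in (0.32, 0.24, 0.28):
    print(f"d = {d:.2f} -> {d_to_iq_points(d):.1f} IQ points")
# prints 4.8, 3.6 and 4.2 IQ points, matching the abstract
```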
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Aleksandr I.; Lazarev, Alexander A.; Magas, Taras E.
2010-04-01
The advantages of equivalence models (EMs) for neural networks (NNs) are shown in this paper. EMs are based on vector-matrix procedures with basic operations of continuous neurologic: the normalized vector operations "equivalence", "nonequivalence", "autoequivalence", and "autononequivalence". The capacity of NNs based on EMs and their modifications, including auto- and heteroassociative memories for 2D images, exceeds the number of neurons several-fold. Such neuroparadigms are very promising for processing, recognizing, and storing large and strongly correlated images. A family of "normalized equivalence-nonequivalence" neuro-fuzzy logic operations is elaborated on the basis of the generalized operations fuzzy negation, t-norm, and s-norm. A biologically motivated concept and time-pulse encoding principles of continuous-logic photocurrent reflections, together with sample-storage devices with pulse-width photoconverters, have allowed us to design generalized structures for realizing the family of normalized linear vector operations "equivalence"-"nonequivalence". Simulation results show that the processing time in such circuits does not exceed a few microseconds. The circuits are simple, have low supply voltage (1-3 V), low power consumption (milliwatts), and low input-signal levels (microwatts), allow integrated construction, and satisfy the requirements of interconnection and cascading.
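The authors' exact normalized operations are not reproduced here; as an illustrative sketch only, one standard way to build a fuzzy equivalence/nonequivalence pair from a t-norm, an s-norm, and the standard negation is:

```python
def t_norm(a, b):
    """Minimum t-norm (fuzzy AND)."""
    return min(a, b)

def s_norm(a, b):
    """Maximum s-norm (fuzzy OR)."""
    return max(a, b)

def neg(a):
    """Standard fuzzy negation."""
    return 1.0 - a

def equivalence(a, b):
    """Fuzzy equivalence: high when a and b are both high or both low."""
    return s_norm(t_norm(a, b), t_norm(neg(a), neg(b)))

def nonequivalence(a, b):
    """Fuzzy nonequivalence: the negation of equivalence."""
    return neg(equivalence(a, b))

assert equivalence(1.0, 1.0) == 1.0   # identical inputs -> fully equivalent
assert equivalence(1.0, 0.0) == 0.0   # opposite inputs -> not equivalent
```

Other t-norm/s-norm pairs (product/probabilistic sum, etc.) give different members of the same operation family.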
Ewald Electrostatics for Mixtures of Point and Continuous Line Charges.
Antila, Hanne S; Tassel, Paul R Van; Sammalkorpi, Maria
2015-10-15
Many charged macro- or supramolecular systems, such as DNA, are approximately rod-shaped and, to the lowest order, may be treated as continuous line charges. However, the standard method used to calculate electrostatics in molecular simulation, the Ewald summation, is designed to treat systems of point charges. We extend the Ewald concept to a hybrid system containing both point charges and continuous line charges. We find the calculated force between a point charge and (i) a continuous line charge and (ii) a discrete line charge consisting of uniformly spaced point charges to be numerically equivalent when the separation greatly exceeds the discretization length. At shorter separations, discretization induces deviations in the force and energy, and point charge-point charge correlation effects. Because significant computational savings are also possible, the continuous line charge Ewald method presented here offers the possibility of accurate and efficient electrostatic calculations.
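The paper's Ewald machinery handles periodic systems and is not reproduced here; as a much simpler non-periodic illustration of the continuous-versus-discrete agreement, the perpendicular field of a finite uniform line charge (unit Coulomb constant, k = 1) can be compared against a midpoint discretization into point charges:

```python
import math

def field_continuous_line(lam, length, r):
    """Perpendicular E-field at distance r from the midpoint of a finite
    line of length `length` with uniform linear charge density lam (k = 1)."""
    return lam * length / (r * math.sqrt(r * r + length * length / 4.0))

def field_discrete_line(lam, length, r, n):
    """Same geometry, with the line replaced by n equally spaced point
    charges (midpoint discretization)."""
    h = length / n
    q = lam * h
    e = 0.0
    for i in range(n):
        x = -length / 2.0 + (i + 0.5) * h
        e += q * r / (x * x + r * r) ** 1.5
    return e

# When r greatly exceeds the discretization length h, the two agree closely
exact = field_continuous_line(1.0, 10.0, 1.0)
approx = field_discrete_line(1.0, 10.0, 1.0, 1000)
print(abs(approx - exact) / exact)  # small relative error
```

Shrinking n so that the spacing approaches r reproduces the short-separation deviations the abstract describes.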
Takesh, Thair; Sargsyan, Anik; Lee, Matthew; Anbarani, Afarin; Ho, Jessica; Wilder-Smith, Petra
2017-01-01
Aims The aim of this project was to evaluate the effects of 2 different whitening strips on color, microstructure and roughness of tea-stained porcelain and composite surfaces. Methods 54 porcelain and 72 composite chips served as samples for timed application of over-the-counter (OTC) test or control dental whitening strips. Chips were divided randomly into three groups of 18 porcelain and 24 composite chips each. Of these groups, 1 porcelain and 1 composite set served as controls. The remaining 2 groups were randomized to treatment with either Oral Essentials® Whitening Strips or Crest® 3D White Whitestrips™. Sample surface structure was examined by light microscopy, profilometry and Scanning Electron Microscopy (SEM). Additionally, a reflectance spectrophotometer was used to assess color changes in the porcelain and composite samples over 24 hours of whitening. Data points were analyzed at each time point using ANOVA. Results In the light microscopy and SEM images, no discrete physical defects were observed in any of the samples at any time points. However, high-resolution SEM images showed an appearance of increased surface roughness in all composite samples. Using profilometry, significantly increased post-whitening roughness was documented in the composite samples exposed to the control bleaching strips. Composite samples underwent a significant and equivalent shift in color following exposure to Crest® 3D White Whitestrips™ and Oral Essentials® Whitening Strips. Conclusions A novel commercial tooth whitening strip demonstrated a comparable bleaching effect to a widely used OTC whitening strip. Neither whitening strip caused physical defects in the sample surfaces. However, the control strip caused roughening of the composite samples whereas the test strip did not. PMID:29226023
Hirsch, Oliver; Donner-Banzhoff, Norbert; Bachmann, Viktoria
2013-07-01
Psychological constructs depend on cultural context. It is therefore important to show the equivalence of measurement instruments in cross-cultural research. There is evidence that in Russian-speaking immigrants, cultural and language issues are important in health care. We examined measurement equivalence of the Patient Health Questionnaire-9 (PHQ-9), the Patient Health Questionnaire-15 (PHQ-15), the Hamburg Self-Care Questionnaire (HamSCQ), and the questionnaire on communication preferences of patients with chronic illness (KOPRA) in native-born Germans, Russian-speaking immigrants living in Germany, and native-born Russians living in the former Soviet Union (FSU). All four questionnaires fulfilled requirements of measurement equivalence in confirmatory factor analyses and analyses of differential item functioning. The Russian translations can be used in Russian-speaking immigrants and native-born Russians. This offers further possibilities for cross-cultural research and for an improvement in health care research in Russian-speaking immigrants in Germany. The most pronounced differences occurred in the KOPRA, which point to differences in German and Russian health care systems.
Cavallaro, Michael C; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten
2017-02-01
Nontarget aquatic insects are susceptible to chronic neonicotinoid insecticide exposure during the early stages of development from repeated runoff events and prolonged persistence of these chemicals. Investigations on the chronic toxicity of neonicotinoids to aquatic invertebrates have been limited to a few species and under different laboratory conditions that often preclude direct comparisons of the relative toxicity of different compounds. In the present study, full life-cycle toxicity tests using Chironomus dilutus were performed to compare the toxicity of 3 commonly used neonicotinoids: imidacloprid, clothianidin, and thiamethoxam. Test conditions followed a static-renewal exposure protocol in which lethal and sublethal endpoints were assessed on days 14 and 40. Reduced emergence success, advanced emergence timing, and male-biased sex ratios were sensitive responses to low-level neonicotinoid exposure. The 14-d median lethal concentrations for imidacloprid, clothianidin, and thiamethoxam were 1.52 μg/L, 2.41 μg/L, and 23.60 μg/L, respectively. The 40-d median effect concentrations (emergence) for imidacloprid, clothianidin, and thiamethoxam were 0.39 μg/L, 0.28 μg/L, and 4.13 μg/L, respectively. Toxic equivalence relative to imidacloprid was estimated through a 3-point response average of equivalencies calculated at 20%, 50%, and 90% lethal and effect concentrations. Relative to imidacloprid (toxic equivalency factor [TEF] = 1.0), chronic (lethality) 14-d TEFs for clothianidin and thiamethoxam were 1.05 and 0.14, respectively, and chronic (emergence inhibition) 40-d TEFs were 1.62 and 0.11, respectively. These population-relevant endpoints and TEFs suggest that imidacloprid and clothianidin exert comparable chronic toxicity to C. dilutus, whereas thiamethoxam induced comparable effects only at concentrations an order of magnitude higher. 
However, the authors caution that under field conditions, thiamethoxam readily degrades to clothianidin, thereby likely enhancing toxicity. Environ Toxicol Chem 2017;36:372-382. © 2016 SETAC.
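The 3-point toxic-equivalency averaging described above can be sketched as follows (the EC values below are hypothetical placeholders, not the study's data):

```python
def toxic_equivalency_factor(ref_ecs, test_ecs):
    """Average of reference/test effect-concentration ratios across
    benchmark response levels (e.g. EC20, EC50, EC90), relative to a
    reference compound whose TEF is 1.0 by definition."""
    assert len(ref_ecs) == len(test_ecs)
    ratios = [r / t for r, t in zip(ref_ecs, test_ecs)]
    return sum(ratios) / len(ratios)

# Hypothetical EC20/EC50/EC90 values (ug/L), for illustration only
imidacloprid = [0.5, 1.5, 4.0]
other        = [5.0, 15.0, 40.0]
print(toxic_equivalency_factor(imidacloprid, other))  # 0.1: ten-fold less toxic
```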
Ferguson, Stephen A; Wang, Xuewei; Meyerhoff, Mark E
2016-08-07
Polymeric quaternary ammonium salts (polyquaterniums) have found increasing use in industrial and cosmetic applications in recent years. More specifically, polyquaternium-10 (PQ-10) is routinely used in cosmetic applications as a conditioner in personal care product formulations. Herein, we demonstrate the use of potentiometric polyion-sensitive polymeric membrane-based electrodes to quantify PQ-10 levels. Mixtures containing both PQ-10 and sodium lauryl sulfate (SLS) are used as model samples to illustrate this new method. SLS is often present in cosmetic samples that contain PQ-10 (e.g., shampoos, etc.) and this surfactant species interferes with the polyion sensor detection chemistry. However, it is shown here that SLS can be readily separated from the PQ-10/SLS mixture by use of an anion-exchange resin and that the PQ-10 can then be titrated with dextran sulfate (DS). This titration is monitored by potentiometric polyanion sensors to provide equivalence points that are directly proportional to PQ-10 concentrations.
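A common way to extract an equivalence point from a potentiometric titration curve (a generic sketch, not the sensor-specific algorithm used in the paper) is to locate the titrant volume where the potential changes most steeply:

```python
import math

def equivalence_point(volumes, potentials):
    """Estimate the titration equivalence point as the titrant volume at
    which the measured potential changes most steeply (max |dE/dV|),
    using finite differences between successive readings."""
    best_v, best_slope = None, -1.0
    for i in range(len(volumes) - 1):
        dv = volumes[i + 1] - volumes[i]
        slope = abs(potentials[i + 1] - potentials[i]) / dv
        if slope > best_slope:
            best_slope = slope
            best_v = 0.5 * (volumes[i] + volumes[i + 1])
    return best_v

# Synthetic sigmoidal titration curve with its inflection near V = 5.0
vols = [0.1 * i for i in range(101)]
emfs = [100.0 * math.tanh(2.0 * (v - 5.0)) for v in vols]
print(equivalence_point(vols, emfs))  # near 5.0
```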
Comparing topography-based verbal behavior with stimulus selection-based verbal behavior
Sundberg, Carl T.; Sundberg, Mark L.
1990-01-01
Michael (1985) distinguished between two types of verbal behavior: topography-based and stimulus selection-based verbal behavior. The current research was designed to empirically examine these two types of verbal behavior while addressing the frequently debated question, Which augmentative communication system should be used with the nonverbal developmentally disabled person? Four mentally retarded adults served as subjects. Each subject was taught to tact an object by either pointing to its corresponding symbol (selection-based verbal behavior), or making the corresponding sign (topography-based verbal behavior). They were then taught an intraverbal relation, and were tested for the emergence of stimulus equivalence relations. The results showed that signed responses were acquired more readily than pointing responses as measured by the acquisition of tacts and intraverbals, and the formation of equivalence classes. These results support Michael's (1985) analysis, and have important implications for the design of language intervention programs for the developmentally disabled. PMID:22477602
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997) algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!
Wang, Zhiguo; Ullah, Zakir; Gao, Mengqin; Zhang, Dan; Zhang, Yiqi; Gao, Hong; Zhang, Yanpeng
2015-01-01
Optical transistor is a device used to amplify and switch optical signals. Many researchers focus on replacing current computer components with optical equivalents, resulting in an optical digital computer system processing binary data. Electronic transistor is the fundamental building block of modern electronic devices. To replace electronic components with optical ones, an equivalent optical transistor is required. Here we compare the behavior of an optical transistor with the reflection from a photonic band gap structure in an electromagnetically induced transparency medium. A control signal is used to modulate the photonic band gap structure. Power variation of the control signal is used to provide an analogy between the reflection behavior caused by modulating the photonic band gap structure and the shifting of Q-point (Operation point) as well as amplification function of optical transistor. By means of the control signal, the switching function of optical transistor has also been realized. Such experimental schemes could have potential applications in making optical diode and optical transistor used in quantum information processing. PMID:26349444
NASA Astrophysics Data System (ADS)
Kelleher, Christa A.; Shaw, Stephen B.
2018-02-01
Recent research has found that hydrologic modeling over decadal time periods often requires time variant model parameters. Most prior work has focused on assessing time variance in model parameters conceptualizing watershed features and functions. In this paper, we assess whether adding a time variant scalar to potential evapotranspiration (PET) can be used in place of time variant parameters. Using the HBV hydrologic model and four different simple but common PET methods (Hamon, Priestley-Taylor, Oudin, and Hargreaves), we simulated 60+ years of daily discharge on four rivers in New York state. Allowing all ten model parameters to vary in time achieved good model fits in terms of daily NSE and long-term water balance. However, allowing single model parameters to vary in time - including a scalar on PET - achieved nearly equivalent model fits across PET methods. Overall, varying a PET scalar in time is likely more physically consistent with known biophysical controls on PET as compared to varying parameters conceptualizing innate watershed properties related to soil properties such as wilting point and field capacity. This work suggests that the seeming need for time variance in innate watershed parameters may be due to overly simple evapotranspiration formulations that do not account for all factors controlling evapotranspiration over long time periods.
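The time-variant PET scalar can be illustrated as a per-period multiplier applied to a daily PET series (a sketch under assumed names and numbers; the HBV model itself is not reproduced):

```python
def scaled_pet(pet_series, scalars, years):
    """Apply a time-variant multiplicative scalar to a daily PET series,
    one scalar per calendar year. `pet_series` and `years` are parallel
    lists; `scalars` maps year -> multiplier."""
    return [pet * scalars[year] for pet, year in zip(pet_series, years)]

pet   = [2.0, 2.5, 3.0, 2.8]          # hypothetical daily PET (mm/day)
years = [1990, 1990, 1991, 1991]
alpha = {1990: 1.0, 1991: 0.9}        # hypothetical yearly PET scalars
print(scaled_pet(pet, alpha, years))  # scales the 1991 values by 0.9
```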
Pitch chroma discrimination, generalization, and transfer tests of octave equivalence in humans.
Hoeschele, Marisa; Weisman, Ronald G; Sturdy, Christopher B
2012-11-01
Octave equivalence occurs when notes separated by an octave (a doubling in frequency) are judged as being perceptually similar. Considerable evidence points to the importance of the octave in music and speech. Yet, experimental demonstration of octave equivalence has been problematic. Using go/no-go operant discrimination and generalization, we studied octave equivalence in humans. In Experiment 1, we found that a procedure that failed to show octave equivalence in European starlings also failed in humans. In Experiment 2, we modified the procedure to control for the effects of pitch height perception by training participants in Octave 4 and testing in Octave 5. We found that the pattern of responding developed by discrimination training in Octave 4 generalized to Octave 5. We replicated and extended our findings in Experiment 3 by adding a transfer phase: Participants were trained with either the same or a reversed pattern of rewards in Octave 5. Participants transferred easily to the same pattern of reward in Octave 5 but struggled to learn the reversed pattern. We provided minimal instruction, presented no ordered sequences of notes, and used only sine-wave tones, but participants nonetheless constructed pitch chroma information from randomly ordered sequences of notes. Training in music weakly hindered octave generalization but moderately facilitated both positive and negative transfer.
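Pitch chroma, the octave-invariant component of pitch that participants evidently extracted, can be sketched as the fractional part of log-frequency; the middle-C reference below is an arbitrary assumption:

```python
import math

def chroma(freq_hz, ref_hz=261.63):
    """Pitch chroma as the fractional octave position relative to a
    reference frequency: notes exactly an octave apart (a doubling in
    frequency) share the same chroma."""
    return math.log2(freq_hz / ref_hz) % 1.0

# A4 = 440 Hz and A5 = 880 Hz differ by one octave -> equal chroma
assert abs(chroma(440.0) - chroma(880.0)) < 1e-9
```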
Andreasen, Nancy C; Pressler, Marcus; Nopoulos, Peg; Miller, Del; Ho, Beng-Choon
2010-02-01
A standardized quantitative method for comparing dosages of different drugs is a useful tool for designing clinical trials and for examining the effects of long-term medication side effects such as tardive dyskinesia. Such a method requires establishing dose equivalents. An expert consensus group has published charts of equivalent doses for various antipsychotic medications for first- and second-generation medications. These charts were used in this study. Regression was used to compare each drug in the experts' charts to chlorpromazine and haloperidol and to create formulas for each relationship. The formulas were solved for chlorpromazine 100 mg and haloperidol 2 mg to derive new chlorpromazine and haloperidol equivalents. The formulas were incorporated into our definition of dose-years such that 100 mg/day of chlorpromazine equivalent or 2 mg/day of haloperidol equivalent taken for 1 year is equal to one dose-year. All comparisons to chlorpromazine and haloperidol were highly linear with R(2) values greater than .9. A power transformation further improved linearity. By deriving a unique formula that converts doses to chlorpromazine or haloperidol equivalents, we can compare otherwise dissimilar drugs. These equivalents can be multiplied by the time an individual has been on a given dose to derive a cumulative value measured in dose-years in the form of (chlorpromazine equivalent in mg) x (time on dose measured in years). After each dose has been converted to dose-years, the results can be summed to provide a cumulative quantitative measure of lifetime exposure. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
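Once a chlorpromazine-equivalent conversion is fixed, the dose-year bookkeeping reduces to simple arithmetic; a sketch (the linear factor shown is illustrative — the paper derives drug-specific, power-transformed regression formulas):

```python
def dose_years(daily_dose_mg, years, cpz_factor):
    """Cumulative antipsychotic exposure in dose-years, where 100 mg/day
    of chlorpromazine equivalent taken for 1 year equals one dose-year.
    `cpz_factor` converts the drug's dose to chlorpromazine equivalents
    (an illustrative linear factor, not the paper's fitted formula)."""
    cpz_equivalent_mg = daily_dose_mg * cpz_factor
    return (cpz_equivalent_mg / 100.0) * years

# Haloperidol 2 mg/day is anchored to chlorpromazine 100 mg/day,
# i.e. an illustrative conversion factor of 50
assert dose_years(2.0, 1.0, 50.0) == 1.0

# Lifetime exposure: sum dose-years over each dosing period
total = dose_years(2.0, 1.0, 50.0) + dose_years(4.0, 0.5, 50.0)
print(total)  # 2.0 dose-years
```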
The importance of being equivalent: Newton's two models of one-body motion
NASA Astrophysics Data System (ADS)
Pourciau, Bruce
2004-05-01
As an undergraduate at Cambridge, Newton entered into his "Waste Book" an assumption that we have named the Equivalence Assumption (The Younger): "If a body move progressively in some crooked line [about a center of motion] ..., [then this] crooked line may bee conceived to consist of an infinite number of streight lines. Or else in any point of the croked line the motion may bee conceived to be on in the tangent". In this assumption, Newton somewhat imprecisely describes two mathematical models, a "polygonal limit model" and a "tangent deflected model", for "one-body motion", that is, for the motion of a "body in orbit about a fixed center", and then claims that these two models are equivalent. In the first part of this paper, we study the Principia to determine how the elder Newton would more carefully describe the polygonal limit and tangent deflected models. From these more careful descriptions, we then create Equivalence Assumption (The Elder), a precise interpretation of Equivalence Assumption (The Younger) as it might have been restated by Newton, after say 1687. We then review certain portions of the Waste Book and the Principia to make the case that, although Newton never restates nor even alludes to the Equivalence Assumption after his youthful Waste Book entry, still the polygonal limit and tangent deflected models, as well as an unspoken belief in their equivalence, infuse Newton's work on orbital motion. In particular, we show that the persuasiveness of the argument for the Area Property in Proposition 1 of the Principia depends crucially on the validity of Equivalence Assumption (The Elder). After this case is made, we present the mathematical analysis required to establish the validity of the Equivalence Assumption (The Elder). 
Finally, to illustrate the fundamental nature of the resulting theorem, the Equivalence Theorem as we call it, we present three significant applications: we use the Equivalence Theorem first to clarify and resolve questions related to Leibniz's "polygonal model" of one-body motion; then to repair Newton's argument for the Area Property in Proposition 1; and finally to clarify and resolve questions related to the transition from impulsive to continuous forces in "De motu" and the Principia.
NASA Astrophysics Data System (ADS)
Kim, Myung-Hee; Qualls, Garry; Slaba, Tony; Cucinotta, Francis A.
Phantom torso experiments have been flown on the space shuttle and International Space Station (ISS) providing validation data for radiation transport models of organ dose and dose equivalents. We describe results for space radiation organ doses using a new human geometry model based on detailed Voxel phantoms models denoted for males and females as MAX (Male Adult voXel) and Fax (Female Adult voXel), respectively. These models represent the human body with much higher fidelity than the CAMERA model currently used at NASA. The MAX and FAX models were implemented for the evaluation of directional body shielding mass for over 1500 target points of major organs. Radiation exposure to solar particle events (SPE), trapped protons, and galactic cosmic rays (GCR) were assessed at each specific site in the human body by coupling space radiation transport models with the detailed body shielding mass of MAX/FAX phantom. The development of multiple-point body-shielding distributions at each organ site made it possible to estimate the mean and variance of space dose equivalents at the specific organ. For the estimate of doses to the blood forming organs (BFOs), active marrow distributions in adult were accounted at bone marrow sites over the human body. We compared the current model results to space shuttle and ISS phantom torso experiments and to calculations using the CAMERA model.
Ghys, Timothy; Goedhuys, Wim; Spincemaille, Katrien; Gorus, Frans; Gerlo, Erik
2007-01-01
Glucose testing at the bedside has become an integral part of the management strategy in diabetes and of the careful maintenance of normoglycemia in all patients in intensive care units. We evaluated two point-of-care glucometers for the determination of plasma-equivalent blood glucose. The Precision PCx and the Accu-Chek Inform glucometers were evaluated. Imprecision and bias relative to the Vitros 950 system were determined using protocols of the Clinical Laboratory Standards Institute (CLSI). The effects of low, normal, and high hematocrit levels were investigated. Interference by maltose was also studied. Within-run precision for both instruments ranged from 2-5%. Total imprecision was less than 5% except for the Accu-Chek Inform at the low level (2.9 mmol/L). Both instruments correlated well with the comparison instrument and showed excellent recovery and linearity. Both systems reported at least 95% of their values within zone A of the Clarke Error Grid, and both fulfilled the CLSI quality criteria. The more stringent goals of the American Diabetes Association, however, were not reached. Both systems showed negative bias at high hematocrit levels. Maltose interfered with the glucose measurements on the Accu-Chek Inform but not on the Precision PCx. Both systems showed satisfactory imprecision and were reliable in reporting plasma-equivalent glucose concentrations. The most stringent performance goals were however not met.
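The imprecision figures quoted (2-5%) follow the usual coefficient-of-variation definition; a minimal sketch with hypothetical replicate readings:

```python
import statistics

def cv_percent(replicates):
    """Imprecision as the coefficient of variation (CV%), i.e. the
    sample standard deviation as a percentage of the mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical within-run replicate glucose readings (mmol/L)
readings = [5.0, 5.1, 4.9, 5.2, 4.8]
print(round(cv_percent(readings), 1))  # within-run CV, ~3.2 %
```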
Multiloop Functional Renormalization Group That Sums Up All Parquet Diagrams
NASA Astrophysics Data System (ADS)
Kugler, Fabian B.; von Delft, Jan
2018-02-01
We present a multiloop flow equation for the four-point vertex in the functional renormalization group (FRG) framework. The multiloop flow consists of successive one-loop calculations and sums up all parquet diagrams to arbitrary order. This provides substantial improvement of FRG computations for the four-point vertex and, consequently, the self-energy. Using the x-ray-edge singularity as an example, we show that solving the multiloop FRG flow is equivalent to solving the (first-order) parquet equations and illustrate this with numerical results.
NASA Astrophysics Data System (ADS)
Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish
2014-05-01
While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time/effort in distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated where topographical and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e. length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest, therefore it needs to be weighted in a systematic manner to formulate equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e. different soil types near the centre of the river, middle of hillslope and ridge line, three equivalent cross-sections (left bank, right bank and head water) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required based on the number of soil types. The equivalent cross-sections are formulated for a series of first order sub-basins by implementing different weighting methods of topographic and physiographic variables of landforms within the entire or part of a hillslope. The formulated equivalent cross-sections are then simulated using a 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins. The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture.
To assess the accuracy of the equivalent cross-section approach, the sub-basins are also divided into equally spaced multiple hillslope cross-sections. These cross-sections are simulated in a fully distributed setting using the 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the contributing area of each cross-section to obtain total fluxes from each sub-basin, referred to as reference fluxes. The equivalent cross-section approach is investigated for seven first order sub-basins of the McLaughlin catchment of the Snowy River, NSW, Australia, and evaluated in the Wagga-Wagga experimental catchment. Our results show that the simulated fluxes using an equivalent cross-section approach are very close to the reference fluxes, whereas computational time is reduced by a factor of ~4 to ~22 in comparison to the fully distributed setting. The transpiration and soil evaporation are the dominant fluxes and constitute ~85% of actual rainfall. Overall, the accuracy achieved in the dominant fluxes is higher than in the other fluxes. The simulated soil moistures from the equivalent cross-section approach are compared with in-situ soil moisture observations in the Wagga-Wagga experimental catchment in NSW, and the results were found to be consistent. Our results illustrate that the equivalent cross-section approach reduces the computational time significantly while maintaining the same order of accuracy in predicting the hydrological fluxes. As a result, this approach provides a great potential for implementation of distributed hydrological models at regional scales.
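The area-weighted flux aggregation used in both the equivalent and the reference configurations can be sketched as follows (numbers are hypothetical):

```python
def total_flux(fluxes, areas):
    """Aggregate per-cross-section fluxes (e.g. mm/day expressed as
    depth-equivalent rates) into a sub-basin total by weighting each
    cross-section's flux with its contributing area."""
    assert len(fluxes) == len(areas)
    return sum(f * a for f, a in zip(fluxes, areas))

# Three equivalent cross-sections (left bank, right bank, head water)
fluxes = [1.2, 0.8, 1.0]          # hypothetical simulated fluxes
areas  = [40.0, 35.0, 25.0]       # hypothetical contributing areas (km^2)
print(total_flux(fluxes, areas))  # 1.2*40 + 0.8*35 + 1.0*25 = 101.0
```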
ERIC Educational Resources Information Center
Arntzen, Erik; Haugland, Silje
2012-01-01
Reaction time (RT), thought to be important for acquiring a full understanding of the establishment of equivalence classes, has been reported in a number of studies within the area of stimulus equivalence research. In this study, we trained 3 classes of potentially 3 members, with arbitrary stimuli in a one-to-many training structure in 5 adult…
Effect of Arctic Amplification on Design Snow Loads in Alaska
2016-09-01
snow water equivalent; UFC Unified Facilities Criteria; UTC Coordinated Universal Time. Keywords: Alaska, Arctic amplification, climate change, ... extreme value analysis, snow loads, snow water equivalent, SWE. Acknowledgements: This work was conducted with support from the Strategic ... equivalent (SWE) of the snowpack. We acquired SWE data from a number of sources that provide automatic or manual observations, reanalysis data, or
Effects of equivalence ratio variation on lean, stratified methane-air laminar counterflow flames
NASA Astrophysics Data System (ADS)
Richardson, E. S.; Granet, V. E.; Eyssartier, A.; Chen, J. H.
2010-11-01
The effects of equivalence ratio variations on flame structure and propagation have been studied computationally. Equivalence ratio stratification is a key technology for advanced low emission combustors. Laminar counterflow simulations of lean methane-air combustion have been presented which show the effect of strain variations on flames stabilized in an equivalence ratio gradient, and the response of flames propagating into a mixture with a time-varying equivalence ratio. 'Back supported' lean flames, whose products are closer to stoichiometry than their reactants, display increased propagation velocities and reduced thickness compared with flames where the reactants are richer than the products. The radical concentrations in the vicinity of the flame are modified by the effect of an equivalence ratio gradient on the temperature profile and thermal dissociation. Analysis of steady flames stabilized in an equivalence ratio gradient demonstrates that the radical flux through the flame, and the modified radical concentrations in the reaction zone, contribute to the modified propagation speed and thickness of stratified flames. The modified concentrations of radical species in stratified flames mean that, in general, the reaction rate is not accurately parametrized by progress variable and equivalence ratio alone. A definition of stratified flame propagation based upon the displacement speed of a mixture fraction dependent progress variable was seen to be suitable for stratified combustion. The response times of the reaction, diffusion, and cross-dissipation components which contribute to this displacement speed have been used to explain flame response to stratification and unsteady fluid dynamic strain.
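The equivalence ratio used throughout is the fuel-air ratio normalized by its stoichiometric value; a minimal sketch (the stoichiometric methane-air mass ratio of roughly 1/17.2 is an assumption, quoted from standard combustion references rather than this paper):

```python
def equivalence_ratio(fuel_air_mass_ratio, stoich_fuel_air_mass_ratio):
    """phi = (F/A) / (F/A)_stoichiometric.
    phi < 1 is a lean mixture; phi > 1 is rich."""
    return fuel_air_mass_ratio / stoich_fuel_air_mass_ratio

# Methane-air: stoichiometric F/A mass ratio ~ 1/17.2 (assumed value)
FA_STOICH = 1.0 / 17.2
phi = equivalence_ratio(1.0 / 25.0, FA_STOICH)
print(round(phi, 2))  # 0.69: a lean mixture
```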
Long-term changes in retinal vascular diameter and cognitive impairment in type 1 diabetes.
Nunley, Karen A; Metti, Andrea L; Klein, Ronald; Klein, Barbara E; Saxton, Judith A; Orchard, Trevor J; Costacou, Tina; Aizenstein, Howard J; Rosano, Caterina
2018-05-01
To assess associations between cognitive impairment and longitudinal changes in retinal microvasculature, over 18 years, in adults with type 1 diabetes. Participants of the Pittsburgh Epidemiology of Diabetes Complications Study received ≥3 fundus photographs between baseline (1986-1988) and time of cognitive assessment (2010-2015: N = 119; 52% male; mean age and type 1 diabetes duration 43 and 34 years, respectively). Central retinal arteriolar equivalent and central retinal venular equivalent were estimated via computer-based methods; overall magnitude and speed of narrowing were quantified as cumulative average and slope, respectively. Median regression models estimated associations of central retinal arteriolar equivalent and central retinal venular equivalent measures with cognitive impairment status, adjusted for type 1 diabetes duration. Interactions with HbA1c, proliferative retinopathy and white matter hyperintensities were assessed. Compared with participants without cognitive impairment, those with clinically relevant cognitive impairment experienced 1.8% greater and 31.1% faster central retinal arteriolar equivalent narrowing during prior years (t = -2.93, p = 0.004 and t = -3.97, p < 0.0001, respectively). Interactions with HbA1c, proliferative retinopathy and white matter hyperintensities were not significant. No associations were found between central retinal arteriolar equivalent at baseline, at time of cognitive testing, or any central retinal venular equivalent measures, and cognitive impairment. Long-term arterial retinal changes could indicate type 1 diabetes-related cognitive impairment. Studies examining longitudinal central retinal arteriolar equivalent changes as early biomarkers of cognitive impairment risk are warranted.
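The study's two summary measures of vessel-caliber change, overall magnitude (cumulative average) and speed (slope), can be sketched as follows. This is an illustrative stand-in: the function name is hypothetical and ordinary least squares replaces the median regression used in the study.

```python
import numpy as np

def narrowing_metrics(years, crae_um):
    """Summarize retinal arteriolar caliber change over repeated fundus exams.

    Overall magnitude is the cumulative average of the caliber readings;
    speed is the fitted slope in units/year (negative slope = narrowing).
    """
    years = np.asarray(years, dtype=float)
    crae_um = np.asarray(crae_um, dtype=float)
    cumulative_average = crae_um.mean()
    slope_per_year = np.polyfit(years, crae_um, 1)[0]
    return cumulative_average, slope_per_year
```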
USING DOSE ADDITION TO ESTIMATE CUMULATIVE RISKS FROM EXPOSURES TO MULTIPLE CHEMICALS
The Food Quality Protection Act (FQPA) of 1996 requires the EPA to consider the cumulative risk from exposure to multiple chemicals that have a common mechanism of toxicity. Three methods, hazard index (HI), point-of-departure index (PODI), and toxicity equivalence factor (TEF), ...
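Two of the dose-addition methods named above reduce to simple sums. As a sketch (with made-up exposure and reference-dose numbers), the hazard index divides each exposure by its reference dose, while the TEF approach rescales each exposure into equivalents of an index chemical:

```python
def hazard_index(exposures, reference_doses):
    """HI = sum(E_i / RfD_i); HI > 1 flags potential cumulative concern."""
    return sum(e / rfd for e, rfd in zip(exposures, reference_doses))

def toxicity_equivalents(exposures, tefs):
    """TEF approach: scale each exposure by its toxicity equivalence factor,
    expressing the mixture as a dose of a single index chemical."""
    return sum(e * tef for e, tef in zip(exposures, tefs))
```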
Code of Federal Regulations, 2012 CFR
2012-07-01
... approach; (2) Decreased frequency for non-continuous parameter monitoring or physical inspections; (3... stream components, not carbon equivalents. Car-seal means a seal that is placed on a device that is used..., flow inducing devices that transport gas or vapor from an emission point to a control device. A closed...
14 CFR 171.263 - Localizer automatic monitor system.
Code of Federal Regulations, 2012 CFR
2012-01-01
... (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave Landing... provide an automatic monitor system that transmits a warning to designated local and remote control points... centerline equivalent to more than 0.015 DDM at the ISMLS reference datum. (2) For localizers in which the...
Court Interpreting: The Anatomy of a Profession.
ERIC Educational Resources Information Center
de Jongh, Elena M.
For both translators and interpreters, language proficiency is only the starting point for professional work. Equivalence of both meaning and style is necessary for faithful translation. The legal interpreter or translator must understand the complex characteristics and style of legal language. Court interpreting is a relatively young…
47 CFR 80.76 - Requirements for land station control points.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... 80.76 Section 80.76 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND... subject to this part must have the following facilities: (a) Except for marine utility stations, a visual indication of antenna current; or a pilot lamp, meter or equivalent device which provides continuous visual...
40 CFR 430.57 - Pretreatment standards for new sources (PSNS).
Code of Federal Regulations, 2012 CFR
2012-07-01
... when POTWs find it necessary to impose mass effluent standards, equivalent mass standards are provided... 40 Protection of Environment 31 2012-07-01 2012-07-01 false Pretreatment standards for new sources...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY...
40 CFR 466.24 - Pretreatment standards for existing sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... pretreatment standards the following equivalent mass standards are provided. (1) There shall be no discharge of... 40 Protection of Environment 30 2014-07-01 2014-07-01 false Pretreatment standards for existing...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) PORCELAIN ENAMELING POINT SOURCE CATEGORY Cast Iron Basis...
NASA Astrophysics Data System (ADS)
Christensen, David B.; Basaeri, Hamid; Roundy, Shad
2017-12-01
In acoustic power transfer systems, a receiver is displaced from a transmitter by an axial depth, a lateral offset (alignment), and a rotation angle (orientation). In systems where the receiver’s position is not fixed, such as a receiver implanted in biological tissue, slight variations in depth, orientation, or alignment can cause significant variations in the received voltage and power. To address this concern, this paper presents a computationally efficient technique to model the effects of depth, orientation, and alignment via ray tracing (DOART) on received voltage and power in acoustic power transfer systems. DOART combines transducer circuit equivalent models, a modified version of Huygens principle, and ray tracing to simulate pressure wave propagation and reflection between a transmitter and a receiver in a homogeneous medium. A reflected grid method is introduced to calculate propagation distances, reflection coefficients, and initial vectors between a point on the transmitter and a point on the receiver for an arbitrary number of reflections. DOART convergence and simulation time per data point is discussed as a function of the number of reflections and elements chosen. Finally, experimental data is compared to DOART simulation data in terms of magnitude and shape of the received voltage signal.
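The reflected-grid idea of computing propagation distances across reflections can be illustrated with the classic image-source construction. This standalone sketch (not the DOART implementation) gives the path length of a single-bounce ray off a plane boundary at z = wall_z:

```python
import math

def reflected_path_length(tx, rx, wall_z):
    """Length of the one-reflection path from tx to rx off the plane z = wall_z.

    Image-source construction: mirror the receiver across the plane; the
    reflected path length equals the straight-line distance to the image.
    """
    x, y, z = rx
    rx_image = (x, y, 2.0 * wall_z - z)
    return math.dist(tx, rx_image)
```

Higher-order reflections follow by mirroring repeatedly, which is essentially what a reflected grid enumerates.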
Understanding the Magnetosphere: The Counter-intuitive Simplicity of Cosmic Electrodynamics
NASA Astrophysics Data System (ADS)
Vasyliūnas, V. M.
2008-12-01
Planetary magnetospheres exhibit an amazing variety of phenomena, unlimited in complexity if followed into endlessly fine detail. The challenge of theory is to understand this variety and complexity, ultimately by seeing how the observed effects follow from the basic equations of physics (a point emphasized by Eugene Parker). The basic equations themselves are remarkably simple, only their consequences being exceedingly complex (a point emphasized by Fred Hoyle). In this lecture I trace the development of electrodynamics as an essential ingredient of magnetospheric physics, through the three stages it has undergone to date. Stage I is the initial application of MHD concepts and constraints (sometimes phrased in equivalent single-particle terms). Stage II is the classical formulation of self-consistent coupling between magnetosphere and ionosphere. Stage III is the more recent recognition that properly elucidating time sequence and cause-effect relations requires Maxwell's equations combined with the unique constraints of large-scale plasma. Problems and controversies underlie the transition from each stage to the following. For each stage, there are specific observed aspects of the magnetosphere that can be understood at its level; also, each stage implies a specific way to formulate unresolved questions (particularly important in this age of extensive multi-point observations and ever-more-detailed numerical simulations).
Nonlinear response from transport theory and quantum field theory at finite temperature
NASA Astrophysics Data System (ADS)
Carrington, M. E.; Defu, Hou; Kobes, R.
2001-07-01
We study the nonlinear response in weakly coupled hot φ4 theory. We obtain an expression for a quadratic shear viscous response coefficient using two different formalisms: transport theory and response theory. The transport theory calculation is done by assuming a local equilibrium form for the distribution function and expanding in the gradient of the local four dimensional velocity field. By performing a Chapman-Enskog expansion on the Boltzmann equation we obtain a hierarchy of equations for the coefficients of the expanded distribution function. To do the response theory calculation we use Zubarev's techniques in nonequilibrium statistical mechanics to derive a generalized Kubo formula. Using this formula allows us to obtain the quadratic shear viscous response from the three-point retarded Green function of the viscous shear stress tensor. We use the closed time path formalism of real time finite temperature field theory to show that this three-point function can be calculated by writing it as an integral equation involving a four-point vertex. This four-point vertex can in turn be obtained from an integral equation which represents the resummation of an infinite series of ladder and extended-ladder diagrams. The connection between transport theory and response theory is made when we show that the integral equation for this four-point vertex has exactly the same form as the equation obtained from the Boltzmann equation for the coefficient of the quadratic term of the gradient expansion of the distribution function. We conclude that calculating the quadratic shear viscous response using transport theory and keeping terms that are quadratic in the gradient of the velocity field in the Chapman-Enskog expansion of the Boltzmann equation is equivalent to calculating the quadratic shear viscous response from response theory using the next-to-linear response Kubo formula, with a vertex given by an infinite resummation of ladder and extended-ladder diagrams.
NASA Technical Reports Server (NTRS)
Mittra, R.; Rushdi, A.
1979-01-01
An approach for computing the geometrical-optics fields reflected from a numerically specified surface is presented. The approach includes the step of deriving a specular point and begins with computing the reflected rays off the surface at the points where their coordinates, as well as the partial derivatives (or, equivalently, the direction of the normal), are numerically specified. Then, a cluster of three adjacent rays is chosen to define a 'mean ray' and the divergence factor associated with this mean ray. Finally, the amplitude, phase, and vector direction of the reflected field at a given observation point are derived by associating this point with the nearest mean ray and determining its position relative to that ray.
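The first step above, reflecting a ray at a point where the surface normal is known, is the standard specular-reflection formula r = d - 2(d·n)n. A minimal sketch:

```python
import numpy as np

def reflect_ray(incident, normal):
    """Specular reflection of a unit incident direction d about a unit
    surface normal n: r = d - 2 (d . n) n."""
    d = np.asarray(incident, dtype=float)
    n = np.asarray(normal, dtype=float)
    return d - 2.0 * np.dot(d, n) * n
```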
Features of HF Radio Wave Attenuation in the Midlatitude Ionosphere Near the Skip Zone Boundary
NASA Astrophysics Data System (ADS)
Denisenko, P. F.; Skazik, A. I.
2017-06-01
We briefly describe the history of studying decameter radio wave attenuation by different methods in the midlatitude ionosphere. A new method of estimating the attenuation of HF radio waves in the ionospheric F region near the skip zone boundary is presented. This method is based on an analysis of the time structure of the interference field generated by highly stable monochromatic X-mode radio waves at the observation point. The main parameter is the effective electron collision frequency νeff, which accounts for all energy losses in the form of equivalent heat loss. The frequency νeff is estimated by matching the assumed (model) structure to the experimentally observed one. Model calculations are performed using the geometrical-optics approximation. The spatial attenuation caused by medium-scale traveling ionospheric disturbances is taken into account. The spherical shape of the ionosphere and the Earth's magnetic field are allowed for approximately. Recordings of the signal level from the RWM (Moscow) station at a frequency of 9.996 MHz, made at Rostov, are used.
Roncone, Alessandro; Hoffmann, Matej; Pattacini, Ugo; Fadiga, Luciano; Metta, Giorgio
2016-01-01
This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real time via a simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to the understanding of the biological principle of motor equivalence. More specifically, with respect to i) the present model contributes to hypothesizing a learning mechanism for peripersonal space. In relation to point ii) we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus, and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.
Fermion-induced quantum critical points in two-dimensional Dirac semimetals
NASA Astrophysics Data System (ADS)
Jian, Shao-Kai; Yao, Hong
2017-11-01
In this paper we investigate the nature of quantum phase transitions between two-dimensional Dirac semimetals and Z3-ordered phases (e.g., Kekule valence-bond solid), where cubic terms of the order parameter are allowed in the quantum Landau-Ginzburg theory and the transitions are putatively first order. From large-N renormalization-group (RG) analysis, we find that fermion-induced quantum critical points (FIQCPs) [Z.-X. Li et al., Nat. Commun. 8, 314 (2017), 10.1038/s41467-017-00167-6] occur when N (the number of flavors of four-component Dirac fermions) is larger than a critical value Nc. Remarkably, from the knowledge of space-time supersymmetry, we obtain an exact lower bound for Nc, i.e., Nc > 1/2. (Here the "1/2" flavor of four-component Dirac fermions is equivalent to one flavor of four-component Majorana fermions.) Moreover, we show that the emergence of two length scales is a typical phenomenon of FIQCPs and obtain two different critical exponents, i.e., ν ≠ ν', by large-N RG calculations. We further give a brief discussion of possible experimental realizations of FIQCPs.
Analysis of psychological factors for quality assessment of interactive multimodal service
NASA Astrophysics Data System (ADS)
Yamagishi, Kazuhisa; Hayashi, Takanori
2005-03-01
We proposed a subjective quality assessment model for interactive multimodal services. First, psychological factors of an audiovisual communication service were extracted by using the semantic differential (SD) technique and factor analysis. Forty subjects participated in subjective tests and performed point-to-point conversational tasks on a PC-based TV phone that exhibits various network qualities. The subjects assessed those qualities on the basis of 25 pairs of adjectives. Two psychological factors, i.e., an aesthetic feeling and a feeling of activity, were extracted from the results. Then, quality impairment factors affecting these two psychological factors were analyzed. We found that the aesthetic feeling is mainly affected by IP packet loss and video coding bit rate, and the feeling of activity depends on delay time and video frame rate. We then proposed an opinion model derived from the relationships among quality impairment factors, psychological factors, and overall quality. The results indicated that the estimation error of the proposed model is almost equivalent to the statistical reliability of the subjective score. Finally, using the proposed model, we discuss guidelines for quality design of interactive audiovisual communication services.
NASA Astrophysics Data System (ADS)
Cheng, Jierong; Jafar-Zanjani, Samad; Mosallaei, Hossein
2016-12-01
Metasurfaces are ideal candidates for conformal wave manipulation on curved objects due to their low profiles and rich functionalities. Here we design and analyze conformal metasurfaces for practical optical applications in the 532 nm visible band for the first time. The inclusions are silicon disk nanoantennas embedded in a flexible supporting layer of polydimethylsiloxane (PDMS). They behave as local phase controllers of subwavelength dimensions that modify the electromagnetic response point by point, with the merits of high efficiency in the visible regime, ultrathin films, and good tolerance to the incidence angle and to the grid stretching caused by the curved substrate. An efficient modeling technique based on the field equivalence principle is systematically proposed for characterizing metasurfaces with huge arrays of nanoantennas oriented in a conformal manner. Utilizing the robust nanoantenna inclusions and benefiting from this powerful analysis tool, we demonstrate the superior performance of the conformal metasurfaces in two specific areas, one being lensing with compensation of spherical aberration and the other carpet cloaking, both in the 532 nm visible spectrum.
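A lens that focuses without spherical aberration, as targeted above, imposes the standard hyperbolic phase profile on a normally incident plane wave. A minimal sketch (function name and parameterization are illustrative, not from the paper):

```python
import numpy as np

def hyperbolic_lens_phase(x, y, focal_length, wavelength=532e-9):
    """Phase (radians) at transverse point (x, y) of an aberration-free lens:
        phi(x, y) = -(2*pi / lambda) * (sqrt(x^2 + y^2 + f^2) - f)
    so every point contributes an equal optical path to the focus at distance f.
    """
    r2 = np.asarray(x, dtype=float) ** 2 + np.asarray(y, dtype=float) ** 2
    return -(2.0 * np.pi / wavelength) * (np.sqrt(r2 + focal_length ** 2) - focal_length)
```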
Ensuring Cross-Cultural Equivalence in Translation of Research Consents and Clinical Documents
Lee, Cheng-Chih; Li, Denise; Arai, Shoshana; Puntillo, Kathleen
2010-01-01
The aim of this article is to describe a formal process used to translate research study materials from English into traditional Chinese characters. This process may be useful for translating documents for use by both research participants and clinical patients. A modified Brislin model was used as the systematic translation process. Four bilingual translators were involved, and a Flaherty 3-point scale was used to evaluate the translated documents. The linguistic discrepancies that arise in the process of ensuring cross-cultural congruency or equivalency between the two languages are presented to promote the development of patient-accessible cross-cultural documents. PMID:18948451
Relative Impact of Incorporating Pharmacokinetics on ...
The use of high-throughput in vitro assays has been proposed to play a significant role in the future of toxicity testing. In this study, rat hepatic metabolic clearance and plasma protein binding were measured for 59 ToxCast phase I chemicals. Computational in vitro-to-in vivo extrapolation was used to estimate the daily dose in a rat, called the oral equivalent dose, which would result in steady-state in vivo blood concentrations equivalent to the AC50 or lowest effective concentration (LEC) across more than 600 ToxCast phase I in vitro assays. Statistical classification analysis was performed using either oral equivalent doses or unadjusted AC50/LEC values for the in vitro assays to predict the in vivo effects of the 59 chemicals. Adjusting the in vitro assays for pharmacokinetics did not improve the ability to predict in vivo effects as either a discrete (yes or no) response or a low effect level (LEL) on a continuous dose scale. Interestingly, a comparison of the in vitro assay with the lowest oral equivalent dose with the in vivo endpoint with the lowest LEL suggested that the lowest oral equivalent dose may provide a conservative estimate of the point of departure for a chemical in a dose-response assessment. Furthermore, comparing the oral equivalent doses for the in vitro assays with the in vivo dose range that resulted in adverse effects identified more coincident in vitro assays across chemicals than expected by chance, suggesting that the approach ma
Hardarson, Thorir; Bungum, Mona; Conaghan, Joe; Meintjes, Marius; Chantilis, Samuel J; Molnar, Laszlo; Gunnarsson, Kristina; Wikland, Matts
2015-12-01
To study whether a culture medium that allows undisturbed culture supports human embryo development to the blastocyst stage equivalently to well-established sequential media. Randomized, double-blinded sibling trial. Independent in vitro fertilization (IVF) clinics. One hundred twenty-eight patients, with 1,356 zygotes randomized into two study arms. Embryos were randomly allocated into two study arms to compare embryo development on a time-lapse system using a single-step medium or sequential media. Percentage of good-quality blastocysts on day 5. The percentage of day 5 good-quality blastocysts was 21.1% (standard deviation [SD] ± 21.6%) and 22.2% (SD ± 22.1%) in the single-step time-lapse medium (G-TL) and the sequential media (G-1/G-2) groups, respectively. The mean difference (-1.2; 95% CI, -6.0 to 3.6) between the two media systems for the primary end point was within the noninferiority margin of -8%. There was a statistically significantly lower percentage of good-quality embryos on day 3 in the G-TL group (50.7% [SD ± 30.6%] vs. 60.8% [SD ± 30.7%]). Four of the 11 measured morphokinetic parameters were statistically significantly different between the two media. The mean ammonium concentration in the media at the end of the culture period was statistically significantly lower in the G-TL group than in the G-2 group. We have shown that a single-step culture medium supports blastocyst development equivalently to established sequential media. The ammonium concentrations were lower in the single-step medium, and the measured morphokinetic parameters were somewhat modified. NCT01939626. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
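The noninferiority logic reported above (lower 95% CI bound of the rate difference must lie above the prespecified margin) can be sketched as follows; the function name is hypothetical and the standard error is back-calculated from the reported interval for illustration.

```python
def noninferiority_check(mean_diff, se, margin=-8.0, z=1.96):
    """Two-sided 95% CI for a difference on the percentage-point scale;
    noninferiority holds if the lower bound exceeds the margin (here -8)."""
    lower = mean_diff - z * se
    upper = mean_diff + z * se
    return (lower, upper), lower > margin
```

With the trial's numbers (difference -1.2, CI lower bound -6.0), the bound clears the -8 point margin, matching the reported conclusion.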
Kammalla, Ananth Kumar; Ramasamy, Mohan Kumar; Inampudi, Jyothi; Dubey, Govind Prasad; Agrawal, Aruna; Kaliappan, Ilango
2015-04-01
The US-patented polyherbal formulation for the prevention and management of type II diabetes and its vascular complications was used for the present study. The xanthone glycoside mangiferin is one of the major effector constituents in Salacia species with potential anti-diabetic activity. The pharmacokinetics of mangiferin following oral administration of pure mangiferin and of a polyherbal formulation containing Salacia species, at approximately the same mangiferin dose of 30 mg/kg, were studied in Wistar rats, together with its distribution among the major tissues. Plasma samples were collected at different time points (15, 30, 60, 120, 180, 240, 360, 480, 600, 1,440, 2,160, and 2,880 min) and subsequently analyzed using a validated, simple and rapid LC-MS method. Plasma concentration versus time profiles were explored by non-compartmental analysis. Mangiferin plasma exposure was significantly increased when administered as the formulation compared with standard mangiferin. Mangiferin resided significantly longer in the body (last mean residence time, MRTlast) when given in the form of the formulation (3.65 h). Cmax values after formulation administration (44.16 μg/mL) were elevated compared with an equivalent dose of pure mangiferin (15.23 μg/mL). The tissue distribution of mangiferin from the polyherbal formulation was also studied. In conclusion, the exposure of mangiferin is enhanced after formulation administration, which could result in superior efficacy of the polyherbal formulation compared with an equivalent dose of mangiferin. The results indicate that the delayed elimination and enhanced bioavailability of mangiferin might be due to interactions with other constituents present in the polyherbal formulation. Distribution results indicate that mangiferin was extensively bound to various tissues including the small intestine, heart, kidney, spleen, and liver, but not brain tissue.
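The non-compartmental analysis mentioned above reduces to a few textbook quantities (Cmax, Tmax, AUC by the trapezoidal rule, and MRT = AUMC/AUC). A minimal sketch with a hypothetical function name, computing metrics to the last sample without extrapolation:

```python
import numpy as np

def nca_metrics(t_min, conc):
    """Basic non-compartmental PK metrics from a plasma concentration profile.

    AUC and AUMC use the linear trapezoidal rule up to the last sample;
    MRTlast = AUMClast / AUClast.
    """
    t = np.asarray(t_min, dtype=float)
    c = np.asarray(conc, dtype=float)
    widths = np.diff(t)
    auc = float(np.sum(0.5 * (c[1:] + c[:-1]) * widths))
    tc = t * c
    aumc = float(np.sum(0.5 * (tc[1:] + tc[:-1]) * widths))
    return {"Cmax": float(c.max()), "Tmax": float(t[c.argmax()]),
            "AUClast": auc, "MRTlast": aumc / auc}
```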
Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes
Zhang, Hong; Pei, Yun
2016-01-01
Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266
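The equivalent continuous level that the simulation predicts is the standard energy average of time-varying sound levels. A minimal sketch of that calculation (illustrative, not the paper's simulation framework):

```python
import math

def equivalent_continuous_level(levels_db, durations):
    """Leq over a period T = sum(t_i):
        Leq = 10 * log10( (1/T) * sum( t_i * 10^(L_i / 10) ) )
    i.e. levels are averaged on an energy basis, not arithmetically."""
    total_time = sum(durations)
    energy = sum(t * 10.0 ** (level / 10.0)
                 for level, t in zip(levels_db, durations))
    return 10.0 * math.log10(energy / total_time)
```

Note the asymmetry of the energy average: an hour at 90 dB and an hour at 60 dB give an Leq near 87 dB, far above the arithmetic mean of 75 dB.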
Process for magnetic beneficiating petroleum cracking catalyst
Doctor, R.D.
1993-10-05
A process is described for beneficiating a particulate zeolite petroleum cracking catalyst having metal values in excess of 1000 ppm nickel equivalents. The particulate catalyst is passed through a magnetic field in the range of from about 2 Tesla to about 5 Tesla generated by a superconducting quadrupole open-gradient magnetic system for a time sufficient to effect separation of said catalyst into a plurality of zones having different nickel equivalent concentrations. A first zone has nickel equivalents of about 6,000 ppm and greater, a second zone has nickel equivalents in the range of from about 2000 ppm to about 6000 ppm, and a third zone has nickel equivalents of about 2000 ppm and less. The zones of catalyst are separated and the second zone material is recycled to a fluidized bed of zeolite petroleum cracking catalyst. The low nickel equivalent zone is treated while the high nickel equivalent zone is discarded. 1 figure.
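The three-way routing described in the patent can be sketched as a simple classifier on nickel-equivalent loading. The cut points use the approximate 2,000/6,000 ppm values from the process description; the function name is illustrative.

```python
def beneficiation_zone(nickel_equiv_ppm):
    """Route separated catalyst by nickel-equivalent loading:
    high-metal material is discarded, the middle zone is recycled to the
    fluidized bed, and the low-metal zone is treated."""
    if nickel_equiv_ppm >= 6000:
        return "discard"
    if nickel_equiv_ppm > 2000:
        return "recycle"
    return "treat"
```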
Process for magnetic beneficiating petroleum cracking catalyst
Doctor, Richard D.
1993-01-01
A process for beneficiating a particulate zeolite petroleum cracking catalyst having metal values in excess of 1000 ppm nickel equivalents. The particulate catalyst is passed through a magnetic field in the range of from about 2 Tesla to about 5 Tesla generated by a superconducting quadrupole open-gradient magnetic system for a time sufficient to effect separation of said catalyst into a plurality of zones having different nickel equivalent concentrations. A first zone has nickel equivalents of about 6,000 ppm and greater, a second zone has nickel equivalents in the range of from about 2000 ppm to about 6000 ppm, and a third zone has nickel equivalents of about 2000 ppm and less. The zones of catalyst are separated and the second zone material is recycled to a fluidized bed of zeolite petroleum cracking catalyst. The low nickel equivalent zone is treated while the high nickel equivalent zone is discarded.
Barringer, J.L.; Johnsson, P.A.
1996-01-01
Titrations for alkalinity and acidity using the technique described by Gran (1952, Determination of the equivalence point in potentiometric titrations, Part II: The Analyst, v. 77, p. 661-671) have been employed in the analysis of low-pH natural waters. This report includes a synopsis of the theory and calculations associated with Gran's technique and presents a simple and inexpensive method for performing alkalinity and acidity determinations. However, potential sources of error introduced by the chemical character of some waters may limit the utility of Gran's technique. Therefore, the cost- and time-efficient method for performing alkalinity and acidity determinations described in this report is useful for exploring the suitability of Gran's technique in studies of water chemistry.
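Gran's technique linearizes the titration curve so the equivalence point can be found by extrapolation rather than by locating an inflection. A small illustration with synthetic data (the volumes, pH readings, and variable names below are assumed, not taken from the report):

```python
# Minimal sketch of Gran's method with made-up example data: for an acid
# titrated with base, the Gran function F = (V0 + V) * 10**(-pH) falls
# linearly with titrant volume V before the equivalence point, and its
# x-intercept gives the equivalence volume Ve.

V0 = 50.0                        # sample volume, mL (assumed)
V = [1.0, 2.0, 3.0, 4.0]         # titrant added, mL (assumed)
pH = [2.80, 2.94, 3.12, 3.43]    # measured pH at each addition (assumed)

F = [(V0 + v) * 10.0 ** (-p) for v, p in zip(V, pH)]

# Ordinary least-squares line F = a*V + b; then Ve = -b/a.
n = len(V)
vbar, fbar = sum(V) / n, sum(F) / n
a = sum((v - vbar) * (f - fbar) for v, f in zip(V, F)) / sum((v - vbar) ** 2 for v in V)
b = fbar - a * vbar
Ve = -b / a
print(f"equivalence point at {Ve:.2f} mL of titrant")
```

Only a pH meter and a buret are needed, which is the cost advantage the report emphasizes.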
Mars' surface radiation environment measured with the Mars Science Laboratory's Curiosity rover.
Hassler, Donald M; Zeitlin, Cary; Wimmer-Schweingruber, Robert F; Ehresmann, Bent; Rafkin, Scot; Eigenbrode, Jennifer L; Brinza, David E; Weigle, Gerald; Böttcher, Stephan; Böhm, Eckart; Burmeister, Soenke; Guo, Jingnan; Köhler, Jan; Martin, Cesar; Reitz, Guenther; Cucinotta, Francis A; Kim, Myung-Hee; Grinspoon, David; Bullock, Mark A; Posner, Arik; Gómez-Elvira, Javier; Vasavada, Ashwin; Grotzinger, John P
2014-01-24
The Radiation Assessment Detector (RAD) on the Mars Science Laboratory's Curiosity rover began making detailed measurements of the cosmic ray and energetic particle radiation environment on the surface of Mars on 7 August 2012. We report and discuss measurements of the absorbed dose and dose equivalent from galactic cosmic rays and solar energetic particles on the martian surface for ~300 days of observations during the current solar maximum. These measurements provide insight into the radiation hazards associated with a human mission to the surface of Mars and provide an anchor point with which to model the subsurface radiation environment, with implications for microbial survival times of any possible extant or past life, as well as for the preservation of potential organic biosignatures of the ancient martian environment.
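Dose equivalent relates to absorbed dose through the mean quality factor of the radiation field. As a hedged illustration of that arithmetic only (the function name and all numbers below are invented, not RAD measurements), H = <Q> x D:

```python
# Illustrative conversion (numbers are made up, not RAD results): dose
# equivalent H is the absorbed dose D weighted by the mean quality factor
# <Q> of the mixed particle field.

def dose_equivalent_mSv(absorbed_dose_mGy: float, mean_quality_factor: float) -> float:
    """H = <Q> * D; 1 mGy of absorbed dose with quality factor Q gives Q mSv."""
    return mean_quality_factor * absorbed_dose_mGy

# e.g. 500 days on the surface at an assumed 0.2 mGy/day with an assumed <Q> = 3:
print(dose_equivalent_mSv(500 * 0.2, 3.0))  # 300.0
```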
Safety and efficacy of generic drugs with respect to brand formulation
Gallelli, Luca; Palleria, Caterina; De Vuono, Antonio; Mumoli, Laura; Vasapollo, Piero; Piro, Brunella; Russo, Emilio
2013-01-01
Generic drugs are equivalent to the brand formulation if they contain the same active substance, have the same pharmaceutical form and therapeutic indications, and show bioequivalence with the reference medicinal product. The use of generic drugs is encouraged in many countries in order to reduce medication prices. However, some points, such as bioequivalence and the role of excipients, still need to be clarified regarding clinical efficacy and safety during the switch from brand to generic formulations. In conclusion, the use of generic drugs could be associated with more days of disease (shorter time to relapse) or could lead to therapeutic failure; on the other hand, a higher drug concentration might expose patients to an increased risk of dose-dependent side effects. PMID:24347975
Stress wave calculations in composite plates using the fast Fourier transform.
NASA Technical Reports Server (NTRS)
Moon, F. C.
1973-01-01
The protection of composite turbine fan blades against impact forces has prompted the study of dynamic stresses in composites due to transient loads. The mathematical model treats the laminated plate as an equivalent anisotropic material. The use of Mindlin's approximate theory of crystal plates results in five two-dimensional stress waves. Three of the waves are flexural and two involve in-plane extensional strains. The initial value problem due to a transient distributed transverse force on the plate is solved using Laplace and Fourier transforms. A fast computer program for inverting the two-dimensional Fourier transform is used. Stress contours for various stresses and times after application of load are obtained for a graphite fiber-epoxy matrix composite plate. Results indicate that the points of maximum stress travel along the fiber directions.
Olivero, Ofelia A; Torres, Lorangelly Rivera; Gorjifard, Sayeh; Momot, Dariya; Marrogi, Eryney; Divi, Rao L; Liu, Yongmin; Woodward, Ruth A; Sowers, Marsha J; Poirier, Miriam C
2013-07-15
Erythrocebus patas (patas) monkeys were used to model antiretroviral (ARV) drug exposure in human immunodeficiency virus type 1-infected pregnant women. Pregnant patas dams were given human-equivalent doses of ARVs daily during 50% of gestation. Mesenchymal cells, cultured from the bone marrow of patas offspring obtained at birth and at 1 and 3 years of age, were examined for genotoxicity, including centrosomal amplification, micronuclei, and micronuclei containing whole chromosomes. Compared with controls, statistically significant increases (P < .05) in centrosomal amplification, micronuclei, and micronuclei containing whole chromosomes were found in mesenchymal cells from most groups of offspring at the 3 time points. Transplacental nucleoside reverse-transcriptase inhibitor exposure induced fetal genotoxicity that persisted for 3 years.
Measuring moderate-intensity walking in older adults using the ActiGraph accelerometer.
Barnett, Anthony; van den Hoek, Daniel; Barnett, David; Cerin, Ester
2016-12-08
Accelerometry is the method of choice for objectively assessing physical activity in older adults. Many studies have used an accelerometer count cut point corresponding to 3 metabolic equivalents (METs) derived in young adults during treadmill walking and running, with a resting metabolic rate (RMR) assumed at 3.5 mL·kg⁻¹·min⁻¹ (corresponding to 1 MET). RMR is lower in older adults; therefore, their 3 MET level occurs at a lower absolute energy expenditure, making the cut point derived from young adults inappropriate for this population. The few studies determining older-adult-specific moderate-to-vigorous intensity physical activity (MVPA) cut points had methodological limitations, such as not measuring RMR and using treadmill walking. This study determined an MVPA hip-worn accelerometer cut point for older adults using measured RMR and overground walking. Following determination of RMR, 45 older adults (mean age 70.2 ± 7 years, range 60-87.6 years) undertook an outdoor, overground walking protocol with accelerometer count and energy expenditure determined at five walking speeds. Mean RMR was 2.8 ± 0.6 mL·kg⁻¹·min⁻¹. The MVPA cut points (95% CI) determined using linear mixed models were: vertical axis 1013 (734, 1292) counts·min⁻¹; vector magnitude 1924 (1657, 2192) counts·min⁻¹; and walking speed 2.5 (2.2, 2.8) km·hr⁻¹. High levels of inter-individual variability in cut points were found. These MVPA accelerometer and speed cut points for walking, the most popular physical activity in older adults, were lower than those for younger adults. Using cut points determined in younger adults for older adult population studies is likely to underestimate time spent engaged in MVPA. In addition, prescription of walking speed based on the adult cut point is likely to result in older adults working at a higher intensity than intended.
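To see why the choice of cut point matters, the sketch below counts MVPA minutes using the reported older-adult vertical-axis cut point of 1013 counts·min⁻¹ versus a commonly used younger-adult cut point of 1952 counts·min⁻¹ (Freedson et al.); the epoch data are hypothetical:

```python
# Hypothetical illustration of cut-point sensitivity. The 1013 counts/min
# value is from this study's abstract; 1952 counts/min is the widely used
# Freedson younger-adult cut point. The epoch data are invented.

def mvpa_minutes(counts_per_min, cut_point):
    """Number of 1-minute epochs at or above the MVPA cut point."""
    return sum(1 for c in counts_per_min if c >= cut_point)

day = [250, 1100, 1500, 900, 2100, 400, 1300]   # made-up 1-min epochs
print(mvpa_minutes(day, 1013))   # older-adult cut point  -> 4
print(mvpa_minutes(day, 1952))   # younger-adult cut point -> 1
```

The same wear data yield far fewer MVPA minutes under the younger-adult threshold, which is the underestimation the authors warn about.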
Narayanaswamy’s 1971 aging theory and material time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyre, Jeppe C., E-mail: dyre@ruc.dk
2015-09-21
The Bochkov-Kuzovlev nonlinear fluctuation-dissipation theorem is used to derive Narayanaswamy’s phenomenological theory of physical aging, in which this highly nonlinear phenomenon is described by a linear material-time convolution integral. A characteristic property of the Narayanaswamy aging description is material-time translational invariance, which is here taken as the basic assumption of the derivation. It is shown that only one possible definition of the material time obeys this invariance, namely, the square of the distance travelled from a configuration of the system far back in time. The paper concludes with suggestions for computer simulations that test for consequences of material-time translational invariance. One of these is the “unique-triangles property” according to which any three points on the system’s path form a triangle such that two side lengths determine the third; this is equivalent to the well-known triangular relation for time-autocorrelation functions of aging spin glasses [L. F. Cugliandolo and J. Kurchan, J. Phys. A: Math. Gen. 27, 5749 (1994)]. The unique-triangles property implies a simple geometric interpretation of out-of-equilibrium time-autocorrelation functions, which extends to aging a previously proposed framework for such functions in equilibrium [J. C. Dyre, e-print arXiv:cond-mat/9712222 (1997)].
Determination of Dynamic Recrystallization Process by Equivalent Strain
NASA Astrophysics Data System (ADS)
Qin, Xiaomei; Deng, Wei
Based on Tarnovskii's displacement field, an expression for the equivalent strain was derived, and, using the critical strain for dynamic recrystallization (DRX), the DRX process was determined from the equivalent strain. It was found that the equivalent strain distribution in a deformed specimen is inhomogeneous and increases with increasing true strain. Under a given true strain, the equivalent strains at the center, at the demisemi radius, and on the tangential plane just below the surface of the specimen are higher than the true strain; thus, micrographs at those positions cannot exactly reflect the true microstructures for that true strain. With increasing strain rate, the start and finish times of DRX decrease. The frozen microstructures of 20Mn23AlV steel under the experimental conditions validate the feasibility of predicting the DRX process from the equivalent strain.
Abbas, Ahmar S; Moseley, Douglas; Kassam, Zahra; Kim, Sun Mo; Cho, Charles
2013-05-06
Recently, volumetric-modulated arc therapy (VMAT) has demonstrated the ability to deliver radiation dose precisely and accurately with a shorter delivery time than conventional fixed-field intensity-modulated radiotherapy (IMRT). We tested the hypothesis that, for thoracic esophageal carcinoma, VMAT can achieve conformal dose coverage of a large thoracic esophageal planning target volume (PTV) and sparing of organs at risk (OARs) that is superior or equivalent to conventional fixed-field IMRT plans, while reducing delivery time and monitor units (MUs). We also analyzed and compared other important treatment-planning and treatment-delivery metrics for both IMRT and VMAT, including: 1) the integral dose and the volume receiving intermediate dose levels in IMRT versus VMATI plans; 2) the use of 4D CT to determine the internal motion margin; and 3) the dosimetry of every plan assessed through patient-specific QA. These factors may affect the overall plan quality and outcomes of each planning technique. In this study, we also examined the significance of a two-arc versus a single-arc VMAT technique for PTV coverage, OAR doses, monitor units, and delivery time. Thirteen patients, stage T2-T3 N0-N1 (TNM AJCC 7th edn.), median PTV volume 395 cc (range 281-601 cc), median age 69 years (range 53 to 85), were treated from July 2010 to June 2011 with a four-field (n = 4) or five-field (n = 9) step-and-shoot IMRT technique using a 6 MV beam to a prescribed dose of 50 Gy in 20 to 25 fractions. These patients were retrospectively replanned using a single arc (VMATI, 91 control points) and two arcs (VMATII, 182 control points). All treatment plans of the 13 study cases were evaluated using various dose-volume metrics.
These included PTV D99, PTV D95, PTV V47.5Gy (95% of the prescribed dose), PTV mean dose, Dmax, PTV dose conformity (Van't Riet conformation number (CN)), mean lung dose, lung V20 and V5, liver V30, and Dmax to the spinal canal PRV (3 mm margin). Total plan monitor units (MUs) and beam delivery time were also examined. Equivalent target coverage was observed with both single-arc and two-arc VMAT plans. The comparison of VMATI with fixed-field IMRT demonstrated equivalent target coverage; no statistically significant differences were found in PTV D99 (p = 0.47), PTV mean (p = 0.12), or PTV D95 and PTV V47.5Gy (p = 0.38). However, Dmax in VMATI plans was significantly lower than in IMRT (p = 0.02). The Van't Riet conformation number (CN) also statistically favored VMATI plans (p = 0.04). VMATI achieved lower lung V20 (p = 0.05), whereas lung V5 (p = 0.35) and mean lung dose (p = 0.62) were not significantly different. The other OARs, including the spinal canal, liver, heart, and kidneys, showed no statistically significant differences between the two techniques. Treatment delivery time for VMATI plans was reduced by up to 55% (p = 5.8E-10) and MUs by up to 16% (p = 0.001). Integral dose was not statistically different between the two planning techniques (p = 0.99). No statistically significant differences were found in the dose distributions of the two VMAT techniques (VMATI vs. VMATII). Dose statistics for the two VMAT techniques were: PTV D99 (p = 0.76), PTV D95 (p = 0.95), mean PTV dose (p = 0.78), conformation number (CN) (p = 0.26), and MUs (p = 0.1). However, treatment delivery time for VMATII increased significantly, by two-fold (p = 3.0E-11), compared to VMATI. VMAT-based treatment planning is safe and deliverable for patients with thoracic esophageal cancer, with planning goals similar to standard IMRT. The key benefits of VMATI were the reduction in treatment delivery time and MUs and the improvement in dose conformality.
In our study, we found no significant advantage of VMATII over single-arc VMATI for PTV coverage or OAR doses; however, delivery time increased significantly for VMATII compared to VMATI.
Iino, Fukuya; Takasuga, Takumi; Touati, Abderrahmane; Gullett, Brian K
2003-01-01
The toxic equivalency (TEQ) values of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) are predicted with a model based on homologue concentrations measured from a laboratory-scale reactor (124 data points), a package boiler (61 data points), and operating municipal waste incinerators (114 data points). Despite the three scales and types of equipment, the different temperature profiles, the sampling of emissions and/or solids (fly ash), and the various chemical and physical properties of the fuels, all the PCDF plots showed highly linear correlations (R² > 0.99). The fitting lines for the reactor and boiler data were almost linear with a slope of unity, whereas the slope for the municipal waste incinerator data was 0.86, caused by higher predicted values for samples with high measured TEQ. The strong correlation also implies that each of the 10 toxic PCDF congeners maintains a constant concentration relative to its respective total homologue concentration across a wide range of facility types and combustion conditions. The PCDD plots showed significant scatter and poor linearity, which implies that the relative concentration of PCDD TEQ congeners is more sensitive to variations in reaction conditions than that of the PCDF congeners.
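TEQ itself is a TEF-weighted sum over toxic congener concentrations, which is the quantity the homologue-based model predicts. A toy computation (the two TEF values are the WHO-style factors for those congeners; the concentrations and units are invented):

```python
# Toy TEQ arithmetic: TEQ = sum over congeners of TEF * concentration.
# TEFs below are the WHO-style values for these two PCDF congeners;
# the concentrations are made up for illustration.

toxic_equivalency_factors = {
    "2,3,7,8-TCDF": 0.1,
    "2,3,4,7,8-PeCDF": 0.3,
}
concentrations_ng_per_g = {
    "2,3,7,8-TCDF": 5.0,      # assumed sample concentration
    "2,3,4,7,8-PeCDF": 2.0,   # assumed sample concentration
}

teq = sum(tef * concentrations_ng_per_g[c]
          for c, tef in toxic_equivalency_factors.items())
print(round(teq, 2))  # 1.1  (ng TEQ/g in this toy example)
```

The paper's finding that each toxic PCDF congener is a fixed fraction of its homologue total is what lets this sum be estimated from homologue concentrations alone.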
Code of Federal Regulations, 2010 CFR
2010-07-01
... equivalent level of safety. (c) As a third option, other technical analysis procedures, as approved by the... Equivalent Level of Safety Analysis § 102-80.115 Is there more than one option for establishing that an... areas of safety. Available safe egress times would be developed based on analysis of a number of assumed...
NASA Astrophysics Data System (ADS)
Geng, Lin; Bi, Chuan-Xing; Xie, Feng; Zhang, Xiao-Zheng
2018-07-01
The interpolated time-domain equivalent source method is extended to reconstruct the instantaneous surface normal velocity of a vibrating structure using the time-evolving particle velocity as the input, providing a non-contact way to understand the overall instantaneous vibration behavior of the structure. In this method, the time-evolving particle velocity in the near field is first modeled by a set of equivalent sources positioned inside the vibrating structure; the integrals of the equivalent source strengths are then solved by an iterative process and used to calculate the instantaneous surface normal velocity. An experiment on a semi-cylindrical steel plate impacted by a steel ball was conducted to examine the ability of the extended method: the time-evolving normal particle velocity and pressure on the hologram surface, measured by a Microflown pressure-velocity probe, were used as the inputs of the extended method and of the pressure-based method, respectively, and the instantaneous surface normal velocity of the plate measured by a laser Doppler vibrometer was used as the reference for comparison. The experimental results demonstrate that the extended method is a powerful tool for visualizing the instantaneous surface normal velocity of a vibrating structure in both the time and space domains and obtains more accurate results than the pressure-based method.
NASA Technical Reports Server (NTRS)
Wu, Xiaolin; Delgado, Guillermo; Krishnamurthy, Ramanarayanan; Eschenmoser, Albert
2003-01-01
Replacement of adenine by 2,6-diaminopurine (two nucleobases to be considered equivalent from an etiological point of view) strongly enhances the stability of TNA/TNA, TNA/RNA, or TNA/DNA duplexes and efficiently accelerates template-directed ligation of TNA ligands.
Code of Federal Regulations, 2011 CFR
2011-07-01
... stream components, not carbon equivalents. Car-seal means a seal that is placed on a device that is used..., flow inducing devices that transport gas or vapor from an emission point to a control device. A closed...), analyze, and provide a record of process or control system parameters. Continuous record means...
Code of Federal Regulations, 2014 CFR
2014-07-01
... stream components, not carbon equivalents. Car-seal means a seal that is placed on a device that is used..., flow inducing devices that transport gas or vapor from an emission point to a control device. A closed...), analyze, and provide a record of process or control system parameters. Continuous record means...
Code of Federal Regulations, 2013 CFR
2013-07-01
... stream components, not carbon equivalents. Car-seal means a seal that is placed on a device that is used..., flow inducing devices that transport gas or vapor from an emission point to a control device. A closed...), analyze, and provide a record of process or control system parameters. Continuous record means...
42 CFR 435.603 - Application of modified adjusted gross income (MAGI).
Code of Federal Regulations, 2014 CFR
2014-10-01
... equivalent to 5 percentage points of the Federal poverty level for the applicable family size only to determine the eligibility of an individual for medical assistance under the eligibility group with the... determine eligibility for a particular eligibility group. (e) MAGI-based income. For the purposes of this...
Code of Federal Regulations, 2014 CFR
2014-07-01
... the basis of one or more of the following factors: (i) The design and use of the device primarily to... onsite, or to a point of shipment for disposal off-site. Aquifer means a geologic formation, group of... equivalent responsibility. Battery means a device consisting of one or more electrically connected...
Acid Rain Analysis by Standard Addition Titration.
ERIC Educational Resources Information Center
Ophardt, Charles E.
1985-01-01
The standard addition titration is a precise and rapid method for the determination of the acidity in rain or snow samples. The method requires use of a standard buret, a pH meter, and Gran's plot to determine the equivalence point. Experimental procedures used and typical results obtained are presented. (JN)
40 CFR 63.653 - Monitoring, recordkeeping, and implementation plan for emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
... § 63.120 of subpart G; and (ii) For closed vent systems with control devices, conduct an initial design..., monitoring, recordkeeping, and reporting equivalent to that required for Group 1 emission points complying... control device. (2) The source shall implement the following procedures for each miscellaneous process...
Preparing Educators to Teach Effectively in Inclusive Settings
ERIC Educational Resources Information Center
Reyes, Maria E.; Hutchinson, Cynthia J.; Little, Mary
2017-01-01
Florida Senate Bill 1108 requires educators applying for re-certification to earn one college credit, or the equivalent in-service points, in teaching students with disabilities (SWD). To assist educators in fulfilling this requirement, a college of education designed an online course and an educator's summer institute (ESI). This article examines…
40 CFR 430.107 - Pretreatment standards for new sources (PSNS).
Code of Federal Regulations, 2010 CFR
2010-07-01
...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Secondary Fiber Non...: Subpart J [PSNS for secondary fiber non-deink facilities where paperboard from wastepaper is produced... = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations are provided as...
40 CFR 430.107 - Pretreatment standards for new sources (PSNS).
Code of Federal Regulations, 2011 CFR
2011-07-01
...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Secondary Fiber Non...: Subpart J [PSNS for secondary fiber non-deink facilities where paperboard from wastepaper is produced... = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations are provided as...
14 CFR 25.1337 - Powerplant instruments.
Code of Federal Regulations, 2013 CFR
2013-01-01
... supplying reciprocating engines, at a point downstream of any fuel pump except fuel injection pumps. In... hazard. (b) Fuel quantity indicator. There must be means to indicate to the flight crewmembers, the quantity, in gallons or equivalent units, of usable fuel in each tank during flight. In addition— (1) Each...
14 CFR 25.1337 - Powerplant instruments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... supplying reciprocating engines, at a point downstream of any fuel pump except fuel injection pumps. In... hazard. (b) Fuel quantity indicator. There must be means to indicate to the flight crewmembers, the quantity, in gallons or equivalent units, of usable fuel in each tank during flight. In addition— (1) Each...
14 CFR 25.1337 - Powerplant instruments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... supplying reciprocating engines, at a point downstream of any fuel pump except fuel injection pumps. In... hazard. (b) Fuel quantity indicator. There must be means to indicate to the flight crewmembers, the quantity, in gallons or equivalent units, of usable fuel in each tank during flight. In addition— (1) Each...
14 CFR 25.1337 - Powerplant instruments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... supplying reciprocating engines, at a point downstream of any fuel pump except fuel injection pumps. In... hazard. (b) Fuel quantity indicator. There must be means to indicate to the flight crewmembers, the quantity, in gallons or equivalent units, of usable fuel in each tank during flight. In addition— (1) Each...
14 CFR 25.1337 - Powerplant instruments.
Code of Federal Regulations, 2014 CFR
2014-01-01
... supplying reciprocating engines, at a point downstream of any fuel pump except fuel injection pumps. In... hazard. (b) Fuel quantity indicator. There must be means to indicate to the flight crewmembers, the quantity, in gallons or equivalent units, of usable fuel in each tank during flight. In addition— (1) Each...
14 CFR 171.315 - Azimuth monitor system requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... TRANSPORTATION (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Microwave Landing System... system must cause the radiation to cease and a warning must be provided at the designated control point... following procedure. The integral monitor alarm limit should be set to the angular equivalent of ±10 ft. at...
14 CFR 171.315 - Azimuth monitor system requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... TRANSPORTATION (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Microwave Landing System... system must cause the radiation to cease and a warning must be provided at the designated control point... following procedure. The integral monitor alarm limit should be set to the angular equivalent of ±10 ft. at...
The Telephone Interview for Cognitive Status: Creating a crosswalk with the Mini-Mental State Exam
Fong, Tamara G.; Fearing, Michael A.; Jones, Richard N.; Shi, Peilin; Marcantonio, Edward R.; Rudolph, James L.; Yang, Frances M.; Kiely, Dan K.; Inouye, Sharon K.
2009-01-01
Background Brief cognitive screening measures are valuable tools for both research and clinical applications. The most widely used instrument, the Mini-Mental State Examination (MMSE), is limited in that it must be administered face-to-face, cannot be used in participants with visual or motor impairments, and is protected by copyright. Alternative screening instruments, such as the Telephone Interview for Cognitive Status (TICS), have been developed and may provide a valid alternative with comparable cut point scores to rate global cognitive function. Methods MMSE, TICS-30, and TICS-40 scores from 746 community-dwelling elders who participated in the Aging, Demographics, and Memory Study (ADAMS) were analyzed with equipercentile equating, a statistical process of determining comparable scores based on percentile equivalents on different forms of an examination. Results Scores from the MMSE and the TICS-30 and TICS-40 corresponded well, and clinically relevant cut point scores were determined; for example, an MMSE score of 23 is equivalent to 17 and 20 on the TICS-30 and TICS-40, respectively. Conclusions These findings provide scores that can be used to link TICS and MMSE scores directly. Clinically relevant and important MMSE cut points and the respective ADAMS TICS-30 and TICS-40 cut point scores have been included to identify the degree of cognitive impairment among respondents with any type of cognitive disorder. These results will help with the widespread application of the TICS in both research and clinical practice. PMID:19647495
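Equipercentile equating, the method used to build the crosswalk, can be sketched as matching percentile ranks between two score distributions. A toy version with synthetic score samples (these are not the ADAMS data, and the crude nearest-rank index arithmetic is an assumption of this sketch; production equating interpolates and smooths):

```python
import bisect

# Toy equipercentile equating: a score on instrument X maps to the score on
# instrument Y that sits at the same percentile rank in its own sample.
# Score samples below are synthetic, not the ADAMS data.

def percentile_rank(scores, x):
    """Fraction of observed scores less than or equal to x."""
    return bisect.bisect_right(sorted(scores), x) / len(scores)

def equate(score_x, sample_x, sample_y):
    """Map score_x onto sample_y's scale by matching percentile ranks (nearest rank)."""
    p = percentile_rank(sample_x, score_x)
    ys = sorted(sample_y)
    idx = max(0, min(len(ys) - 1, round(p * len(ys)) - 1))
    return ys[idx]

mmse_like = [18, 20, 22, 23, 25, 27, 28, 29, 30, 30]   # synthetic MMSE-like scores
tics_like = [12, 14, 16, 17, 19, 22, 24, 25, 26, 27]   # synthetic TICS-like scores
print(equate(23, mmse_like, tics_like))  # 17
```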
NASA Astrophysics Data System (ADS)
Li, Jingxiang; Zhao, Shengdun; Ishihara, Kunihiko
2013-05-01
A novel approach is presented to study the acoustical properties of sintered bronze material, used in particular to suppress the transient noise generated by the pneumatic exhaust of pneumatic friction clutch and brake (PFC/B) systems. The transient exhaust noise is impulsive and harmful owing to its large sound pressure level (SPL) and high-frequency content. In this paper, the exhaust noise is related to the transient impulsive exhaust, which is described by a one-dimensional aerodynamic model combined with a pressure-drop expression from the Ergun equation. A relation between the flow parameters and the sound source is established. Additionally, a piston acoustic source approximation of the sintered bronze silencer with cylindrical geometry is presented to predict the SPL spectrum at a far-field observation point. A semi-phenomenological model is introduced to analyze sound propagation and reduction in the sintered bronze material, treated as an equivalent fluid with a rigid frame. Experimental results under different initial cylinder pressures corroborate the validity of the proposed aerodynamic model. In addition, the sound pressures calculated from the equivalent sound source are compared with the measured noise signals in both the time and frequency domains. The influence of the porosity of the sintered bronze material is also discussed.
Development of performance matrix for generic product equivalence of acyclovir topical creams.
Krishnaiah, Yellela S R; Xu, Xiaoming; Rahman, Ziyaur; Yang, Yang; Katragadda, Usha; Lionberger, Robert; Peters, John R; Uhl, Kathleen; Khan, Mansoor A
2014-11-20
The effect of process variability on the physicochemical characteristics and in vitro performance of qualitatively (Q1) and quantitatively (Q2) equivalent generic acyclovir topical dermatological creams was investigated to develop a matrix of standards for determining their in vitro bioequivalence with the reference listed drug (RLD) product (Zovirax®). A fractional factorial design of experiments (DOE) with a triplicate center point was used to create 11 acyclovir cream formulations with manufacturing variables such as pH of the aqueous phase, emulsification time, homogenization speed, and emulsification temperature. Three more formulations (F-12-F-14), with drug particle size representing the RLD, were also prepared, in which the pH of the final product was adjusted. The formulations were subjected to physicochemical characterization (drug particle size, spreadability, viscosity, pH, and drug concentration in the aqueous phase) and in vitro drug release studies against the RLD. The results demonstrated that the DOE formulations were structurally and functionally (e.g., drug release) similar (Q3) to the RLD. Moreover, in vitro drug permeation studies showed that the extent of drug bioavailability/retention in human epidermis from F-12-F-14 was similar to the RLD, although the rate of permeation differed. The results suggest that generic acyclovir creams can be manufactured to perform identically to the RLD with Q1/Q2/Q3. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Khee Looe, Hui; Harder, Dietrich; Poppe, Björn
2011-07-01
The subject of this study is the 'shift of the effective point of measurement', Δz, well known as a method of correction compensating for the 'displacement effect' in photon and electron beam dosimetry. Radiochromic EBT 1 films have been used to measure the 'true' TPR curves of 6 and 15 MV photons and 6 and 9 MeV electrons in the solid water-equivalent material RW3. For the Roos and Markus chambers, the cylindrical 'PinPoint', 'Semiflex' and 'Rigid-Stem' chambers, the 2D-Array and the E-type silicon diode (all from PTW-Freiburg), the positions of the effective points of measurement have been determined by direct or indirect comparison between their TPR curves and those of the EBT 1 film. Both for the Roos and Markus chambers, we found Δz = (0.4 ± 0.1) mm, which confirms earlier experimental and Monte Carlo results, but means a shortcoming of the 'water-equivalent window thickness' formula. For the cylindrical chambers, the ratio Δz/r was observed to increase with r, confirming a recent Monte Carlo prediction by Tessier (2010 E2-CN-182, Paper no 147, IDOS, Vienna) as well as the experimental observations by Johansson et al (1978 IAEA Symp. Proc. (Vienna) IAEA-SM-222/35 pp 243-70). According to a theoretical consideration, the shift of the effective point of measurement from the reference point of the detector is caused by a gradient of the fluence of the ionizing particles. As the experiments have shown, the value of Δz depends on the construction of the detector, but remains invariant under changes of radiation quality and depth. Other disturbances, which do not belong to the class of 'gradient effects', are not corrected by shifting the effective point of measurement.
Biomechanical and Histologic Evaluation of LifeMesh™: A Novel Self-Fixating Mesh Adhesive.
Shahan, Charles P; Stoikes, Nathaniel N; Roan, Esra; Tatum, James; Webb, David L; Voeller, Guy R
2018-04-01
Mesh fixation with adhesives provides immediate adhesion over the entire surface area of the mesh, removing the need for penetrating fixation points. The purpose of this study was to evaluate LifeMesh™, a prototype mesh adhesive technology that coats polypropylene mesh. The strength of the interface between mesh and tissue, inflammatory responses, and histology were measured at varying time points in a swine model, and these results were compared with sutures. Twenty Mongrel swine underwent implantation of LifeMesh™ and one piece of bare polypropylene mesh secured with suture (control). One additional piece of either LifeMesh™ or control was used for histopathologic evaluation. The implants were retrieved at 3, 7, and 14 days. Only 3- and 7-day specimens underwent lap shear testing. At day 3, LifeMesh™ samples showed considerably less contraction than sutured samples, and their interfacial strength was similar to that of sutured samples. At day 7, LifeMesh™ samples continued to show significantly less contraction than sutured samples, but the strength of fixation was greater in the control samples. The histologic findings were similar in LifeMesh™ and control samples. LifeMesh™ showed significantly less contraction than sutured samples at all measured time points. Although fixation strength was similar at day 3, the interfacial strength of LifeMesh™ remained unchanged, whereas that of sutured controls increased by day 7. With histologic equivalence, considerably less contraction, and similar early fixation strength, LifeMesh™ is a viable mesh fixation technology.
NASA Astrophysics Data System (ADS)
Vassiliev, Dmitri
2017-04-01
We consider an infinite three-dimensional elastic continuum whose material points experience no displacements, only rotations. This framework is a special case of the Cosserat theory of elasticity. Rotations of material points are described mathematically by attaching to each geometric point an orthonormal basis that gives a field of orthonormal bases called the coframe. As the dynamical variables (unknowns) of our theory, we choose the coframe and a density. We write down the general dynamic variational functional for our rotational theory of elasticity, assuming our material to be physically linear but the kinematic model geometrically nonlinear. Allowing geometric nonlinearity is natural when dealing with rotations because rotations in dimension three are inherently nonlinear (rotations about different axes do not commute) and because there is no reason to exclude from our study large rotations such as full turns. The main result of the talk is an explicit construction of a class of time-dependent solutions that we call plane wave solutions; these are travelling waves of rotations. The existence of such explicit closed-form solutions is a non-trivial fact given that our system of Euler-Lagrange equations is highly nonlinear. We also consider a special case of our rotational theory of elasticity which in the stationary setting (harmonic time dependence and arbitrary dependence on spatial coordinates) turns out to be equivalent to a pair of massless Dirac equations. The talk is based on the paper [1]. [1] C.G.Boehmer, R.J.Downes and D.Vassiliev, Rotational elasticity, Quarterly Journal of Mechanics and Applied Mathematics, 2011, vol. 64, p. 415-439. The paper is a heavily revised version of preprint https://arxiv.org/abs/1008.3833
Comparison of physically- and economically-based CO2-equivalences for methane
NASA Astrophysics Data System (ADS)
Boucher, O.
2012-05-01
There is a controversy on the role methane (and other short-lived species) should play in climate mitigation policies, and there is no consensus on what an optimal methane CO2-equivalence should be. We revisit this question by discussing some aspects of physically-based (i.e. global warming potential or GWP and global temperature change potential or GTP) and socio-economically-based climate metrics. To this end we use a simplified global damage potential (GDP) that was introduced by earlier authors and investigate the uncertainties in the methane CO2-equivalence that arise from physical and socio-economic factors. The median value of the methane GDP comes out very close to the widely used methane 100-yr GWP because of various compensating effects. However, there is a large spread in possible methane CO2-equivalences from this metric (1-99% interval: 10.0-42.5; 5-95% interval: 12.5-38.0) that is essentially due to the choice in some socio-economic parameters (i.e. the damage cost function and the discount rate). The main factor differentiating the methane 100-yr GTP from the methane 100-yr GWP and the GDP is the fact that the former metric is an end-point metric, whereas the latter are cumulative metrics. There is some rationale for an increase in the methane CO2-equivalence in the future as global warming unfolds, as implied by a convex damage function in the case of the GDP metric. We also show that a methane CO2-equivalence based on a pulse emission is sufficient to inform multi-year climate policies and emissions reductions, as long as there is enough visibility on CO2 prices and CO2-equivalences for the stakeholders.
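The physically-based metric mentioned above can be made concrete with a short sketch: the 100-yr GWP of methane as the ratio of absolute GWPs, i.e. time-integrated radiative forcing per kg emitted. The radiative efficiencies, the methane lifetime and the CO2 impulse-response coefficients below are illustrative AR5/Joos-style values, not the paper's parameters, and indirect methane effects (tropospheric ozone, stratospheric water vapour) are ignored, so the result is a direct-only figure.

```python
# Hedged sketch: 100-yr GWP of methane as AGWP_CH4(H) / AGWP_CO2(H).
# Parameter values are illustrative, not taken from the paper.
import math

H = 100.0                      # time horizon (years)

# Methane: single exponential decay of a 1 kg pulse
A_CH4 = 1.28e-13               # radiative efficiency, W m^-2 kg^-1 (illustrative)
TAU_CH4 = 12.4                 # perturbation lifetime, years
agwp_ch4 = A_CH4 * TAU_CH4 * (1.0 - math.exp(-H / TAU_CH4))

# CO2: multi-exponential impulse response (Joos et al.-style coefficients)
A_CO2 = 1.75e-15               # W m^-2 kg^-1 (illustrative)
a = [0.2173, 0.2240, 0.2824, 0.2763]          # a[0] is the quasi-permanent fraction
tau = [394.4, 36.54, 4.304]                   # years, for a[1:]
agwp_co2 = A_CO2 * (a[0] * H + sum(
    ai * ti * (1.0 - math.exp(-H / ti)) for ai, ti in zip(a[1:], tau)))

gwp100 = agwp_ch4 / agwp_co2
print(round(gwp100, 1))
```

With these assumed numbers the direct-only ratio lands in the high teens; the familiar larger headline values include the indirect methane effects omitted here, and either way the result sits inside the 1-99% CO2-equivalence interval quoted in the abstract.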
Puchalska, Monika; Bilski, Pawel; Berger, Thomas; Hajek, Michael; Horwacik, Tomasz; Körner, Christine; Olko, Pawel; Shurshakov, Vyacheslav; Reitz, Günther
2014-11-01
The health effects of cosmic radiation on astronauts need to be precisely quantified and controlled. This task is important not only in perspective of the increasing human presence at the International Space Station (ISS), but also for the preparation of safe human missions beyond low Earth orbit. From a radiation protection point of view, the baseline quantity for radiation risk assessment in space is the effective dose equivalent. The present work reports the first successful attempt at the experimental determination of the effective dose equivalent in space, both for extra-vehicular activity (EVA) and intra-vehicular activity (IVA). This was achieved using the anthropomorphic torso phantom RANDO(®) equipped with more than 6,000 passive thermoluminescent detectors and plastic nuclear track detectors, which were exposed to cosmic radiation inside the European Space Agency MATROSHKA facility both outside and inside the ISS. In order to calculate the effective dose equivalent, a numerical model of the RANDO(®) phantom, based on computer tomography scans of the actual phantom, was developed. It was found that the effective dose equivalent rate during an EVA approaches 700 μSv/d, while during an IVA values about 20 % lower were observed. It is shown that an individual dose based on a personal dosimeter reading for an astronaut during IVA overestimates the effective dose equivalent by about 15 %, whereas under EVA conditions the overestimate is more than 200 %. A personal dosimeter can therefore deliver quite good exposure records during IVA, but may considerably overestimate the effective dose equivalent received during an EVA.
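The baseline quantity discussed above is, by definition, a tissue-weighted sum of organ dose equivalents, E = Σ_T w_T · H_T. A minimal sketch follows, using the ICRP 103 weighting-factor set purely for illustration (the study's evaluation from thousands of detector readings, and its choice of weighting factors, are far more involved).

```python
# Sketch of the effective dose (equivalent) as a weighted organ sum.
# Weighting factors: ICRP 103 set, used here only for illustration.
W_T = {
    "bone_marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
    "breast": 0.12, "remainder": 0.12, "gonads": 0.08,
    "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
    "bone_surface": 0.01, "brain": 0.01, "salivary_glands": 0.01, "skin": 0.01,
}

def effective_dose_equivalent(organ_h: dict) -> float:
    """Weighted sum of organ dose equivalents (same units as the H_T)."""
    return sum(w * organ_h[t] for t, w in W_T.items())

# Hypothetical uniform organ dose equivalents of 700 uSv/d (the EVA rate
# reported above) simply reproduce 700 uSv/d, since the weights sum to 1:
uniform = {t: 700.0 for t in W_T}
print(effective_dose_equivalent(uniform))
```

The personal-dosimeter overestimates quoted in the abstract arise precisely because a single surface reading is not this weighted, body-averaged sum.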
Estimating Equivalency of Explosives Through A Thermochemical Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maienschein, J L
2002-07-08
The Cheetah thermochemical computer code provides an accurate method for estimating the TNT equivalency of any explosive, evaluated either with respect to peak pressure or to the quasi-static pressure at long time in a confined volume. Cheetah calculates the detonation energy and heat of combustion for virtually any explosive (pure or formulation). Comparing the detonation energy of an explosive with that of TNT allows estimation of the TNT equivalency with respect to peak pressure, while comparison of the heat of combustion allows estimation of TNT equivalency with respect to quasi-static pressure. We discuss the methodology, present results for many explosives, and show comparisons with equivalency data from other sources.
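The peak-pressure comparison described above reduces to a ratio of detonation energies. A minimal sketch, with placeholder energy values that are NOT Cheetah outputs:

```python
# Sketch of the equivalency calculation: TNT equivalence with respect to
# peak pressure as the ratio of detonation energies. The energies below
# are illustrative placeholders, not Cheetah results.

DETONATION_ENERGY_MJ_PER_KG = {  # hypothetical values for illustration
    "TNT": 4.2,
    "RDX": 5.6,
    "ANFO": 3.8,
}

def tnt_equivalency(explosive: str) -> float:
    """Peak-pressure TNT equivalency as a detonation-energy ratio."""
    return (DETONATION_ENERGY_MJ_PER_KG[explosive]
            / DETONATION_ENERGY_MJ_PER_KG["TNT"])

for name in DETONATION_ENERGY_MJ_PER_KG:
    print(f"{name}: {tnt_equivalency(name):.2f}")
```

The quasi-static equivalency would be computed the same way, substituting heats of combustion for detonation energies.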
SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moriya, S; Sato, M; Tachibana, H
Purpose: The calculation time is a trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation with an ideal kernel on the graphics processing unit (GPU). Methods: The calculation was performed on AMD Dual FirePro D700 graphics hardware, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation process was separated into TERMA and KERMA steps, and the dose deposited at each coordinate (x, y, z) was determined in the process. In the dose calculation running on the central processing unit (CPU), an Intel Xeon E5, the calculation loops were performed over all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and computed in multiple threads. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150³ voxels (2 mm calculation grid), and the calculation speed on the GPU was compared with that on the CPU, along with the accuracy of the PDD. Results: The calculation times for the GPU and the CPU were 3.3 s and 4.4 h, respectively; the GPU was about 4800 times faster than the CPU. The PDD curve for the GPU matched that for the CPU perfectly. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in time and may be more accurate in an inhomogeneous region. Intensity-modulated arc therapy needs dose calculations for different gantry angles at many control points. Thus, it would be more practical for the kernel to use a coarse-spacing technique if the calculation is faster while keeping accuracy similar to a current treatment planning system.
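The TERMA/KERMA superposition step can be sketched on a small homogeneous grid. Everything here is an assumption for illustration: the grid size, the toy attenuation coefficient, and the isotropic exponential point-spread kernel stand in for the study's ideal kernel, and an FFT convolution stands in for the GPU implementation.

```python
# Toy sketch of convolution dose calculation: dose = TERMA (*) kernel,
# on a small homogeneous water-equivalent grid. Grid size, attenuation
# and kernel shape are illustrative assumptions, not the study's values.
import numpy as np

n = 32                          # 32^3 voxels (the study used 150^3)
mu = 0.005                      # illustrative linear attenuation per voxel
depth = np.arange(n)

# TERMA: exponentially attenuated primary energy along the beam axis (z)
terma = np.zeros((n, n, n))
terma[n // 2, n // 2, :] = np.exp(-mu * depth)

# Energy-deposition kernel: isotropic exponential fall-off (toy model)
ax = np.arange(n) - n // 2
r = np.sqrt(ax[:, None, None]**2 + ax[None, :, None]**2 + ax[None, None, :]**2)
kernel = np.exp(-r) / np.maximum(r, 0.5)**2
kernel /= kernel.sum()          # normalize so total energy is conserved

# Superposition via FFT (circular convolution; fine for this toy case)
dose = np.real(np.fft.ifftn(np.fft.fftn(terma)
                            * np.fft.fftn(np.fft.ifftshift(kernel))))
print(dose.shape)
```

Because the kernel is normalized, the total deposited energy equals the total TERMA, which is a quick sanity check such an implementation should pass regardless of whether it runs on CPU or GPU.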
Lotka-Volterra competition models for sessile organisms.
Spencer, Matthew; Tanner, Jason E
2008-04-01
Markov models are widely used to describe the dynamics of communities of sessile organisms, because they are easily fitted to field data and provide a rich set of analytical tools. In typical ecological applications, at any point in time, each point in space is in one of a finite set of states (e.g., species, empty space). The models aim to describe the probabilities of transitions between states. In most Markov models for communities, these transition probabilities are assumed to be independent of state abundances. This assumption is often suspected to be false and is rarely justified explicitly. Here, we start with simple assumptions about the interactions among sessile organisms and derive a model in which transition probabilities depend on the abundance of destination states. This model is formulated in continuous time and is equivalent to a Lotka-Volterra competition model. We fit this model and a variety of alternatives in which transition probabilities do not depend on state abundances to a long-term coral reef data set. The Lotka-Volterra model describes the data much better than all models we consider other than a saturated model (a model with a separate parameter for each transition at each time interval, which by definition fits the data perfectly). Our approach provides a basis for further development of stochastic models of sessile communities, and many of the methods we use are relevant to other types of community. We discuss possible extensions to spatially explicit models.
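The continuous-time model to which the abundance-dependent transition model is equivalent can be sketched directly. This is an illustrative two-species Lotka-Volterra competition system with made-up parameters, not the authors' fitted coral-reef model.

```python
# Illustrative sketch: two-species Lotka-Volterra competition,
# dx_i/dt = x_i * (r_i - sum_j a_ij x_j), integrated by forward Euler.
# Parameters are invented for demonstration, not fitted values.
import numpy as np

r = np.array([1.0, 0.8])                 # intrinsic growth rates
A = np.array([[1.0, 0.6],                # competition coefficients a_ij
              [0.7, 1.0]])

def lv_step(x, dt=0.01):
    """One Euler step of the Lotka-Volterra competition dynamics."""
    return x + dt * x * (r - A @ x)

x = np.array([0.1, 0.1])                 # initial cover fractions
for _ in range(20000):                   # integrate to t = 200
    x = lv_step(x)
print(x)                                  # approaches the coexistence equilibrium
```

With these coefficients (intraspecific competition stronger than interspecific, a11·a22 > a12·a21) the trajectory converges to the stable coexistence point x* = A⁻¹r, which is the kind of equilibrium behaviour a state-dependent transition model can capture and a fixed-probability Markov chain cannot.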