Sample records for minimum distance criterion

  1. Modeling the long-term evolution of space debris

    DOEpatents

    Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.

    2017-03-07

    A space object modeling system that models the evolution of space debris is provided. The modeling system simulates the interaction of space objects at simulation times throughout a simulation period. It includes a propagator that calculates the position of each object at each simulation time from its orbital parameters, and a collision detector that performs a collision analysis for each pair of objects at each simulation time. When the distance between a pair of objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between them by fitting a curve to identify a time of closest approach between the simulation times and computing the positions of the objects at that time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
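
The curve-fitting step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the use of a parabola through sampled distances, and the endpoint fallback are all assumptions for the sketch.

```python
import numpy as np

def time_of_closest_approach(times, distances):
    # Fit a parabola d(t) = a*t^2 + b*t + c to sampled inter-object
    # distances and return the vertex time as the estimated time of
    # closest approach between simulation times.
    a, b, c = np.polyfit(times, distances, 2)
    if a <= 0:
        # Concave or flat fit: no interior minimum, so fall back to
        # the sampled time with the smallest distance.
        return times[int(np.argmin(distances))]
    return -b / (2.0 * a)
```

With three samples of the distance curve (t - 3)^2 + 1 at t = 2, 3, 4, the fit recovers t = 3 as the time of closest approach; the propagator would then be re-evaluated at that time.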

  2. Influence of the geomembrane on time-lapse ERT measurements for leachate injection monitoring.

    PubMed

    Audebert, M; Clément, R; Grossin-Debattista, J; Günther, T; Touze-Foltz, N; Moreau, S

    2014-04-01

    Leachate recirculation is a key process in the operation of municipal waste landfills as bioreactors. To quantify the water content and to evaluate the leachate injection system, in situ methods are required to obtain spatially distributed information, usually electrical resistivity tomography (ERT). However, this method can produce spurious variations in the observations due to several parameters. This study investigates the impact of the geomembrane on ERT measurements. Indeed, the geomembrane tends to be ignored in the inversion process in most previously conducted studies. The presence of the geomembrane can change the boundary conditions of the inversion models, which classically assume infinite boundary conditions. Using a numerical modelling approach, the authors demonstrate that a minimum distance is required between the electrode line and the geomembrane to satisfy the conditions of use of the classical inversion tools. This distance is a function of the electrode line length (i.e. of the unit electrode spacing) used, the array type and the orientation of the electrode line. Moreover, this study shows that if this criterion on the minimum distance is not satisfied, it is possible to significantly improve the inversion process by introducing the complex geometry and the geomembrane location into the inversion tools. These results are finally validated on a field data set gathered on a small municipal solid waste landfill cell where this minimum distance criterion cannot be satisfied. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. [Acoustic conditions in open plan offices - Pilot test results].

    PubMed

    Mikulski, Witold

    The main source of noise in open plan offices is conversation. Office work standards in such premises are attained by applying specific acoustic adaptation. This article presents the results of pilot tests and acoustic evaluation of open space rooms. Acoustic properties of 6 open plan office rooms were the subject of the tests. Evaluation parameters, measurement methods and criterial values were adopted according to the following standards: PN-EN ISO 3382-3:2012, PN-EN ISO 3382-2:2010, PN-B-02151-4:2015-06 and PN-B-02151-3:2015-10. The reverberation time was 0.33-0.55 s (maximum permissible value in offices - 0.6 s; the criterion was met), sound absorption coefficient in relation to 1 m2 of the room's plan was 0.77-1.58 m2 (minimum permissible value - 1.1 m2; 2 out of 6 rooms met the criterion), distraction distance was 8.5-14 m (maximum permissible value - 5 m; none of the rooms met the criterion), A-weighted sound pressure level of speech at a distance of 4 m was 43.8-54.7 dB (maximum permissible value - 48 dB; 2 out of 6 rooms met the criterion), spatial decay rate of the speech was 1.8-6.3 dB (minimum permissible value - 7 dB; none of the rooms met the criterion). Standard acoustic treatment, consisting of a sound absorbing suspended ceiling, sound absorbing materials on the walls, carpet flooring and sound absorbing workplace barriers, is not sufficient. These rooms require specific advanced acoustic solutions. Med Pr 2016;67(5):653-662. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  4. Four-Dimensional Golden Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fenimore, Edward E.

    2015-02-25

    The Golden search technique is a method to search a multiple-dimension space to find the minimum. It basically subdivides the possible ranges of parameters until it brackets, to within an arbitrarily small distance, the minimum. It has the advantages that (1) the function to be minimized can be non-linear, (2) it does not require derivatives of the function, (3) the convergence criterion does not depend on the magnitude of the function. Thus, if the function is a goodness-of-fit parameter such as chi-square, the convergence does not depend on the noise being correctly estimated or the function correctly following the chi-square statistic. And, (4) the convergence criterion does not depend on the shape of the function. Thus, long shallow surfaces can be searched without the problem of premature convergence. As with many methods, the Golden search technique can be confused by surfaces with multiple minima.
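
The one-dimensional building block of this method can be sketched as a golden-section search; extending it to four dimensions (e.g. by cycling through coordinates) is left implicit here. This is a textbook sketch under our own naming, not OSTI's code. Note how the stopping rule depends only on the interval width, never on the function's magnitude, matching advantages (3) and (4) above.

```python
import math

def golden_section_min(f, lo, hi, tol=1e-8):
    # Shrink a bracket [lo, hi] around the minimum of a unimodal
    # function by the golden ratio until the bracket is narrower
    # than tol. No derivatives of f are needed.
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    a, b = lo, hi
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c          # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d          # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2
```

As the abstract warns, a multimodal surface can trap this bracket around a local, not global, minimum.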

  5. On the complexity of search for keys in quantum cryptography

    NASA Astrophysics Data System (ADS)

    Molotkov, S. N.

    2016-03-01

    The trace distance is used as a security criterion in proofs of security of keys in quantum cryptography. Some authors have doubted that this criterion can be reduced to criteria used in classical cryptography. The following question is answered in this work. Let a quantum cryptography system provide an ε-secure key such that (1/2)‖ρ_XE − ρ_U ⊗ ρ_E‖_1 < ε, which will be repeatedly used in classical encryption algorithms. To what extent does the ε-secure key reduce the number of search steps (guesswork) as compared to the use of ideal keys? A direct relation is demonstrated between the complexity of an exhaustive search over keys, which is one of the main security criteria in classical systems, and the trace distance used in quantum cryptography. Bounds for the minimum and maximum numbers of search steps for the determination of the actual key are presented.
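
The classical counterparts of the two quantities being related can be sketched in a few lines: guesswork (expected number of guesses in decreasing-probability order) and the total-variation distance, the classical analogue of the trace distance. These are standard definitions, not the paper's quantum derivation.

```python
def guesswork(probs):
    # Expected number of guesses to hit the key when candidates
    # are tried in order of decreasing probability.
    return sum((i + 1) * p for i, p in enumerate(sorted(probs, reverse=True)))

def trace_distance(p, q):
    # Total-variation distance: classical analogue of the trace
    # distance (1/2) * sum |p_i - q_i|.
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

uniform = [0.25, 0.25, 0.25, 0.25]   # ideal 2-bit key
biased = [0.40, 0.20, 0.20, 0.20]    # eps-close real key
```

A key ε-close to uniform in this distance has guesswork close to the ideal (N + 1)/2; here the biased key's guesswork drops only from 2.5 to 2.2 while its distance from uniform is 0.15.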

  6. Finding local genome rearrangements.

    PubMed

    Simonaitis, Pijus; Swenson, Krister M

    2018-01-01

    The double cut and join (DCJ) model of genome rearrangement is well studied due to its mathematical simplicity and power to account for the many events that transform gene order. These studies have mostly been devoted to the understanding of minimum length scenarios transforming one genome into another. In this paper we search instead for rearrangement scenarios that minimize the number of rearrangements whose breakpoints are unlikely due to some biological criteria. One such criterion has recently become accessible due to the advent of the Hi-C experiment, facilitating the study of 3D spatial distance between breakpoint regions. We establish a link between the minimum number of unlikely rearrangements required by a scenario and the problem of finding a maximum edge-disjoint cycle packing on a certain transformed version of the adjacency graph. This link leads to a 3/2-approximation as well as an exact integer linear programming formulation for our problem, which we prove to be NP-complete. We also present experimental results on fruit flies, showing that Hi-C data is informative when used as a criterion for rearrangements. A new variant of the weighted DCJ distance problem is addressed that ignores scenario length in its objective function. A solution to this problem provides a lower bound on the number of unlikely moves necessary when transforming one gene order into another. This lower bound aids in the study of rearrangement scenarios with respect to chromatin structure, and could eventually be used in the design of a fixed parameter algorithm with a more general objective function.

  7. A binary linear programming formulation of the graph edit distance.

    PubMed

    Justice, Derek; Hero, Alfred

    2006-08-01

    A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.

  8. Criterion distances and environmental correlates of active commuting to school in children

    PubMed Central

    2011-01-01

    Background Active commuting to school can contribute to daily physical activity levels in children. Insight into the determinants of active commuting is needed, to promote such behavior in children living within a feasible commuting distance from school. This study determined feasible distances for walking and cycling to school (criterion distances) in 11- to 12-year-old Belgian children. For children living within these criterion distances from school, the correlation between parental perceptions of the environment, the number of motorized vehicles per family and the commuting mode (active/passive) to school was investigated. Methods Parents (n = 696) were contacted through 44 randomly selected classes of the final year (sixth grade) in elementary schools in East- and West-Flanders. Parental environmental perceptions were obtained using the parent version of Neighborhood Environment Walkability Scale for Youth (NEWS-Y). Information about active commuting to school was obtained using a self-reported questionnaire for parents. Distances from the children's home to school were objectively measured with Routenet online route planner. Criterion distances were set at the distance in which at least 85% of the active commuters lived. After the determination of these criterion distances, multilevel analyses were conducted to determine correlates of active commuting to school within these distances. Results Almost sixty percent (59.3%) of the total sample commuted actively to school. Criterion distances were set at 1.5 kilometers for walking and 3.0 kilometers for cycling. In the range of 2.01 - 2.50 kilometers household distance from school, the number of passive commuters exceeded the number of active commuters. For children who were living less than 3.0 kilometers away from school, only perceived accessibility by the parents was positively associated with active commuting to school. 
    Within the group of active commuters, a longer distance to school was associated with more cycling to school compared to walking to school. Conclusions Household distance from school is an important correlate of transport mode to school in children. Interventions to promote active commuting in 11-12 year olds should focus on children who live within the criterion distance of 3.0 kilometers from school by improving the accessibility en route from children's homes to school. PMID:21831276
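
The 85% rule used to set the criterion distances above can be sketched directly: take the smallest distance within which at least 85% of the active commuters live. The function name and the exact percentile convention (ceiling of the rank) are our illustrative assumptions.

```python
import math

def criterion_distance(active_distances, coverage=0.85):
    # Smallest distance (e.g. km to school) within which at least
    # `coverage` of the active commuters live.
    d = sorted(active_distances)
    k = math.ceil(coverage * len(d))  # rank of the covering commuter
    return d[k - 1]
```

Applied separately to walkers and cyclists, this kind of rule yields the paper's 1.5 km and 3.0 km thresholds.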

  9. Joint Transmitter and Receiver Power Allocation under Minimax MSE Criterion with Perfect and Imperfect CSI for MC-CDMA Transmissions

    NASA Astrophysics Data System (ADS)

    Kotchasarn, Chirawat; Saengudomlert, Poompat

    We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.

  10. Transitions between corona, glow, and spark regimes of nanosecond repetitively pulsed discharges in air at atmospheric pressure

    NASA Astrophysics Data System (ADS)

    Pai, David Z.; Lacoste, Deanna A.; Laux, Christophe O.

    2010-05-01

    In atmospheric pressure air preheated from 300 to 1000 K, the nanosecond repetitively pulsed (NRP) method has been used to generate corona, glow, and spark discharges. Experiments have been performed to determine the parameter space (applied voltage, pulse repetition frequency, ambient gas temperature, and interelectrode gap distance) of each discharge regime. In particular, the experimental conditions necessary for the glow regime of NRP discharges have been determined, with the notable result that there exists a minimum and maximum gap distance for its existence at a given ambient gas temperature. The minimum gap distance increases with decreasing gas temperature, whereas the maximum does not vary appreciably. To explain the experimental results, an analytical model of the corona-to-glow (C-G) and glow-to-spark (G-S) transitions is developed. The C-G transition is analyzed in terms of the avalanche-to-streamer transition and the breakdown field during the conduction phase following the establishment of a conducting channel across the discharge gap. The G-S transition is determined by the thermal ionization instability, and we show analytically that this transition occurs at a certain reduced electric field for the NRP discharges studied here. This model shows that the electrode geometry plays an important role in the existence of the NRP glow regime at a given gas temperature. We derive a criterion for the existence of the NRP glow regime as a function of the ambient gas temperature, pulse repetition frequency, electrode radius of curvature, and interelectrode gap distance.

  11. A mathematical programming method for formulating a fuzzy regression model based on distance criterion.

    PubMed

    Chen, Liang-Hsuan; Hsueh, Chan-Ching

    2007-06-01

    Fuzzy regression models are useful to investigate the relationship between explanatory and response variables with fuzzy observations. Different from previous studies, this correspondence proposes a mathematical programming method to construct a fuzzy regression model based on a distance criterion. The objective of the mathematical programming is to minimize the sum of distances between the estimated and observed responses on the X axis, such that the fuzzy regression model constructed has the minimal total estimation error in distance. Only several alpha-cuts of fuzzy observations are needed as inputs to the mathematical programming model; therefore, the applications are not restricted to triangular fuzzy numbers. Three examples, adopted in the previous studies, and a larger example, modified from the crisp case, are used to illustrate the performance of the proposed approach. The results indicate that the proposed model has better performance than those in the previous studies based on either distance criterion or Kim and Bishu's criterion. In addition, the efficiency and effectiveness for solving the larger example by the proposed model are also satisfactory.

  12. Multi-image acquisition-based distance sensor using agile laser spot beam.

    PubMed

    Riza, Nabeel A; Amin, M Junaid

    2014-09-01

    We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor design uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture laser spot images on a target whose spot sizes differ from the minimal spot size possible at that target distance. By exploiting the unique relationship of the target-located spot sizes with the varying ECVFL focal length for each target distance, the proposed distance sensor can compute the target distance with a distance measurement resolution better than the axial resolution via the Rayleigh resolution criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, along with a 20 cm focal length bias lens, and using five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these target distances. Applications for the proposed potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.

  13. Object aggregation using Neyman-Pearson analysis

    NASA Astrophysics Data System (ADS)

    Bai, Li; Hinman, Michael L.

    2003-04-01

    This paper presents a novel approach to: 1) distinguish military vehicle groups, and 2) identify names of military vehicle convoys in the level-2 fusion process. The data is generated from a generic Ground Moving Target Indication (GMTI) simulator that utilizes Matlab and Microsoft Access. This data is processed to identify the convoys and the number of vehicles in each convoy, using the minimum timed distance variance (MTDV) measurement. Once the vehicle groups are formed, convoy association is done using hypothesis-testing techniques based upon the Neyman-Pearson (NP) criterion. One characteristic of the NP approach is its low error probability when a priori information is unknown. The NP approach was demonstrated to have this advantage over a Bayesian technique.

  14. Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.

    PubMed

    Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R

    2006-02-28

    The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
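
The code-theoretic criterion described above can be sketched by brute force for small parameters: enumerate a constant-weight code (all length-n words of weight w) and compute the ratio of maximum to minimum pairwise Hamming distance, the quantity to be minimized. Representing words by their support sets is our implementation choice; for such words the Hamming distance is the size of the symmetric difference.

```python
from itertools import combinations

def constant_weight_code(n, w):
    # All length-n binary words of Hamming weight w,
    # stored as their support sets.
    return [frozenset(c) for c in combinations(range(n), w)]

def distance_ratio(code):
    # Ratio of maximum to minimum pairwise Hamming distance;
    # smaller ratios predict larger voltage margins per the paper.
    ds = [len(a ^ b) for a, b in combinations(code, 2)]
    return max(ds) / min(ds)
```

For the full weight-2 code of length 4 the ratio is 4/2 = 2; a designer would search subsets of codewords that drive this ratio toward 1.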

  15. Minimum distance classification in remote sensing

    NASA Technical Reports Server (NTRS)

    Wacker, A. G.; Landgrebe, D. A.

    1972-01-01

    The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier, with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
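
A minimum distance classifier of the kind compared above reduces to nearest-class-mean assignment. This is a generic sketch of the parametric (Euclidean, class-mean) variant, not the authors' experimental code.

```python
import numpy as np

def fit_class_means(X, y):
    # One mean vector per class label, estimated from training samples.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def min_distance_classify(x, means):
    # Assign x to the class whose mean is nearest in Euclidean distance.
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

# Toy training set: two well-separated crop "classes".
X = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0], [11.0, 10.0]])
y = np.array([0, 0, 1, 1])
means = fit_class_means(X, y)
```

Unlike a maximum likelihood classifier, no covariance estimate is needed, which is one reason such classifiers can win when training data is scarce.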

  16. Automatic discovery of optimal classes

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew

    1986-01-01

    A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum message length criterion with an intuitive information interpretation. This criterion does not require that the number of classes be specified in advance; the number of classes is determined by the data. The minimum message length criterion includes the message length required to describe the classes, so there is a built-in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class induction program include the ability to combine category and real value data, hierarchical classes, independent classifications and deciding for each class which attributes are relevant.

  17. Multi-resolution analysis for ear recognition using wavelet features

    NASA Astrophysics Data System (ADS)

    Shoaib, M.; Basit, A.; Faye, I.

    2016-11-01

    Security is very important, and in order to avoid any physical contact, identification of people while they are moving is necessary. Ear biometrics is one of the methods by which a person can be identified using surveillance cameras. Various techniques have been proposed to improve ear-based recognition systems. In this work, a feature extraction method for human ear recognition based on wavelet transforms is proposed. The proposed features are the approximation coefficients and specific level-two details obtained after applying various types of wavelet transforms. Different wavelet transforms are applied to find the most suitable wavelet. Minimum Euclidean distance is used as the matching criterion. Results achieved by the proposed method are promising and can be used in a real-time ear recognition system.
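
The pipeline above, wavelet features followed by minimum-Euclidean-distance matching, can be sketched with a single level of the Haar transform (the paper uses level-two coefficients of several wavelets; Haar at level one is our simplification, and the names below are illustrative).

```python
import numpy as np

def haar_level(x):
    # One level of the Haar wavelet transform of an even-length
    # signal: approximation (low-pass) and detail (high-pass) parts.
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def identify(query_feature, gallery):
    # Minimum-Euclidean-distance matching against enrolled features.
    return min(gallery, key=lambda name: np.linalg.norm(query_feature - gallery[name]))
```

In a real system the gallery would hold wavelet features of enrolled ear images rather than the toy two-element vectors used in the test.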

  18. A universal theory for gas breakdown from microscale to the classical Paschen law

    NASA Astrophysics Data System (ADS)

    Loveless, Amanda M.; Garner, Allen L.

    2017-11-01

    While well established for larger gaps, Paschen's law (PL) fails to accurately predict breakdown for microscale gaps, where field emission becomes important. This deviation from PL is characterized by the absence of a minimum breakdown voltage as a function of the product of pressure and gap distance, which has been demonstrated analytically for microscale and smaller gaps with no secondary emission at atmospheric pressure [A. M. Loveless and A. L. Garner, IEEE Trans. Plasma Sci. 45, 574-583 (2017)]. We extend these previous results by deriving analytic expressions that incorporate the nonzero secondary emission coefficient, γ_SE, that are valid for gap distances larger than those at which quantum effects become important (~100 nm) while remaining below those at which streamers arise. We demonstrate the validity of this model by benchmarking to particle-in-cell simulations with γ_SE = 0 and comparing numerical results to an experiment with argon, while additionally predicting a minimum voltage that was masked by fixing the gap pressure in previous analyses. Incorporating γ_SE demonstrates the smooth transition from field emission dominated breakdown to the classical PL once the combination of electric field, pressure, and gap distance satisfies the conventional criterion for the Townsend avalanche; however, such a condition generally requires supra-atmospheric pressures for breakdown at the microscale. Therefore, this study provides a single universal breakdown theory for any gas at any pressure dominated by field emission or Townsend avalanche to guide engineers in avoiding breakdown when designing microscale and larger devices, or inducing breakdown for generating microplasmas.

  19. Evaluation of entropy and JM-distance criterions as features selection methods using spectral and spatial features derived from LANDSAT images

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Dutra, L. V.; Mascarenhas, N. D. A.; Mitsuo, Fernando Augusta, II

    1984-01-01

    A study area near Ribeirao Preto in Sao Paulo state was selected, with a predominance of sugar cane. Eight features were extracted from the 4 original bands of the LANDSAT image, using low-pass and high-pass filtering to obtain spatial features. There were 5 training sites used to acquire the necessary parameters. Two groups of four channels were selected from the 12 channels using the JM-distance and entropy criteria. The number of selected channels was determined by physical restrictions of the image analyzer and computational costs. The evaluation was performed by extracting the confusion matrix for training and test areas with a maximum likelihood classifier, and by defining performance indexes based on those matrices for each group of channels. Results show that for spatial features and supervised classification, the entropy criterion is better in the sense that it allows a more accurate and generalized definition of class signatures. On the other hand, the JM-distance criterion strongly reduces the misclassification within training areas.
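
For Gaussian class models, the JM (Jeffries-Matusita) distance used above is commonly computed from the Bhattacharyya distance; the convention JM = 2(1 − e^(−B)), which saturates at 2 for fully separable classes, is one common form and an assumption of this sketch rather than something stated in the abstract.

```python
import numpy as np

def bhattacharyya(m1, c1, m2, c2):
    # Bhattacharyya distance between two Gaussian classes with
    # means m1, m2 and covariances c1, c2.
    cm = (c1 + c2) / 2.0
    dm = m1 - m2
    term1 = dm @ np.linalg.inv(cm) @ dm / 8.0
    term2 = 0.5 * np.log(np.linalg.det(cm) /
                         np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return term1 + term2

def jm_distance(m1, c1, m2, c2):
    # Jeffries-Matusita separability; 0 for identical classes,
    # approaching 2 as the classes become fully separable.
    return 2.0 * (1.0 - np.exp(-bhattacharyya(m1, c1, m2, c2)))
```

Channel subsets would then be ranked by the average (or minimum) pairwise JM distance over the training classes.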

  20. An Independent and Coordinated Criterion for Kinematic Aircraft Maneuvers

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony J.; Munoz, Cesar A.; Hagen, George

    2014-01-01

    This paper proposes a mathematical definition of an aircraft-separation criterion for kinematic-based horizontal maneuvers. It has been formally proved that kinematic maneuvers that satisfy the new criterion are independent and coordinated for repulsiveness, i.e., the distance at the closest point of approach increases whether one or both aircraft maneuver according to the criterion. The proposed criterion is currently used in NASA's Airborne Coordinated Resolution and Detection (ACCoRD) set of tools for the design and analysis of separation assurance systems.
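
The central quantity, the distance at closest point of approach, can be sketched for two aircraft flying straight at constant velocity (the simplest kinematic model; this is textbook geometry, not the ACCoRD formulation).

```python
import numpy as np

def cpa_distance(p_rel, v_rel):
    # Horizontal distance at the (future) closest point of approach,
    # given relative position p_rel and relative velocity v_rel of
    # two constant-velocity aircraft.
    vv = v_rel @ v_rel
    if vv == 0.0:
        return float(np.linalg.norm(p_rel))  # no relative motion
    t = max(0.0, -(p_rel @ v_rel) / vv)      # clamp to future times
    return float(np.linalg.norm(p_rel + t * v_rel))
```

A maneuver is "repulsive" in the paper's sense when executing it increases this value.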

  21. Polynomial-Time Approximation Algorithm for the Problem of Cardinality-Weighted Variance-Based 2-Clustering with a Given Center

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Motkova, A. V.

    2018-01-01

    A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.
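
The clustering criterion itself is easy to state in code, even though optimizing it is strongly NP-hard. This sketch only evaluates the cost of a given partition (finding the best partition is the hard part); the function name and mask representation are our assumptions.

```python
import numpy as np

def weighted_2cluster_cost(points, in_first, given_center):
    # Criterion value of a partition: each cluster contributes its
    # cardinality times the sum of squared distances of its members
    # to its center. The first cluster's center is fixed (given as
    # input); the second cluster's center is the mean of its members.
    a, b = points[in_first], points[~in_first]
    cost_a = len(a) * ((a - given_center) ** 2).sum()
    cost_b = len(b) * ((b - b.mean(axis=0)) ** 2).sum() if len(b) else 0.0
    return cost_a + cost_b
```

The 2-approximation algorithm in the paper guarantees a partition whose cost is at most twice the minimum of this function over all partitions with the prescribed cardinalities.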

  22. Investigation on Multiple Algorithms for Multi-Objective Optimization of Gear Box

    NASA Astrophysics Data System (ADS)

    Ananthapadmanabhan, R.; Babu, S. Arun; Hareendranath, KR; Krishnamohan, C.; Krishnapillai, S.; A, Krishnan

    2016-09-01

    The field of gear design is an extremely important area in engineering. In this work a spur gear reduction unit is considered. A review of the relevant literature in the area of gear design indicates that compact design of a gearbox involves complicated engineering analysis. This work deals with the simultaneous optimization of the power and dimensions of a gearbox, which are of conflicting nature. The focus is on developing a design space based on module, pinion teeth and face-width using MATLAB. The feasible points are obtained through different multi-objective algorithms using various constraints drawn from the literature. Attention has been devoted to constraints such as the critical scoring criterion number, flash temperature, minimum film thickness, involute interference and contact ratio. The outputs from algorithms such as the genetic algorithm, fmincon (constrained nonlinear minimization) and NSGA-II are compared to generate the best result. Hence, this is a much more precise approach for obtaining practical values of the module, pinion teeth and face-width for a minimum centre distance and a maximum power transmission for any given material.

  23. Relations between the efficiency, power and dissipation for linear irreversible heat engine at maximum trade-off figure of merit

    NASA Astrophysics Data System (ADS)

    Iyyappan, I.; Ponmurugan, M.

    2018-03-01

    A trade-off figure of merit (Ω̇) criterion accounts for the best compromise between the useful input energy and the lost input energy of heat devices. When a heat engine works at maximum Ω̇ its efficiency increases significantly over the efficiency at maximum power. We derive general relations between the power, the efficiency at the maximum Ω̇ criterion and the minimum dissipation for a linear irreversible heat engine. The efficiency at the maximum Ω̇ criterion has a lower bound …

  24. A Simple Criterion to Estimate Performance of Pulse Jet Mixed Vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pease, Leonard F.; Bamberger, Judith A.; Mahoney, Lenna A.

    Pulse jet mixed process vessels comprise a key element of the U.S. Department of Energy’s strategy to process millions of gallons of legacy nuclear waste slurries. Slurry suctioned into a pulse jet mixer (PJM) tube at the end of one pulse is pneumatically driven from the PJM toward the bottom of the vessel at the beginning of the next pulse, forming a jet. The jet front traverses the distance from nozzle outlet to the bottom of the vessel and spreads out radially. Varying numbers of PJMs are typically arranged in a ring configuration within the vessel at a selected radiusmore » and operated concurrently. Centrally directed radial flows from neighboring jets collide to create a central upwell that elevates the solids in the center of the vessel when the PJM tubes expel their contents. An essential goal of PJM operation is to elevate solids to the liquid surface to minimize stratification. Solids stratification may adversely affect throughput of the waste processing plant. Unacceptably high slurry densities at the base of the vessel may plug the pipeline through which the slurry exits the vessel. Additionally, chemical reactions required for processing may not achieve complete conversion. To avoid these conditions, a means of predicting the elevation to which the solids rise in the central upwell that can be used during vessel design remains essential. In this paper we present a simple criterion to evaluate the extent of solids elevation achieved by a turbulent upwell jet. The criterion asserts that at any location in the central upwell the local velocity must be in excess of a cutoff velocity to remain turbulent. We find that local velocities in excess of 0.6 m/s are necessary for turbulent jet flow through both Newtonian and yield stress slurries. 
By coupling this criterion with the free jet velocity equation relating the local velocity to elevation in the central upwell, we estimate the elevation at which turbulence fails, and consequently the elevation at which the upwell fails to further lift the slurry. Comparing this elevation to the vessel fill level predicts whether the jet flow will achieve the full vertical extent of the vessel at the center. This simple local-velocity criterion determines a minimum PJM nozzle velocity at which the full vertical extent of the central upwell in PJM vessels will be turbulent. The criterion determines a minimum because flow in regions peripheral to the central upwelling jet may not be turbulent, even when the center of the vessel in the upwell is turbulent, if the jet pulse duration is too short. The local-velocity criterion ensures only that there is sufficient wherewithal for the turbulent jet flow to drive solids to the surface in the center of the vessel in the central upwell.
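
    The back-of-the-envelope calculation implied by this criterion can be sketched as follows. The classical round free-jet decay law u_c(z) ≈ B·u0·d/z (decay constant B ≈ 6) is a textbook assumption standing in for the paper's free jet velocity equation; only the 0.6 m/s cutoff comes from the abstract.

```python
# Hypothetical sketch, not the authors' implementation: estimate the
# elevation at which a turbulent free jet decays below the 0.6 m/s cutoff,
# and invert the same relation for a minimum nozzle velocity. The decay law
# u_c(z) = B * u0 * d / z and B = 6.0 are standard round-jet assumptions.

def turbulence_elevation(u0, d, u_cut=0.6, B=6.0):
    """Elevation (m) at which the centerline velocity falls to u_cut."""
    return B * u0 * d / u_cut

def min_nozzle_velocity(H, d, u_cut=0.6, B=6.0):
    """Nozzle velocity (m/s) needed to stay turbulent up to fill level H."""
    return u_cut * H / (B * d)

# Example: 12 m/s through a 0.1 m nozzle stays turbulent up to ~12 m
z_max = turbulence_elevation(u0=12.0, d=0.1)
```

    Comparing z_max with the vessel fill level then predicts whether the upwell reaches the surface, as the abstract describes.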

  5. The relation between peak response magnitudes and agreement in diagnoses obtained from two different phallometric tests for pedophilia.

    PubMed

    Lykins, Amy D; Cantor, James M; Kuban, Michael E; Blak, Thomas; Dickey, Robert; Klassen, Philip E; Blanchard, Ray

    2010-03-01

    Phallometric testing is widely considered the best psychophysiological procedure for assessing erotic preferences in men. Researchers have differed, however, on the necessity of setting some minimum criterion of penile response for ascertaining the interpretability of a phallometric test result. Proponents of a minimum criterion have generally based their view on the intuitive notion that "more is better" rather than any formal demonstration of this. The present study was conducted to investigate whether there is any empirical evidence for this intuitive notion, by examining the relation between magnitude of penile response and the agreement in diagnoses obtained in two test sessions using different laboratory stimuli. The results showed that examinees with inconsistent diagnoses responded less on both tests and that examinees with inconsistent diagnoses responded less on the second test after controlling for their response on the first test. Results also indicated that at response levels less than 1 cm(3), diagnostic consistency was no better than chance, supporting the establishment of a minimum response level criterion.

  6. An approximate spin design criterion for monoplanes, 1 May 1939

    NASA Technical Reports Server (NTRS)

    Seidman, O.; Donlan, C. J.

    1976-01-01

    An approximate empirical criterion, based on the projected side area and the mass distribution of the airplane, was formulated. The British results were analyzed and applied to American designs. A simpler design criterion, based solely on the type and the dimensions of the tail, was developed; it is useful in a rapid estimation of whether a new design is likely to comply with the minimum requirements for safety in spinning.

  7. Specificity vs. Generalizability: Emergence of Especial Skills in Classical Archery

    PubMed Central

    Czyż, Stanisław H.; Moss, Sarah J.

    2016-01-01

    There is evidence that the recall schema becomes more refined after constant practice. It is also believed that massive amounts of constant practice eventually lead to the emergence of especial skills, i.e., skills that have an advantage in performance over other actions from within the same class of actions. This advantage in performance was noticed when one-criterion practice, e.g., basketball free throws, was compared to non-practiced variations of the skill. However, there is no evidence whether multi-criterion massive amounts of practice would give an advantage to the trained variations of the skill over non-trained, i.e., whether such practice would eventually lead to the development of (multi-)especial skills. The purpose of this study was to determine whether a massive amount of practice involving four criterion variations of the skill would give an advantage in performance to those variations over the class of actions. In two experiments, we analyzed data from female (n = 8) and male classical archers (n = 10), who were required to shoot 30 shots from four accustomed distances, i.e., males at 30, 50, 70, and 90 m and females at 30, 50, 60, and 70 m. The shooting accuracy for the untrained distances (16 distances in men and 14 in women) was used to compile a regression line for distance over shooting accuracy. Regression-determined (expected) values were then compared to the shooting accuracy of the trained distances. Data revealed no significant differences between real and expected results at trained distances, except for the 70 m shooting distance in men. The F-test for lack of fit showed that the regression computed for trained and non-trained shooting distances was linear. It can be concluded that especial skills emerge only after very specific practice, i.e., constant practice limited to only one variation of the skill. PMID:27547196

  9. Prediction of acoustic feature parameters using myoelectric signals.

    PubMed

    Lee, Ki-Seung

    2010-07-01

    It is well-known that a clear relationship exists between human voices and myoelectric signals (MESs) from the area of the speaker's mouth. In this study, we utilized this information to implement a speech synthesis scheme in which MES alone was used to predict the parameters characterizing the vocal-tract transfer function of specific speech signals. Several feature parameters derived from MES were investigated to find the optimal feature for maximization of the mutual information between the acoustic and the MES features. After the optimal feature was determined, an estimation rule for the acoustic parameters was proposed, based on a minimum mean square error (MMSE) criterion. In a preliminary study, 60 isolated words were used for both objective and subjective evaluations. The results showed that the average Euclidean distance between the original and predicted acoustic parameters was reduced by about 30% compared with the average Euclidean distance of the original parameters. The intelligibility of the synthesized speech signals using the predicted features was also evaluated. A word-level identification ratio of 65.5% and a syllable-level identification ratio of 73% were obtained through a listening test.
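
    A generic form of the MMSE estimation rule mentioned here, under a joint-Gaussian assumption the abstract does not spell out, is the linear conditional-mean predictor. The features and data below are synthetic placeholders, not the paper's MES or vocal-tract parameters.

```python
import numpy as np

# Linear MMSE sketch: y_hat = mu_y + C_yx @ inv(C_xx) @ (x - mu_x), which
# is the exact MMSE estimator when (x, y) are jointly Gaussian. All data
# here are synthetic stand-ins for MES features and acoustic parameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                              # MES features
Y = X @ rng.normal(size=(4, 3)) + 0.1 * rng.normal(size=(500, 3))

mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
Xc, Yc = X - mu_x, Y - mu_y
C_xx = Xc.T @ Xc / len(X)                  # feature covariance
C_yx = Yc.T @ Xc / len(X)                  # cross-covariance
A = C_yx @ np.linalg.inv(C_xx)             # MMSE regression matrix

def mmse_predict(x):
    """Predict acoustic parameters from one MES feature vector."""
    return mu_y + A @ (x - mu_x)

mse = float(np.mean((Y - (Xc @ A.T + mu_y)) ** 2))  # small residual error
```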

  10. Energy Efficiency Building Code for Commercial Buildings in Sri Lanka

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busch, John; Greenberg, Steve; Rubinstein, Francis

    2000-09-30

    1.1.1 To encourage energy efficient design or retrofit of commercial buildings so that they may be constructed, operated, and maintained in a manner that reduces the use of energy without constraining the building function, the comfort, health, or the productivity of the occupants and with appropriate regard for economic considerations. 1.1.2 To provide criteria and minimum standards for energy efficiency in the design or retrofit of commercial buildings and provide methods for determining compliance with them. 1.1.3 To encourage energy efficient designs that exceed these criteria and minimum standards.

  11. Criterion validity and accuracy of global positioning satellite and data logging devices for wheelchair tennis court movement

    PubMed Central

    Sindall, Paul; Lenton, John P.; Whytock, Katie; Tolfrey, Keith; Oyster, Michelle L.; Cooper, Rory A.; Goosey-Tolfrey, Victoria L.

    2013-01-01

    Purpose To compare the criterion validity and accuracy of a 1 Hz non-differential global positioning system (GPS) and data logger device (DL) for the measurement of wheelchair tennis court movement variables. Methods Initial validation of the DL device was performed. GPS and DL were fitted to the wheelchair and used to record distance (m) and speed (m/second) during (a) tennis field (b) linear track, and (c) match-play test scenarios. Fifteen participants were monitored at the Wheelchair British Tennis Open. Results Data logging validation showed underestimations for distance in right (DLR) and left (DLL) logging devices at speeds >2.5 m/second. In tennis-field tests, GPS underestimated distance in five drills. DLL was lower than both (a) criterion and (b) DLR in drills moving forward. Reversing drill direction showed that DLR was lower than (a) criterion and (b) DLL. GPS values for distance and average speed for match play were significantly lower than equivalent values obtained by DL (distance: 2816 (844) vs. 3952 (1109) m, P = 0.0001; average speed: 0.7 (0.2) vs. 1.0 (0.2) m/second, P = 0.0001). Higher peak speeds were observed in DL (3.4 (0.4) vs. 3.1 (0.5) m/second, P = 0.004) during tennis match play. Conclusions Sampling frequencies of 1 Hz are too low to accurately measure distance and speed during wheelchair tennis. GPS units with a higher sampling rate should be advocated in further studies. Modifications to existing DL devices may be required to increase measurement precision. Further research into the validity of movement devices during match play will further inform the demands and movement patterns associated with wheelchair tennis. PMID:23820154

  12. The minimum distance approach to classification

    NASA Technical Reports Server (NTRS)

    Wacker, A. G.; Landgrebe, D. A.

    1971-01-01

    Work to advance the state of the art of minimum distance classification is reported. This is accomplished through a combination of theoretical and comprehensive experimental investigations based on multispectral scanner data. A survey of the literature for suitable distance measures was conducted and the results of this survey are presented. It is shown that minimum distance classification, using density estimators and Kullback-Leibler numbers as the distance measure, is equivalent to a form of maximum likelihood sample classification. It is also shown that for the parametric case, minimum distance classification is equivalent to nearest neighbor classification in the parameter space.
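
    For the parametric case described here, a minimal nearest-class-mean classifier illustrates the idea; Euclidean distance on toy data stands in for the survey's more general distance measures.

```python
import numpy as np

# Minimum distance classification sketch: assign each sample to the class
# whose estimated mean is nearest. Toy data, Euclidean distance.

def fit_means(X, y):
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, means):
    # distance from every sample to every class mean, then argmin per sample
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
classes, means = fit_means(X, y)
labels = predict(np.array([[0.1, 0.0], [4.8, 5.2]]), classes, means)
# labels -> [0, 1]
```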

  13. A variational dynamic programming approach to robot-path planning with a distance-safety criterion

    NASA Technical Reports Server (NTRS)

    Suh, Suk-Hwan; Shin, Kang G.

    1988-01-01

    An approach to robot-path planning is developed by considering both the traveling distance and the safety of the robot. A computationally-efficient algorithm is developed to find a near-optimal path with a weighted distance-safety criterion by using a variational calculus and dynamic programming (VCDP) method. The algorithm is readily applicable to any factory environment by representing the free workspace as channels. A method for deriving these channels is also proposed. Although it is developed mainly for two-dimensional problems, this method can be easily extended to a class of three-dimensional problems. Numerical examples are presented to demonstrate the utility and power of this method.
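
    The weighted distance-safety idea can be illustrated with a plain dynamic program on a grid "channel"; the variational-calculus refinement and the channel-derivation method of the paper are omitted, so this is only a sketch of the cost structure.

```python
# Toy weighted distance-safety DP, not the paper's VCDP algorithm: a path
# crosses the grid column by column (moving up/down at most one row), and
# each visited cell costs 1 + w / clearance, so low-clearance cells near
# obstacles are penalized. Returns the row chosen per column and the cost.

def plan(clearance, w=1.0):
    rows, cols = len(clearance), len(clearance[0])
    cell = [[1.0 + w / clearance[r][c] for c in range(cols)]
            for r in range(rows)]
    best = [cell[r][0] for r in range(rows)]      # cost through column 0
    back = []                                     # backpointers per column
    for c in range(1, cols):
        nxt, ptr = [], []
        for r in range(rows):
            b, p = min((best[p], p) for p in (r - 1, r, r + 1)
                       if 0 <= p < rows)
            nxt.append(b + cell[r][c])
            ptr.append(p)
        best = nxt
        back.append(ptr)
    r = min(range(rows), key=lambda i: best[i])   # cheapest final row
    path = [r]
    for ptr in reversed(back):                    # walk backpointers home
        r = ptr[r]
        path.append(r)
    return path[::-1], min(best)

# The wide-clearance middle row wins despite equal geometric length
path, total = plan([[1, 1, 1, 1], [10, 10, 10, 10], [1, 1, 1, 1]])
```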

  14. Criterion-Related Validity of the Distance- and Time-Based Walk/Run Field Tests for Estimating Cardiorespiratory Fitness: A Systematic Review and Meta-Analysis.

    PubMed

    Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús

    2016-01-01

    The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt's psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42-0.79), with the 1.5 mile (rp = 0.79, 0.73-0.85) and 12 min walk/run tests (rp = 0.78, 0.72-0.83) having the higher criterion-related validity for distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. When the evaluation of an individual's maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in the assessment with any physical fitness field test, evaluators must be aware that the performance score of the walk/run field tests is simply an estimation and not a direct measure of cardiorespiratory fitness.

  15. Geometric Structure of 3D Spinal Curves: Plane Regions and Connecting Zones

    PubMed Central

    Berthonnaud, E.; Hilmi, R.; Dimnet, J.

    2012-01-01

    This paper presents a new study of the geometric structure of 3D spinal curves. The spine is considered as a heterogeneous beam, composed of vertebrae and intervertebral discs. The spine is modeled as a deformable wire along which vertebrae are beads rotating about the wire. 3D spinal curves are composed of plane regions connected together by zones of transition. The 3D spinal curve is flexed only along the plane regions. The angular offsets between adjacent regions are concentrated at the level of the middle zones of transition, illustrating the heterogeneity of the spinal geometric structure. The plane regions along the 3D spinal curve must satisfy two criteria: (i) a criterion of minimum distance between the curve and the regional plane and (ii) a criterion controlling that the curve is continuously plane at the level of the region. The geometric structure of each 3D spinal curve is characterized by the sizes and orientations of regional planes, by the parameters representing flexed regions, and by the sizes and functions of zones of transition. Spinal curves of asymptomatic subjects show three plane regions corresponding to the spinal curvatures: lumbar, thoracic, and cervical. In some scoliotic spines, four plane regions may be detected. PMID:25031873
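
    Criterion (i) can be illustrated with a total-least-squares plane fit: the regional plane is the least-variance plane through a run of points, and the RMS point-to-plane distance is the quantity to threshold. The paper's actual tolerance and its criterion (ii) on continuity are not reproduced; this is a generic sketch.

```python
import numpy as np

# Plane-region check sketch: fit a plane to candidate points (e.g.
# vertebral centers) by SVD (total least squares) and report the RMS
# point-to-plane distance. A run of points would count as a "plane region"
# when this value falls below a chosen tolerance.

def plane_rms_distance(points):
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - centroid)
    normal = vt[-1]                      # direction of least variance
    d = (P - centroid) @ normal          # signed point-to-plane distances
    return float(np.sqrt(np.mean(d ** 2)))

coplanar = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [2, 3, 0]]
bent = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
```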

  16. Influences on Academic Achievement Across High and Low Income Countries: A Re-Analysis of IEA Data.

    ERIC Educational Resources Information Center

    Heyneman, S.; Loxley, W.

    Previous international studies of science achievement put the data through a process of winnowing to decide which variables to keep in the final regressions. Variables were allowed to enter the final regressions if they met a minimum beta coefficient criterion of 0.05 averaged across rich and poor countries alike. The criterion was an average…

  17. Using Norm-Referenced Data to Set Standards for a Minimum Competency Program in the State of South Carolina.

    ERIC Educational Resources Information Center

    Garcia-Quintana, Roan A.; Mappus, M. Lynne

    1980-01-01

    Norm-referenced data were utilized for determining the mastery cutoff score on a criterion-referenced test. Once a cutoff score on the norm-referenced measure is selected, the cutoff score on the criterion-referenced measure becomes that score which maximizes the proportion of consistent classifications and the proportion of improvement beyond chance. (CP)

  18. Resolution Limits of Nanoimprinted Patterns by Fluorescence Microscopy

    NASA Astrophysics Data System (ADS)

    Kubo, Shoichi; Tomioka, Tatsuya; Nakagawa, Masaru

    2013-06-01

    The authors investigated optical resolution limits to identify minimum distances between convex lines of fluorescent dye-doped nanoimprinted resist patterns by fluorescence microscopy. Fluorescent ultraviolet (UV)-curable resin and thermoplastic resin films were transformed into line-and-space patterns by UV nanoimprinting and thermal nanoimprinting, respectively. Fluorescence immersion observation needed an immersion medium immiscible with the resist films, and an ionic liquid of triisobutyl methylphosphonium tosylate was appropriate for soluble thermoplastic polystyrene patterns. Observation with various numerical aperture (NA) values and two detection wavelength ranges showed that the resolution limits were smaller than the values estimated by the Sparrow criterion. The space width to identify line patterns became narrower as the line width increased. The space width of 100 nm was demonstrated to be sufficient to resolve 300-nm-wide lines in the detection wavelength range of 575-625 nm using an objective lens of NA = 1.40.
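
    For reference, the classical two-point limits the measured values are compared against can be computed directly. The 0.61 (Rayleigh) and ≈0.47 (Sparrow) prefactors are the standard incoherent-imaging values; λ = 600 nm is chosen here only because it sits inside the 575-625 nm detection band quoted above.

```python
# Classical resolution estimates (circular aperture, incoherent imaging).
# Prefactors are textbook values; the paper reports limits *below* Sparrow.

def rayleigh_limit(lam_nm, na):
    return 0.61 * lam_nm / na

def sparrow_limit(lam_nm, na):
    return 0.47 * lam_nm / na

r = rayleigh_limit(600, 1.40)   # ~261 nm
s = sparrow_limit(600, 1.40)    # ~201 nm
```

    Both values exceed the 100 nm space width resolved in the experiment, consistent with the abstract's claim that the observed limits beat the Sparrow estimate.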

  19. Elastic models: a comparative study applied to retinal images.

    PubMed

    Karali, E; Lambropoulou, S; Koutsouris, D

    2011-01-01

    In this work, various parametric elastic models are compared, namely the classical snake, the gradient vector field snake (GVF snake) and the topology-adaptive snake (t-snake), as well as the method of the self-affine mapping system as an alternative to elastic models. We also give a brief overview of the methods used. The self-affine mapping system is implemented using an adaptive scheme and minimum distance as the optimization criterion, which is more suitable for weak edge detection. All methods are applied to glaucomatous retinal images with the purpose of segmenting the optic disk. The methods are compared in terms of segmentation accuracy and speed, as derived from cross-correlation coefficients between real and algorithm-extracted contours and from segmentation time, respectively. As a result, the method of the self-affine mapping system presents adequate segmentation time and accuracy, and significant independence from initialization.

  20. Fuzzy approaches to supplier selection problem

    NASA Astrophysics Data System (ADS)

    Ozkok, Beyza Ahlatcioglu; Kocken, Hale Gonce

    2013-09-01

    Supplier selection is a multi-criteria decision making problem which includes both qualitative and quantitative factors. In the selection process many criteria may conflict with each other, so the decision-making process becomes complicated. In this study, we handled the supplier selection problem under uncertainty. In this context, we used the minimum criterion, the arithmetic mean criterion, the regret criterion, the optimistic criterion, the geometric mean and the harmonic mean. The membership functions were created with the help of the characteristics of the criteria used, and we tried to provide consistent supplier selection decisions by using these memberships to evaluate alternative suppliers. A strong aspect of the methodology is that no expert opinion is needed during the analysis.
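
    The listed criteria reduce to simple aggregations over a payoff matrix (rows = candidate suppliers, columns = scenarios). The sketch below computes crisp versions of each; the fuzzy membership construction the study builds on top of them is not reproduced.

```python
# Crisp decision criteria under uncertainty, per supplier: pessimistic
# (min), optimistic (max), arithmetic/geometric/harmonic means, and
# maximum regret (distance to the best payoff in each scenario).
# Payoffs are assumed positive so the geometric/harmonic means exist.

def criteria_scores(payoff):
    n = len(payoff[0])
    col_max = [max(row[j] for row in payoff) for j in range(n)]
    out = []
    for row in payoff:
        gm = 1.0
        for v in row:
            gm *= v
        out.append({
            "min": min(row),
            "max": max(row),
            "mean": sum(row) / n,
            "geom": gm ** (1.0 / n),
            "harm": n / sum(1.0 / v for v in row),
            "regret": max(col_max[j] - row[j] for j in range(n)),
        })
    return out

scores = criteria_scores([[4, 8], [6, 6]])   # two suppliers, two scenarios
```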

  1. Physical Employment Standards for UK Firefighters

    PubMed Central

    Stevenson, Richard D.M.; Siddall, Andrew G.; Turner, Philip F.J.; Bilzon, James L.J.

    2017-01-01

    Objective: The aim of this study was to assess sensitivity and specificity of surrogate physical ability tests as predictors of criterion firefighting task performance and to identify corresponding minimum muscular strength and endurance standards. Methods: Fifty-one (26 male; 25 female) participants completed three criterion tasks (ladder lift, ladder lower, ladder extension) and three corresponding surrogate tests [one-repetition maximum (1RM) seated shoulder press; 1RM seated rope pull-down; repeated 28 kg seated rope pull-down]. Surrogate test standards were calculated that best identified individuals who passed (sensitivity; true positives) and failed (specificity; true negatives) criterion tasks. Results: Best sensitivity/specificity achieved were 1.00/1.00 for a 35 kg seated shoulder press, 0.79/0.92 for a 60 kg rope pull-down, and 0.83/0.93 for 23 repetitions of the 28 kg rope pull-down. Conclusions: These standards represent performance on surrogate tests commensurate with minimum acceptable performance of essential strength-based occupational tasks in UK firefighters. PMID:28045801
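
    The pass/fail bookkeeping behind these figures is straightforward to sketch: sensitivity is the fraction of criterion-task passers the surrogate cutoff also passes, and specificity the fraction of criterion-task failers it also fails. The scores below are synthetic, not the study's firefighter data.

```python
# Score a surrogate-test cutoff against criterion task outcomes:
# sensitivity = true positives / criterion passers,
# specificity = true negatives / criterion failers.

def sens_spec(scores, criterion_pass, cutoff):
    tp = sum(s >= cutoff and p for s, p in zip(scores, criterion_pass))
    fn = sum(s < cutoff and p for s, p in zip(scores, criterion_pass))
    tn = sum(s < cutoff and not p for s, p in zip(scores, criterion_pass))
    fp = sum(s >= cutoff and not p for s, p in zip(scores, criterion_pass))
    return tp / (tp + fn), tn / (tn + fp)

# A 35 kg cutoff that separates the two synthetic groups perfectly
sens, spec = sens_spec([30, 36, 40, 28], [False, True, True, False], 35)
```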

  2. On Correlations, Distances and Error Rates.

    ERIC Educational Resources Information Center

    Dorans, Neil J.

    The nature of the criterion (dependent) variable may play a useful role in structuring a list of classification/prediction problems. Such criteria are continuous in nature, binary dichotomous, or multichotomous. In this paper, discussion is limited to the continuous normally distributed criterion scenarios. For both cases, it is assumed that the…

  3. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
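
    The counting condition quoted in the abstract can be checked mechanically on a protograph base matrix. The matrices below are arbitrary illustrations, not the paper's codes, and the reading of "excluding checks connected to degree-1 nodes" is this sketch's interpretation.

```python
# Check the degree-2 bound for a protograph: excluding checks that touch a
# degree-1 variable node, the number of degree-2 variable nodes should be
# at most (number of remaining checks - 1) for linear minimum distance.

def satisfies_degree2_bound(base):
    """base: biadjacency matrix, rows = check nodes, cols = variable nodes."""
    col_deg = [sum(row[j] for row in base) for j in range(len(base[0]))]
    deg1_cols = [j for j, d in enumerate(col_deg) if d == 1]
    kept_checks = [i for i in range(len(base))
                   if not any(base[i][j] for j in deg1_cols)]
    n_deg2 = sum(1 for d in col_deg if d == 2)
    return n_deg2 <= len(kept_checks) - 1

ok_base = [[1, 1, 1, 0], [1, 1, 0, 1], [1, 1, 1, 1]]   # two degree-2 nodes
bad_base = [[1, 1, 1, 0], [1, 1, 0, 1], [1, 0, 1, 1]]  # three degree-2 nodes
```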

  4. Approximation for the Rayleigh Resolution of a Circular Aperture

    ERIC Educational Resources Information Center

    Mungan, Carl E.

    2009-01-01

    Rayleigh's criterion states that a pair of point sources are barely resolved by an optical instrument when the central maximum of the diffraction pattern due to one source coincides with the first minimum of the pattern of the other source. As derived in standard introductory physics textbooks, the first minimum for a rectangular slit of width "a"…

  5. Criterion-Related Validity of the Distance- and Time-Based Walk/Run Field Tests for Estimating Cardiorespiratory Fitness: A Systematic Review and Meta-Analysis

    PubMed Central

    Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús

    2016-01-01

    Objectives The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Materials and Methods Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt’s psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. Results From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42–0.79), with the 1.5 mile (rp = 0.79, 0.73–0.85) and 12 min walk/run tests (rp = 0.78, 0.72–0.83) having the higher criterion-related validity for distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. Conclusions When the evaluation of an individual’s maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in the assessment with any physical fitness field test, evaluators must be aware that the performance score of the walk/run field tests is simply an estimation and not a direct measure of cardiorespiratory fitness. PMID:26987118

  6. Analysis of higher education policy frameworks for open and distance education in Pakistan.

    PubMed

    Ellahi, Abida; Zaka, Bilal

    2015-04-01

    The constant rise in demand for higher education has become the biggest challenge for educational planners. This high demand has paved a way for distance education across the globe. This article analyzes the policy documentation of a major distance education initiative in Pakistan for validity, identifying the utility of policy linkages. The study adopted a qualitative research design that consisted of two steps. In the first step, a content analysis of the distance learning policy framework was made. For this purpose, two documents were accessed, titled "Framework for Launching Distance Learning Programs in HEIs of Pakistan" and "Guideline on Quality of Distance Education for External Students at the HEIs of Pakistan." In the second step, the policy guidelines mentioned in these two documents were evaluated at two levels. At the first level, the overall policy documents were assessed against a criterion proposed by Cheung, Mirzaei, and Leeder. At the second level, the proposed program of distance learning was assessed against a criterion set by Gellman-Danley and Fetzner and Berge. The distance education initiative in Pakistan is promising and needs to be assessed regularly. This study has made an initial attempt to assess the policy document against a criterion identified from the literature. The analysis shows that the current policy documents do offer some strengths at this initial level; however, they cannot be considered a comprehensive policy guide. The inclusion or correction of missing or vague areas identified in this study would make this policy guideline document a valuable tool for the Higher Education Commission (HEC). For distance education policy makers, this distance education policy framework model recognizes several fundamental areas with which they should be concerned. 
The findings of this study, in the light of two different policy framework measures, highlight certain opportunities that can help strengthen distance education policies. The criteria and findings are useful for reviewers of policy proposals to identify the gaps where policy documents can be improved to achieve the desired outcomes. © The Author(s) 2015.

  7. Effect of Weight Transfer on a Vehicle's Stopping Distance.

    ERIC Educational Resources Information Center

    Whitmire, Daniel P.; Alleman, Timothy J.

    1979-01-01

    An analysis of the minimum stopping distance problem is presented taking into account the effect of weight transfer on nonskidding vehicles and front- or rear-wheels-skidding vehicles. Expressions for the minimum stopping distances are given in terms of vehicle geometry and the coefficients of friction. (Author/BB)
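
    The no-weight-transfer baseline that the analysis extends is the elementary result d = v²/(2μg); the geometry-dependent corrections for weight transfer and single-axle skids derived in the article are not reproduced here.

```python
# Baseline minimum stopping distance for a non-skidding vehicle braking on
# all wheels: d = v^2 / (2 * mu * g). Weight-transfer corrections omitted.

def stopping_distance(v, mu, g=9.81):
    """Minimum stopping distance (m) from initial speed v (m/s)."""
    return v * v / (2.0 * mu * g)

d = stopping_distance(v=27.8, mu=0.8)   # ~100 km/h on dry asphalt, ~49 m
```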

  8. High resolution ion chamber array delivery quality assurance for robotic radiosurgery: Commissioning and validation.

    PubMed

    Blanck, Oliver; Masi, Laura; Chan, Mark K H; Adamczyk, Sebastian; Albrecht, Christian; Damme, Marie-Christin; Loutfi-Krauss, Britta; Alraun, Manfred; Fehr, Roman; Ramm, Ulla; Siebert, Frank-Andre; Stelljes, Tenzin Sonam; Poppinga, Daniela; Poppe, Björn

    2016-06-01

    High precision radiosurgery demands comprehensive delivery-quality-assurance techniques. The use of a liquid-filled ion-chamber-array for robotic-radiosurgery delivery-quality-assurance was investigated and validated using several test scenarios and routine patient plans. Preliminary evaluation consisted of beam profile validation and analysis of source-detector-distance and beam-incidence-angle response dependence. The delivery-quality-assurance analysis is performed in four steps: (1) Array-to-plan registration, (2) Evaluation with standard Gamma-Index criteria (local-dose-difference⩽2%, distance-to-agreement⩽2mm, pass-rate⩾90%), (3) Dose profile alignment and dose distribution shift until the maximum pass-rate is found, and (4) Final evaluation with a 1mm distance-to-agreement criterion. Test scenarios consisted of intended phantom misalignments, dose miscalibrations, and undelivered Monitor Units. Preliminary method validation was performed on 55 clinical plans in five institutions. The 1000SRS profile measurements showed sufficient agreement with a microDiamond detector for all collimator sizes. The relative response changes can be up to 2.2% per 10cm source-detector-distance change, but remain within 1% for the clinically relevant source-detector-distance range. Planned and measured dose under different beam-incidence-angles showed deviations below 1% for angles between 0° and 80°. Small intended errors were detected by the 1mm distance-to-agreement criterion, while the 2mm criteria failed to reveal some of these deviations. All analyzed delivery-quality-assurance clinical patient plans were within our tight tolerance criteria. We demonstrated that a high-resolution liquid-filled ion-chamber-array can be suitable for robotic radiosurgery delivery-quality-assurance and that small errors can be detected with a tight distance-to-agreement criterion. Further improvement may come from beam-specific correction for incidence angle and source-detector-distance response. 
Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
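
    Step (2) of this workflow is a gamma-index evaluation; a minimal 1-D version with the quoted 2% / 2 mm (local dose) criteria can be sketched as follows. The coarse search over reference grid points is an assumption of this sketch, not the clinical tool's algorithm.

```python
import math

# Minimal 1-D gamma-index evaluation: a measured point passes if some
# reference point lies inside the combined dose-difference /
# distance-to-agreement ellipsoid (gamma <= 1). Local dose normalization.

def gamma_pass_rate(ref, meas, spacing_mm, dd=0.02, dta_mm=2.0):
    """ref/meas: dose samples on a common 1-D grid with given spacing (mm)."""
    passed = 0
    for i, dm in enumerate(meas):
        best = math.inf
        for j, dr in enumerate(ref):
            if dr == 0:
                continue                      # local criterion undefined
            dist = abs(i - j) * spacing_mm
            dose = (dm - dr) / (dd * dr)      # local dose difference term
            best = min(best, math.hypot(dist / dta_mm, dose))
        if best <= 1.0:
            passed += 1
    return passed / len(meas)

ref = [1.0, 2.0, 3.0, 2.0, 1.0]
rate = gamma_pass_rate(ref, [1.01, 2.0, 3.05, 2.0, 0.99], spacing_mm=1.0)
```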

  9. New true-triaxial rock strength criteria considering intrinsic material characteristics

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Li, Cheng; Quan, Xiaowei; Wang, Yanning; Yu, Liyuan; Jiang, Binsong

    2018-02-01

    A reasonable strength criterion should reflect the hydrostatic pressure effect, the minimum principal stress effect, and the intermediate principal stress effect. The former two effects can be described by the meridian curves, and the last mainly depends on the Lode angle dependence function. Among the three conventional strength criteria, i.e. the Mohr-Coulomb (MC), Hoek-Brown (HB), and Exponent (EP) criteria, the difference between the generalized compression and extension strengths of the EP criterion first increases and then decreases, tending to zero when the hydrostatic pressure is large enough. This is in accordance with intrinsic rock strength characteristics. Moreover, the critical hydrostatic pressure I_c corresponding to the maximum difference between the generalized compression and extension strengths can be easily adjusted by the minimum principal stress influence parameter K. The exponent function is therefore a more reasonable meridian curve, which reflects the hydrostatic pressure effect well and is employed to describe the generalized compression and extension strengths. Meanwhile, three Lode angle dependence functions, L_{{MN}}, L_{{WW}}, and L_{{YMH}}, which unconditionally satisfy the convexity and differentiability requirements, are employed to represent the intermediate principal stress effect. Since the actual strength surface should be located between the generalized compression and extension surfaces, new true-triaxial criteria are proposed by combining the two states of the EP criterion through a Lode angle dependence function at the same Lode angle. The proposed new true-triaxial criteria have the same strength parameters as the EP criterion. Finally, 14 groups of triaxial test data are employed to validate the proposed criteria.
The results show that the three new true-triaxial exponent criteria, especially the Exponent Willam-Warnke (EPWW) criterion, give much lower misfits, which illustrates that the EP criterion and L_{{WW}} have more reasonable meridian and deviatoric function forms, respectively. The proposed new true-triaxial strength criteria can provide a theoretical foundation for stability analysis and the optimization of support design in rock engineering.

  10. Testing the Distance-Duality Relation in the Rh = ct Universe

    NASA Astrophysics Data System (ADS)

    Hu, J.; Wang, F. Y.

    2018-04-01

    In this paper, we test the cosmic distance duality (CDD) relation using luminosity distances from the joint light-curve analysis (JLA) Type Ia supernovae (SNe Ia) sample and an angular diameter distance sample from galaxy clusters. The Rh = ct and ΛCDM models are considered. In order to compare the two models, we constrain the CDD relation and the SNe Ia light-curve parameters simultaneously. Considering the effect of the Hubble constant, we find that η ≡ DA(1 + z)^2/DL = 1 is valid at the 2σ confidence level in both models with H0 = 67.8 ± 0.9 km/s/Mpc. However, the CDD relation is only valid at the 3σ confidence level with H0 = 73.45 ± 1.66 km/s/Mpc. Using the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), we find that the ΛCDM model is very strongly preferred over the Rh = ct model with these data sets for the CDD relation test.
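The test statistic η is simple to evaluate once both distances are in hand; a minimal sketch with illustrative values (not the JLA or cluster data):

```python
def cdd_eta(d_a, d_l, z):
    """Distance-duality parameter eta = D_A (1+z)^2 / D_L; eta == 1 exactly
    when the cosmic distance duality relation holds."""
    return d_a * (1.0 + z) ** 2 / d_l

# In a universe where the CDD relation holds exactly, D_L = (1+z)^2 * D_A
z, d_a = 0.5, 1200.0                 # redshift and D_A in Mpc (illustrative)
d_l = (1.0 + z) ** 2 * d_a
eta = cdd_eta(d_a, d_l, z)
```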

  11. Testing the distance-duality relation in the Rh = ct universe

    NASA Astrophysics Data System (ADS)

    Hu, J.; Wang, F. Y.

    2018-07-01

    In this paper, we test the cosmic distance-duality (CDD) relation using luminosity distances from the joint light-curve analysis Type Ia supernovae (SNe Ia) sample and an angular diameter distance sample from galaxy clusters. The Rh = ct and Λ cold dark matter (CDM) models are considered. In order to compare the two models, we constrain the CDD relation and the SNe Ia light-curve parameters simultaneously. Considering the effect of the Hubble constant, we find that η ≡ DA(1 + z)^2/DL = 1 is valid at the 2σ confidence level in both models with H0 = 67.8 ± 0.9 km s^-1 Mpc^-1. However, the CDD relation is only valid at the 3σ confidence level with H0 = 73.45 ± 1.66 km s^-1 Mpc^-1. Using the Akaike Information Criterion and the Bayesian Information Criterion, we find that the ΛCDM model is very strongly preferred over the Rh = ct model with these data sets for the CDD relation test.

  12. Fidelity criterion for quantum-domain transmission and storage of coherent states beyond the unit-gain constraint.

    PubMed

    Namiki, Ryo; Koashi, Masato; Imoto, Nobuyuki

    2008-09-05

    We generalize the experimental success criterion for quantum teleportation (memory) in continuous-variable quantum systems to be suitable for a non-unit-gain condition by considering attenuation (amplification) of the coherent-state amplitude. The new criterion can be used for a nonideal quantum memory and long distance quantum communication as well as quantum devices with amplification process. It is also shown that the framework to measure the average fidelity is capable of detecting all Gaussian channels in the quantum domain.

  13. The Minimum Binding Energy and Size of Doubly Muonic D3 Molecule

    NASA Astrophysics Data System (ADS)

    Eskandari, M. R.; Faghihi, F.; Mahdavi, M.

    The minimum energy and size of the doubly muonic D3 molecule, in which two of the electrons are replaced by the much heavier muons, are calculated by the well-known variational method. The calculations show that the system possesses two minimum positions, one at a typically muonic distance and the second at the atomic distance. It is shown that at the muonic distance, the effective charge z_eff is 2.9. We assumed a symmetric planar vibrational model between the two minima, and the oscillation potential energy is approximated in this region.

  14. Quantitative assessment of mineral resources with an application to petroleum geology

    USGS Publications Warehouse

    Harff, Jan; Davis, J.C.; Olea, R.A.

    1992-01-01

    The probability of occurrence of natural resources, such as petroleum deposits, can be assessed by a combination of multivariate statistical and geostatistical techniques. The area of study is partitioned into regions that are as homogeneous as possible internally while simultaneously as distinct as possible. Fisher's discriminant criterion is used to select geological variables that best distinguish productive from nonproductive localities, based on a sample of previously drilled exploratory wells. On the basis of these geological variables, each wildcat well is assigned to the production class (dry or producer in the two-class case) for which the Mahalanobis' distance from the observation to the class centroid is a minimum. Universal kriging is used to interpolate values of the Mahalanobis' distances to all locations not yet drilled. The probability that an undrilled locality belongs to the productive class can be found, using the kriging estimation variances to assess the probability of misclassification. Finally, Bayes' relationship can be used to determine the probability that an undrilled location will be a discovery, regardless of the production class in which it is placed. The method is illustrated with a study of oil prospects in the Lansing/Kansas City interval of western Kansas, using geological variables derived from well logs. © 1992 Oxford University Press.
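The classification step described above, assigning a locality to the class with the smaller Mahalanobis' distance to the centroid, can be sketched as follows; the data and variable names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical two-class training data (producer vs. dry wells),
# two log-derived geological variables per well
producers = rng.normal([2.0, 1.0], 0.5, size=(50, 2))
dry = rng.normal([0.0, 0.0], 0.5, size=(50, 2))

def mahalanobis2(x, sample):
    """Squared Mahalanobis distance from x to the sample's centroid,
    using the sample covariance of that class."""
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    d = x - mu
    return float(d @ cov_inv @ d)

x = np.array([1.8, 0.9])  # an undrilled locality's log-derived variables
cls = "producer" if mahalanobis2(x, producers) < mahalanobis2(x, dry) else "dry"
```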

  15. The evaluation of alternate methodologies for land cover classification in an urbanizing area

    NASA Technical Reports Server (NTRS)

    Smekofski, R. M.

    1981-01-01

    The usefulness of LANDSAT in classifying land cover and in identifying and classifying land use change was investigated using an urbanizing area as the study area. The question of what was the best technique for classification was the primary focus of the study. The many computer-assisted techniques available to analyze LANDSAT data were evaluated. Techniques of statistical training (polygons from CRT, unsupervised clustering, polygons from digitizer, and binary masks) were tested with minimum distance to the mean, maximum likelihood, and canonical analysis with minimum distance to the mean classifiers. The twelve output images were compared to photointerpreted samples, ground verified samples, and a current land use data base. Results indicate that for a reconnaissance inventory, unsupervised training with the canonical analysis-minimum distance classifier is the most efficient. If more detailed ground truth and ground verification are available, the polygons-from-digitizer training with canonical analysis-minimum distance is more accurate.
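A minimum-distance-to-the-mean classifier of the kind tested here can be sketched in a few lines; the band means and pixel vectors below are hypothetical:

```python
import numpy as np

def min_distance_classify(pixels, class_means):
    """Assign each pixel vector to the class whose mean is nearest
    in Euclidean distance."""
    # diffs: (n_pixels, n_classes, n_bands); dists: (n_pixels, n_classes)
    diffs = pixels[:, None, :] - class_means[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return dists.argmin(axis=1)

# hypothetical 4-band training means for classes 0=water, 1=urban, 2=vegetation
means = np.array([[10., 8., 5., 3.],
                  [60., 55., 50., 45.],
                  [30., 25., 70., 60.]])
pixels = np.array([[11., 9., 6., 2.],     # spectrally close to water
                   [58., 54., 52., 44.]]) # spectrally close to urban
labels = min_distance_classify(pixels, means)
```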

  16. Maximum projection designs for computer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, V. Roshan; Gul, Evren; Ba, Shan

    Space-filling properties are important in designing computer experiments. The traditional maximin and minimax distance designs only consider space-filling in the full dimensional space. This can result in poor projections onto lower dimensional spaces, which is undesirable when only a few factors are active. Restricting maximin distance design to the class of Latin hypercubes can improve one-dimensional projections, but cannot guarantee good space-filling properties in larger subspaces. We propose designs that maximize space-filling properties on projections to all subsets of factors. We call our designs maximum projection designs. As a result, our design criterion can be computed at a cost no more than that of a design criterion that ignores projection properties.

  17. Maximum projection designs for computer experiments

    DOE PAGES

    Joseph, V. Roshan; Gul, Evren; Ba, Shan

    2015-03-18

    Space-filling properties are important in designing computer experiments. The traditional maximin and minimax distance designs only consider space-filling in the full dimensional space. This can result in poor projections onto lower dimensional spaces, which is undesirable when only a few factors are active. Restricting maximin distance design to the class of Latin hypercubes can improve one-dimensional projections, but cannot guarantee good space-filling properties in larger subspaces. We propose designs that maximize space-filling properties on projections to all subsets of factors. We call our designs maximum projection designs. As a result, our design criterion can be computed at a cost no more than that of a design criterion that ignores projection properties.
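The maximum projection (MaxPro) criterion averages, over all point pairs, the reciprocal of the product of squared coordinate-wise differences, so two points that nearly coincide in even one factor are heavily penalized. A small sketch, with example designs invented for illustration:

```python
import numpy as np
from itertools import combinations

def maxpro_criterion(design):
    """MaxPro criterion (smaller is better): average over all point pairs of
    1 / prod_k (x_ik - x_jk)^2, raised to 1/p. Any pair coinciding in a
    single coordinate blows the criterion up, which is what forces good
    projections onto every subset of factors."""
    n, p = design.shape
    total = 0.0
    for i, j in combinations(range(n), 2):
        total += 1.0 / np.prod((design[i] - design[j]) ** 2)
    return (total / (n * (n - 1) / 2)) ** (1.0 / p)

# a small well-spread design vs. one with a near-collapsed first coordinate
good = np.array([[0.1, 0.6], [0.5, 0.2], [0.9, 0.9]])
bad = np.array([[0.1, 0.6], [0.1001, 0.2], [0.9, 0.9]])  # near-duplicate x1
```

Evaluating both shows the criterion strongly prefers the design whose one-dimensional projections do not overlap.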

  18. Unveiling the Biometric Potential of Finger-Based ECG Signals

    PubMed Central

    Lourenço, André; Silva, Hugo; Fred, Ana

    2011-01-01

    The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, the data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers, through a minimally intrusive 1-lead ECG setup using Ag/AgCl electrodes without gel as the interface with the skin. The collected signal is significantly noisier than ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications. PMID:21837235
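The final matching step, a minimum distance criterion between test patterns and the enrollment database, can be sketched as follows; the templates and subject names are invented, and the actual system operates on segmented, amplitude- and time-normalized heartbeats:

```python
import numpy as np

def identify(test_beat, enrollment):
    """Minimum-distance matching: assign the test heartbeat template to the
    enrolled subject whose template is nearest in Euclidean distance."""
    names = list(enrollment)
    dists = [np.linalg.norm(test_beat - enrollment[n]) for n in names]
    return names[int(np.argmin(dists))]

t = np.linspace(0.0, 1.0, 100)
# hypothetical normalized heartbeat templates for two enrolled subjects
enrollment = {
    "alice": np.sin(2 * np.pi * t),
    "bob": np.sin(2 * np.pi * t + 0.8),
}
# a noisy test pattern from "alice" (finger ECG is noisier than chest ECG)
test_beat = enrollment["alice"] + 0.05 * np.random.default_rng(1).normal(size=100)
who = identify(test_beat, enrollment)
```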

  19. Unveiling the biometric potential of finger-based ECG signals.

    PubMed

    Lourenço, André; Silva, Hugo; Fred, Ana

    2011-01-01

    The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, the data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers, through a minimally intrusive 1-lead ECG setup using Ag/AgCl electrodes without gel as the interface with the skin. The collected signal is significantly noisier than ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications.

  20. Entropic criterion for model selection

    NASA Astrophysics Data System (ADS)

    Tseng, Chih-Yuan

    2006-10-01

    Model or variable selection is usually achieved by ranking models in increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there any other criteria? Besides, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion by considering a physical problem, simple fluids, and the results are promising.
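As a sketch of ranking models by relative entropy (the candidate distributions below are invented; this is the generic criterion, not the paper's simple-fluids example):

```python
import numpy as np

def kl_divergence(p, q):
    """Relative entropy D(p||q) = sum_i p_i log(p_i / q_i),
    the ranking criterion: smaller means q is closer to p."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# rank two candidate model distributions against observed frequencies
observed = np.array([0.5, 0.3, 0.2])
candidates = {
    "model_a": np.array([0.45, 0.35, 0.20]),  # close to observed
    "model_b": np.array([0.20, 0.30, 0.50]),  # far from observed
}
ranked = sorted(candidates, key=lambda m: kl_divergence(observed, candidates[m]))
```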

  1. DETERMINING MINIMUM IGNITION ENERGIES AND QUENCHING DISTANCES OF DIFFICULT-TO-IGNITE COMPOUNDS

    EPA Science Inventory

    Minimum spark energies and corresponding flat-plate electrode quenching distances required to initiate propagation of a combustion wave have been experimentally measured for four flammable hydrofluorocarbon (HFC) refrigerants and propane using ASTM (American Society for Testing a...

  2. Three-dimensional modeling and animation of two carpal bones: a technique.

    PubMed

    Green, Jason K; Werner, Frederick W; Wang, Haoyu; Weiner, Marsha M; Sacks, Jonathan M; Short, Walter H

    2004-05-01

    The objectives of this study were to (a) create 3D reconstructions of two carpal bones from single CT data sets and animate these bones with experimental in vitro motion data collected during dynamic loading of the wrist joint, (b) develop a technique to calculate the minimum interbone distance between the two carpal bones, and (c) validate the interbone distance calculation process. This method utilized commercial software to create the animations and an in-house program to interface with three-dimensional CAD software to calculate the minimum distance between the irregular geometries of the bones. This interbone minimum distance provides quantitative information regarding the motion of the bones studied and may help to understand and quantify the effects of ligamentous injury.
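A brute-force version of the interbone minimum-distance calculation can be sketched over vertex clouds; the paper's pipeline works on full surface geometry in CAD software, and the coordinates below are invented:

```python
import numpy as np

def min_interbone_distance(verts_a, verts_b):
    """Brute-force minimum vertex-to-vertex distance between two bone
    surface meshes (a sketch; true surface-to-surface distance over faces
    would be tighter than vertex sampling)."""
    # pairwise distances between every vertex of A and every vertex of B
    d = np.linalg.norm(verts_a[:, None, :] - verts_b[None, :, :], axis=2)
    return float(d.min())

# hypothetical vertex clouds for two carpal bones, closest vertices 2 mm apart
bone_a = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
bone_b = np.array([[3., 0., 0.], [4., 1., 0.], [3., 1., 1.]])
gap = min_interbone_distance(bone_a, bone_b)
```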

  3. Proposed modification of the criterion for the region of validity of the inverse-power expansion in diatomic long-range potentials

    NASA Astrophysics Data System (ADS)

    Ji, Bing; Tsai, Chin-Chun; Stwalley, William C.

    1995-04-01

    A modified internuclear distance criterion, R_LR-m, as the lower bound for the region of validity of the inverse-power expansion of the diatomic long-range potential is proposed. This new criterion takes into account the spatial orientation of the atomic orbitals while retaining the simplicity of the traditional Le Roy radius, R_LR, for the interaction of S-state atoms. Recent experimental and theoretical results for various excited states in Na2 suggest that this proposed R_LR-m is an appropriate generalization of R_LR.

  4. 41 CFR 302-4.704 - Must we require a minimum driving distance per day?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Federal Travel Regulation System RELOCATION ALLOWANCES PERMANENT CHANGE OF STATION (PCS) ALLOWANCES FOR... driving distance not less than an average of 300 miles per day. However, an exception to the daily minimum... reasons acceptable to you. ...

  5. Comparison and verification of two models which predict minimum principal in situ stress from triaxial data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harikrishnan, R.; Hareland, G.; Warpinski, N.R.

    This paper evaluates the correlation between values of minimum principal in situ stress derived from two different models which use data obtained from triaxial core tests and coefficient-of-earth-at-rest correlations. Both models use triaxial laboratory tests with different confining pressures. The first method uses a verified fit to the Mohr failure envelope as a function of average rock grain size, which was obtained from detailed microscopic analyses. The second method uses the Mohr-Coulomb failure criterion. Both approaches give an angle of internal friction which is used to calculate the coefficient of earth at rest, which in turn gives the minimum principal in situ stress. The minimum principal in situ stress is then compared to actual field mini-frac test data, which accurately determine the minimum principal in situ stress and are used to verify the accuracy of the correlations. The cores and the mini-frac stress tests were obtained from two wells: the Gas Research Institute's (GRI's) Staged Field Experiment (SFE) no. 1 well through the Travis Peak Formation in the East Texas Basin, and the Department of Energy's (DOE's) Multiwell Experiment (MWX) wells located west-southwest of the town of Rifle, Colorado, near the Rulison gas field. Results from this study indicate that the calculated minimum principal in situ stress values obtained by utilizing the rock failure envelope as a function of average rock grain size are in better agreement with the measured stress values (from mini-frac tests) than those obtained utilizing the Mohr-Coulomb failure criterion.

  6. Fatigue acceptance test limit criterion for larger diameter rolled thread fasteners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kephart, A.R.

    1997-05-01

    This document describes a fatigue-lifetime acceptance test criterion by which studs having rolled threads larger than 1.0 inch in diameter can be assured to meet minimum quality attributes associated with a controlled rolling process. This criterion is derived from a stress-dependent, room-temperature air fatigue database for test studs having 0.625 inch diameter threads of Alloys X-750 HTH and direct-aged 625. Anticipated fatigue lives of larger threads are based on thread-root elastic stress concentration factors, which increase with increasing thread diameter. Over the thread size range of interest, a 30% increase in notch stress is equivalent to a factor-of-five (5X) reduction in fatigue life. The resulting diameter-dependent fatigue acceptance criterion is normalized to the aerospace rolled-thread acceptance standards for a 1.0 inch diameter, 0.125 inch pitch, Unified National thread with a controlled Root radius (UNR). Testing was conducted at a stress of 50% of the minimum specified material ultimate strength, 80 Ksi, and at a stress ratio (R) of 0.10. Limited test data for fastener diameters of 1.00 to 2.25 inches are compared to the acceptance criterion. Sensitivity of the fatigue life of threads to test nut geometry variables was also shown to be dependent on notch stress conditions. Bearing-surface concavity of the compression nuts and thread-flank contact mismatch conditions can significantly affect the fastener fatigue life. Without improved controls these conditions could potentially provide misleading acceptance data. Alternate test nut geometry features are described and implemented in the rolled thread stud specification, MIL-DTL-24789(SH), to mitigate the potential effects on fatigue acceptance data.

  7. A Bibliography of Writings on Distance Education.

    ERIC Educational Resources Information Center

    Holmberg, Borje

    This bibliography lists over 1,400 publications on distance education primarily from the 1960s, 1970s, and 1980s. The selection criterion is what the compiler, a scholar and a practitioner, found relevant. Books in English dominate; a considerable number of works in German are listed; and some in French, Spanish, and Scandinavian languages are…

  8. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated into the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L(sub 1) or L(sub infinity) norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
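The L1 reformulation mentioned above turns the minimum-distance computation into a linear program: minimize the sum of auxiliary variables t bounding |x - y| componentwise, with x constrained to one polyhedron and y to the other. A sketch for two axis-aligned boxes using scipy (general convex polyhedra would contribute their own inequality rows; the helper name is an assumption):

```python
import numpy as np
from scipy.optimize import linprog

def min_l1_distance(bounds_a, bounds_b):
    """Minimum L1 distance between two axis-aligned boxes, posed as the LP
    min sum(t)  s.t.  -t <= x - y <= t,  x in box A,  y in box B."""
    n = len(bounds_a)
    # variable layout: [x (n), y (n), t (n)]
    c = np.concatenate([np.zeros(2 * n), np.ones(n)])
    I = np.eye(n)
    # rows encode  x - y - t <= 0  and  -x + y - t <= 0
    A_ub = np.block([[I, -I, -I], [-I, I, -I]])
    b_ub = np.zeros(2 * n)
    bounds = list(bounds_a) + list(bounds_b) + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return float(res.fun)

# unit box at the origin vs. a unit box shifted by 3 along the first axis
d = min_l1_distance([(0, 1), (0, 1)], [(3, 4), (0, 1)])
```

The two boxes overlap in the second coordinate, so the entire L1 gap comes from the first coordinate (3 - 1 = 2).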

  9. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.

  10. Flaw Tolerance In Lap Shear Brazed Joints. Part 2

    NASA Technical Reports Server (NTRS)

    Wang, Len; Flom, Yury

    2003-01-01

    This paper presents results of the second part of an on-going effort to gain better understanding of defect tolerance in braze joints. In the first part of this three-part series, we mechanically tested and modeled the strength of the lap joints as a function of the overlap distance. A failure criterion was established based on the zone damage theory, which predicts the dependence of the lap joint shear strength on the overlap distance, based on the critical size of a finite damage zone or an overloaded region in the joint. In this second part of the study, we experimentally verified the applicability of the damage zone criterion on prediction of the shear strength of the lap joint and introduced controlled flaws into the lap joints. The purpose of the study was to evaluate the lap joint strength as a function of flaw size and its location through mechanical testing and nonlinear finite element analysis (FEA) employing damage zone criterion for definition of failure. The results obtained from the second part of the investigation confirmed that the failure of the ductile lap shear brazed joints occurs when the damage zone reaches approximately 10% of the overlap width. The same failure criterion was applicable to the lap joints containing flaws.

  11. Optimization of solar cell contacts by system cost-per-watt minimization

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1977-01-01

    New, and considerably altered, optimum dimensions for solar-cell metallization patterns are found using the recently developed procedure whose optimization criterion is the minimum cost-per-watt effect on the entire photovoltaic system. It is also found that the optimum shadow fraction by the fine grid is independent of metal cost and resistivity as well as cell size. The optimum thickness of the fine grid metal depends on all these factors, and in familiar cases it should be appreciably greater than that found by less complete analyses. The optimum bus bar thickness is much greater than those generally used. The cost-per-watt penalty due to the need for increased amounts of metal per unit area on larger cells is determined quantitatively and thereby provides a criterion for the minimum benefits that must be obtained in other process steps to make larger cells cost effective.

  12. A Review of ETS Differential Item Functioning Assessment Procedures: Flagging Rules, Minimum Sample Size Requirements, and Criterion Refinement. Research Report. ETS RR-12-08

    ERIC Educational Resources Information Center

    Zwick, Rebecca

    2012-01-01

    Differential item functioning (DIF) analysis is a key component in the evaluation of the fairness and validity of educational tests. The goal of this project was to review the status of ETS DIF analysis procedures, focusing on three aspects: (a) the nature and stringency of the statistical rules used to flag items, (b) the minimum sample size…

  13. Distance Measurements In X-Ray Pictures

    NASA Astrophysics Data System (ADS)

    Forsgren, Per-Ola

    1987-10-01

    In this paper, a measurement method for the distance between binary objects is presented. It has been developed for a specific purpose, the evaluation of rheumatic disease, but should also be useful in other applications. It is based on a distance map of the area between binary objects. A skeleton is extracted from the distance map by searching for local maxima. The distance measure is based on the average of skeleton points in a defined measurement area. An objective criterion for selection of measurement points on the skeleton is proposed. Preliminary results indicate that good repeatability is attained.
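The distance-map/skeleton idea can be sketched with scipy's Euclidean distance transform; the image and the row-wise local-maximum "skeleton" below are simplifications of the paper's method:

```python
import numpy as np
from scipy import ndimage

# binary image with two objects (e.g. bone contours) separated by a gap
img = np.zeros((7, 11), dtype=bool)
img[:, :3] = True    # object 1 occupies the left columns
img[:, 8:] = True    # object 2 occupies the right columns

# distance map over the background area between the objects
dist = ndimage.distance_transform_edt(~img)
gap = dist.copy()
gap[img] = 0.0

# crude skeleton: per row, the background pixel where the map peaks
# (the ridge midway between the two objects)
ridge_cols = gap.argmax(axis=1)
skeleton_vals = gap[np.arange(gap.shape[0]), ridge_cols]
distance_measure = float(skeleton_vals.mean())  # average over skeleton points
```

Here the ridge sits in the middle column of the gap, three pixels from either object, so the averaged skeleton value reports half the object-to-object separation.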

  14. Development and evaluation of a gyroscope-based wheel rotation monitor for manual wheelchair users.

    PubMed

    Hiremath, Shivayogi V; Ding, Dan; Cooper, Rory A

    2013-07-01

    To develop and evaluate a wireless gyroscope-based wheel rotation monitor (G-WRM) that can estimate speeds and distances traveled by wheelchair users during regular wheelchair propulsion as well as wheelchair sports such as handcycling, and provide users with real-time feedback through a smartphone application. The speeds and the distances estimated by the G-WRM were compared with the criterion measures by calculating absolute difference, mean difference, and percentage errors during a series of laboratory-based tests. Intraclass correlations (ICC) and the Bland-Altman plots were also used to assess the agreements between the G-WRM and the criterion measures. In addition, battery life and wireless data transmission tests under a number of usage conditions were performed. The percentage errors for the angular velocities, speeds, and distances obtained from three prototype G-WRMs were less than 3% for all the test trials. The high ICC values (ICC (3,1) > 0.94) and the Bland-Altman plots indicate excellent agreement between the estimated speeds and distances by the G-WRMs and the criterion measures. The battery life tests showed that the device could last for 35 hours in wireless mode and 139 hours in secure digital card mode. The wireless data transmission tests indicated less than 0.3% of data loss. The results indicate that the G-WRM is an appropriate tool for tracking a spectrum of wheelchair-related activities from regular wheelchair propulsion to wheelchair sports such as handcycling. The real-time feedback provided by the G-WRM can help wheelchair users self-monitor their everyday activities.
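Distance estimation from a wheel-mounted gyroscope reduces to integrating the angular velocity about the wheel axis and scaling by the wheel radius; a sketch with invented sample data (not the G-WRM firmware):

```python
import numpy as np

def wheel_distance(gyro_deg_per_s, dt, wheel_radius_m):
    """Integrate gyro angular velocity (deg/s) about the wheel axis into
    the distance traveled: arc length = total rotation angle * radius."""
    omega = np.radians(gyro_deg_per_s)   # rad/s
    angle = np.sum(omega * dt)           # total wheel rotation, rad
    return float(angle * wheel_radius_m)

# hypothetical: 10 s of steady propulsion at 90 deg/s, sampled at 100 Hz,
# on a 0.3 m radius wheelchair wheel
samples = np.full(1000, 90.0)
dist = wheel_distance(samples, dt=0.01, wheel_radius_m=0.3)
```

The average speed follows directly as `dist / 10.0`, which is how speed feedback can be derived from the same samples.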

  15. Investigating ground vibration to calculate the permissible charge weight for blasting operations of Gotvand-Olya dam underground structures / Badania drgań gruntu w celu określenia dopuszczalnego ciężaru ładunku wybuchowego przy pracach strzałowych w podziemnych elementach tamy w Gotvand-Olya

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Bakhshandeh Amnieh, Hassan; Bahadori, Moein

    2012-12-01

    Ground vibration, air vibration, fly rock, undesirable displacement and fragmentation are some inevitable side effects of blasting operations that can cause serious damage to the surrounding environment. Peak Particle Velocity (PPV) is the main criterion in the assessment of the amount of damage caused by ground vibration. There are different standards for the determination of the safe level of the PPV. To calculate the permissible amount of explosive to control the damage to the underground structures of Gotvand Olya dam, use was made of sixteen 3-component records (48 in total) generated from 4 blasts. These operations were recorded in 3 directions (radial, transverse and vertical) by four PG-2002 seismographs having GS-11D 3-component seismometers, and the records were analyzed with the help of the DADISP software. To predict the PPV, use was made of the scaled distance and the Simulated Annealing (SA) hybrid methods. Using the scaled distance resulted in a relation for the prediction of the PPV; the precision of the relation was then increased to 0.94 with the help of the SA hybrid method. Relying on the high correlation of this relation and considering a minimum distance of 56.2 m to the center of the blast site and a permissible PPV of 178 mm/s (for a 2-day old concrete), the maximum charge weight per delay came out to be 212 kg.
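Inverting a square-root scaled-distance attenuation law of the common form PPV = k (d / sqrt(W))^(-b) for the permissible charge W can be sketched as follows; the site constants k and b here are invented placeholders, not the fitted values from the paper:

```python
def max_charge_per_delay(d_min, ppv_limit, k, b):
    """Invert the scaled-distance attenuation law
    PPV = k * (d / sqrt(W))**(-b) for the permissible charge weight W.
    k and b are site constants fitted from recorded blasts."""
    sd = (ppv_limit / k) ** (-1.0 / b)   # permissible scaled distance d/sqrt(W)
    return (d_min / sd) ** 2

# hypothetical site constants; distance and PPV limit follow the abstract
k, b = 2500.0, 1.6
w = max_charge_per_delay(d_min=56.2, ppv_limit=178.0, k=k, b=b)
```

A quick sanity check is the round trip: plugging the returned W back into the attenuation law must reproduce the permissible PPV at the minimum distance.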

  16. Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.

    ERIC Educational Resources Information Center

    Wang, Yuh-Yin Wu; Schafer, William D.

    This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…

  17. 47 CFR 73.807 - Minimum distance separation between stations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and the right-hand column lists (for informational purposes only) the minimum distance necessary for...) Within 320 km of the Mexican border, LP100 stations must meet the following separations with respect to any Mexican stations: Mexican station class Co-channel (km) First-adjacent channel (km) Second-third...

  18. Cold weather paving requirements for bituminous concrete.

    DOT National Transportation Integrated Search

    1973-01-01

    Cold weather paving specifications were developed from work by Corlew and Dickson, who used a computer solution to predict the cooling rate of bituminous concrete. Virginia had used a minimum atmospheric temperature as a criterion; however, it was ev...

  19. Noiseless method for checking the Peres separability criterion by local operations and classical communication

    NASA Astrophysics Data System (ADS)

    Bai, Yan-Kui; Li, Shu-Shen; Zheng, Hou-Zhi

    2005-11-01

    We present a method for checking the Peres separability criterion in an arbitrary bipartite quantum state ρ_AB within a local operations and classical communication scenario. The method does not require the noise operation that is otherwise needed to make the partial transposition map physically implementable. The main task for the two observers, Alice and Bob, is to measure some specific functions of the partially transposed matrix. With these functions, they can determine the eigenvalues of ρ_AB^(T_B), among which the minimum serves as an entanglement witness.

  20. Uncertainty, imprecision, and the precautionary principle in climate change assessment.

    PubMed

    Borsuk, M E; Tomassini, L

    2005-01-01

    Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.

  1. On the design of turbo codes

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1995-01-01

    In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of the convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, with from 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10^-6 for throughputs from 1/15 up to 4 bits/s/Hz.
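
    The role of a primitive feedback polynomial can be illustrated with a toy recursive systematic convolutional (RSC) constituent encoder. The memory-2 code below, with feedback 1 + D + D^2 (octal 7, primitive) and feedforward 1 + D^2 (octal 5), is a standard textbook constituent, not one of the article's new designs: the weight-2 input 1 + D^3 is divisible by the feedback polynomial, so it returns the encoder to the zero state and yields a finite-weight parity burst, exactly the event the effective-free-distance criterion targets.

```python
def rsc_encode(bits):
    # Rate-1/2 RSC encoder: feedback 1 + D + D^2 (octal 7, primitive),
    # feedforward 1 + D^2 (octal 5). Illustrative textbook constituent.
    s1 = s2 = 0                  # shift-register state
    out = []
    for u in bits:
        a = u ^ s1 ^ s2          # recursive (feedback) bit
        p = a ^ s2               # parity bit from 1 + D^2
        out.append((u, p))       # (systematic, parity) pair
        s1, s2 = a, s1
    return out, (s1, s2)

# weight-2 input 1 + D^3: divisible by the feedback polynomial, so the
# encoder terminates in the zero state after a short parity burst
pairs, state = rsc_encode([1, 0, 0, 1])
```

    The parity weight produced by such shortest divisible weight-2 inputs is what the interleaver design tries to keep large across both constituent encoders.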

  2. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear-minimum-distance property is sensitive to the proportion of degree-2 variable nodes. The derived ensemble weight enumerators show that the degree-distribution condition for linear minimum distance in unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  3. Automated Reconstruction of Neural Trees Using Front Re-initialization

    PubMed Central

    Mukherjee, Amit; Stepanyants, Armen

    2013-01-01

    This paper proposes a greedy algorithm for automated reconstruction of neural arbors from light microscopy stacks of images. The algorithm is based on the minimum cost path method. While the minimum cost path, obtained using the Fast Marching Method, results in a trace with the least cumulative cost between the start and the end points, it is not sufficient for the reconstruction of neural trees. This is because sections of the minimum cost path can erroneously travel through the image background with undetectable detriment to the cumulative cost. To circumvent this problem we propose an algorithm that grows a neural tree from a specified root by iteratively re-initializing the Fast Marching fronts. The speed image used in the Fast Marching Method is generated by computing the average outward flux of the gradient vector flow field. Each iteration of the algorithm produces a candidate extension by allowing the front to travel a specified distance and then tracking from the farthest point of the front back to the tree. Robust likelihood ratio test is used to evaluate the quality of the candidate extension by comparing voxel intensities along the extension to those in the foreground and the background. The qualified extensions are appended to the current tree, the front is re-initialized, and Fast Marching is continued until the stopping criterion is met. To evaluate the performance of the algorithm we reconstructed 6 stacks of two-photon microscopy images and compared the results to the ground truth reconstructions by using the DIADEM metric. The average comparison score was 0.82 out of 1.0, which is on par with the performance achieved by expert manual tracers. PMID:24386539
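
    A rough discrete analogue of the minimum cost path step is Dijkstra's algorithm on a pixel grid (the paper uses continuous Fast Marching on a speed image; the grid and cost values below are invented for illustration):

```python
import heapq

def min_cost_path(cost, start, goal):
    # Dijkstra's algorithm on a 4-connected pixel grid: a discrete
    # stand-in for the continuous Fast Marching minimum-cost path.
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal             # back-track from goal to root
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# a high-cost ridge (9s) forces the path around it; with a poor cost
# image the cheapest path can likewise detour through "background"
grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
path, total = min_cost_path(grid, (0, 0), (0, 2))
```

    The detour through low-cost cells mirrors the failure mode the paper addresses: the globally cheapest path may leave the neurite, which is why the authors add front re-initialization and a likelihood-ratio test on candidate extensions.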

  4. Impact of the reduced vertical separation minimum on the domestic United States

    DOT National Transportation Integrated Search

    2009-01-31

    Aviation regulatory bodies have enacted the reduced vertical separation minimum standard over most of the globe. The reduced vertical separation minimum is a technique that reduces the minimum vertical separation distance between aircraft from 2000 t...

  5. Online Distance Teaching of Undergraduate Finance: A Case for Musashi University and Konan University, Japan

    ERIC Educational Resources Information Center

    Kubota, Keiichi; Fujikawa, Kiyoshi

    2007-01-01

    We implemented a synchronous distance course entitled: Introductory Finance designed for undergraduate students. This course was held between two Japanese universities. Stable Internet connections allowing minimum delay and minimum interruptions of the audio-video streaming signals were used. Students were equipped with their own PCs with…

  6. Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey

    1995-01-01

    An efficient method is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present method requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. Taguchi methods are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using Taguchi methods are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.
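
    The low-order curve fit through the key Pareto points can be sketched as an exact quadratic through three points via Lagrange interpolation; the (mass, frequency) values below are invented, not the paper's truss data.

```python
def quadratic_through(p0, p1, p2):
    # Exact quadratic y(x) through three (x, y) points via Lagrange
    # interpolation -- the kind of low-order curve fit used to
    # approximate the Pareto-optimal design set.
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    def f(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return f

# hypothetical (mass, frequency) key points: minimum-mass design,
# maximum frequency-to-mass-ratio design, maximum-frequency design
pareto = quadratic_through((100.0, 2.0), (180.0, 5.0), (300.0, 6.0))
```

    Evaluating `pareto(mass)` then predicts the frequency of an optimized truss at intermediate mass values, which is how the fitted curve substitutes for re-optimizing every design.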

  7. Protograph LDPC Codes with Node Degrees at Least 3

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds of the proposed rate 1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance and thereby achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes of fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate 1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This combined constraint must be connected to nodes of degree at least 3 for the graph to have linear minimum distance. Thus, having node degree at least 3 at rate 1/2 guarantees that the linear-minimum-distance property is preserved at higher rates. Through examples we show that an iterative decoding threshold as low as 0.544 dB can be achieved for small protographs with node degrees at least 3. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  8. Live Authority in the Classroom in Video Conference-Based Synchronous Distance Education: The Teaching Assistant

    ERIC Educational Resources Information Center

    Karal, Hasan; Çebi, Ayça; Turgut, Yigit Emrah

    2010-01-01

    The aim of this study was to define the role of the assistant in a classroom environment where students are taught using video conference-based synchronous distance education. Qualitative research approach was adopted and, among purposeful sampling methods, criterion sampling method was preferred in the scope of the study. The study was carried…

  9. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    PubMed

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions about treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from baseline, the average difference from baseline, and the level evolution are the considered endpoints. Specific validation criteria, based on a standardized distance in means and variances within plus or minus 10%, were used to compare the real and the simulated data. The validity criterion was met by all models for individual endpoints. However, only two models met the validity criterion when all endpoints were considered together. The model based on the assumption that within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance within plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
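
    One plausible reading of the mean-and-variance criterion (our interpretation of the abstract, not the authors' exact formula) can be sketched as:

```python
import statistics

def standardized_distances(real, sim):
    # Standardized distances in mean and in variance between observed
    # and simulated samples -- a simplified reading of the criterion,
    # not necessarily the authors' exact definition.
    d_mean = (statistics.mean(sim) - statistics.mean(real)) / statistics.stdev(real)
    d_var = (statistics.variance(sim) - statistics.variance(real)) / statistics.variance(real)
    return d_mean, d_var

def model_is_valid(real, sim, tol=0.10):
    # accept the simulated endpoint when both distances fall within
    # plus or minus 10% (the tolerance quoted in the abstract)
    d_mean, d_var = standardized_distances(real, sim)
    return abs(d_mean) <= tol and abs(d_var) <= tol
```

    Applied per endpoint, such a check reproduces the two-stage logic of the study: every model may pass on some endpoint, while only models matching all endpoints simultaneously survive.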

  10. Interspecific geographic range size-body size relationship and the diversification dynamics of Neotropical furnariid birds.

    PubMed

    Inostroza-Michael, Oscar; Hernández, Cristián E; Rodríguez-Serrano, Enrique; Avaria-Llautureo, Jorge; Rivadeneira, Marcelo M

    2018-05-01

    Among the earliest macroecological patterns documented is the relationship between geographic range size and body size, characterized by a minimum geographic range size imposed by the species' body size. This boundary for the geographic range size increases linearly with body size and has been proposed to have implications for lineage evolution and conservation. Nevertheless, the macroevolutionary processes involved in the origin of this boundary and its consequences for lineage diversification have been poorly explored. We evaluate the macroevolutionary consequences of the difference (hereafter the distance) between the observed and the minimum range sizes required by the species' body size, to untangle its role in the diversification of a species-rich Neotropical bird clade using trait-dependent diversification models. We show that speciation rate is a positive hump-shaped function of the distance to the lower boundary. Species with the highest and lowest distances to the minimum range size had lower speciation rates, while species at intermediate distances had the highest speciation rates. Further, our results suggest that the distance to the minimum range size is a macroevolutionary constraint that affects the diversification process responsible for the origin of this macroecological pattern in a more complex way than previously envisioned. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.

  11. Dynamic Portfolio Strategy Using Clustering Approach

    PubMed Central

    Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian

    2017-01-01

    The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market. PMID:28129333
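
    The MST construction underlying the network step can be sketched with Prim's algorithm on a synthetic correlation matrix; the transform d = sqrt(2(1 - rho)) is the standard correlation-to-distance metric for stock MST networks, and the correlations below are invented.

```python
import math

def mst_edges(dist):
    # Prim's algorithm on a full distance matrix: builds the MST from
    # which central and peripheral stocks are selected.
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        d, i, j = min((dist[i][j], i, j)
                      for i in in_tree for j in range(n) if j not in in_tree)
        edges.append((i, j, d))
        in_tree.add(j)
    return edges

# synthetic correlation matrix for three stocks (illustrative values)
rho = [[1.0, 0.8, 0.1],
       [0.8, 1.0, 0.2],
       [0.1, 0.2, 1.0]]
# standard correlation-to-distance transform: d = sqrt(2 * (1 - rho))
dist = [[math.sqrt(2.0 * (1.0 - r)) for r in row] for row in rho]
edges = mst_edges(dist)
```

    Node degree on the resulting tree (here stock 1 sits on both edges) is one of the five topological parameters the strategy uses to rank stocks as central or peripheral.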

  12. Dynamic Portfolio Strategy Using Clustering Approach.

    PubMed

    Ren, Fei; Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian

    2017-01-01

    The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market.

  13. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.

  14. On the minimum orbital intersection distance computation: a new effective method

    NASA Astrophysics Data System (ADS)

    Hedo, José M.; Ruíz, Manuel; Peláez, Jesús

    2018-06-01

    The computation of the Minimum Orbital Intersection Distance (MOID) is an old, but increasingly relevant problem. Fast and precise methods for MOID computation are needed to select potentially hazardous asteroids from a large catalogue. The same applies to debris with respect to spacecraft. An iterative method that strictly meets these two premises is presented.
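
    A brute-force sketch conveys the underlying minimization (coplanar ellipses only, and an O(n^2) grid search; the paper's method is iterative, far faster, and handles full 3-D orbit geometries):

```python
import math

def orbit_point(a, e, argp, theta):
    # Point on a planar Keplerian ellipse with focus at the origin:
    # r = a(1 - e^2) / (1 + e cos(theta)), rotated by the argument of
    # periapsis argp.
    r = a * (1.0 - e * e) / (1.0 + e * math.cos(theta))
    return r * math.cos(theta + argp), r * math.sin(theta + argp)

def moid_grid(orbit1, orbit2, n=360):
    # Exhaustive grid search over both true anomalies: a crude stand-in
    # for the fast iterative MOID method described above.
    pts2 = [orbit_point(*orbit2, 2.0 * math.pi * j / n) for j in range(n)]
    best = float("inf")
    for i in range(n):
        x1, y1 = orbit_point(*orbit1, 2.0 * math.pi * i / n)
        for x2, y2 in pts2:
            best = min(best, math.hypot(x1 - x2, y1 - y2))
    return best

# two coplanar circular orbits of radii 1 and 2: the MOID is exactly 1
moid = moid_grid((1.0, 0.0, 0.0), (2.0, 0.0, 0.0))
```

    Screening a large catalogue of asteroids or debris against this quadratic cost per object pair is exactly why fast, precise MOID algorithms are needed.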

  15. Reading Skill and the Minimum Distance Principle: A Comparison of Sentence Comprehension in Context and in Isolation.

    ERIC Educational Resources Information Center

    Goldman, Susan R.

    The comprehension of the Minimum Distance Principle was examined in three experiments, using the "tell/promise" sentence construction. Experiment one compared the listening and reading comprehension of singly presented sentences, e.g. "John tells Bill to bake the cake" and "John promises Bill to bake the cake." The…

  16. 30 CFR 77.807-3 - Movement of equipment; minimum distance from high-voltage lines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... high-voltage lines. 77.807-3 Section 77.807-3 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... WORK AREAS OF UNDERGROUND COAL MINES Surface High-Voltage Distribution § 77.807-3 Movement of equipment; minimum distance from high-voltage lines. When any part of any equipment operated on the surface of any...

  17. 30 CFR 77.807-2 - Booms and masts; minimum distance from high-voltage lines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-voltage lines. 77.807-2 Section 77.807-2 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... WORK AREAS OF UNDERGROUND COAL MINES Surface High-Voltage Distribution § 77.807-2 Booms and masts; minimum distance from high-voltage lines. The booms and masts of equipment operated on the surface of any...

  18. An opening criterion for dust gaps in protoplanetary discs

    NASA Astrophysics Data System (ADS)

    Dipierro, Giovanni; Laibe, Guillaume

    2017-08-01

    We aim to understand under which conditions a low-mass planet can open a gap in viscous dusty protoplanetary discs. For this purpose, we extend the theory of dust radial drift to include the contribution from the tides of an embedded planet and from the gas viscous forces. From this formalism, we derive (I) a grain-size-dependent criterion for dust gap opening in discs, (II) an estimate of the location of the outer edge of the dust gap and (III) an estimate of the minimum Stokes number above which low-mass planets are able to carve gaps that appear only in the dust disc. These analytical estimates are particularly helpful to appraise the minimum mass of a hypothetical planet carving gaps in discs observed at long wavelengths and high resolution. We validate the theory against 3D smoothed particle hydrodynamics simulations of planet-disc interaction in a broad range of dusty protoplanetary discs. We find a remarkable agreement between the theoretical model and the numerical experiments.

  19. Striking Distance Determined From High-Speed Videos and Measured Currents in Negative Cloud-to-Ground Lightning

    NASA Astrophysics Data System (ADS)

    Visacro, Silverio; Guimaraes, Miguel; Murta Vale, Maria Helena

    2017-12-01

    First and subsequent return strokes' striking distances (SDs) were determined for negative cloud-to-ground flashes from high-speed videos showing the development of positive and negative leaders and from the pre-return-stroke phase of currents measured along a short tower. To improve the results, a new criterion, consisting of a 4 A continuous-current threshold, was used for the initiation and propagation of the sustained upward connecting leader. An advanced approach developed from the combined use of this criterion and a reverse propagation procedure, which considers the calculated propagation speeds of the leaders, was applied and revealed that SDs determined solely from the first video frame showing the upward leader can be significantly underestimated. An original approach was proposed for a rough estimate of first strokes' SDs using only records of current. This approach combines the 4 A criterion with a representative composite three-dimensional propagation speed of 0.34 × 10^6 m/s for the leaders over the last 300 m of propagated distance. SDs determined under this approach proved consistent with those of the advanced procedure. The approach was applied to determine the SDs of 17 first return strokes of negative flashes measured at MCS, covering a wide peak-current range, from 18 to 153 kA. The estimated SDs exhibit very high dispersion and reveal great differences relative to the SDs estimated for subsequent return strokes and for strokes in triggered lightning.

  20. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly with block size, are better than those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
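
    The copy-and-permute construction behind these protograph ensembles can be sketched as follows; the base matrix, lift size, and random permutations are purely illustrative (real designs choose the permutations carefully, and general protographs also allow parallel edges, which this 0/1 sketch omits).

```python
import random

def lift_protograph(base, N, seed=0):
    # "Copy-and-permute" lifting: each edge (1-entry) of the protograph
    # base matrix becomes a random N x N permutation matrix; each
    # 0-entry becomes an N x N zero block.
    rng = random.Random(seed)
    m, n = len(base), len(base[0])
    H = [[0] * (n * N) for _ in range(m * N)]
    for r in range(m):
        for c in range(n):
            if base[r][c]:
                perm = list(range(N))
                rng.shuffle(perm)
                for k in range(N):
                    H[r * N + k][c * N + perm[k]] = 1
    return H

# toy base graph: 2 check nodes x 4 variable nodes (not from the paper)
base = [[1, 1, 1, 0],
        [0, 1, 1, 1]]
H = lift_protograph(base, N=8)
```

    Lifting preserves the protograph's node degrees exactly, which is why ensemble properties such as the proportion of degree-2 variable nodes carry over to every code built from the same protograph.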

  1. Non-classic multiscale modeling of manipulation based on AFM, in aqueous and humid ambient

    NASA Astrophysics Data System (ADS)

    Korayem, M. H.; Homayooni, A.; Hefzabad, R. N.

    2018-05-01

    To achieve precise manipulation, it is important to employ an accurate model that incorporates the size effect and the environmental conditions. In this paper, non-classical multiscale modeling is developed to investigate manipulation in vacuum, aqueous and humid ambients. The manipulation structure is divided into two parts, a macro-field (MF) and a nano-field (NF). The governing equations of the AFM components (the cantilever and tip) in the MF are derived based on the modified couple stress theory. The material length scale parameter is used to study the size effect. The fluid flow in the MF is modeled as Couette and creeping flows. The NF is modeled using molecular dynamics, with the Electro-Based (ELBA) model representing the ambient condition. Nanoparticles under the different conditions are considered to study the manipulation. The results indicate that the deflection predicted by the non-classical model is less than that of the classical one. Comparison of the distance travelled by the nanoparticle on the substrate shows that manipulation in the submerged condition is closest to ideal manipulation. The results for the humid condition illustrate that increasing the relative humidity (RH) decreases the manipulation force. Furthermore, the root mean square (RMS), used as a damage criterion, demonstrates that the submerged nanoparticle suffers the minimum damage, whereas the minimum manipulation force occurs in the most humid ambient.

  2. Minimal Polynomial Method for Estimating Parameters of Signals Received by an Antenna Array

    NASA Astrophysics Data System (ADS)

    Ermolaev, V. T.; Flaksman, A. G.; Elokhin, A. V.; Kuptsov, V. V.

    2018-01-01

    The effectiveness of the projection minimal polynomial method for solving the problem of determining the number of sources of signals acting on an antenna array (AA) with an arbitrary configuration and their angular directions has been studied. The method proposes estimating the degree of the minimal polynomial of the correlation matrix (CM) of the input process in the AA on the basis of a statistically validated root-mean-square criterion. Special attention is paid to the case of the ultrashort sample of the input process when the number of samples is considerably smaller than the number of AA elements, which is important for multielement AAs. It is shown that the proposed method is more effective in this case than methods based on the AIC (Akaike's Information Criterion) or minimum description length (MDL) criterion.
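
    For comparison, the classical eigenvalue-based MDL estimator (the Wax-Kailath form commonly benchmarked in this setting) can be sketched as follows; the eigenvalues and snapshot count below are synthetic.

```python
import math

def mdl_sources(eigs, N):
    # Wax-Kailath MDL estimate of the number of sources from the sample
    # correlation-matrix eigenvalues (sorted descending); N = number of
    # snapshots, p = number of array elements.
    p = len(eigs)
    best_k, best_val = 0, float("inf")
    for k in range(p):
        tail = eigs[k:]                                  # candidate noise eigenvalues
        a = sum(tail) / len(tail)                        # arithmetic mean
        g = math.exp(sum(math.log(x) for x in tail) / len(tail))  # geometric mean
        val = (-N * len(tail) * math.log(g / a)
               + 0.5 * k * (2 * p - k) * math.log(N))    # fit term + penalty
        if val < best_val:
            best_k, best_val = k, val
    return best_k

# synthetic spectrum: two signal eigenvalues above a flat noise floor
k_hat = mdl_sources([10.0, 8.0, 1.0, 1.0, 1.0, 1.0], N=1000)
```

    With many snapshots this estimator works well, but it degrades in the ultrashort-sample regime the paper targets, where the number of samples is far smaller than the number of array elements.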

  3. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    PubMed

    Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H

    2017-01-01

    In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
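
    The three criteria compared in the study, written in their usual Gaussian least-squares forms, differ only in the penalty term; with MRM the n entering these formulas is the number of pairwise distances (on the order of n(n-1)/2 unique pairs for n individuals), which is the sample-size inflation the abstract points to.

```python
import math

def aic(n, k, rss):
    # Akaike's information criterion for a Gaussian least-squares fit:
    # n observations, k estimated parameters, residual sum of squares rss.
    return n * math.log(rss / n) + 2.0 * k

def aicc(n, k, rss):
    # small-sample correction to AIC (requires n > k + 1)
    return aic(n, k, rss) + 2.0 * k * (k + 1) / (n - k - 1)

def bic(n, k, rss):
    # Bayesian information criterion: penalty grows with log(n)
    return n * math.log(rss / n) + k * math.log(n)
```

    For n above about 7 (log n > 2), BIC penalizes each extra parameter more heavily than AIC, yet the study finds all three criteria overfit with MRM because the inflated n and the sums of squares it partitions distort the delta values on which the rankings rest.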

  4. Molecular markers for establishing distinctness in vegetatively propagated crops: a case study in grapevine.

    PubMed

    Ibáñez, Javier; Vélez, M Dolores; de Andrés, M Teresa; Borrego, Joaquín

    2009-11-01

    Distinctness, uniformity and stability (DUS) testing of varieties is usually required to apply for Plant Breeders' Rights. This exam is currently carried out using morphological traits, where the establishment of distinctness through a minimum distance is the key issue. In this study, the possibility of using microsatellite markers for establishing the minimum distance in a vegetatively propagated crop (grapevine) has been evaluated. A collection of 991 accessions was studied with nine microsatellite markers and compared pair-wise, and the highest intra-variety distance and the lowest inter-variety distance were determined. The collection included 489 different genotypes, along with synonyms and sports. Average values for the number of alleles per locus (19), Polymorphic Information Content (0.764) and observed (0.773) and expected (0.785) heterozygosities indicated the high level of polymorphism existing in grapevine. The maximum intra-variety variability found was one allele between two accessions of the same variety, out of a total of 3,171 pair-wise comparisons. The minimum inter-variety variability found was two alleles, between two pairs of varieties, out of a total of 119,316 pair-wise comparisons. Based on these results, the minimum distance required to establish distinctness in grapevine with the nine microsatellite markers used could be set at two alleles. General rules for using the system to support the establishment of distinctness in vegetatively propagated crops are discussed.
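
    The pair-wise distance used here is simply the count of allele differences across loci. A sketch with a hypothetical genotype encoding (each locus as an unordered pair of allele sizes; the values are invented):

```python
from collections import Counter

def allele_distance(geno1, geno2):
    # Number of allele differences between two genotypes, where each
    # genotype is a list of (allele, allele) pairs, one per
    # microsatellite locus. The two alleles at a locus are compared as
    # an unordered multiset.
    diff = 0
    for locus1, locus2 in zip(geno1, geno2):
        shared = sum((Counter(locus1) & Counter(locus2)).values())
        diff += 2 - shared
    return diff

# hypothetical 3-locus genotypes (allele sizes in base pairs)
ref = [(230, 234), (181, 181), (140, 152)]
sport = [(230, 234), (181, 185), (140, 152)]   # one-allele somatic variant
other = [(228, 236), (181, 185), (140, 148)]   # a different variety
```

    Under the proposed rule, a distance of two alleles or more would establish distinctness, so the one-allele `sport` stays within its variety while `other` is distinct.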

  5. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code-block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. An LDPC code of any size can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides the minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) needed to achieve vanishing error rates as the code block size goes to infinity, for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
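    The copy-and-permute construction described above can be sketched for a binary base matrix; in this illustrative version each protograph edge is replicated N times and joined through a random permutation (practical designs use structured, e.g. circulant, permutations):

```python
import random

def lift_protograph(proto, N, seed=0):
    """Copy-and-permute lifting of a binary protograph matrix
    (rows = check-node types, cols = variable-node types) into an
    (rows*N) x (cols*N) parity-check matrix. Node degrees of the
    protograph are preserved in the lifted graph."""
    rng = random.Random(seed)
    rows, cols = len(proto), len(proto[0])
    H = [[0] * (cols * N) for _ in range(rows * N)]
    for i in range(rows):
        for j in range(cols):
            if proto[i][j]:
                perm = list(range(N))
                rng.shuffle(perm)   # one permutation per protograph edge
                for k in range(N):
                    H[i * N + perm[k]][j * N + k] = 1
    return H
```

    Because each edge becomes a permutation between the N copies, every lifted variable node inherits the degree of its protograph type, which is what makes the degree-2 proportion of the base matrix carry over to the full code.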

  6. Monte Carlo simulations on marker grouping and ordering.

    PubMed

    Wu, J; Jenkins, J; Zhu, J; McCarty, J; Watson, C

    2003-08-01

    Four global algorithms, maximum likelihood (ML), sum of adjacent LOD scores (SALOD), sum of adjacent recombinant fractions (SARF) and product of adjacent recombinant fractions (PARF), and one approximation algorithm, seriation (SER), were used to compare marker ordering efficiencies for correctly given linkage groups based on doubled haploid (DH) populations. The Monte Carlo simulation results indicated that the marker ordering powers of the five methods were almost identical. Correlation coefficients between grouping power and ordering power were greater than 0.99, indicating that all these methods for marker ordering were reliable. Therefore, the main problem for linkage analysis was how to improve the grouping power. Since the SER approach provided the advantage of speed without losing ordering power, this approach was used for detailed simulations. For more generality, multiple linkage groups were employed, and population size, linkage cutoff criterion, marker spacing pattern (even or uneven), and marker spacing distance (close or loose) were considered for obtaining acceptable grouping powers. Simulation results indicated that the grouping power was related to population size, marker spacing distance, and cutoff criterion. Generally, a large population size provided higher grouping power than a small population size, and closely linked markers provided higher grouping power than loosely linked markers. The cutoff criterion range for achieving acceptable grouping power and ordering power differed among cases; however, combining all the situations in this study, a cutoff criterion ranging from 50 cM to 60 cM is recommended for achieving acceptable grouping power and ordering power across different cases.

  7. The Impact of Age on Quality Measure Adherence in Colon Cancer

    PubMed Central

    Steele, Scott R.; Chen, Steven L.; Stojadinovic, Alexander; Nissan, Aviram; Zhu, Kangmin; Peoples, George E.; Bilchik, Anton

    2012-01-01

    BACKGROUND Recently, lymph node yield (LNY) has been endorsed as a quality measure of colon cancer (CC) resection adequacy. It is unclear whether this measure is relevant to all ages. We hypothesized that total LNY is negatively correlated with increasing age and overall survival (OS). STUDY DESIGN The Surveillance, Epidemiology and End Results (SEER) database was queried for all non-metastatic CC patients diagnosed from 1992–2004 (n=101,767), grouped by age (<40, 41–45, 46–50, and in 5-year increments until 86+ years). Proportions of patients meeting the 12 LNY minimum criterion were determined in each age group and analyzed with multivariate linear regression adjusting for demographics and AJCC 6th Edition stage. Overall survival comparisons in each age category were based on the guideline of 12 LNY. RESULTS Mean LNY decreased with increasing age (18.7 vs. 11.4 nodes/patient, youngest vs. oldest group, P<0.001). The proportion of patients meeting the 12 LNY criterion also declined with each incremental age group (61.9% vs. 35.2% compliance, youngest vs. oldest, P<0.001). Multivariate regression demonstrated a negative effect of each additional year of age on log(LNY), with a coefficient of −0.003 (95% CI −0.003 to −0.002). When stratified by age and nodal yield using the 12 LNY criterion, OS was lower for all age groups in Stage II CC with <12 LNY, and for each age group over 60 years with <12 LNY in Stage III CC (P<0.05). CONCLUSIONS Every attempt to adhere to proper oncological principles should be made at the time of CC resection, regardless of age. The prognostic significance of the 12 LN minimum criterion should be applied even to elderly CC patients. PMID:21601492

  8. Electrofishing distance needed to estimate consistent Index of Biotic Integrity (IBI) scores in raftable Oregon rivers

    EPA Science Inventory

    An important issue surrounding assessment of riverine fish assemblages is the minimum amount of sampling distance needed to adequately determine biotic condition. Determining adequate sampling distance is important because sampling distance affects estimates of fish assemblage c...

  9. New presentation method for magnetic resonance angiography images based on skeletonization

    NASA Astrophysics Data System (ADS)

    Nystroem, Ingela; Smedby, Orjan

    2000-04-01

    Magnetic resonance angiography (MRA) images are usually presented as maximum intensity projections (MIP), and the choice of viewing direction is then critical for the detection of stenoses. We propose a presentation method that uses skeletonization and distance transformations, which visualizes variations in vessel width independently of viewing direction. In the skeletonization, the object is reduced to a surface skeleton and further to a curve skeleton. The skeletal voxels are labeled with their distance to the original background. For the curve skeleton, the distance values correspond to the minimum radius of the object at that point, i.e., half the minimum diameter of the blood vessel at that level. The following image processing steps are performed: resampling to cubic voxels, segmentation of the blood vessels, skeletonization, and reverse distance transformation on the curve skeleton. The reconstructed vessels may be visualized with any projection method. Preliminary results are shown. They indicate that locations of possible stenoses may be identified by presenting the vessels as a structure with the minimum radius at each point.

  10. An Algorithm for Finding Candidate Synaptic Sites in Computer Generated Networks of Neurons with Realistic Morphologies

    PubMed Central

    van Pelt, Jaap; Carnell, Andrew; de Ridder, Sander; Mansvelder, Huibert D.; van Ooyen, Arjen

    2010-01-01

    Neurons make synaptic connections at locations where axons and dendrites are sufficiently close in space. Typically the required proximity is based on the dimensions of dendritic spines and axonal boutons. Based on this principle, one can search for those locations in networks formed by reconstructed neurons or computer generated neurons. Candidate synapses are then located where axons and dendrites are within a given criterion distance from each other. Both experimentally reconstructed and model generated neurons are usually represented morphologically by piecewise-linear structures (line pieces or cylinders). Proximity tests are then performed on all pairs of line pieces from both axonal and dendritic branches. Applying just a test on the distance between line pieces may result in local clusters of synaptic sites when more than one pair of nearby line pieces from axonal and dendritic branches is sufficiently close, and may introduce a dependency on the length scale of the individual line pieces. The present paper describes a new algorithm for defining locations of candidate synapses which is based on the crossing requirement of a line piece pair, while the orthogonal distance between the line pieces is subjected to the distance criterion for testing 3D proximity. PMID:21160548
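    The crossing-plus-orthogonal-distance test can be written compactly; the sketch below is one reading of the idea, not the authors' implementation: the common perpendicular between the two carrying lines is computed, accepted only if its feet fall inside both line pieces, and its length is then compared against the criterion distance.

```python
import math

def candidate_synapse(p1, p2, q1, q2, criterion):
    """Candidate-synapse test for two 3D line pieces (axonal p1-p2,
    dendritic q1-q2): the common perpendicular must cross both
    pieces, and its length must not exceed the criterion distance."""
    sub = lambda u, v: tuple(a - b for a, b in zip(u, v))
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    d1, d2, r = sub(p2, p1), sub(q2, q1), sub(p1, q1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    if denom == 0:
        return False                  # parallel pieces: no unique crossing
    s = (b * e - c * d) / denom       # parameter of the foot on p1-p2
    t = (a * e - b * d) / denom       # parameter of the foot on q1-q2
    if not (0.0 <= s <= 1.0 and 0.0 <= t <= 1.0):
        return False                  # crossing requirement violated
    w = tuple(ri + s * d1i - t * d2i for ri, d1i, d2i in zip(r, d1, d2))
    return math.sqrt(dot(w, w)) <= criterion
```

    Because the perpendicular foot is unique per pair of crossing pieces, this avoids the clusters of hits that a plain distance test produces along nearly parallel stretches.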

  11. Safety distance assessment of industrial toxic releases based on frequency and consequence: a case study in Shanghai, China.

    PubMed

    Yu, Q; Zhang, Y; Wang, X; Ma, W C; Chen, L M

    2009-09-15

    A case study on the safety distance assessment of a chemical industry park in Shanghai, China, is presented in this paper. Toxic releases were taken into consideration. A safety criterion based on the frequency and consequence of major hazard accidents was set up for consequence analysis. The exposure limits for accidents with frequencies of more than 10^-4, 10^-5 to 10^-4, and 10^-6 to 10^-5 per year were mortalities of 1% (or SLOT), 50% (SLOD) and 75% (twice SLOD), respectively. Accidents with a frequency of less than 10^-6 per year were considered incredible and ignored in the consequence analysis. Taking the safety distances of all the hazard installations in a chemical plant into consideration, the results based on the new criterion were almost all smaller than those based on LC50 or SLOD. The combination of the consequence- and risk-based results indicated that the hazard installations in two of the chemical plants may be dangerous to the protection targets, and measures had to be taken to reduce the risk. The case study showed that taking account of the frequency of occurrence in the consequence analysis gives more feasible safety distances for major hazard accidents, and the results were more comparable to those calculated by risk assessment.

  12. Hybrid Stochastic Models for Remaining Lifetime Prognosis

    DTIC Science & Technology

    2004-08-01

    literature for techniques and comparisons. Osogami and Harchol-Balter [70], Perros [73], Johnson [36], and Altiok [5] provide excellent summaries of...and type of PH-distribution approximation for c2 > 0.5 is not as obvious. In order to use the minimum distance estimation, Perros [73] indicated that...moment-matching techniques. Perros [73] indicated that the maximum likelihood and minimum distance techniques require nonlinear optimization. Johnson

  13. Roton Minimum as a Fingerprint of Magnon-Higgs Scattering in Ordered Quantum Antiferromagnets.

    PubMed

    Powalski, M; Uhrig, G S; Schmidt, K P

    2015-11-13

    A quantitative description of magnons in long-range ordered quantum antiferromagnets is presented which is consistent from low to high energies. It is illustrated for the generic S=1/2 Heisenberg model on the square lattice. The approach is based on a continuous similarity transformation in momentum space using the scaling dimension as the truncation criterion. Evidence is found for significant magnon-magnon attraction inducing a Higgs resonance. The high-energy roton minimum in the magnon dispersion appears to be induced by strong magnon-Higgs scattering.

  14. Minimum Bayes risk image correlation

    NASA Technical Reports Server (NTRS)

    Minter, T. C., Jr.

    1980-01-01

    In this paper, the problem of designing a matched filter for image correlation will be treated as a statistical pattern recognition problem. It is shown that, by minimizing a suitable criterion, a matched filter can be estimated which approximates the optimum Bayes discriminant function in a least-squares sense. It is well known that the use of the Bayes discriminant function in target classification minimizes the Bayes risk, which in turn directly minimizes the probability of a false fix. A fast Fourier implementation of the minimum Bayes risk correlation procedure is described.
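    The fast Fourier implementation rests on the identity that circular cross-correlation is an element-wise product in the frequency domain; a generic NumPy sketch (the Bayes-risk filter estimation itself is not reproduced here):

```python
import numpy as np

def fft_correlate(image, template):
    """Circular cross-correlation via the FFT (O(N log N) rather
    than O(N^2) for direct correlation). The correlation peak marks
    the best-matching template location."""
    F = np.fft.fft2(image)
    G = np.fft.fft2(template, s=image.shape)  # zero-pad the template
    return np.real(np.fft.ifft2(F * np.conj(G)))
```

    With a matched filter estimated by the least-squares procedure described above, the same frequency-domain product yields the discriminant surface whose maximum gives the fix.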

  15. 46 CFR 173.095 - Towline pull criterion.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... diameter in feet (meters). s=that fraction of the propeller circle cylinder which would be intercepted by... shaft centerline at rudder to towing bitts in feet (meters). Δ=displacement in long tons (metric tons). f=minimum freeboard along the length of the vessel in feet (meters). B=molded beam in feet (meters...

  16. What Is the Minimum Information Needed to Estimate Average Treatment Effects in Education RCTs?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2014-01-01

    Randomized controlled trials (RCTs) are considered the "gold standard" for evaluating an intervention's effectiveness. Recently, the federal government has placed increased emphasis on the use of opportunistic experiments. A key criterion for conducting opportunistic experiments, however, is that there is relatively easy access to data…

  17. The bingo model of survivorship: 1. probabilistic aspects.

    PubMed

    Murphy, E A; Trojak, J E; Hou, W; Rohde, C A

    1981-01-01

    A "bingo" model is one in which the pattern of survival of a system is determined by whichever of several components, each with its own particular distribution for survival, fails first. The model is motivated by the study of lifespan in animals. A number of properties of such systems are discussed in general. They include the use of a special criterion of skewness that probably corresponds more closely than traditional measures to what the eye observes in casually inspecting data. This criterion is the ratio, r(h), of the probability density at a point an arbitrary distance, h, above the mode to that an equal distance below the mode. If this ratio is positive for all positive arguments, the distribution is considered positively asymmetrical and conversely. Details of the bingo model are worked out for several types of base distributions: the rectangular, the triangular, the logistic, and by numerical methods, the normal, lognormal, and gamma.

  18. Deriving the number of jobs in proximity services from the number of inhabitants in French rural municipalities.

    PubMed

    Lenormand, Maxime; Huet, Sylvie; Deffuant, Guillaume

    2012-01-01

    We use a minimum requirement approach to derive the number of jobs in proximity services per inhabitant in French rural municipalities. We first classify the municipalities according to their time distance in minutes by car to the municipality where the inhabitants go the most frequently to get services (called MFM). For each set corresponding to a range of time distance to MFM, we perform a quantile regression estimating the minimum number of service jobs per inhabitant that we interpret as an estimation of the number of proximity jobs per inhabitant. We observe that the minimum number of service jobs per inhabitant is smaller in small municipalities. Moreover, for municipalities of similar sizes, when the distance to the MFM increases, the number of jobs of proximity services per inhabitant increases.

  19. Destructive examination of shipping package 9975-02644

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daugherty, W. L.

    Destructive and non-destructive examinations have been performed on the components of shipping package 9975-02644 as part of a comprehensive SRS surveillance program for plutonium material stored in the K-Area Complex (KAC). During the field surveillance inspection of this package in KAC, three non-conforming conditions were noted: the axial gap of 1.389 inch exceeded the 1 inch maximum criterion, the exposed height of the lead shield was greater than the 4.65 inch maximum criterion, and the difference between the upper assembly inside height and the exposed height of the lead shield was less than the 0.425 inch minimum criterion. All three of these observations relate to axial shrinkage of the lower fiberboard assembly. In addition, liquid water (condensation) was observed on the interior of the drum lid, the thermal blanket and the air shield.

  20. On thermonuclear ignition criterion at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Baolian; Kwan, Thomas J. T.; Wang, Yi-Ming

    2014-10-15

    Sustained thermonuclear fusion at the National Ignition Facility remains elusive. Although recent experiments approached or exceeded the anticipated ignition thresholds, the nuclear performance of the laser-driven capsules was well below predictions in terms of energy and neutron production. Such discrepancies between expectations and reality motivate a reassessment of the physics of ignition. We have developed a predictive analytical model from fundamental physics principles. Based on the model, we obtained a general thermonuclear ignition criterion in terms of the areal density and temperature of the hot fuel. This newly derived ignition threshold and its alternative forms explicitly show the minimum requirements of the hot fuel pressure, mass, areal density, and burn fraction for achieving ignition. Comparison of our criterion with existing theories, simulations, and the experimental data shows that our ignition threshold is more stringent than those in the existing literature and that our results are consistent with the experiments.

  1. Reliability Based Geometric Design of Horizontal Circular Curves

    NASA Astrophysics Data System (ADS)

    Rajbongshi, Pabitra; Kalita, Kuldeep

    2018-06-01

    Geometric design of a horizontal circular curve primarily involves the radius of the curve and the stopping sight distance at the curve section. The minimum radius is decided based on the lateral thrust exerted on the vehicles, and the minimum stopping sight distance is provided to maintain safety in the longitudinal direction of the vehicles. The available sight distance at a site can be regulated by changing the radius and the middle ordinate at the curve section. Both radius and sight distance depend on the design speed. The speed of vehicles at any road section is a variable parameter, and therefore the 98th percentile speed is normally taken as the design speed. This work presents a probabilistic approach for evaluating stopping sight distance, considering the variability of all input parameters of sight distance. It is observed that the 98th percentile sight distance value is much lower than the sight distance corresponding to the 98th percentile speed. The distribution of the sight distance parameter is also studied and found to follow a lognormal distribution. Finally, reliability based design charts are presented for both plain and hill regions, considering the effect of lateral thrust.
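    The probabilistic evaluation amounts to propagating input variability through the stopping-sight-distance formula by Monte Carlo; the sketch below uses hypothetical, illustrative distributions for speed, reaction time, and friction, not the paper's calibrated inputs:

```python
import random

def ssd(v_kmh, t_reaction, f, g=9.81):
    """Stopping sight distance (m) on level grade:
    perception-reaction distance plus braking distance."""
    v = v_kmh / 3.6                       # km/h -> m/s
    return v * t_reaction + v * v / (2 * g * f)

def percentile(samples, p):
    s = sorted(samples)
    return s[min(len(s) - 1, int(p / 100 * len(s)))]

random.seed(1)
n = 20000
# Illustrative input distributions (assumed values, not the paper's)
speeds = [random.gauss(80, 8) for _ in range(n)]      # km/h
times  = [random.gauss(2.5, 0.3) for _ in range(n)]   # s
fricts = [random.gauss(0.35, 0.03) for _ in range(n)] # friction coeff.

ssd_samples = [ssd(v, t, f) for v, t, f in zip(speeds, times, fricts)]
ssd_p98     = percentile(ssd_samples, 98)              # 98th %ile of SSD
ssd_at_v98  = ssd(percentile(speeds, 98), 2.5, 0.35)   # SSD at 98th %ile speed
```

    Comparing ssd_p98 with ssd_at_v98 reproduces the kind of analysis reported above: the percentile of the sight-distance distribution is generally not the sight distance computed at the percentile speed.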

  2. A unique concept for automatically controlling the braking action of wheeled vehicles during minimum distance stops

    NASA Technical Reports Server (NTRS)

    Barthlome, D. E.

    1975-01-01

    Test results of a unique automatic brake control system are outlined and a comparison is made of its mode of operation to that of an existing skid control system. The purpose of the test system is to provide automatic control of braking action such that hydraulic brake pressure is maintained at a near constant, optimum value during minimum distance stops.

  3. Relating Sensitivity and Criterion Effects to the Internal Mechanisms of Visual Spatial Attention

    DTIC Science & Technology

    1988-04-30

    Hughes & Zimba, 1987; Rizzolatti, Riggio, Descola & Umilta, 1987). Further, deficits for uncued locations are a function of the distance...Wilson, 1986; Rizzolatti, et al., 1987; Hughes & Zimba, 1987, argue that this effect depends upon the use of an articulated visual field). Distance...Hughes, H. & Zimba, L. (1987) Natural boundaries for the spatial spread of directed visual attention. Neuropsychologia, 25, 5-18. Jonides, J. (1976

  4. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. L(1) or L(infinity) norms are used to represent distance, and the problem becomes a linear programming problem. Then the stochastic problem is formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two subproblems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to predict the collision time.
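    For the deterministic case with constant velocities, the time of closest approach between two point representatives has a closed form; a small sketch (the article's linear-programming treatment of full polyhedra and the stochastic filtering are beyond this snippet):

```python
import math

def closest_approach(p1, v1, p2, v2):
    """Time of closest approach (clamped to the future, t >= 0) and
    the corresponding minimum distance for two points moving with
    constant velocities. Positions/velocities are tuples."""
    dp = tuple(a - b for a, b in zip(p1, p2))   # relative position
    dv = tuple(a - b for a, b in zip(v1, v2))   # relative velocity
    dv2 = sum(c * c for c in dv)
    if dv2 == 0:
        t = 0.0                                  # no relative motion
    else:
        t = max(0.0, -sum(a * b for a, b in zip(dp, dv)) / dv2)
    rel = tuple(a + t * b for a, b in zip(dp, dv))
    return t, math.sqrt(sum(c * c for c in rel))
```

    A predicted collision is then simply a minimum distance below the combined clearance radius before some horizon time.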

  5. State of dispersion of magnetic nanoparticles in an aqueous medium: experiments and Monte Carlo simulation.

    PubMed

    Kumar, Santosh; Ravikumar, Chettiannan; Bandyopadhyaya, Rajdip

    2010-12-07

    Monte Carlo simulation results predicting the state of dispersion (single, dimer, trimer, and so on) of coated superparamagnetic iron oxide (Fe3O4) nanoparticles in an aqueous medium are compared with our experimental data for the same. Measured values of the volume percentage of particles in the dispersion, core particle diameter, coating-shell thickness, grafting density of the coating agent, saturation magnetization, and zeta potential for the citric acid-coated and poly(acrylic acid) [PAA]-coated particles have been used in our simulation. The simulation was performed by calculating the total interaction potential between two nanoparticles as a function of their interparticle distance and applying a criterion for the two particles to aggregate, the criterion being that the depth of the secondary minimum in the total interaction potential must be at least equal to kBT. Simulation results successfully predicted both experimental trends: aggregates for citric acid-coated particles and an individual isolated state for PAA-coated particles. We have also investigated how this state changes for both kinds of coating agents by varying the particle volume percentage from 0.01 to 25%, the particle diameter from 2 to 19 nm, the shell thickness from 1 to 14 nm, and the grafting density from 10^15 to 10^22 molecules/m^2. We find that the use of a lower shell thickness and a higher particle volume percentage leads to the formation of larger aggregates. The possible range of values of these four variables, which can be used experimentally to prepare a stable aqueous dispersion of isolated particles, is recommended on the basis of predictions from our simulation.
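    The aggregation criterion reduces to locating the secondary (outermost) local minimum of the sampled pair potential and comparing its depth with kBT; a schematic version, assuming energies in units of kBT that vanish at large separation:

```python
def will_aggregate(potential_kT, kT=1.0):
    """Decide aggregation from a pair-interaction potential sampled
    from small to large separation (energies in units of kT, tending
    to zero at large distance). The criterion used here: the
    outermost (secondary) local minimum must be at least kT deep."""
    u = list(potential_kT)
    minima = [u[i] for i in range(1, len(u) - 1)
              if u[i - 1] > u[i] < u[i + 1]]
    if not minima:
        return False                 # purely repulsive beyond contact
    return -minima[-1] >= kT         # depth measured below zero
```

    In a full simulation this test would be evaluated for each candidate pair using the computed DLVO-plus-magnetic total potential; here the potential is simply passed in as a sampled curve.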

  6. A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Steinley, Douglas

    2007-01-01

    Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…

  7. 40 CFR 91.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...

  8. 40 CFR 91.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...

  9. 40 CFR 91.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...

  10. 40 CFR 91.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...

  11. "Good Work Awards:" Effects on Children's Families. Technical Report #12.

    ERIC Educational Resources Information Center

    Chun, Sherlyn; Mays, Violet

    This brief report describes parental reaction to a reinforcement strategy used with children in the Kamehameha Early Education Program (KEEP). Staff members report that "Good Work Awards" (GWAs) are viewed favorably by mothers of students. GWAs are dittoed notes sent home with children when they have met a minimum criterion for daily…

  12. 75 FR 48370 - Biweekly Notice Applications and Amendments to Facility Operating Licenses Involving No...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-10

    ... revise the minimum Emergency Diesel Generator (EDG) output voltage acceptance criterion in Surveillance... ensures the timely transfer of plant safety system loads to the Emergency Diesel Generators in the event a... from the emergency diesel generators in a timely manner. This change is needed to bring Fermi 2 into...

  13. Family Living and Parenthood. Performance Objectives and Criterion-Referenced Test Items.

    ERIC Educational Resources Information Center

    Missouri Univ., Columbia. Instructional Materials Lab.

    This guide was developed to assist home economics teachers in implementing the Missouri Vocational Instructional Management System into the home economics curriculum at the local level through a family living and parenthood semester course. The course contains a minimum of two performance objectives for each competency developed and validated by…

  14. 49 CFR 175.706 - Separation distances for undeveloped film from packages containing Class 7 (radioactive) materials.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 2 2014-10-01 2014-10-01 false Separation distances for undeveloped film from... Classification of Material § 175.706 Separation distances for undeveloped film from packages containing Class 7... film. Transport index Minimum separation distance to nearest undeveloped film for various times in...

  15. 49 CFR 175.706 - Separation distances for undeveloped film from packages containing Class 7 (radioactive) materials.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 2 2011-10-01 2011-10-01 false Separation distances for undeveloped film from... Classification of Material § 175.706 Separation distances for undeveloped film from packages containing Class 7... film. Transport index Minimum separation distance to nearest undeveloped film for various times in...

  16. 49 CFR 175.706 - Separation distances for undeveloped film from packages containing Class 7 (radioactive) materials.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 2 2013-10-01 2013-10-01 false Separation distances for undeveloped film from... Classification of Material § 175.706 Separation distances for undeveloped film from packages containing Class 7... film. Transport index Minimum separation distance to nearest undeveloped film for various times in...

  17. 49 CFR 175.706 - Separation distances for undeveloped film from packages containing Class 7 (radioactive) materials.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 2 2012-10-01 2012-10-01 false Separation distances for undeveloped film from... Classification of Material § 175.706 Separation distances for undeveloped film from packages containing Class 7... film. Transport index Minimum separation distance to nearest undeveloped film for various times in...

  18. 49 CFR 175.706 - Separation distances for undeveloped film from packages containing Class 7 (radioactive) materials.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Separation distances for undeveloped film from... Classification of Material § 175.706 Separation distances for undeveloped film from packages containing Class 7... film. Transport index Minimum separation distance to nearest undeveloped film for various times in...

  19. Similarity analysis of spectra obtained via reflectance spectrometry in legal medicine.

    PubMed

    Belenki, Liudmila; Sterzik, Vera; Bohnert, Michael

    2014-02-01

    In the present study, a series of reflectance spectra of postmortem lividity, pallor, and putrefaction-affected skin, for 195 investigated cases in the course of cooling down of the corpse, has been collected. The reflectance spectrometric measurements were stored together with their respective metadata in a MySQL database, which is managed via a scientific information repository. We propose similarity measures and a criterion of similarity that capture similar spectra recorded at corpse skin. We systematically clustered reflectance spectra from the database, as well as their metadata, such as case number, age, sex, skin temperature, duration of cooling, and postmortem time, with respect to the given criterion of similarity. Altogether, more than 500 reflectance spectra have been compared pairwise. The measures used to compare a pair of reflectance curve samples include the Euclidean distance between the curves and the Euclidean distance between the derivatives of the functions represented by the reflectance curves at the same wavelengths, in the spectral range of visible light between 380 and 750 nm. For each case, using the recorded reflectance curves and the similarity criterion, the postmortem time interval during which a characteristic change in the shape of the reflectance spectrum takes place is estimated. The latter is carried out via a software package composed of Java, Python, and MatLab scripts that query the MySQL database. We show that, in legal medicine, matching and clustering of reflectance curves obtained by means of reflectance spectrometry with respect to a given criterion of similarity can be used to estimate the postmortem interval.
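    Both similarity measures are elementary to compute for spectra sampled at common wavelengths; a minimal sketch (not the authors' Java/Python/MatLab package):

```python
import math

def euclidean_distance(s1, s2):
    """Euclidean distance between two reflectance spectra sampled
    at the same wavelengths."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(s1, s2)))

def derivative_distance(s1, s2, step=1.0):
    """Euclidean distance between finite-difference derivatives,
    comparing spectral shape rather than absolute reflectance level."""
    d1 = [(y2 - y1) / step for y1, y2 in zip(s1, s1[1:])]
    d2 = [(y2 - y1) / step for y1, y2 in zip(s2, s2[1:])]
    return euclidean_distance(d1, d2)
```

    A similarity criterion of the kind described above would then threshold one or both distances; the threshold values themselves would have to come from the empirical clustering.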

  20. Minimum separation distances for natural gas pipeline and boilers in the 300 area, Hanford Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daling, P.M.; Graham, T.M.

    1997-08-01

    The U.S. Department of Energy (DOE) is proposing actions to reduce energy expenditures and improve energy system reliability at the 300 Area of the Hanford Site. These actions include replacing the centralized heating system with heating units for individual buildings or groups of buildings, constructing a new natural gas distribution system to provide a fuel source for many of these units, and constructing a central control building to operate and maintain the system. The individual heating units will include steam boilers that are to be housed in individual annex buildings located at some distance from nearby 300 Area nuclear facilities. This analysis develops the basis for siting the package boilers and natural gas distribution systems to be used to supply steam to 300 Area nuclear facilities. The effects of four potential fire and explosion scenarios involving the boiler and natural gas pipeline were quantified to determine minimum separation distances that would reduce the risks to nearby nuclear facilities. The resulting minimum separation distances are shown in Table ES.1.

  1. 30 CFR 75.1107-9 - Dry chemical devices; capacity; minimum requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Dry chemical devices; capacity; minimum... Dry chemical devices; capacity; minimum requirements. (a) Dry chemical fire extinguishing systems used...; (3) Hose and pipe shall be as short as possible; the distance between the chemical container and...

  2. 30 CFR 75.1107-9 - Dry chemical devices; capacity; minimum requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Dry chemical devices; capacity; minimum... Dry chemical devices; capacity; minimum requirements. (a) Dry chemical fire extinguishing systems used...; (3) Hose and pipe shall be as short as possible; the distance between the chemical container and...

  3. 30 CFR 75.1107-9 - Dry chemical devices; capacity; minimum requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Dry chemical devices; capacity; minimum... Dry chemical devices; capacity; minimum requirements. (a) Dry chemical fire extinguishing systems used...; (3) Hose and pipe shall be as short as possible; the distance between the chemical container and...

  4. 30 CFR 75.1107-9 - Dry chemical devices; capacity; minimum requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Dry chemical devices; capacity; minimum... Dry chemical devices; capacity; minimum requirements. (a) Dry chemical fire extinguishing systems used...; (3) Hose and pipe shall be as short as possible; the distance between the chemical container and...

  5. Spatial variability in airborne pollen concentrations.

    PubMed

    Raynor, G S; Ogden, E C; Hayes, J V

    1975-03-01

    Tests were conducted to determine the relationship between airborne pollen concentrations and distance. Simultaneous samples were taken in 171 tests with sets of eight rotoslide samplers spaced from 1 to 486 m apart in straight lines. Use of all possible pairs gave 28 separation distances. Tests were conducted over a 2-year period in urban and rural locations distant from major pollen sources during both tree and ragweed pollen seasons. Samples were taken at a height of 1.5 m during 5- to 20-minute periods. Tests were grouped by pollen type, location, year, and direction of the wind relative to the line. Data were analyzed to evaluate variability without regard to sampler spacing and variability as a function of separation distance. The mean, standard deviation, coefficient of variation, ratio of maximum to the mean, and ratio of minimum to the mean were calculated for each test, each group of tests, and all cases. The average coefficient of variation is 0.21; the maximum over the mean, 1.39; and the minimum over the mean, 0.69. No relationship was found with experimental conditions. Samples taken at the minimum separation distance had a mean difference of 18 per cent. Differences between pairs of samples increased with distance in 10 of 13 groups. These results suggest that airborne pollens are not always well mixed in the lower atmosphere and that a sample becomes less representative with increasing distance from the sampling location.
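The per-test summary statistics listed above are straightforward to compute; the eight sampler counts below are invented, standing in for one hypothetical test:

```python
import numpy as np

def variability_stats(counts):
    """Per-test summary used in the study: coefficient of variation and
    ratios of the extreme samplers to the mean."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    return {"mean": mean,
            "cv": counts.std(ddof=1) / mean,
            "max_over_mean": counts.max() / mean,
            "min_over_mean": counts.min() / mean}

# Eight rotoslide samplers in one hypothetical test (pollen grains counted)
stats = variability_stats([52, 61, 48, 55, 70, 44, 58, 50])
```

Pooling these per-test dictionaries across tests would reproduce the group-level averages reported in the abstract.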

  6. Role of optimization criterion in static asymmetric analysis of lumbar spine load.

    PubMed

    Daniel, Matej

    2011-10-01

    A common method for load estimation in biomechanics is inverse dynamics optimization, where the muscle activation pattern is found by minimizing or maximizing an optimization criterion. It has been shown that various optimization criteria predict remarkably similar muscle activation patterns and intra-articular contact forces during leg motion. The aim of this paper is to study the effect of the choice of optimization criterion on L4/L5 loading during static asymmetric loading. Upright standing with a weight in one outstretched arm was taken as a representative position. A musculoskeletal model of the lumbar spine was created from CT images of the Visible Human Project. Several criteria were tested, based on the minimization of muscle forces, muscle stresses, and spinal load. All criteria predict the same level of lumbar spine loading (differences below 25%), except the criterion of minimum lumbar shear force, which predicts an unrealistically high spinal load and should not be considered further. The estimated spinal load and predicted muscle activation pattern are in accordance with intradiscal pressure measurements and EMG measurements. L4/L5 spine loads of 1312 N, 1674 N, and 1993 N were predicted for hand-held masses of 2, 5, and 8 kg, respectively, using the criterion of minimum muscle stress cubed. As the optimization criteria do not considerably affect the spinal load, their choice is not critical in further clinical or ergonomic studies, and a computationally simpler criterion can be used.
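For a single moment-equilibrium equation, the minimum-muscle-stress-cubed criterion has a closed-form solution (stationarity of the Lagrangian gives forces proportional to area^1.5 times the square root of the moment arm). The sketch below is not the paper's full lumbar model; the moment arms and cross-sectional areas are invented:

```python
import numpy as np

def min_stress_cubed_forces(moment, arms, areas):
    """Closed-form solution of: minimize sum_i (F_i/A_i)**3
    subject to sum_i r_i * F_i = moment, F_i >= 0.
    Stationarity gives F_i proportional to A_i**1.5 * sqrt(r_i)."""
    shape = areas ** 1.5 * np.sqrt(arms)
    scale = moment / np.dot(arms, shape)
    return scale * shape

# Hypothetical two-muscle example: arms in m, areas (PCSA) in cm^2
arms = np.array([0.05, 0.06])
areas = np.array([10.0, 15.0])
forces = min_stress_cubed_forces(150.0, arms, areas)  # moment in N*m
```

The cubic penalty spreads load across synergists in proportion to their size, which is why stress-based criteria tend to predict physiologically plausible sharing.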

  7. 49 CFR 176.708 - Segregation distances.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 2 2011-10-01 2011-10-01 false Segregation distances. 176.708 Section 176.708... Requirements for Radioactive Materials § 176.708 Segregation distances. (a) Table IV lists minimum separation... into account any relocation of cargo during the voyage. (e) Any departure from the segregation...

  8. Influence of Time-Pickoff Circuit Parameters on LiDAR Range Precision

    PubMed Central

    Wang, Hongming; Yang, Bingwei; Huyan, Jiayue; Xu, Lijun

    2017-01-01

    A pulsed time-of-flight (TOF) measurement-based Light Detection and Ranging (LiDAR) system is effective for medium-to-long range distances. As the key ranging unit, a time-pickoff circuit based on automatic gain control (AGC) and a constant fraction discriminator (CFD) is designed to reduce the walk error and the timing jitter in obtaining an accurate time interval. Monte Carlo simulations over four parameters (pulse amplitude, pulse width, attenuation fraction, and delay time of the CFD), compared against the Cramér-Rao lower bound (CRLB) and the estimated timing jitter, were established to show how the range precision is influenced by each parameter. Experiments were carried out to verify the relationship between the range precision and three of the parameters, excluding pulse width. Two parameters of the ranging circuit (attenuation fraction and delay time) were selected according to the ranging performance at the minimum pulse amplitude. The attenuation fraction should be selected in the range from 0.2 to 0.6 to achieve high range precision. This selection criterion for the time-pickoff circuit parameters is helpful for the ranging circuit design of a TOF LiDAR system. PMID:29039772
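A CFD takes its timing point at the zero crossing of the delayed pulse minus the attenuated pulse, which (for an ideal linear circuit) makes the timing independent of pulse amplitude and so suppresses walk error. A minimal numeric sketch, with an invented Gaussian return pulse and arbitrary fraction/delay values:

```python
import numpy as np

def cfd_crossing_time(t, pulse, fraction=0.4, delay=4e-9):
    """Timing point of a constant fraction discriminator: zero crossing
    of the delayed pulse minus the attenuated pulse."""
    dt = t[1] - t[0]
    shift = int(round(delay / dt))
    delayed = np.concatenate([np.zeros(shift), pulse[:-shift]])
    bipolar = delayed - fraction * pulse
    # first negative-to-positive sign change, refined by interpolation
    i = np.where((bipolar[:-1] < 0) & (bipolar[1:] >= 0))[0][0]
    return t[i] - bipolar[i] * dt / (bipolar[i + 1] - bipolar[i])

t = np.arange(0.0, 60e-9, 0.05e-9)
shape = np.exp(-((t - 20e-9) / 3e-9) ** 2)  # Gaussian return pulse
# The CFD timing point is (ideally) independent of pulse amplitude:
t_small = cfd_crossing_time(t, 1.0 * shape)
t_large = cfd_crossing_time(t, 3.0 * shape)
```

Varying `fraction` and `delay` in such a simulation is one way to explore the parameter dependence the abstract describes, though real circuits add noise and bandwidth effects.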

  9. A simple criterion for determining the dynamical stability of three-body systems

    NASA Technical Reports Server (NTRS)

    Black, D. C.

    1982-01-01

    Coplanar, prograde three-body systems (TBS) are discussed, emphasizing the specification of general criteria for determining whether such systems are dynamically stable. It is shown that the Graziani-Black (1981) criteria provide a quantitatively accurate characterization of the onset of dynamic instability for values of the dimensionless mass ranging from one millionth to one million. Harrington's (1977) general criterion and the Graziani-Black criterion are compared with results from analytic work that spans a 12-orders-of-magnitude variation in the mass ratios of the TBS components. Comparison of the Graziani-Black criteria with data for eight well-studied triple-star systems indicates that the observed lower limit for the ratio of periastron distance of the tertiary orbit to the semimajor axis of the binary orbit is due to dynamical instability rather than to cosmogonic processes.

  10. ELECTROFISHING DISTANCE NEEDED TO ESTIMATE FISH SPECIES RICHNESS IN RAFTABLE WESTERN USA RIVERS

    EPA Science Inventory

    A critical issue in river monitoring is the minimum amount of sampling distance required to adequately represent the fish assemblage of a reach. Determining adequate sampling distance is important because it affects estimates of fish assemblage integrity and diversity at local a...

  11. Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods.

    ERIC Educational Resources Information Center

    Mumford, Karen R.; Ferron, John M.; Hines, Constance V.; Hogarty, Kristine Y.; Kromrey, Jeffery D.

    This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The…

  12. Determining size and dispersion of minimum viable populations for land management planning and species conservation

    NASA Astrophysics Data System (ADS)

    Lehmkuhl, John F.

    1984-03-01

    The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (Ne) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust Ne to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and the period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population sizes of 50-500 individuals. This population range defines the time range of short- to long-term population fitness and evolutionary potential. The length of the term is a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.
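Two of the standard adjustments mentioned (unequal sex ratio and fluctuating population size) have textbook formulas due to Wright; the sketch below uses those general formulas with invented numbers, not the paper's exact procedure:

```python
def ne_sex_ratio(n_males, n_females):
    """Wright's effective size under an unequal breeding sex ratio."""
    return 4.0 * n_males * n_females / (n_males + n_females)

def ne_fluctuating(sizes):
    """Effective size over fluctuating generations: the harmonic mean,
    dominated by the smallest generation."""
    return len(sizes) / sum(1.0 / n for n in sizes)

# Hypothetical herd: 10 breeding males, 40 breeding females
ne = ne_sex_ratio(10, 40)           # 32.0, short of the Ne = 50 target
# Census size needed if the sex ratio stays fixed: scale 50 by N/Ne
census_needed = 50.0 * (50.0 / ne)  # breeding animals to census for Ne = 50
```

Chaining such corrections is how an Ne target of 50 translates into the larger approximate census numbers the procedure produces.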

  13. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimizing the sampling scheme allows one to reduce the number of sampling points without decreasing, and sometimes even increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expected distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, or combinations of them, were tested and compared in a real case. Simulated annealing was implemented with the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The use of the bulk ECa gradient as an exhaustive variable, known at every node of the interpolation grid, allowed optimization of the sampling scheme, distinguishing among areas with different priority levels.
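A toy version of the MMSD criterion under spatial simulated annealing can be sketched as follows. This is not the MSANOS implementation; the unit-square field, move size, starting temperature, and cooling rate are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def mmsd(samples, eval_pts):
    """Mean of the shortest distances from each evaluation point
    to its nearest sampling location (the criterion to minimize)."""
    d = np.linalg.norm(eval_pts[:, None, :] - samples[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Evaluation grid over a unit-square "field"
g = np.linspace(0.05, 0.95, 10)
eval_pts = np.array([(x, y) for x in g for y in g])

samples = rng.random((8, 2))       # initial scheme: 8 random locations
cost = mmsd(samples, eval_pts)
initial, best = cost, cost
temp = 0.1
for _ in range(2000):              # annealing loop with geometric cooling
    cand = samples.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.normal(0.0, 0.1, 2), 0.0, 1.0)
    c = mmsd(cand, eval_pts)
    # Metropolis acceptance: always take improvements, sometimes worse moves
    if c < cost or rng.random() < np.exp((cost - c) / temp):
        samples, cost = cand, c
        best = min(best, cost)
    temp *= 0.998
```

The weighted variant (MWMSD) would simply multiply each evaluation point's shortest distance by a weight derived from the ECa gradient before averaging.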

  14. [Medical image segmentation based on the minimum variation snake model].

    PubMed

    Zhou, Changxiong; Yu, Shenglin

    2007-02-01

    The traditional parametric active contour (snake) model has difficulty with automatic segmentation of weak-edge medical images. After analyzing the snake and geometric active contour models, a minimum variation snake model was proposed and successfully applied to weak-edge medical image segmentation. The proposed model replaces the constant force in the balloon snake model with a variable force incorporating information from the foreground and background regions. It drives the curve to evolve under the criterion of minimum variation of the foreground and background regions. Experiments showed that the proposed model is robust to initial contour placement and can segment weak-edge medical images automatically. In addition, segmentation tests on noisy medical images filtered by a curvature flow filter, which preserves edge features, showed a significant improvement.

  15. Structure for identifying, locating and quantifying physical phenomena

    DOEpatents

    Richardson, John G.

    2006-10-24

    A method and system for detecting, locating and quantifying a physical phenomena such as strain or a deformation in a structure. A minimum resolvable distance along the structure is selected and a quantity of laterally adjacent conductors is determined. Each conductor includes a plurality of segments coupled in series which define the minimum resolvable distance along the structure. When a deformation occurs, changes in the defined energy transmission characteristics along each conductor are compared to determine which segment contains the deformation.

  16. Method and apparatus for identifying, locating and quantifying physical phenomena and structure including same

    DOEpatents

    Richardson, John G.

    2006-01-24

    A method and system for detecting, locating and quantifying a physical phenomena such as strain or a deformation in a structure. A minimum resolvable distance along the structure is selected and a quantity of laterally adjacent conductors is determined. Each conductor includes a plurality of segments coupled in series which define the minimum resolvable distance along the structure. When a deformation occurs, changes in the defined energy transmission characteristics along each conductor are compared to determine which segment contains the deformation.

  17. Physical employment standards for U.K. fire and rescue service personnel.

    PubMed

    Blacker, S D; Rayson, M P; Wilkinson, D M; Carter, J M; Nevill, A M; Richmond, V L

    2016-01-01

    Evidence-based physical employment standards are vital for recruiting, training and maintaining the operational effectiveness of personnel in physically demanding occupations. (i) Develop criterion tests for in-service physical assessment, which simulate the role-related physical demands of UK fire and rescue service (UK FRS) personnel. (ii) Develop practical physical selection tests for FRS applicants. (iii) Evaluate the validity of the selection tests to predict criterion test performance. Stage 1: we conducted a physical demands analysis involving seven workshops and an expert panel to document the key physical tasks required of UK FRS personnel and to develop 'criterion' and 'selection' tests. Stage 2: we measured the performance of 137 trainee and 50 trained UK FRS personnel on selection, criterion and 'field' measures of aerobic power, strength and body size. Statistical models were developed to predict criterion test performance. Stage 3: subject matter experts derived minimum performance standards. We developed single-person simulations of the key physical tasks required of UK FRS personnel as criterion and selection tests (rural fire, domestic fire, ladder lift, ladder extension, ladder climb, pump assembly, enclosed space search). Selection tests were marginally stronger predictors of criterion test performance (r = 0.88-0.94, 95% Limits of Agreement [LoA] 7.6-14.0%) than field test scores (r = 0.84-0.94, 95% LoA 8.0-19.8%) and offered greater face and content validity and more practical implementation. This study outlines the development of role-related, gender-free physical employment tests for the UK FRS, which conform to equal opportunities law. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Astrometry and exoplanets in the Gaia era: a Bayesian approach to detection and parameter recovery

    NASA Astrophysics Data System (ADS)

    Ranalli, P.; Hobbs, D.; Lindegren, L.

    2018-06-01

    The Gaia mission is expected to make a significant contribution to the knowledge of exoplanet systems, both in terms of their number and of their physical properties. We develop Bayesian methods and detection criteria for orbital fitting, and revise the detectability of exoplanets in light of the in-flight properties of Gaia. Limiting ourselves to one-planet systems as a first step of the development, we simulate Gaia data for exoplanet systems over a grid of S/N, orbital period, and eccentricity. The simulations are then fit using Markov chain Monte Carlo methods. We investigate the detection rate according to three information criteria and the Δχ². For the Δχ², the effective number of degrees of freedom depends on the mission length. We find that the choice of the Markov chain starting point can affect the quality of the results; we therefore consider two limit possibilities: an ideal case, and a very simple method that finds the starting point assuming circular orbits. We use 6644 and 4402 simulations to assess the fraction of false positive detections in a 5 yr and in a 10 yr mission, respectively; and 4968 and 4706 simulations to assess the detection rate and how the parameters are recovered. Using Jeffreys' scale of evidence, the fraction of false positives passing a strong evidence criterion is ≲0.2% (0.6%) when considering a 5 yr (10 yr) mission and using the Akaike information criterion or the Watanabe-Akaike information criterion, and <0.02% (<0.06%) when using the Bayesian information criterion. We find that there is a 50% chance of detecting a planet with a minimum S/N = 2.3 (1.7). This sets the maximum distance to which a planet is detectable to 70 pc and 3.5 pc for Jupiter-mass and Neptune-mass planets, respectively, assuming a 10 yr mission, a 4 au semi-major axis, and a 1 M⊙ star. We show the distribution of the accuracy and precision with which orbital parameters are recovered.
The period is the orbital parameter that can be determined with the best accuracy, with a median relative difference between input and output periods of 4.2% (2.9%) for a 5 yr (10 yr) mission. The semi-major axis of the orbit can be recovered with a median relative error of 7% (6%), and the eccentricity with a median absolute error of 0.07 (0.06).

  19. The double high tide at Port Ellen: Doodson's criterion revisited

    NASA Astrophysics Data System (ADS)

    Byrne, Hannah A. M.; Mattias Green, J. A.; Bowers, David G.

    2017-07-01

    Doodson proposed a minimum criterion to predict the occurrence of double high (or double low) waters when a higher-frequency tidal harmonic is added to the semi-diurnal tide. If the phasing of the harmonic is optimal, the condition for a double high water can be written bn²/a > 1, where b is the amplitude of the higher harmonic, a is the amplitude of the semi-diurnal tide, and n is the ratio of their frequencies. Here we expand this criterion to allow for (i) a phase difference ϕ between the semi-diurnal tide and the harmonic and (ii) the fact that the double high water will disappear in the event that b/a becomes large enough for the higher harmonic to be the dominant component of the tide. This can happen, for example, at places or times where the semi-diurnal tide is very small. The revised parameter is br²/a, where r is a number generally less than n, although equal to n when ϕ = 0. The theory predicts that a double high tide will form when this parameter exceeds 1 and then disappear when it exceeds a value of order n² and the higher harmonic becomes dominant. We test these predictions against observations at Port Ellen in the Inner Hebrides of Scotland. For most of the data set, the largest harmonic of the semi-diurnal tide is the sixth-diurnal component, for which n = 3. The principal lunar and solar semi-diurnal tides are about equal at Port Ellen, and so the semi-diurnal tide becomes very small twice a month at neap tides (here defined as the smallest fortnightly tidal range). A double high water forms when br²/a first exceeds a minimum value of about 1.5 as neap tides are approached and then disappears as br²/a exceeds a second limiting value of about 10 at neap tides, in agreement with the revised criterion.
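Doodson's condition can be checked numerically by counting the high-water maxima of a two-harmonic tide. In the cosine convention below, placing the harmonic's trough under the semi-diurnal crest (phase π) is the optimal phasing, so the crest splits into two maxima exactly when b·n²/a > 1; the amplitudes are invented:

```python
import numpy as np

def n_high_waters(a, b, n=3, phi=np.pi, samples=20000):
    """Number of local maxima above mean level over one semi-diurnal
    period of h(t) = a*cos(t) + b*cos(n*t + phi) (frequency scaled to 1)."""
    t = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    h = a * np.cos(t) + b * np.cos(n * t + phi)
    peaks = (h > np.roll(h, 1)) & (h > np.roll(h, -1)) & (h > 0)
    return int(peaks.sum())

# With this phasing the crest flattens into two maxima when b*n**2/a > 1
single = n_high_waters(a=1.0, b=0.05)  # b*n^2/a = 0.45 -> 1 high water
double = n_high_waters(a=1.0, b=0.20)  # b*n^2/a = 1.8  -> 2 high waters
```

Sweeping `phi` away from the optimal value in such a scan is one way to see why the revised parameter replaces n with a smaller effective r.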

  20. Pigment dispersion and Artisan phakic intraocular lenses: crystalline lens rise as a safety criterion.

    PubMed

    Baïkoff, Georges; Bourgeon, Grégoire; Jodai, Horacio Jitsuo; Fontaine, Aline; Lellis, Fernando Viera; Trinquet, Laure

    2005-04-01

    To validate the theory that crystalline lens rise can be used as a safety criterion to prevent pigment dispersion in eyes with an Artisan phakic intraocular lens (IOL) (Ophtec BV). Monticelli Clinic, Marseilles, France. A comparative analysis of crystalline lens rise in 9 eyes with pigment dispersion and 78 eyes without dispersion was performed. All eyes had previous implantation of an Artisan IOL. Anterior segment imaging was done using an anterior chamber optical coherence tomography (AC OCT) prototype. Crystalline lens rise was defined by the distance between the anterior pole of the crystalline lens and the horizontal plane joining the opposite iridocorneal recesses. The study confirmed that crystalline lens rise can be considered a safety criterion for implantation of Artisan-type phakic IOLs. The higher the crystalline lens rise, the greater the risk for developing pigment dispersion in the area of the pupil. This complication occurred more frequently in hyperopic eyes than in myopic eyes. Results indicate there is little or no risk for pigment dispersion if the rise is less than 600 µm; 67% of eyes with a rise of 600 µm or more developed pupillary pigment dispersion. In some cases in which the IOL was loosely fixated, there was no traction on the iris root and dispersion was prevented or delayed. Crystalline lens rise should be considered a new safety criterion for Artisan phakic IOL implantation and should also be applied to other types of phakic IOLs. The distance remaining between the crystalline lens rise and a 600 µm theoretical safety level allows one to calculate how long the IOL can safely remain in the eye.

  1. Estimation of a Stopping Criterion for Geophysical Granular Flows Based on Numerical Experimentation

    NASA Astrophysics Data System (ADS)

    Yu, B.; Dalbey, K.; Bursik, M.; Patra, A.; Pitman, E. B.

    2004-12-01

    Inundation area may be the most important factor for mitigation of natural hazards related to avalanches, debris flows, landslides and pyroclastic flows. Run-out distance is the key parameter for inundation because the front deposits define the leading edge of inundation. To define the run-out distance, it is necessary to know when a flow stops. Numerical experiments are presented for determining a stopping criterion and exploring the suitability of a Savage-Hutter granular model for computing inundation areas of granular flows. The TITAN2D model was employed to run numerical experiments based on the Savage-Hutter theory. A potentially reasonable stopping criterion was found as a function of dimensionless average velocity, aspect ratio of pile, internal friction angle, bed friction angle and bed slope in the flow direction. Slumping piles on a horizontal surface and geophysical flows over complex topography were simulated. Several mountainous areas, including Colima volcano (MX), Casita (Nic.), Little Tahoma Peak (WA, USA) and the San Bernardino Mountains (CA, USA) were used to simulate geophysical flows. Volcanic block and ash flows, debris avalanches and debris flows occurred in these areas and caused varying degrees of damage. The areas have complex topography, including locally steep open slopes, sinuous channels, and combinations of these. With different topography and physical scaling, slumping piles and geophysical flows have a somewhat different dependence of dimensionless stopping velocity on power-law constants associated with aspect ratio of pile, internal friction angle, bed friction angle and bed slope in the flow direction. Visual comparison of the details of the inundation area obtained from the TITAN2D model with models that contain some form of viscous dissipation points out weaknesses in the model that are not evident by investigation of the stopping criterion alone.

  2. Entanglement criteria via the uncertainty relations in su(2) and su(1,1) algebras: Detection of non-Gaussian entangled states

    NASA Astrophysics Data System (ADS)

    Nha, Hyunchul; Kim, Jaewan

    2006-07-01

    We derive a class of inequalities, from the uncertainty relations of the su(1,1) and the su(2) algebra in conjunction with partial transposition, that must be satisfied by any separable two-mode state. These inequalities are presented in terms of the su(2) operators Jx = (a†b + ab†)/2, Jy = (a†b − ab†)/2i, and the total photon number ⟨Na + Nb⟩. They include as special cases the inequality derived by Hillery and Zubairy [Phys. Rev. Lett. 96, 050503 (2006)], and the one by Agarwal and Biswas [New J. Phys. 7, 211 (2005)]. In particular, optimization over the whole set of inequalities leads to the criterion obtained by Agarwal and Biswas. We show that this optimal criterion can detect entanglement for a broad class of non-Gaussian entangled states, i.e., the su(2) minimum-uncertainty states. Experimental schemes to test the optimal criterion are also discussed, especially one using linear optical devices and photodetectors.

  3. Minimum triplet covers of binary phylogenetic X-trees.

    PubMed

    Huber, K T; Moulton, V; Steel, M

    2017-12-01

    Trees with labelled leaves and with all other vertices of degree three play an important role in systematic biology and other areas of classification. A classical combinatorial result ensures that such trees can be uniquely reconstructed from the distances between the leaves (when the edges are given any strictly positive lengths). Moreover, a linear number of these pairwise distance values suffices to determine both the tree and its edge lengths. A natural set of pairs of leaves is provided by any 'triplet cover' of the tree (based on the fact that each non-leaf vertex is the median vertex of three leaves). In this paper we describe a number of new results concerning triplet covers of minimum size. In particular, we characterize such covers in terms of an associated graph being a 2-tree. Also, we show that minimum triplet covers are 'shellable' and thereby provide a set of pairs for which the inter-leaf distance values will uniquely determine the underlying tree and its associated branch lengths.

  4. Applications of ENF criterion in forensic audio, video, computer and telecommunication analysis.

    PubMed

    Grigoras, Catalin

    2007-04-11

    This article reports on the electric network frequency (ENF) criterion as a means of assessing the integrity of digital audio/video evidence and of forensic IT and telecommunication analysis. A brief description is given of the different ENF types and the phenomena that determine ENF variations. In most situations, to reach a non-authenticity opinion, visual inspection of spectrograms and comparison with an ENF database are enough. A more detailed investigation, in the time domain, requires short-time-window measurements and analyses. The stability of the ENF over geographical distances has been established by comparison of synchronized recordings made at different locations on the same network. Real cases are presented in which the ENF criterion was used to investigate audio and video files created with secret surveillance systems, a digitized audio/video recording, and a TV broadcast report. By applying the ENF criterion in forensic audio/video analysis, one can determine whether and where a digital recording has been edited, establish whether it was made at the time claimed, and identify the time and date of the recording operation.

  5. Cyclists' perceptions of motorist harassment pre- to post-trial of the minimum passing distance road rule amendment in Queensland, Australia.

    PubMed

    Heesch, Kristiann C; Schramm, Amy; Debnath, Ashim Kumar; Haworth, Narelle

    2017-12-01

    Issues addressed: Cyclists' perceptions of harassment from motorists discourage cycling. This study examined changes in cyclists' reporting of harassment pre- to post-introduction of the Queensland trial of the minimum passing distance road rule amendment (MPD-RRA). Methods: Cross-sectional online surveys of cyclists in Queensland, Australia were conducted in 2009 (pre-trial; n=1758) and 2015 (post-trial commencement; n=1997). Cyclists were asked about their experiences of harassment from motorists while cycling. Logistic regression modelling was used to examine differences in the reporting of harassment between these time periods, after adjustment for demographic characteristics and cycling behaviour. Results: At both time periods, the most reported types of harassment were deliberately driving too close (causing fear or anxiety), shouting abuse, and making obscene gestures or engaging in sexual harassment. The percentage of cyclists who reported tailgating by motorists increased between 2009 and 2015 (15.1% to 19.5%; P<0.001). The percentage of cyclists reporting other types of harassment did not change significantly. Conclusions: Cyclists in Queensland continue to perceive harassment while cycling on the road. The amendment to the minimum passing distance rule in Queensland appears to be having a negative effect on one type of harassment but no significant effects on others. So what? Minimum passing distance rules may not be improving cyclists' perceptions of motorists' behaviours. Additional strategies are required to create a supportive environment for cycling.

  6. Benefits of Using Pairwise Trajectory Management in the Central East Pacific

    NASA Technical Reports Server (NTRS)

    Chartrand, Ryan; Ballard, Kathryn

    2016-01-01

    Pairwise Trajectory Management (PTM) is a concept that utilizes airborne and ground-based capabilities to enable airborne spacing operations in oceanic regions. The goal of PTM is to use enhanced surveillance, along with airborne tools, to manage the spacing between aircraft. Due to the enhanced airborne surveillance of Automatic Dependent Surveillance-Broadcast (ADS-B) information and reduced communication, the PTM minimum spacing distance will be less than distances currently required of an air traffic controller. Reduced minimum distance will increase the capacity of aircraft operations at a given altitude or volume of airspace, thereby increasing time on desired trajectory and overall flight efficiency. PTM is designed to allow a flight crew to resolve a specific traffic conflict (or conflicts), identified by the air traffic controller, while maintaining the flight crew's desired altitude. The air traffic controller issues a PTM clearance to a flight crew authorized to conduct PTM operations in order to resolve a conflict for the pair (or pairs) of aircraft (i.e., the PTM aircraft and a designated target aircraft). This clearance requires the flight crew of the PTM aircraft to use their ADS-B-enabled onboard equipment to manage their spacing relative to the designated target aircraft to ensure spacing distances that are no closer than the PTM minimum distance. When the air traffic controller determines that PTM is no longer required, the controller issues a clearance to cancel the PTM operation.
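    The spacing logic described above can be caricatured in a few lines. The 20 nm threshold and both function names below are hypothetical placeholders for illustration; the PTM concept documents, not this sketch, define the actual minimum distance and alerting behaviour.

```python
def ptm_spacing_ok(own_nm, target_nm, ptm_min_nm=20.0):
    """True if along-track separation meets the (hypothetical) PTM minimum."""
    return abs(own_nm - target_nm) >= ptm_min_nm

def hours_until_minimum(sep_nm, closure_kt, ptm_min_nm=20.0):
    """Time until a closing pair reaches the PTM minimum separation."""
    if sep_nm <= ptm_min_nm:
        return 0.0                  # already at or inside the minimum
    if closure_kt <= 0:
        return float("inf")         # holding or opening: never reaches it
    return (sep_nm - ptm_min_nm) / closure_kt

# 30 nm apart with a 30 kt overtake: compliant now, one hour of margin.
print(ptm_spacing_ok(430.0, 400.0), hours_until_minimum(50.0, 30.0))  # True 1.0
```

    In the actual concept the flight crew's ADS-B-driven avionics perform this monitoring continuously against the designated target aircraft.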

  7. Scale-dependent correlation of seabirds with schooling fish in a coastal ecosystem

    USGS Publications Warehouse

    Schneider, David C.; Piatt, John F.

    1986-01-01

    The distribution of piscivorous seabirds relative to schooling fish was investigated by repeated censusing of 2 intersecting transects in the Avalon Channel, which carries the Labrador Current southward along the east coast of Newfoundland. Murres (primarily common murres Uria aalge), Atlantic puffins Fratercula arctica, and schooling fish (primarily capelin Mallotus villosus) were highly aggregated at spatial scales ranging from 0.25 to 15 km. Patchiness of murres, puffins and schooling fish was scale-dependent, as indicated by significantly higher variance-to-mean ratios at large measurement distances than at the minimum distance, 0.25 km. Patch scale of puffins ranged from 2.5 to 15 km, of murres from 3 to 8.75 km, and of schooling fish from 1.25 to 15 km. Patch scale of birds and schooling fish was similar in 6 out of 9 comparisons. Correlation between seabirds and schooling fish was significant at the minimum measurement distance in 6 out of 12 comparisons. Correlation was scale-dependent, as indicated by significantly higher coefficients at large measurement distances than at the minimum distance. Tracking scale, as indicated by the maximum significant correlation between birds and schooling fish, ranged from 2 to 6 km. Our analysis showed that extended aggregations of seabirds are associated with extended aggregations of schooling fish and that correlation of these marine carnivores with their prey is scale-dependent.
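    The variance-to-mean (index of dispersion) test used in the record above is easy to sketch: aggregate transect counts into progressively larger blocks and compare the ratio across block sizes. A ratio well above 1 indicates clumping; scale dependence shows up as the ratio changing with block size. The counts below are synthetic, not the Avalon Channel data.

```python
def dispersion_by_scale(counts, block_sizes):
    """Variance-to-mean ratio of counts aggregated into blocks of each size."""
    out = {}
    for b in block_sizes:
        blocks = [sum(counts[i:i + b]) for i in range(0, len(counts) - b + 1, b)]
        m = sum(blocks) / len(blocks)
        var = sum((x - m) ** 2 for x in blocks) / (len(blocks) - 1)
        out[b] = var / m if m else float("nan")
    return out

# Clustered synthetic transect: birds concentrated in two patches.
counts = [0, 0, 8, 9, 7, 0, 0, 0, 0, 6, 8, 0, 0, 0, 0, 0]
print(dispersion_by_scale(counts, [1, 2, 4]))
```

    For a random (Poisson) pattern every ratio would sit near 1; the clumped example stays well above 1 at all scales, and the ratio rises at coarser scales, mirroring the scale-dependent patchiness reported above.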

  8. External Catalyst Breakup Phenomena

    DTIC Science & Technology

    1976-06-01

    catalyst particle can cause high internal pressures which result in particle destruction. Analytical results suggest that erosion effects from solid...mechanisms. * Pressure Forces. High G loadings and bed pressure drops should be avoided. Bed pre-loads should be kept at a minimum value. Thruster...5.2.7.1 Failure Theories ............................ 243 5.2.7.2 Maximum Tension Stress Criterion ............ 244 5.2.7.3 Distortion Energy Approach

  9. The Physiological Profile of Trained Female Dance Majors.

    ERIC Educational Resources Information Center

    Rimmer, James H.; And Others

    This investigation studied the physiological profiles of eight highly trained female dance majors. To be considered highly trained, each subject had to be dancing a minimum of three hours a day, four to five days a week, for the last year. They also had to meet the criterion of dancing at least ten hours a week for the last five years prior to…

  10. Delineating riparian zones for entire river networks using geomorphological criteria

    NASA Astrophysics Data System (ADS)

    Fernández, D.; Barquín, J.; Álvarez-Cabria, M.; Peñas, F. J.

    2012-03-01

    Riparian zone delineation is a central issue for riparian and river ecosystem management; however, the criteria used to delineate riparian zones are still under debate. The area inundated by a 50-yr flood has been indicated as an optimal hydrological descriptor for riparian areas. This detailed hydrological information is, however, not usually available for entire river corridors, and is only available for populated areas at risk of flooding. One of the requirements for catchment planning is to establish the most appropriate location of zones to conserve or restore riparian buffer strips for whole river networks. This issue could be solved by using geomorphological criteria extracted from Digital Elevation Models. In this work we have explored the adjustment of surfaces developed under two different geomorphological criteria with respect to the flooded area covered by the 50-yr flood, in an attempt to rapidly delineate hydrologically-meaningful riparian zones for entire river networks. The first geomorphological criterion is based on the surface that intersects valley walls at a given number of bankfull depths above the channel (BFDAC), while the second is based on the surface defined by a threshold value indicating the relative cost of moving from the stream up to the valley, accounting for slope and elevation change (path distance). As the relationship between local geomorphology and the 50-yr flood has been suggested to be river-type dependent, we have performed our analyses distinguishing between three river types corresponding with three valley morphologies: open, shallow vee and deep vee valleys (in increasing degree of valley constrainment). Adjustment between the surfaces derived from geomorphological and hydrological criteria has been evaluated using two different methods: one based on exceeding areas (minimum exceeding score) and the other on the similarity among total area values.
Both methods have pointed out the same surfaces when looking for those that best match with the 50-yr flood. Results have shown that the BFDAC approach obtains an adjustment slightly better than that of path distance. However, BFDAC requires bankfull depth regional regressions along the considered river network. Results have also confirmed that unconstrained valleys require lower threshold values than constrained valleys when deriving surfaces using geomorphological criteria. Moreover, this study provides: (i) guidance on the selection of the proper geomorphological criterion and associated threshold values, and (ii) an easy calibration framework to evaluate the adjustment with respect to hydrologically-meaningful surfaces.
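    The "path distance" criterion described above can be sketched as an accumulated-cost shortest-path computation from the stream cells outward, where each step costs more when it climbs. The cost weighting, cell size, threshold, and toy DEM below are illustrative assumptions only; the study calibrates the real thresholds against the 50-yr flood surface.

```python
import heapq

def path_distance(dem, stream_cells, w_slope=1.0, w_rise=1.0, cell=10.0):
    """Dijkstra accumulation of a slope- and rise-penalised movement cost."""
    rows, cols = len(dem), len(dem[0])
    cost = [[float("inf")] * cols for _ in range(rows)]
    pq = [(0.0, r, c) for (r, c) in stream_cells]
    for _, r, c in pq:
        cost[r][c] = 0.0
    heapq.heapify(pq)
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > cost[r][c]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                rise = max(dem[nr][nc] - dem[r][c], 0.0)   # only uphill costs extra
                slope = rise / cell
                step = cell * (1.0 + w_slope * slope) + w_rise * rise
                if d + step < cost[nr][nc]:
                    cost[nr][nc] = d + step
                    heapq.heappush(pq, (d + step, nr, nc))
    return cost

def riparian_mask(cost, threshold):
    """Cells whose accumulated cost stays under the delineation threshold."""
    return [[c <= threshold for c in row] for row in cost]

# V-shaped toy valley: stream in the middle column, walls rising outward.
dem = [[abs(c - 2) * 5.0 for c in range(5)] for _ in range(3)]
cost = path_distance(dem, [(r, 2) for r in range(3)])
print(riparian_mask(cost, 25.0))  # one cell either side of the stream
```

    Raising the threshold widens the delineated strip, which is exactly the calibration knob the study tunes per valley type.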

  11. 27 CFR 555.220 - Table of separation distances of ammonium nitrate and blasting agents from explosives or blasting...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... distances of ammonium nitrate and blasting agents from explosives or blasting agents. 555.220 Section 555... ammonium nitrate and blasting agents from explosives or blasting agents. Table: Department of Defense... Not over Minimum separation distance of acceptor from donor when barricaded (ft.) Ammonium nitrate...

  12. 27 CFR 555.220 - Table of separation distances of ammonium nitrate and blasting agents from explosives or blasting...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... distances of ammonium nitrate and blasting agents from explosives or blasting agents. 555.220 Section 555... ammonium nitrate and blasting agents from explosives or blasting agents. Table: Department of Defense... Not over Minimum separation distance of acceptor from donor when barricaded (ft.) Ammonium nitrate...

  13. 27 CFR 555.220 - Table of separation distances of ammonium nitrate and blasting agents from explosives or blasting...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... distances of ammonium nitrate and blasting agents from explosives or blasting agents. 555.220 Section 555... ammonium nitrate and blasting agents from explosives or blasting agents. Table: Department of Defense... Not over Minimum separation distance of acceptor from donor when barricaded (ft.) Ammonium nitrate...

  14. 27 CFR 555.220 - Table of separation distances of ammonium nitrate and blasting agents from explosives or blasting...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... distances of ammonium nitrate and blasting agents from explosives or blasting agents. 555.220 Section 555... ammonium nitrate and blasting agents from explosives or blasting agents. Table: Department of Defense... Not over Minimum separation distance of acceptor from donor when barricaded (ft.) Ammonium nitrate...

  15. 27 CFR 555.220 - Table of separation distances of ammonium nitrate and blasting agents from explosives or blasting...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... distances of ammonium nitrate and blasting agents from explosives or blasting agents. 555.220 Section 555... ammonium nitrate and blasting agents from explosives or blasting agents. Table: Department of Defense... Not over Minimum separation distance of acceptor from donor when barricaded (ft.) Ammonium nitrate...

  16. The Mobility and Dispersal of Augmented Gravel in Upland Channels: a Knowledge-limited Practice in Supply-limited Channels

    NASA Astrophysics Data System (ADS)

    Downs, P. W.; Gilvear, D. J.

    2017-12-01

    Most river restoration research has been directed at rivers in the highly populated alluvial lowlands: significantly less is known about effectively rehabilitating upland channels, in part because the dynamics of sediment transfer are less well understood. Upland gravel augmentation is thus both a somewhat unproven method for rehabilitating degraded aquatic habitats in sediment-poor reaches and a natural experiment in better understanding sediment dynamics in steep, hydraulically-complex river channels. Monitoring on the River Avon in SW England since Water Year (WY) 2015 uses seismic impact plates, RFID-tagged particles and detailed channel bed mapping to establish the mobility rates of augmented particles, their dispersal distances and settling locations relative to flows received. Particles are highly, and equally, mobile: in WY2015, 17 sub-bankfull flows moved at least 60% of augmented particles with volumetric movement non-linearly correlated to flow energy but not to particle size. Waning rates of transport over the year suggest supply limitations. This relationship breaks down early in WY2017, when a two-year flow event moved 40% of the particles in just two months - confounding factors may include particle mass differences and particle supplies from upstream. Median particle travel distances correlate well to energy applied and suggest a long-tailed fan of dispersal with supplemental controls including channel curvature, boulder presence and stream power. Locally, particles are deposited preferentially around boulders and in sheltered river margins but also perched in clusters above the low-flow channel. High tracer mobility makes median transport distances highly dependent on the survey length - in WY2017 some particles travelled 300 m in a 3-month period that included the two-year flood event. Further, in WY2017 median transport distance as a function of volumetric transport suggested significant transport beyond the target reach.
The observed particle dynamics thus have implications both for the biological effectiveness of gravel augmentation and the efficacy criterion of `minimum mobility'. They also reflect the challenges inherent to constraint-limited natural experiments that are, conversely, important in proving the value of geomorphology to resource managers.

  17. 46 CFR 42.20-70 - Minimum bow height.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Freeboards § 42.20-70 Minimum bow height. (a) The bow height defined as the vertical distance at the forward... 46 Shipping 2 2012-10-01 2012-10-01 false Minimum bow height. 42.20-70 Section 42.20-70 Shipping... less than 0.68. (b) Where the bow height required in paragraph (a) of this section is obtained by sheer...

  18. 46 CFR 42.20-70 - Minimum bow height.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Freeboards § 42.20-70 Minimum bow height. (a) The bow height defined as the vertical distance at the forward... 46 Shipping 2 2011-10-01 2011-10-01 false Minimum bow height. 42.20-70 Section 42.20-70 Shipping... less than 0.68. (b) Where the bow height required in paragraph (a) of this section is obtained by sheer...

  19. 46 CFR 42.20-70 - Minimum bow height.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Freeboards § 42.20-70 Minimum bow height. (a) The bow height defined as the vertical distance at the forward... 46 Shipping 2 2014-10-01 2014-10-01 false Minimum bow height. 42.20-70 Section 42.20-70 Shipping... less than 0.68. (b) Where the bow height required in paragraph (a) of this section is obtained by sheer...

  20. 46 CFR 42.20-70 - Minimum bow height.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Freeboards § 42.20-70 Minimum bow height. (a) The bow height defined as the vertical distance at the forward... 46 Shipping 2 2013-10-01 2013-10-01 false Minimum bow height. 42.20-70 Section 42.20-70 Shipping... less than 0.68. (b) Where the bow height required in paragraph (a) of this section is obtained by sheer...

  1. 46 CFR 42.20-70 - Minimum bow height.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Freeboards § 42.20-70 Minimum bow height. (a) The bow height defined as the vertical distance at the forward... 46 Shipping 2 2010-10-01 2010-10-01 false Minimum bow height. 42.20-70 Section 42.20-70 Shipping... less than 0.68. (b) Where the bow height required in paragraph (a) of this section is obtained by sheer...

  2. The Effects of Target and Missile Characteristics on Theoretical Minimum Miss Distance for a Beam-Rider Guidance System in the Presence of Noise

    NASA Technical Reports Server (NTRS)

    Stewart, Elwood C.; Druding, Frank; Nishiura, Togo

    1959-01-01

    A study has been made to determine the relative importance of those factors which place an inherent limitation on the minimum obtainable miss distance for a beam-rider navigation system operating in the presence of glint noise and target evasive maneuver. Target and missile motions are assumed to be coplanar. The factors considered are the missile natural frequencies and damping ratios, missile steady-state acceleration capabilities, target evasive maneuver characteristics, and angular scintillation noise characteristics.

  3. Minimum requirements for adequate nighttime conspicuity of highway signs

    DOT National Transportation Integrated Search

    1988-02-01

    A laboratory and field study were conducted to assess the minimum luminance levels of signs to ensure that they will be detected and identified at adequate distances under nighttime driving conditions. A total of 30 subjects participated in the field...

  4. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
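    The idea behind the record above can be made concrete: under a distance-decay correlation model, the minimum variance unbiased estimate of the mean is a generalized-least-squares weighted mean whose weights come from the inverse of the correlation matrix, so clustered (redundant) samples are down-weighted. The exponential correlation model, range parameter, and sample layout below are invented for illustration.

```python
import math

def solve(a, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for k in range(col, n + 1):
                m[r][k] -= f * m[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def gls_mean(values, coords, corr_range=50.0):
    """Minimum variance mean under exp(-distance/range) correlation."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    sigma = [[math.exp(-dist(p, q) / corr_range) for q in coords] for p in coords]
    w = solve(sigma, [1.0] * len(values))   # weights proportional to Sigma^{-1} 1
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)

# Two clustered plots plus one distant plot: the pair is down-weighted,
# so the estimate sits above the simple mean of 114.
vals = [100.0, 102.0, 140.0]
pts = [(0, 0), (1, 0), (500, 0)]
print(gls_mean(vals, pts))
```

    The clustered pair jointly receives roughly the weight of one independent sample, which is exactly why geographic scatter matters for inventory designs.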

  5. 7 CFR 1703.133 - Maximum and minimum amounts.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...

  6. 7 CFR 1703.133 - Maximum and minimum amounts.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...

  7. 7 CFR 1703.133 - Maximum and minimum amounts.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...

  8. 7 CFR 1703.133 - Maximum and minimum amounts.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 11 2012-01-01 2012-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...

  9. 7 CFR 1703.133 - Maximum and minimum amounts.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...

  10. Geographical diffusion of prazosin across Veterans Health Administration: Examination of regional variation in daily dosing and quality indicators among veterans with posttraumatic stress disorder.

    PubMed

    Abrams, Thad E; Lund, Brian C; Alexander, Bruce; Bernardy, Nancy C; Friedman, Matthew J

    2015-01-01

    Posttraumatic stress disorder (PTSD) is a high-priority treatment area for the Veterans Health Administration (VHA), and dissemination patterns of innovative, efficacious therapies can inform areas for potential improvement of diffusion efforts and quality prescribing. In this study, we replicated a prior examination of the period prevalence of prazosin use as a function of distance from Puget Sound, Washington, where prazosin was first tested as an effective treatment for PTSD and where prazosin use was previously shown to be much greater than in other parts of the United States. We tested the following three hypotheses related to prazosin geographic diffusion: (1) a positive geographical correlation exists between the distance from Puget Sound and the proportion of users treated according to a guideline recommended minimum therapeutic target dose (>/=6 mg/d), (2) an inverse geographic correlation exists between prazosin and benzodiazepine use, and (3) no geographical correlation exists between prazosin use and serotonin reuptake inhibitor/serotonin norepinephrine reuptake inhibitor (SSRI/SNRI) use. Among a national sample of veterans with PTSD, overall prazosin utilization increased from 5.5 to 14.8% from 2006 to 2012. During this time period, rates at the Puget Sound VHA location declined from 34.4 to 29.9%, whereas utilization rates at locations a minimum of 2,500 miles away increased from 3.0 to 12.8%. Rates of minimum target dosing fell from 42.6 to 34.6% at the Puget Sound location. In contrast, at distances of at least 2,500 miles from Puget Sound, minimum threshold dosing rates remained stable (range, 18.6 to 17.7%). No discernible association was demonstrated between SSRI/SNRI or benzodiazepine utilization and the geographic distance from Puget Sound. Minimal threshold dosing of prazosin correlated positively with increased diffusion of prazosin use, but there was still a distance diffusion gradient. 
Although prazosin adoption has improved, geographic differences persist in both prescribing rates and minimum target dosing. Importantly, these regional disparities appear to be limited to prazosin prescribing and are not meaningfully correlated with SSRI/SNRI and benzodiazepine use as indicators of PTSD prescribing quality.

  11. Survey of Noncommissioned Officer Academies for Criterion Development Purposes,

    DTIC Science & Technology

    1961-12-01

    Inspection, Fitting and Wearing of the Uniform, Ceremonies, Customs and Courtesies, Conduct of Physical Training Program, etc.)--minimum of 15 hours. 3...in a course and covers the general responsibilities of leadership, problems of leader-subordinate relationships, and some of the leader's specific...OPERATION AT INSTALLATIONS SURVEYED BY DA MILITARY PERSONNEL MANAGEMENT TEAMS Type of Training Program Installation Refresher Leadership Instructor

  12. Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods

    DTIC Science & Technology

    2016-11-01

    ABBREVIATIONS AICc Akaike’s Information Criterion with small sample size correction AZGFD Arizona Game and Fish Department BMGR Barry M. Goldwater...MNKA Minimum Number Known Alive N Abundance Ne Effective Population Size NGS Noninvasive Genetic Sampling NGS-CR Noninvasive Genetic...parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities and low capture biases. For NGS-CR, sample

  13. Low-flow analysis and selected flow statistics representative of 1930-2002 for streamflow-gaging stations in or near West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2006-01-01

    Five time periods between 1930 and 2002 are identified as having distinct patterns of annual minimum daily mean flows (minimum flows). Average minimum flows increased around 1970 at many streamflow-gaging stations in West Virginia. Before 1930, however, there might have been a period of minimum flows greater than any period identified between 1930 and 2002. The effects of climate variability are probably the principal causes of the differences among the five time periods. Comparisons of selected streamflow statistics are made between values computed for the five identified time periods and values computed for the 1930-2002 interval for 15 streamflow-gaging stations. The average difference between statistics computed for the five time periods and the 1930-2002 interval decreases with increasing magnitude of the low-flow statistic. The greatest individual-station absolute difference was 582.5 percent greater for the 7-day 10-year low flow computed for 1970-1979 compared to the value computed for 1930-2002. The hydrologically based low flows indicate approximately equal or smaller absolute differences than biologically based low flows. The average 1-day 3-year biologically based low flow (1B3) and 4-day 3-year biologically based low flow (4B3) are less than the average 1-day 10-year hydrologically based low flow (1Q10) and 7-day 10-year hydrologically based low flow (7Q10), respectively, and range between 28.5 percent less and 13.6 percent greater. Seasonally, the average difference between low-flow statistics computed for the five time periods and 1930-2002 is not consistent between magnitudes of low-flow statistics, and the greatest difference is for the summer (July 1-September 30) and fall (October 1-December 31) for the same time period as the greatest difference determined in the annual analysis.
The greatest average difference between 1B3 and 4B3 compared to 1Q10 and 7Q10, respectively, is in the spring (April 1-June 30), ranging between 11.6 and 102.3 percent greater. Statistics computed for an individual station's record period may not represent the statistics computed for the period 1930 to 2002 because (1) station records are available predominantly after about 1970, when minimum flows were greater than the average between 1930 and 2002, and (2) some short-term station records are mostly during dry periods, whereas others are mostly during wet periods. A criterion-based sample of the individual stations' record periods was taken to reduce the effects of statistics computed for the entire record periods not representing the statistics computed for 1930-2002. The criterion used to sample the entire record periods is based on a comparison between the regional minimum flows and the minimum flows at the stations. Criterion-based sampling of the available record periods was superior to record-extension techniques for this study because more stations were selected and the areal distribution of stations was more widespread. Principal component and correlation analyses of the minimum flows at 20 stations in or near West Virginia identify three regions of the State encompassing stations with similar patterns of minimum flows: the Lower Appalachian Plateaus, the Upper Appalachian Plateaus, and the Eastern Panhandle. All record periods of 10 years or greater between 1930 and 2002 where the average of the regional minimum flows is nearly equal to the average for 1930-2002 are determined to be representative of 1930-2002. Selected statistics are presented for the longest representative record period that matches the record period for 77 stations in West Virginia and 40 stations near West Virginia. These statistics can be used to develop equations for estimating flow at ungaged stream locations.
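    Statistics such as the 7Q10 referenced above follow a standard recipe: take the annual minima of the 7-day moving mean, then find the flow with a 1-in-10-year nonexceedance probability. The sketch below uses an empirical plotting-position quantile as a stand-in for the log-Pearson III fit agencies actually use; the synthetic record is an assumption for illustration.

```python
def n_day_annual_minima(daily_by_year, n=7):
    """Annual minimum of the n-day moving mean, one value per year of record."""
    minima = []
    for flows in daily_by_year:
        means = [sum(flows[i:i + n]) / n for i in range(len(flows) - n + 1)]
        minima.append(min(means))
    return minima

def low_flow(minima, return_period=10.0):
    """Empirical nQm via linear interpolation of Weibull plotting positions."""
    s = sorted(minima)                      # ascending annual minima
    p = 1.0 / return_period                 # target nonexceedance probability
    ranks = [(i + 1) / (len(s) + 1) for i in range(len(s))]
    if p <= ranks[0]:
        return s[0]
    for i in range(1, len(s)):
        if p <= ranks[i]:
            f = (p - ranks[i - 1]) / (ranks[i] - ranks[i - 1])
            return s[i - 1] + f * (s[i] - s[i - 1])
    return s[-1]

# Synthetic 20-year record with constant flow each year (minima 1..20 cfs).
minima = n_day_annual_minima([[float(y)] * 30 for y in range(1, 21)])
print(low_flow(minima))  # ~2.1 for these synthetic annual minima
```

    The report's point about record periods applies directly here: if the record sampled only post-1970 wet years, the annual-minima series, and hence the quantile, would be biased high.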

  14. Chemistry of Aviation Fuels

    NASA Technical Reports Server (NTRS)

    Knepper, Bryan; Hwang, Soon Muk; DeWitt, Kenneth J.

    2004-01-01

    Minimum ignition energies of various methanol/air mixtures were measured in a temperature-controlled constant-volume combustion vessel using a spark ignition method with a spark gap distance of 2 mm. The minimum ignition energies decrease rapidly as the mixture composition (equivalence ratio, Phi) changes from lean to stoichiometric, reach a minimum value, and then increase rather slowly with Phi. The lowest minimum ignition energy (MIE), 0.137 mJ, occurred at Phi = 1.16, a slightly rich mixture. The variation of minimum ignition energy with respect to the mixture composition is explained in terms of changes in reaction chemistry.

  15. Rail vs truck transport of biomass.

    PubMed

    Mahmudi, Hamed; Flynn, Peter C

    2006-01-01

    This study analyzes the economics of transshipping biomass from truck to train in a North American setting. Transshipment will only be economic when the cost per unit distance of a second transportation mode is less than the original mode. There is an optimum number of transshipment terminals, which is related to biomass yield. Transshipment incurs incremental fixed costs, and hence there is a minimum shipping distance for rail transport above which lower costs/km offset the incremental fixed costs. For transport by dedicated unit train with an optimum number of terminals, the minimum economic rail shipping distance for straw is 170 km, and for boreal forest harvest residue wood chips is 145 km. The minimum economic shipping distance for straw exceeds the biomass draw distance for economically sized centrally located power plants, and hence the prospects for rail transport are limited to cases in which traffic congestion from truck transport would otherwise preclude project development. Ideally, wood chip transport costs would be lowered by rail transshipment for an economically sized centrally located power plant, but in a specific case in Alberta, Canada, the layout of existing rail lines precludes a centrally located plant supplied by rail, whereas a more versatile road system enables it by truck. Hence for wood chips as well as straw the economic incentive for rail transport to centrally located processing plants is limited. Rail transshipment may still be preferred in cases in which road congestion precludes truck delivery, for example as a result of community objections.
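    The break-even logic above reduces to simple arithmetic: rail pays an extra fixed transshipment cost per tonne but a lower cost per tonne-km, so it wins only beyond D_min = (extra fixed cost) / (truck rate - rail rate). The cost figures below are hypothetical placeholders, not the paper's Alberta values.

```python
def min_rail_distance(fixed_extra, truck_per_km, rail_per_km):
    """Distance beyond which rail's lower per-km rate offsets its fixed cost.

    fixed_extra  - extra transshipment/terminal cost per tonne
    truck_per_km - truck cost per tonne-km
    rail_per_km  - rail cost per tonne-km
    """
    if truck_per_km <= rail_per_km:
        return float("inf")     # rail never breaks even
    return fixed_extra / (truck_per_km - rail_per_km)

# e.g. $6/t extra terminal handling, $0.11 vs $0.07 per tonne-km:
print(min_rail_distance(6.0, 0.11, 0.07))  # about 150 km
```

    Shipments shorter than this distance should stay on trucks, which is why the paper finds rail uneconomic for straw hauls within the draw radius of a centrally located plant.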

  16. Solar wind velocity and temperature in the outer heliosphere

    NASA Technical Reports Server (NTRS)

    Gazis, P. R.; Barnes, A.; Mihalov, J. D.; Lazarus, A. J.

    1994-01-01

    At the end of 1992, the Pioneer 10, Pioneer 11, and Voyager 2 spacecraft were at heliocentric distances of 56.0, 37.3, and 39.0 AU and heliographic latitudes of 3.3 deg N, 17.4 deg N, and 8.6 deg S, respectively. Pioneer 11 and Voyager 2 are at similar celestial longitudes, while Pioneer 10 is on the opposite side of the Sun. All three spacecraft have working plasma analyzers, so intercomparison of data from these spacecraft provides important information about the global character of the solar wind in the outer heliosphere. The averaged solar wind speed continued to exhibit its well-known variation with solar cycle: Even at heliocentric distances greater than 50 AU, the average speed is highest during the declining phase of the solar cycle and lowest near solar minimum. There was a strong latitudinal gradient in solar wind speed between 3 deg and 17 deg N during the last solar minimum, but this gradient has since disappeared. The solar wind temperature declined with increasing heliocentric distance out to a heliocentric distance of at least 20 AU; this decline appeared to continue at larger heliocentric distances, but temperatures in the outer heliosphere were surprisingly high. While Pioneer 10 and Voyager 2 observed comparable solar wind temperatures, the temperature at Pioneer 11 was significantly higher, which suggests the existence of a large-scale variation of temperature with heliographic longitude. There was also some suggestion that solar wind temperatures were higher near solar minimum.

  17. BMDS: A Collection of R Functions for Bayesian Multidimensional Scaling

    ERIC Educational Resources Information Center

    Okada, Kensuke; Shigemasu, Kazuo

    2009-01-01

    Bayesian multidimensional scaling (MDS) has attracted a great deal of attention because: (1) it provides a better fit than do classical MDS and ALSCAL; (2) it provides estimation errors of the distances; and (3) the Bayesian dimension selection criterion, MDSIC, provides a direct indication of optimal dimensionality. However, Bayesian MDS is not…

  18. 33 CFR 67.05-20 - Minimum lighting requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... for Lights § 67.05-20 Minimum lighting requirements. The obstruction lighting requirements prescribed... application for authorization to establish more lights, or lights of greater intensity than required to be visible at the distances prescribed: Provided, That the prescribed characteristics of color and flash...

  19. 33 CFR 67.05-20 - Minimum lighting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... for Lights § 67.05-20 Minimum lighting requirements. The obstruction lighting requirements prescribed... application for authorization to establish more lights, or lights of greater intensity than required to be visible at the distances prescribed: Provided, That the prescribed characteristics of color and flash...

  20. 33 CFR 67.05-20 - Minimum lighting requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... for Lights § 67.05-20 Minimum lighting requirements. The obstruction lighting requirements prescribed... application for authorization to establish more lights, or lights of greater intensity than required to be visible at the distances prescribed: Provided, That the prescribed characteristics of color and flash...

  1. 33 CFR 67.05-20 - Minimum lighting requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... for Lights § 67.05-20 Minimum lighting requirements. The obstruction lighting requirements prescribed... application for authorization to establish more lights, or lights of greater intensity than required to be visible at the distances prescribed: Provided, That the prescribed characteristics of color and flash...

  2. 33 CFR 67.05-20 - Minimum lighting requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... for Lights § 67.05-20 Minimum lighting requirements. The obstruction lighting requirements prescribed... application for authorization to establish more lights, or lights of greater intensity than required to be visible at the distances prescribed: Provided, That the prescribed characteristics of color and flash...

  3. 7 CFR 1703.143 - Maximum and minimum amounts.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...

  4. 7 CFR 1703.143 - Maximum and minimum amounts.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 11 2012-01-01 2012-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...

  5. 7 CFR 1703.143 - Maximum and minimum amounts.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...

  6. 7 CFR 1703.143 - Maximum and minimum amounts.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...

  7. 7 CFR 1703.143 - Maximum and minimum amounts.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...

  8. 40 CFR 257.25 - Assessment monitoring program.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Minimum distance between upgradient edge of the unit and downgradient monitoring well screen (minimum... that is likely to be without appreciable risk of deleterious effects during a lifetime. For purposes of this subpart, systemic toxicants include toxic chemicals that cause effects other than cancer or...

  9. 40 CFR 257.25 - Assessment monitoring program.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Minimum distance between upgradient edge of the unit and downgradient monitoring well screen (minimum... that is likely to be without appreciable risk of deleterious effects during a lifetime. For purposes of this subpart, systemic toxicants include toxic chemicals that cause effects other than cancer or...

  10. 40 CFR 257.25 - Assessment monitoring program.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Minimum distance between upgradient edge of the unit and downgradient monitoring well screen (minimum... that is likely to be without appreciable risk of deleterious effects during a lifetime. For purposes of this subpart, systemic toxicants include toxic chemicals that cause effects other than cancer or...

  11. Scaling effect on the fracture toughness of bone materials using MMTS criterion.

    PubMed

    Akbardoost, Javad; Amirafshari, Reza; Mohsenzade, Omid; Berto, Filippo

    2018-05-21

    The aim of this study is to present a stress-based approach for investigating the effect of specimen size on the fracture toughness of bone materials. The proposed approach is a modified form of the classical fracture criterion called maximum tangential stress (MTS). The mechanical properties of bone differ between the longitudinal and transverse directions, and hence the tangential stress component in the proposed approach should be determined in an orthotropic medium. Since only the singular terms of the series expansions were obtained in previous studies, the tangential stress is obtained from finite element analysis. In this study, the critical distance is also assumed to be size dependent, and a semi-empirical formulation is used to describe the size dependency of the critical distance. By comparing the results predicted by the proposed approach with those reported in previous studies, it is shown that the proposed approach can predict the fracture resistance of cracked bone while taking into account the effect of specimen size. Copyright © 2018 Elsevier Ltd. All rights reserved.
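    As an illustration of the underlying idea, the classical isotropic mode-I MTS criterion (not the orthotropic, size-dependent MMTS form the paper develops) can be sketched as: fracture occurs when the tangential stress evaluated at a critical distance r_c from the crack tip reaches a critical stress. All numerical values below are hypothetical.

```python
import math

def tangential_stress_mode_I(K_I, r, theta):
    """Singular-term isotropic mode-I tangential stress near a crack tip:
    sigma_tt = K_I / sqrt(2*pi*r) * cos(theta/2)**3."""
    return K_I / math.sqrt(2 * math.pi * r) * math.cos(theta / 2) ** 3

def predicted_toughness(sigma_c, r_c):
    """MTS prediction: fracture when sigma_tt at the critical distance r_c
    (theta = 0 for pure mode I) reaches the critical stress sigma_c,
    giving K_c = sigma_c * sqrt(2*pi*r_c)."""
    return sigma_c * math.sqrt(2 * math.pi * r_c)

# Hypothetical numbers: a size-dependent critical distance makes the
# apparent toughness of larger specimens higher.
sigma_c = 100.0                                   # MPa, assumed critical stress
toughness = [predicted_toughness(sigma_c, r_c)    # MPa*sqrt(m)
             for r_c in (0.1e-3, 0.4e-3)]         # m, assumed critical distances
```

    Quadrupling the critical distance doubles the predicted toughness, which is the qualitative size effect the paper quantifies.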

  12. Analysis of Radiation Impact on White Mice through Radiation Dose Mapping in Medical Physics Laboratory

    NASA Astrophysics Data System (ADS)

    Sutikno, Madnasri; Susilo; Arya Wijayanti, Riza

    2016-08-01

    A study of the impact of X-ray radiation on white mice, based on radiation dose mapping in a Medical Physics Laboratory, was carried out. The purpose of this research is to determine the minimum distance of a radiologist from the X-ray instrument through treatment of the white mice. The radiation exposure doses were measured at several points at distances from the radiation source between 30 cm and 80 cm, with an interval of 30 cm. The impact of radiation exposure on the white mice and the effects of radiation measured in different directions were investigated. It was found that the minimum distance of a radiation worker from the radiation source is 180 cm, and that X-ray exposure decreased the leukocyte number and haemoglobin and increased the thrombocyte number in the blood of the white mice.
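    Assuming simple point-source inverse-square falloff (the abstract does not state the model used), a minimum working distance can be estimated from a single reference measurement; the numbers below are illustrative, not the study's data:

```python
import math

def min_safe_distance(dose_ref, r_ref, dose_limit):
    """Assuming inverse-square falloff from a point source, return the
    distance at which the dose rate drops to dose_limit, given a
    measured dose_ref at distance r_ref."""
    return r_ref * math.sqrt(dose_ref / dose_limit)

# Hypothetical reading: 9 dose units at 60 cm with a limit of 1 unit
# implies a minimum distance of 180 cm.
r = min_safe_distance(dose_ref=9.0, r_ref=60.0, dose_limit=1.0)
```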

  13. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness on one or many machines and minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial instance in the metric introduced, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.

  14. Flat-fielding of Solar Hα Observations Based on the Maximum Correntropy Criterion

    NASA Astrophysics Data System (ADS)

    Xu, Gao-Gui; Zheng, Sheng; Lin, Gang-Hua; Wang, Xiao-Fan

    2016-08-01

    The flat-field CCD calibration method of Kuhn et al. (KLL) is an efficient method for flat-fielding. However, since it depends on the minimum of the sum of squares error (SSE), its solution is sensitive to noise, especially non-Gaussian noise. In this paper, a new algorithm is proposed to determine the flat field. The idea is to change the criterion of the gain estimate from SSE to maximum correntropy. The result of a test on simulated data demonstrates that our method has higher accuracy and faster convergence than KLL's and Chae's. The method effectively suppresses noise, especially in the case of typical non-Gaussian noise, and its computing time is the shortest.
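    The switch from an SSE criterion to maximum correntropy can be illustrated on the simplest gain-estimation problem: a constant estimated from samples contaminated by impulsive outliers. The half-quadratic fixed point below is the standard way to maximize a Gaussian-kernel correntropy objective, not KLL's or Chae's actual flat-fielding algorithm:

```python
import math
import random

def correntropy_location(samples, sigma=1.0, iters=50):
    """Maximum-correntropy estimate of a constant from noisy samples:
    maximize the mean Gaussian kernel of the residuals via the standard
    half-quadratic fixed point (a weighted mean that down-weights outliers)."""
    x = sum(samples) / len(samples)   # least-squares (SSE) starting point
    for _ in range(iters):
        w = [math.exp(-((s - x) ** 2) / (2 * sigma ** 2)) for s in samples]
        x = sum(wi * si for wi, si in zip(w, samples)) / sum(w)
    return x

random.seed(0)
data = [5.0 + random.gauss(0, 0.1) for _ in range(50)] + [50.0] * 5  # impulsive outliers
sse_est = sum(data) / len(data)      # SSE solution, pulled toward the outliers
mcc_est = correntropy_location(data) # stays near the true value of 5
```

    The Gaussian kernel assigns near-zero weight to the impulsive samples, which is why correntropy-based estimates are robust to exactly the non-Gaussian noise the abstract describes.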

  15. Reliability and validity of cervical position measurements in individuals with and without chronic neck pain.

    PubMed

    Dunleavy, Kim; Neil, Joseph; Tallon, Allison; Adamo, Diane E

    2015-09-01

    The cervical range of motion device (CROM) has been shown to provide reliable forward head position (FHP) measurement when the upper cervical angle (UCA) is controlled. However, measurement without UCA standardization is reflective of habitual patterns. Criterion validity has not been reported. The purposes of this study were to establish: (1) criterion validity of CROM FHP and UCA compared to Optotrak data, (2) relative reliability and minimal detectable change (MDC95) in patients with and without cervical pain, and (3) to compare UCA and FHP in patients with and without pain in habitual postures. (1) Within-subjects single session concurrent criterion validity design. Simultaneous CROM and OP measurement was conducted in habitual sitting posture in 16 healthy young adults. (2) Reliability and MDC95 of UCA and FHP were calculated from three trials. (3) Values for adults over 35 years with cervical pain and age-matched healthy controls were compared. (1) Forward head position distances were moderately correlated and UCA angles were highly correlated. The mean (standard deviation) differences can be expected to vary between 1·48 cm (1·74) for FHP and -1·7 (2·46)° for UCA. (2) Reliability for CROM FHP measurements were good to excellent (no pain) and moderate (pain). Cervical range of motion FHP MDC95 was moderately low (no pain), and moderate (pain). Reliability for CROM UCA measurements was excellent and MDC95 low for both groups. There was no difference in FHP distances between the pain and no pain groups, UCA was significantly more extended in the pain group (P<0·05). Cervical range of motion FHP measurements were only moderately correlated with Optotrak data, and limits of agreement (LOA) and MDC95 were relatively large. There was also no difference in CROM FHP distance between older symptomatic and asymptomatic individuals. Cervical range of motion FHP measurement is therefore not recommended as a clinical outcome measure. 
Cervical range of motion UCA measurements showed good criterion validity, excellent test-retest reliability, and achievable MDC95 in asymptomatic and symptomatic participants. Differences of more than 6° are required to exceed error. Cervical range of motion UCA shows promise as a useful reliable and valid measurement, particularly as patients with cervical pain exhibited significantly more extended angles.

  16. Reliability and validity of cervical position measurements in individuals with and without chronic neck pain

    PubMed Central

    Neil, Joseph; Tallon, Allison; Adamo, Diane E.

    2015-01-01

    Objectives The cervical range of motion device (CROM) has been shown to provide reliable forward head position (FHP) measurement when the upper cervical angle (UCA) is controlled. However, measurement without UCA standardization is reflective of habitual patterns. Criterion validity has not been reported. The purposes of this study were to establish: (1) criterion validity of CROM FHP and UCA compared to Optotrak data, (2) relative reliability and minimal detectable change (MDC95) in patients with and without cervical pain, and (3) to compare UCA and FHP in patients with and without pain in habitual postures. Methods (1) Within-subjects single session concurrent criterion validity design. Simultaneous CROM and OP measurement was conducted in habitual sitting posture in 16 healthy young adults. (2) Reliability and MDC95 of UCA and FHP were calculated from three trials. (3) Values for adults over 35 years with cervical pain and age-matched healthy controls were compared. Results (1) Forward head position distances were moderately correlated and UCA angles were highly correlated. The mean (standard deviation) differences can be expected to vary between 1·48 cm (1·74) for FHP and −1·7 (2·46)° for UCA. (2) Reliability for CROM FHP measurements were good to excellent (no pain) and moderate (pain). Cervical range of motion FHP MDC95 was moderately low (no pain), and moderate (pain). Reliability for CROM UCA measurements was excellent and MDC95 low for both groups. There was no difference in FHP distances between the pain and no pain groups, UCA was significantly more extended in the pain group (P<0·05). Discussion Cervical range of motion FHP measurements were only moderately correlated with Optotrak data, and limits of agreement (LOA) and MDC95 were relatively large. There was also no difference in CROM FHP distance between older symptomatic and asymptomatic individuals. 
Cervical range of motion FHP measurement is therefore not recommended as a clinical outcome measure. Cervical range of motion UCA measurements showed good criterion validity, excellent test–retest reliability, and achievable MDC95 in asymptomatic and symptomatic participants. Differences of more than 6° are required to exceed error. Cervical range of motion UCA shows promise as a useful reliable and valid measurement, particularly as patients with cervical pain exhibited significantly more extended angles. PMID:26917936

  17. A Comparison of Propagation Between Apertured Bessel and Gaussian beams

    NASA Astrophysics Data System (ADS)

    Lin, Mei; Yu, Yanzhong

    2009-04-01

    True Bessel beams are a family of diffraction-free beams, and their most interesting and attractive characteristic is non-diffracting propagation. In optics, comparisons of the maximum propagation distance between Bessel and Gaussian beams were made by Durnin and Sprangle, respectively. However, the results they obtained conflict because their criteria differ. Because Bessel beams have many potential applications in the millimeter-wave bands, it is necessary and significant to carry out the comparison at these bands. A new comparison criterion at millimeter wavelengths is proposed in this paper. Under this criterion, numerical results are presented and a new conclusion is drawn.
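    The two propagation-distance measures being compared can be sketched with the textbook formulas: the Rayleigh range for a Gaussian beam and the geometric maximum distance Z_max = R/tan(theta) for a Bessel beam passing through a finite aperture. All parameter values below are assumed for illustration and are not the paper's criterion:

```python
import math

wavelength = 3e-3          # 3 mm, an assumed millimetre-wave example
k = 2 * math.pi / wavelength

# Gaussian beam: Rayleigh range z_R = pi * w0^2 / lambda
w0 = 0.05                  # 5 cm waist, assumed
z_rayleigh = math.pi * w0 ** 2 / wavelength

# Apertured Bessel beam (aperture radius R): geometric maximum
# propagation distance Z_max = R / tan(theta), with sin(theta) = k_r / k
R = 0.05                   # aperture radius in m, assumed
k_r = 200.0                # radial wavenumber in 1/m, assumed
theta = math.asin(k_r / k)
z_max = R / math.tan(theta)
```

    Which beam propagates farther depends entirely on the assumed waist, aperture, and radial wavenumber, which is why the choice of comparison criterion matters.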

  18. The performance of trellis coded multilevel DPSK on a fading mobile satellite channel

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1987-01-01

    The performance of trellis coded multilevel differential phase-shift-keying (MDPSK) over Rician and Rayleigh fading channels is discussed. For operation at L-Band, this signalling technique leads to a more robust system than the coherent system with dual pilot tone calibration previously proposed for UHF. The results are obtained using a combination of analysis and simulation. The analysis shows that the design criterion for trellis codes to be operated on fading channels with interleaving/deinterleaving is no longer free Euclidean distance. The correct design criterion for optimizing bit error probability of trellis coded MDPSK over fading channels will be presented along with examples illustrating its application.
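    The distinction between the AWGN design metric (squared Euclidean distance) and the fading-channel metric (the product distance over the differing symbols) can be sketched numerically. The sequences below are contrived, real-valued examples chosen so the Euclidean distances tie while the product distances differ:

```python
import math

def euclidean_sq_distance(x, y):
    """Sum of squared symbol distances: the AWGN design metric."""
    return sum(abs(a - b) ** 2 for a, b in zip(x, y))

def product_distance(x, y):
    """Product of squared distances over the symbols where the sequences
    differ: the design metric for fading channels with ideal interleaving."""
    p = 1.0
    for a, b in zip(x, y):
        if a != b:
            p *= abs(a - b) ** 2
    return p

# Two hypothetical error events against an all-zero reference: equal
# Euclidean distance (10), but different product distances (16 vs 25).
ref  = [0.0, 0.0]
ev_a = [math.sqrt(8.0), math.sqrt(2.0)]   # unequal branch distances
ev_b = [math.sqrt(5.0), math.sqrt(5.0)]   # equal branch distances
```

    With equal total (Euclidean) distance, spreading it evenly over the branches maximizes the product distance, which is the kind of trade-off free Euclidean distance cannot capture on a fading channel.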

  19. Boson peak and Ioffe-Regel criterion in amorphous siliconlike materials: The effect of bond directionality.

    PubMed

    Beltukov, Y M; Fusco, C; Parshin, D A; Tanguy, A

    2016-02-01

    The vibrational properties of model amorphous materials are studied by combining complete analysis of the vibration modes, dynamical structure factor, and energy diffusivity with exact diagonalization of the dynamical matrix and the kernel polynomial method, which allows a study of very large system sizes. Different materials are studied that differ only by the bending rigidity of the interactions in a Stillinger-Weber modelization used to describe amorphous silicon. The local bending rigidity can thus be used as a control parameter to tune the sound velocity together with local bond directionality. It is shown that for all the systems studied, the upper limit of the Boson peak corresponds to the Ioffe-Regel criterion for transverse waves, as well as to a minimum of the diffusivity. The Boson peak is followed by an increase in diffusivity supported by longitudinal phonons. The Ioffe-Regel criterion for transverse waves corresponds to a common characteristic mean free path of 5-7 Å (slightly larger for longitudinal phonons), while the fine structure of the vibrational density of states is shown to be sensitive to the local bending rigidity.

  20. A weighted information criterion for multiple minor components and its adaptive extraction algorithms.

    PubMed

    Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an

    2017-05-01

    Minor components (MCs) play an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable. Based on the concepts of weighted subspace and optimum theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum, attained if and only if the state matrix is composed of the desired MCs of an autocorrelation matrix of an input signal. By using the gradient ascent method and the recursive least squares (RLS) method, two algorithms are developed for multiple MC extraction. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have an advantage in dealing with high-dimensional matrices. Since the weighted matrix does not require an accurate value, it facilitates the system design of the proposed algorithms for practical applications. The speed and computation advantages of the proposed algorithms are verified through simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.
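    As a batch reference for what such adaptive algorithms converge to, the desired multiple minor components are the eigenvectors of the input autocorrelation matrix with the smallest eigenvalues. This eigendecomposition sketch is only the reference solution, not the paper's recursive neural algorithms:

```python
import numpy as np

def minor_components(x, k):
    """Reference solution for multiple minor-component extraction: the k
    eigenvectors of the sample autocorrelation matrix with the smallest
    eigenvalues (np.linalg.eigh returns eigenvalues in ascending order)."""
    r = x.T @ x / x.shape[0]     # sample autocorrelation matrix
    w, v = np.linalg.eigh(r)
    return v[:, :k]

rng = np.random.default_rng(0)
# Signal with most energy along the first two axes; the minor components
# should therefore (approximately) span the last two axes.
x = rng.normal(size=(2000, 4)) * np.array([10.0, 5.0, 1.0, 0.5])
mc = minor_components(x, k=2)
```

    Adaptive schemes such as the paper's gradient and RLS updates track these same directions sample by sample instead of forming and diagonalizing the full matrix.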

  1. Study of Noise-Certification Standards for Aircraft Engines. Volume 2. Procedures for Measuring Far Field Sound Pressure Levels around an Outdoor Jet-Engine Test Stand.

    DTIC Science & Technology

    1983-06-01

    ...separate exhaust nozzles for discharge of fan and turbine exhaust flows (e.g., JT15D, TFE731, ALF-502, CF34, JT3D, CFM56, RB.211, CF6, JT9D, and PW2037)...minimum radial distance from the effective source of sound at 40 Hz should then be approximately 69 m. At 60 Hz, the minimum radial distance should be
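    If the 69 m figure at 40 Hz corresponds to a fixed number of wavelengths of the lowest frequency of interest (an assumption; the excerpt does not state the rule it uses), the minimum radial distance scales inversely with frequency:

```python
# Assumed rule: the minimum measurement radius is a fixed number of
# acoustic wavelengths at the lowest frequency of interest.
c = 343.0                            # speed of sound in air, m/s
r_40 = 69.0                          # quoted minimum radius at 40 Hz
n_wavelengths = r_40 / (c / 40.0)    # ~8 wavelengths
r_60 = n_wavelengths * (c / 60.0)    # ~46 m at 60 Hz under this assumption
```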

  2. Applying six classifiers to airborne hyperspectral imagery for detecting giant reed

    USDA-ARS?s Scientific Manuscript database

    This study evaluated and compared six different image classifiers, including minimum distance (MD), Mahalanobis distance (MAHD), maximum likelihood (ML), spectral angle mapper (SAM), mixture tuned matched filtering (MTMF) and support vector machine (SVM), for detecting and mapping giant reed (Arundo...
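    The simplest of the six classifiers, minimum distance (MD), assigns each pixel to the class whose mean spectrum is nearest in Euclidean distance. A toy two-band sketch, with hypothetical class means rather than the study's data:

```python
import numpy as np

def md_classify(pixels, class_means):
    """Minimum-distance (MD) classifier: assign each pixel spectrum to the
    class whose mean spectrum is nearest in Euclidean distance."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=-1)
    return np.argmin(d, axis=1)

# Hypothetical 2-band mean spectra for two classes.
means = np.array([[0.2, 0.8],    # class 0, e.g. vegetation-like
                  [0.6, 0.3]])   # class 1, e.g. soil-like
pixels = np.array([[0.25, 0.75], [0.55, 0.35], [0.3, 0.7]])
labels = md_classify(pixels, means)   # -> [0, 1, 0]
```

    The Mahalanobis-distance variant evaluated in the study replaces the Euclidean norm with a covariance-weighted distance, which accounts for band correlation within each class.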

  3. 7 CFR 1703.124 - Maximum and minimum grant amounts.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...

  4. 7 CFR 1703.124 - Maximum and minimum grant amounts.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...

  5. 7 CFR 1703.124 - Maximum and minimum grant amounts.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 11 2012-01-01 2012-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...

  6. 7 CFR 1703.124 - Maximum and minimum grant amounts.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...

  7. 7 CFR 1703.124 - Maximum and minimum grant amounts.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...

  8. Nonlinear dimension reduction and clustering by Minimum Curvilinearity unfold neuropathic pain and tissue embryological classes.

    PubMed

    Cannistraci, Carlo Vittorio; Ravasi, Timothy; Montevecchi, Franco Maria; Ideker, Trey; Alessio, Massimo

    2010-09-15

    Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid two-phase (H2P) procedures, specifically dimension reduction (DR) coupled with clustering, provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. 'Minimum Curvilinearity' (MC) is a principle that, for small datasets, suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. https://sites.google.com/site/carlovittoriocannistraci/home.
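    The MC principle, replacing straight-line distances with distances accumulated over the minimum spanning tree, can be sketched in a few lines (Prim's MST plus path sums over the tree). This is a generic illustration, not the MCE/MCAP implementation:

```python
import math
from collections import defaultdict

def mst_distances(points):
    """Minimum Curvilinearity sketch: approximate curvilinear sample
    distances by summing Euclidean edge lengths along the minimum
    spanning tree (built here with Prim's algorithm)."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree = {0}
    adj = defaultdict(list)
    while len(in_tree) < n:
        # cheapest edge leaving the partial tree
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        w = dist(u, v)
        adj[u].append((v, w))
        adj[v].append((u, w))
        in_tree.add(v)
    # accumulate path lengths over the tree from every source node
    out = [[0.0] * n for _ in range(n)]
    for s in range(n):
        stack = [(s, None, 0.0)]
        while stack:
            node, prev, acc = stack.pop()
            out[s][node] = acc
            for nb, w in adj[node]:
                if nb != prev:
                    stack.append((nb, node, acc + w))
    return out

# Points along a bent chain: the MST distance follows the chain,
# while the straight-line distance cuts across.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0)]
d = mst_distances(pts)   # d[0][3] = 3.0, versus math.dist(...) = sqrt(5)
```

    Because the MST is unique up to ties and needs no neighbourhood radius or kernel width, the construction introduces no tuning parameter, which is the point the abstract emphasizes for small datasets.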

  9. Guidance strategies and analysis for low thrust navigation

    NASA Technical Reports Server (NTRS)

    Jacobson, R. A.

    1973-01-01

    A low-thrust guidance algorithm suitable for operational use was formulated. A constrained linear feedback control law was obtained using a minimum terminal miss criterion and restricting control corrections to constant changes for specified time periods. Both fixed- and variable-time-of-arrival guidance were considered. The performance of the guidance law was evaluated by applying it to the approach phase of the 1980 rendezvous mission with the comet Encke.

  10. Development and psychometric properties of the Ethics Environment Questionnaire.

    PubMed

    McDaniel, C

    1997-09-01

    The author reports on the development and the psychometric properties of the Ethics Environment Questionnaire (EEQ), an instrument by which to measure the opinions of health-care providers about ethics in their clinical practice organizations. The EEQ was developed to increase the number of valid and reliable measures pertaining to ethics in health-care delivery. The EEQ is a 20-item self-administered questionnaire using a Likert-type 5-point format, offering ease of administration. It is applicable to a cross-section of health-care practitioners and health-care facilities. The mean administration time is 10 minutes. The EEQ represents testing on 450 respondents in acute care settings among a cross-section of acute care facilities. Internal consistency reliability using Cronbach's alpha coefficient is 0.93, and the test-retest reliability is 0.88. Construct, content, and criterion validity are established. The scale is unidimensional, with factor loadings exceeding the minimum preset criterion. Mean score is 3.1 out of 5.0, with scores of 3.5 and above interpreted as reflective of a positive ethics environment. The EEQ provides a measure of ethics in health-care organizations among multi-practitioners in clinical practice on a valid, reliable, cost effective, and easily administered instrument that requires minimum investment of personnel time.

  11. Determination Of Slitting Criterion Parameter During The Multi Slit Rolling Process

    NASA Astrophysics Data System (ADS)

    Stefanik, Andrzej; Mróz, Sebastian; Szota, Piotr; Dyja, Henryk

    2007-05-01

    The rolling of rods with slitting of the strip calls for the use of special mathematical models that allow for the separation of the metal. A theoretical analysis of the effect of the gap of the slitting rollers on the process of strip slitting during the rolling of 20 mm and 16 mm-diameter ribbed rods rolled according to the two-strand technology was carried out within this study. For the numerical modeling of strip slitting the Forge3® computer program was applied. Strip slitting in the simulation is implemented by an algorithm that removes elements in which the critical value of the normalized Cockcroft-Latham criterion has been exceeded. To determine the value of the criterion, the inverse method was applied. The distance between the point where the crack begins and the point of contact between the metal and the slitting rollers was the parameter chosen for analysis. The power and rolling torque during slit rolling are presented, as are the distribution and change of the stress in the strand during slitting.
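    The normalized Cockcroft-Latham criterion that drives element removal accumulates the ratio of the largest principal stress to the effective stress over the effective strain path. A sketch with hypothetical stress-strain histories and an assumed critical value:

```python
def cockcroft_latham(max_principal_stress, eff_stress, eff_strain):
    """Normalized Cockcroft-Latham damage: C = integral of
    (sigma_I / sigma_eff) d(eps_eff), evaluated with the trapezoidal rule.
    Only tensile (positive) principal stress contributes."""
    c = 0.0
    for i in range(1, len(eff_strain)):
        r0 = max(max_principal_stress[i - 1], 0.0) / eff_stress[i - 1]
        r1 = max(max_principal_stress[i], 0.0) / eff_stress[i]
        c += 0.5 * (r0 + r1) * (eff_strain[i] - eff_strain[i - 1])
    return c

# Hypothetical loading history at a point between the slitting rollers.
strain  = [0.0, 0.2, 0.4, 0.6]
sigma_I = [50.0, 120.0, 180.0, 220.0]   # MPa, assumed largest principal stress
sigma_e = [100.0, 150.0, 200.0, 240.0]  # MPa, assumed effective stress
C_CRIT  = 0.5                            # assumed critical value
C = cockcroft_latham(sigma_I, sigma_e, strain)
damaged = C > C_CRIT    # element would be removed once C exceeds C_CRIT
```

    In the simulation described above, the critical value itself is calibrated by the inverse method against the observed crack-initiation distance.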

  12. SU-E-I-65: The Joint Commission's Requirements for Annual Diagnostic Physics Testing of Nuclear Medicine Equipment, and a Clinically Relevant Methodology for Testing Low-Contrast Resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, W. Geoffrey; Gray, David Clinton

    Purpose: To introduce the Joint Commission's requirements for annual diagnostic physics testing of all nuclear medicine equipment, effective 7/1/2014, and to highlight an acceptable methodology for testing low-contrast resolution of the nuclear medicine imaging system. Methods: The Joint Commission's required diagnostic physics evaluations are to be conducted for all of the image types produced clinically by each scanner. Other accrediting bodies, such as the ACR and the IAC, have similar imaging metrics, but do not emphasize testing low-contrast resolution as it relates clinically. The proposed method for testing low-contrast resolution introduces quantitative metrics that are clinically relevant. The acquisition protocol and calculation of contrast levels will utilize a modified version of the protocol defined in AAPM Report #52. Results: Using the Rose criterion for lesion detection with SNR_pixel = 4.335 and CNR_lesion = 4, the minimum contrast levels for 25.4 mm and 31.8 mm cold spheres were calculated to be 0.317 and 0.283, respectively. These contrast levels are the minimum threshold that must be attained to guard against false positive lesion detection. Conclusion: Low-contrast resolution, or detectability, can be properly tested in a manner that is clinically relevant by measuring the contrast level of cold spheres within a Jaszczak phantom using pixel values within ROIs placed in the background and cold sphere regions. The measured contrast levels are then compared to a minimum threshold calculated using the Rose criterion and CNR_lesion = 4. The measured contrast levels must either meet or exceed this minimum threshold to prove acceptable lesion detectability. This research and development activity was performed by the authors while employed at West Physics Consulting, LLC. It is presented with the consent of West Physics, which has authorized the dissemination of the information and/or techniques described in the work.
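    One plausible reading of how the quoted SNR and CNR figures combine into a minimum contrast (the abstract does not give the formula, and the ROI pixel count used here is hypothetical) is a Rose-type threshold in which averaging over the lesion ROI reduces pixel noise by the square root of the pixel count:

```python
import math

def min_contrast(cnr_target, snr_pixel, n_pixels):
    """Rose-type minimum detectable contrast under the stated assumption:
    averaging over n_pixels reduces pixel noise by sqrt(n_pixels), so
    C_min = CNR_target / (SNR_pixel * sqrt(n_pixels))."""
    return cnr_target / (snr_pixel * math.sqrt(n_pixels))

# The CNR and SNR values are from the abstract; the 9-pixel ROI is assumed.
c = min_contrast(cnr_target=4.0, snr_pixel=4.335, n_pixels=9)
```

    Under this reading, larger spheres (more ROI pixels) require a lower minimum contrast, consistent with the 0.317 vs 0.283 ordering reported for the 25.4 mm and 31.8 mm spheres.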

  13. Cognitive Load Theory and the Use of Worked Examples as an Instructional Strategy in Physics for Distance Learners: A Preliminary Study

    ERIC Educational Resources Information Center

    Saw, Kim Guan

    2017-01-01

    This article revisits the cognitive load theory to explore the use of worked examples to teach a selected topic in a higher level undergraduate physics course for distance learners at the School of Distance Education, Universiti Sains Malaysia. With a break of several years from receiving formal education and having only minimum science…

  14. Selection of finite-element mesh parameters in modeling the growth of hydraulic fracturing cracks

    NASA Astrophysics Data System (ADS)

    Kurguzov, V. D.

    2016-12-01

    The effect of the mesh geometry on the accuracy of solutions obtained by the finite-element method for problems of linear fracture mechanics is investigated. The guidelines have been formulated for constructing an optimum mesh for several routine problems involving elements with linear and quadratic approximation of displacements. The accuracy of finite-element solutions is estimated based on the degree of the difference between the calculated stress-intensity factor (SIF) and its value obtained analytically. In problems of hydrofracturing of oil-bearing formation, the pump-in pressure of injected water produces a distributed load on crack flanks as opposed to standard fracture mechanics problems that have analytical solutions, where a load is applied to the external boundaries of the computational region and the cracks themselves are kept free from stresses. Some model pressure profiles, as well as pressure profiles taken from real hydrodynamic computations, have been considered. Computer models of cracks with allowance for the pre-stressed state, fracture toughness, and elastic properties of materials are developed in the MSC.Marc 2012 finite-element analysis software. The Irwin force criterion is used as a criterion of brittle fracture and the SIFs are computed using the Cherepanov-Rice invariant J-integral. The process of crack propagation in a linearly elastic isotropic body is described in terms of the elastic energy release rate G and modeled using the VCCT (Virtual Crack Closure Technique) approach. It has been found that the solution accuracy is sensitive to the mesh configuration. Several parameters that are decisive in constructing effective finite-element meshes, namely, the minimum element size, the distance between mesh nodes in the vicinity of a crack tip, and the ratio of the height of an element to its length, have been established. It has been shown that a mesh that consists of only small elements does not improve the accuracy of the solution.
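    The VCCT evaluation of G mentioned above reduces, in the simplest 2D case, to the work needed to close the crack over one element. The nodal values below are hypothetical, and the Irwin plane-stress relation is used to convert G to a stress-intensity factor:

```python
import math

def vcct_energy_release_rate(f_tip, delta_u_behind, da, thickness=1.0):
    """Virtual Crack Closure Technique (2D sketch): the energy release rate
    is the work to close the crack over one element,
    G = F * du / (2 * da * t), where F is the nodal force at the crack tip
    and du the relative opening displacement one element behind the tip."""
    return f_tip * delta_u_behind / (2.0 * da * thickness)

# Hypothetical nodal values from a fine mesh near the tip.
G_I = vcct_energy_release_rate(f_tip=1200.0,         # N, assumed
                               delta_u_behind=2e-5,  # m, assumed
                               da=1e-3)              # element length in m, assumed
E = 30e9                                             # Pa, assumed modulus
K_I = math.sqrt(G_I * E)                             # Irwin, plane stress
```

    The dependence of both F and du on the element length da is exactly why the mesh parameters studied in the paper (minimum element size, near-tip node spacing, element aspect ratio) control the accuracy of the computed SIF.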

  15. Comparison of computer versus manual determination of pulmonary nodule volumes in CT scans

    NASA Astrophysics Data System (ADS)

    Biancardi, Alberto M.; Reeves, Anthony P.; Jirapatnakul, Artit C.; Apanasovitch, Tatiyana; Yankelevitz, David; Henschke, Claudia I.

    2008-03-01

    Accurate nodule volume estimation is necessary to estimate the clinically relevant growth rate or change in size over time. An automated nodule volume-measuring algorithm was applied to a set of pulmonary nodules documented by the Lung Image Database Consortium (LIDC). The LIDC process model specifies that each scan is assessed by four experienced thoracic radiologists and that boundaries are marked around the visible extent of nodules 3 mm and larger. Nodules were selected from the LIDC database with the following inclusion criteria: (a) they must have a solid component on a minimum of three CT image slices, and (b) they must be marked by all four LIDC radiologists. A total of 113 nodules met the selection criteria, with diameters ranging from 3.59 mm to 32.68 mm (mean 9.37 mm, median 7.67 mm). The centroid of each marked nodule was used as the seed point for the automated algorithm. Ninety-five nodules (84.1%) were correctly segmented, although one of them was judged by the automated method not to meet the first selection criterion; of the remainder, eight (7.1%) were structurally too complex or extensively attached, and 10 (8.8%) were considered not properly segmented after a simple visual inspection by a radiologist. Because the LIDC specifications, as noted above, instruct radiologists to include both solid and sub-solid parts, the automated method's core capability of segmenting solid tissue was augmented to also take the sub-solid parts of a nodule into account. We ranked the distances of the automated estimates and the radiologist-based estimates from the median of the radiologist-based values. In 76.6% of the cases, the automated method was closer to the median than at least one of the values derived from the manual markings, indicating very good agreement with the radiologists' markings.
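
    The agreement measure described, distance from the median of the radiologist-based estimates, can be sketched as follows; the volumes below are hypothetical, not LIDC data:

```python
import statistics

def closer_than_some_reader(v_auto, reader_volumes):
    """Check whether the automated estimate lies closer to the median of
    the radiologist-based volumes than at least one radiologist estimate
    (the agreement measure described in the abstract)."""
    med = statistics.median(reader_volumes)
    return abs(v_auto - med) < max(abs(v - med) for v in reader_volumes)

# Hypothetical volumes (mm^3) from four readers plus the automated method.
readers = [410.0, 455.0, 460.0, 520.0]   # median is 457.5
print(closer_than_some_reader(462.0, readers))  # True: nearer than 410 or 520
```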

  16. On possible parent bodies of Innisfree, Lost City and Příbram meteorites.

    NASA Astrophysics Data System (ADS)

    Rozaev, A. E.

    1994-12-01

    Minor planets 1981 ET3 and Seleucus are possible parent bodies of the Innisfree and Lost City meteorites, and the asteroid Mithra is the most probable source of the Příbram meteorite. The conclusions are based on the Southworth-Hawkins criterion, taking into account the constants of the motion (Tisserand coefficient, etc.) and the minimum distances between the orbits at the present time.
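
    The Southworth-Hawkins criterion referenced here is the standard orbital-similarity D-criterion (Southworth & Hawkins, 1963). A minimal sketch, with angles in radians and perihelion distance in AU; the sample orbit is invented:

```python
import math

def d_sh(q1, e1, i1, om1, w1, q2, e2, i2, om2, w2):
    """Southworth-Hawkins D-criterion between two orbits.
    q: perihelion distance [AU]; e: eccentricity; i, om, w: inclination,
    longitude of node, argument of perihelion [rad]. Smaller D = more
    similar orbits."""
    # Mutual inclination I between the two orbital planes
    sin2_half_i = (math.sin((i1 - i2) / 2) ** 2
                   + math.sin(i1) * math.sin(i2)
                   * math.sin((om1 - om2) / 2) ** 2)
    half_i = math.asin(math.sqrt(sin2_half_i))
    # Difference of longitudes of perihelion measured from the mutual node
    pi21 = (w1 - w2) + 2 * math.asin(
        math.cos((i1 + i2) / 2) * math.sin((om1 - om2) / 2) / math.cos(half_i))
    return math.sqrt((q1 - q2) ** 2 + (e1 - e2) ** 2
                     + 4 * sin2_half_i
                     + ((e1 + e2) / 2) ** 2 * 4 * math.sin(pi21 / 2) ** 2)

# Identical orbits give D = 0.
orb = (0.98, 0.65, math.radians(12.0), math.radians(317.0), math.radians(178.0))
print(d_sh(*orb, *orb))  # 0.0
```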

  17. INCREASED VISUAL BEHAVIOR IN LOW VISION CHILDREN.

    ERIC Educational Resources Information Center

    BARRAGA, NATALIE

    TEN PAIRS OF BLIND CHILDREN AGED SIX TO 13 YEARS WHO HAD SOME VISION WERE MATCHED BY PRETEST SCORES ON A TEST OF VISUAL DISCRIMINATION. A CRITERION GROUP, DESIGNATED THE PRINT COMPARISON GROUP, HAD SLIGHTLY HIGHER RECORDED DISTANCE ACUITIES AND USED VISION AS THE PRIMARY MEANS OF LEARNING. PAIRS OF EXPERIMENTAL SUBJECTS DAILY RECEIVED 45 MINUTES OF…

  18. Measurement of the lowest dosage of phenobarbital that can produce drug discrimination in rats

    PubMed Central

    Overton, Donald A.; Stanwood, Gregg D.; Patel, Bhavesh N.; Pragada, Sreenivasa R.; Gordon, M. Kathleen

    2009-01-01

    Rationale Accurate measurement of the threshold dosage of phenobarbital that can produce drug discrimination (DD) may improve our understanding of the mechanisms and properties of such discrimination. Objectives To compare three methods for determining the threshold dosage for phenobarbital (D) versus no drug (N) DD. Methods Rats learned a D versus N DD in 2-lever operant training chambers. A titration scheme was employed to increase or decrease dosage at the end of each 18-day block of sessions, depending on whether the rat had achieved criterion accuracy during the sessions just completed. Three criterion rules were employed, all based on the average percentage of drug-lever responses during initial links of the last 6 D and 6 N sessions of a block. The criteria were: D%>66 and N%<33; D%>50 and N%<50; and (D%-N%)>33. Two squads of rats were trained, one immediately after the other. Results All rats discriminated drug versus no drug. In most rats, dosage decreased to low levels and then oscillated near the minimum level required to maintain criterion performance. The lowest discriminated dosage differed significantly under the three criterion rules. The squad trained second may have benefited from partially duplicating the lever choices of the previous squad. Conclusions The lowest discriminated dosage is influenced by the criterion of discriminative control that is employed and is higher than the absolute threshold at which discrimination entirely disappears. Threshold estimates closer to the absolute threshold can be obtained with permissive criteria that allow rats to maintain lever preferences. PMID:19082992

  19. Non-Intrusive Impedance-Based Cable Tester

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro J. (Inventor); Simpson, Howard J. (Inventor)

    1999-01-01

    A non-intrusive electrical cable tester determines the nature and location of a discontinuity in a cable through application of an oscillating signal to one end of the cable. The frequency of the oscillating signal is varied in increments until a minimum, close-to-zero voltage is measured at the signal injection point, which is indicative of a minimum impedance at that point. The frequency of the test signal at which the minimum impedance occurs is then used to determine the distance to the discontinuity via a formula that relates this distance to the signal frequency and the velocity factor of the cable. A numerically controlled oscillator generates the oscillating signal, and a microcontroller automatically controls operation of the cable tester to make the desired measurements and display the results. The device is contained in a portable housing which may be hand held to facilitate convenient use in difficult-to-access locations.
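
    The abstract does not state the exact formula, but for an open-circuit discontinuity on a lossless line the first injection-point impedance minimum occurs when the cable is a quarter wavelength long, giving d = VF·c/(4f). A sketch under that assumption (not the patent's exact expression):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_to_open_fault(f_hz, velocity_factor):
    """Distance to an assumed open-circuit fault from the lowest frequency
    at which the injection-point impedance reaches a minimum: the line is
    then a quarter wavelength long, so d = VF * c / (4 * f)."""
    return velocity_factor * C / (4.0 * f_hz)

# Example: VF = 0.66 coax, first impedance minimum found at 3.3 MHz -> ~15 m.
print(round(distance_to_open_fault(3.3e6, 0.66), 2))
```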

  20. Short Distance Standoff Raman Detection of Extra Virgin Olive Oil Adulterated with Canola and Grapeseed Oils.

    PubMed

    Farley, Carlton; Kassu, Aschalew; Bose, Nayana; Jackson-Davis, Armitra; Boateng, Judith; Ruffin, Paul; Sharma, Anup

    2017-06-01

    A short-distance standoff Raman technique is demonstrated for detecting economically motivated adulteration (EMA) in extra virgin olive oil (EVOO). Using a portable Raman spectrometer operating with a 785 nm laser and a 2-in. refracting telescope, adulteration of olive oil with grapeseed oil and canola oil is detected between 1% and 100%, at a minimum concentration of 2.5% from a distance of 15 cm and at a minimum concentration of 5% from a distance of 1 m. The technique involves correlating the intensity ratios of prominent Raman bands of edible oils at 1254, 1657, and 1441 cm-1 to the degree of adulteration. As a novel variation in the data analysis, integrated intensities over a spectral range of 100 cm-1 around each Raman line were used, making it possible to increase the sensitivity of the technique. The technique is demonstrated by detecting adulteration of EVOO with grapeseed and canola oils at 0-100%. Because measurements can be made from a convenient distance, the short-distance standoff Raman technique shows promise for routine applications in the food industry, such as identifying food items and monitoring EMA at various checkpoints in the food supply chain and storage facilities.
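
    The integrated-intensity analysis described (a 100 cm-1 window around each band, then a band ratio) can be sketched on a synthetic spectrum; the band positions are from the abstract, while amplitudes and widths are invented:

```python
import math

# Synthetic spectrum: Gaussian bands at 1441 and 1657 cm^-1 (positions from
# the abstract; amplitudes and widths are invented for illustration).
shifts = [1150 + 0.5 * k for k in range(1201)]        # 1150-1750 cm^-1 grid
spectrum = [1.0 * math.exp(-((x - 1441) / 15.0) ** 2)
            + 0.6 * math.exp(-((x - 1657) / 15.0) ** 2) for x in shifts]

def band_area(shifts, spectrum, center, half_window=50.0):
    """Trapezoidal integral over a 100 cm^-1 window centered on the band."""
    pts = [(x, y) for x, y in zip(shifts, spectrum)
           if abs(x - center) <= half_window]
    return sum((pts[k + 1][0] - pts[k][0]) * (pts[k][1] + pts[k + 1][1]) / 2.0
               for k in range(len(pts) - 1))

# Ratio of the integrated 1657 band to the integrated 1441 band.
ratio = band_area(shifts, spectrum, 1657) / band_area(shifts, spectrum, 1441)
print(round(ratio, 2))  # 0.6: equal widths, so areas scale with amplitude
```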

  1. 47 CFR 73.610 - Minimum distance separations between stations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... they fail to comply with the requirements specified in paragraphs (b), (c) and (d) of this section... separation. (c) Minimum allotment and station adjacent channel separations applicable to all zones: (1... pairs of channels (see § 73.603(a)). (d) In addition to the requirements of paragraphs (a), (b) and (c...

  2. Cadaver study of anatomic landmark identification for placing ankle arthroscopy portals.

    PubMed

    Scheibling, B; Koch, G; Clavert, P

    2017-05-01

    Arthroscopy-assisted surgery is now widely used at the ankle for osteochondral lesions of the talus, anterior and posterior impingement syndromes, talocrural or subtalar fusion, foreign body removal, and ankle instability. Injuries to the vessels and nerves may occur during these procedures. The objective was to determine whether ultrasound topographic identification of vulnerable structures decreased the risk of iatrogenic injuries to vessels, nerves, and tendons and influenced the distance separating vulnerable structures from the arthroscope introduced through four different portals. Ultrasonography to identify vulnerable structures before or during arthroscopic surgery on the ankle may be useful. Twenty fresh cadaver ankles from body donations to the anatomy institute in Strasbourg, France, were divided into two equal groups. Preoperative ultrasonography to mark the trajectories of vessels, nerves, and tendons was performed in one group but not in the other. The portals were created using a 4-mm trocar. Each portal was then dissected. The primary evaluation criterion was the presence or absence of injuries to vessels, nerves, and tendons. The secondary evaluation criterion was the distance between these structures and the arthroscope. No tendon injuries occurred with ultrasonography. Without ultrasonography, there were two full-thickness tendon lesions, one to the extensor hallucis longus and the other to the Achilles tendon. Furthermore, with the anterolateral, anteromedial, and posteromedial portals, the distance separating the vessels and nerves from the arthroscope was greater with than without ultrasonography (P=0.041, P=0.005, and P=0.002, respectively); no significant difference was found with the anterior portal. Preoperative ultrasound topographic identification decreases the risk of iatrogenic injury to the vessels, nerves, and tendons during ankle arthroscopy and places these structures at a safer distance from the arthroscope. Our hypothesis was confirmed. Level of evidence: IV, cadaver study. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  3. Advances in Distance-Based Hole Cuts on Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Pandya, Shishir A.

    2015-01-01

    An automatic and efficient method to determine appropriate hole cuts based on distances to the wall and donor stencil maps for overset grids is presented. A new robust procedure is developed to create a closed surface triangulation representation of each geometric component for accurate determination of the minimum hole. Hole boundaries are then displaced away from the tight grid-spacing regions near solid walls to allow grid overlap to occur away from the walls where cell sizes from neighboring grids are more comparable. The placement of hole boundaries is efficiently determined using a mid-distance rule and Cartesian maps of potential valid donor stencils with minimal user input. Application of this procedure typically results in a spatially-variable offset of the hole boundaries from the minimum hole with only a small number of orphan points remaining. Test cases on complex configurations are presented to demonstrate the new scheme.
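
    One plausible reading of the mid-distance rule is that a fringe point is retained by the grid whose solid wall it is closer to, which places the overlap boundary roughly midway between the two walls where neighboring cell sizes are comparable. This is an assumed simplification for illustration, not the paper's actual algorithm:

```python
# Assumed simplification of the mid-distance rule: a point stays with the
# grid whose solid wall is nearer, so the hole boundary settles roughly
# midway between the two walls, where cell sizes from neighboring grids
# are more comparable.
def keep_in_own_grid(dist_to_own_wall, dist_to_other_wall):
    """True if the point is at least as close to its own wall as to the
    other component's wall (kept); False if it should be blanked."""
    return dist_to_own_wall <= dist_to_other_wall

print(keep_in_own_grid(0.4, 1.0))  # True: closer to its own wall, keep
print(keep_in_own_grid(1.0, 0.4))  # False: inside the other body's region
```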

  4. An effective visualization technique for depth perception in augmented reality-based surgical navigation.

    PubMed

    Choi, Hyunseok; Cho, Byunghyun; Masamune, Ken; Hashizume, Makoto; Hong, Jaesung

    2016-03-01

    Depth perception is a major issue in augmented reality (AR)-based surgical navigation. We propose an AR and virtual reality (VR) switchable visualization system with distance information, and evaluate its performance in a surgical navigation set-up. To improve depth perception, seamless switching from AR to VR was implemented. In addition, the minimum distance between the tip of the surgical tool and the nearest organ was provided in real time. To evaluate the proposed techniques, five physicians and 20 non-medical volunteers participated in experiments. Targeting error, time taken, and numbers of collisions were measured in simulation experiments. There was a statistically significant difference between a simple AR technique and the proposed technique. We confirmed that depth perception in AR could be improved by the proposed seamless switching between AR and VR, and providing an indication of the minimum distance also facilitated the surgical tasks. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Suitability of river delta sediment as proppant, Missouri and Niobrara Rivers, Nebraska and South Dakota, 2015

    USGS Publications Warehouse

    Zelt, Ronald B.; Hobza, Christopher M.; Burton, Bethany L.; Schaepe, Nathaniel J.; Piatak, Nadine

    2017-11-16

    Sediment management is a challenge faced by reservoir managers who have several potential options, including dredging, for mitigation of storage capacity lost to sedimentation. As sediment is removed from reservoir storage, use of the sediment for socioeconomic or ecological benefit could defray some costs of its removal. Rivers that transport a sandy sediment load will deposit the sand load along a reservoir-headwaters reach where the current of the river slackens progressively as its bed approaches and then descends below the reservoir water level. Given a rare combination of factors, a reservoir deposit of alluvial sand has potential to be suitable for use as proppant for hydraulic fracturing in unconventional oil and gas development. In 2015, the U.S. Geological Survey began a program of researching potential sources of proppant sand from reservoirs, with an initial focus on the Missouri River subbasins that receive sand loads from the Nebraska Sand Hills. This report documents the methods and results of assessments of the suitability of river delta sediment as proppant for a pilot study area in the delta headwaters of Lewis and Clark Lake, Nebraska and South Dakota. Results from surface-geophysical surveys of electrical resistivity guided borings to collect 3.7-meter-long cores at 25 sites on delta sandbars using the direct-push method to recover duplicate, 3.8-centimeter-diameter cores in April 2015. In addition, the U.S. Geological Survey collected samples of upstream sand sources in the lower Niobrara River valley. At the laboratory, samples were dried, weighed, washed, dried, and weighed again. Exploratory analysis of natural sand for determining its suitability as a proppant involved application of a modified subset of the standard protocols known as American Petroleum Institute (API) Recommended Practice (RP) 19C. The RP19C methods were not intended for exploration-stage evaluation of raw materials.
Results for the washed samples are not directly applicable to evaluations of suitability for use as fracture sand because, except for particle-size distribution, the API-recommended practices for assessing proppant properties (sphericity, roundness, bulk density, and crush resistance) require testing of specific proppant size classes. An optical imaging particle-size analyzer was used to make measurements of particle-size distribution and particle shape. Measured samples were sieved to separate the dominant-size fraction, and the separated subsample was further tested for roundness, sphericity, bulk density, and crush resistance. For the bulk washed samples collected from the Missouri River delta, the geometric mean size averaged 0.27 millimeters (mm), 80 percent of the samples were predominantly sand in the API 40/70 size class, and 17 percent were predominantly sand in the API 70/140 size class. Distributions of geometric mean size among the four sandbar complexes were similar, but samples collected from sandbar complex B were slightly coarser sand than those from the other three complexes. The average geometric mean sizes among the four sandbar complexes ranged only from 0.26 to 0.30 mm. For 22 main-stem sampling locations along the lower Niobrara River, geometric mean size averaged 0.26 mm, an average of 61 percent was sand in the API 40/70 size class, and 28 percent was sand in the API 70/140 size class. Average composition for lower Niobrara River samples was 48 percent medium sand, 37 percent fine sand, and about 7 percent each very fine sand and coarse sand fractions. On average, samples were moderately well sorted. Particle shape and strength were assessed for the dominant-size class of each sample. For proppant strength, crush resistance was tested at a predetermined level of stress (34.5 megapascals [MPa], or 5,000 pounds-force per square inch).
To meet the API minimum requirement for proppant, after the crush test not more than 10 percent of the tested sample should be finer than the precrush dominant-size class. For particle shape, all samples surpassed the recommended minimum criteria for sphericity and roundness, with most samples being well-rounded. For proppant strength, of 57 crush-resistance tested Missouri River delta samples of 40/70-sized sand, 23 (40 percent) were interpreted as meeting the minimum criterion at 34.5 MPa, or 5,000 pounds-force per square inch. Of 12 tested samples of 70/140-sized sand, 9 (75 percent) of the Missouri River delta samples had less than 10 percent fines by volume following crush testing, achieving the minimum criterion at 34.5 MPa. Crush resistance for delta samples was strongest at sandbar complex A, where 67 percent of tested samples met the 10-percent fines criterion at the 34.5-MPa threshold. This frequency was higher than was indicated by samples from sandbar complexes B, C, and D that had rates of 50, 46, and 42 percent, respectively. The group of sandbar complex A samples also contained the largest percentages of samples dominated by the API 70/140 size class, which overall had a higher percentage of samples meeting the minimum criterion compared to samples dominated by coarser size classes; however, samples from sandbar complex A that had the API 40/70 size class tested also had a higher rate for meeting the minimum criterion (57 percent) than did samples from sandbar complexes B, C, and D (50, 43, and 40 percent, respectively). For samples collected along the lower Niobrara River, of the 25 tested samples of 40/70-sized sand, 9 samples passed the API minimum criterion at 34.5 MPa, but only 3 samples passed the more-stringent criterion of 8 percent postcrush fines. 
All four tested samples of 70/140 sand passed the minimum criterion at 34.5 MPa, with postcrush fines percentage of at most 4.1 percent. For two reaches of the lower Niobrara River, where hydraulic sorting was energized artificially by the hydraulic head drop at and immediately downstream from Spencer Dam, suitability of channel deposits for potential use as fracture sand was confirmed by test results. All reach A washed samples were well-rounded and had sphericity scores above 0.65, and samples for 80 percent of sampled locations met the crush-resistance criterion at the 34.5-MPa stress level. A conservative lower-bound estimate of sand volume in the reach A deposits was about 86,000 cubic meters. All reach B samples were well-rounded but sphericity averaged 0.63, a little less than the average for upstream reaches A and SP. All four samples tested passed the crush-resistance test at 34.5 MPa. Of three reach B sandbars, two had no more than 3 percent fines after the crush test, surpassing more stringent criteria for crush resistance that accept a maximum of 6 percent fines following the crush test for the API 70/140 size class. Relative to the crush-resistance test results for the API 40/70 size fraction of two samples of mine output from Loup River settling-basin dredge spoils near Genoa, Nebr., four of five reach A sample locations compared favorably. The four samples had increases in fines composition of 1.6–5.9 percentage points, whereas fines in the two mine-output samples increased by an average 6.8 percentage points.

  6. Effects of Increasing Distance of a One-on-One Paraprofessional on Student Engagement

    ERIC Educational Resources Information Center

    Russel, Caroline S.; Allday, R. Allan; Duhon, Gary J.

    2015-01-01

    This study sought to maintain task engagement of a 4-year-old student with developmental disabilities included in a pre-K classroom while decreasing reliance of one-on-one support from a paraprofessional. To accomplish these goals, a withdrawal design (A-B-A) with a nested changing-criterion design was used to withdraw paraprofessional proximity.…

  7. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    PubMed

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions that have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we propose a new clustering algorithm, the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters that are adjacent, overlapping, or under background noise. Finally, we compared the LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset of over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this dataset and can extract more knowledge from it.

  8. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    PubMed Central

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions that have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we propose a new clustering algorithm, the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters that are adjacent, overlapping, or under background noise. Finally, we compared the LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset of over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this dataset and can extract more knowledge from it. PMID:26221133

  9. Refining Stimulus Parameters in Assessing Infant Speech Perception Using Visual Reinforcement Infant Speech Discrimination: Sensation Level.

    PubMed

    Uhler, Kristin M; Baca, Rosalinda; Dudas, Emily; Fredrickson, Tammy

    2015-01-01

    Speech perception measures have long been considered an integral piece of the audiological assessment battery. Currently, a prelinguistic, standardized measure of speech perception is missing in the clinical assessment battery for infants and young toddlers. Such a measure would allow systematic assessment of speech perception abilities of infants as well as the potential to investigate the impact early identification of hearing loss and early fitting of amplification have on the auditory pathways. To investigate the impact of sensation level (SL) on the ability of infants with normal hearing (NH) to discriminate /a-i/ and /ba-da/ and to determine whether performance on the two contrasts differs significantly in predicting the discrimination criterion. The design was based on a survival analysis model for event occurrence and a repeated measures logistic model for binary outcomes. The outcome for survival analysis was the minimum SL for criterion, and the outcome for the logistic regression model was the presence/absence of achieving the criterion. Criterion achievement was designated when an infant's proportion correct score was >0.75 on the discrimination performance task. Twenty-two infants with NH sensitivity participated in this study. There were 9 males and 13 females, aged 6-14 mo. Testing took place over two to three sessions. The first session consisted of a hearing test, threshold assessment of the two speech sounds (/a/ and /i/), and, if time and attention allowed, visual reinforcement infant speech discrimination (VRISD). The second session consisted of VRISD assessment for the two test contrasts (/a-i/ and /ba-da/). The presentation level started at 50 dBA. If the infant was unable to successfully achieve criterion (>0.75) at 50 dBA, the presentation level was increased to 70 dBA followed by 60 dBA. Data examination included an event analysis, which provided the probability of criterion distribution across SL.
The second stage of the analysis was a repeated measures logistic regression where SL and contrast were used to predict the likelihood of reaching the speech discrimination criterion. Infants were able to reach criterion for the /a-i/ contrast at statistically lower SLs than for /ba-da/. Six infants never reached criterion for /ba-da/, and one never reached criterion for /a-i/. The conditional probability of not reaching criterion by 70 dB SL was 0% for /a-i/ and 21% for /ba-da/. The predictive logistic regression model showed that children were more likely to discriminate the /a-i/ contrast even when controlling for SL. Nearly all normal-hearing infants can demonstrate discrimination criterion of a vowel contrast at 60 dB SL, while a level of ≥70 dB SL may be needed to allow all infants to demonstrate discrimination criterion of a difficult consonant contrast. American Academy of Audiology.

  10. The research of Raman spectra measurement system based on tiled-grating monochromator

    NASA Astrophysics Data System (ADS)

    Liu, Li-na; Zhang, Yin-chao; Chen, Si-ying; Chen, He; Guo, Pan; Wang, Yuan

    2013-09-01

    A Raman spectrum measurement system, essentially a Raman spectrometer, has been independently designed and built by our research group. The system adopts a tiled-grating structure: two 50 mm × 50 mm holographic gratings are tiled to form one large grating, which improves the resolution while reducing the cost. This article outlines the system's composition, structure, and performance parameters. The corresponding resolutions of the instrument under different criteria were then deduced from experiments and data fitting. The results show that the system's resolution reaches 0.02 nm, equivalent to a wavenumber of 0.5 cm-1, under the Rayleigh criterion, and 0.007 nm, equivalent to 0.19 cm-1, under the Sparrow criterion. Raman spectra of CCl4 and alcohol were then obtained with the spectrometer, and each agreed well with its standard spectrum. Finally, we measured the spectra of alcohol solutions with different concentrations and extracted the intensity of characteristic peaks from the smoothed spectra. A linear fit between the characteristic-peak intensity and the alcohol concentration gave a correlation coefficient of 0.96.
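
    The wavelength-to-wavenumber resolution figures can be checked with the standard conversion Δν̃ = 10⁷·Δλ/λ² (λ in nm, ν̃ in cm-1). The abstract's pairing of 0.02 nm with about 0.5 cm-1 implies an operating wavelength near 633 nm, which is an assumption here, not a value stated in the abstract:

```python
def dlambda_to_dwavenumber(d_lambda_nm, lambda_nm):
    """Convert a wavelength resolution (nm) to a wavenumber resolution
    (cm^-1) at wavelength lambda_nm: d(nu~) = 1e7 * d(lambda) / lambda^2."""
    return 1e7 * d_lambda_nm / lambda_nm ** 2

# Assumed operating wavelength of 632.8 nm (HeNe line) for illustration.
print(round(dlambda_to_dwavenumber(0.02, 632.8), 2))   # 0.5 cm^-1
print(round(dlambda_to_dwavenumber(0.007, 632.8), 2))  # 0.17 cm^-1
```

    At this wavelength the Sparrow figure comes out near 0.17 cm-1 rather than the quoted 0.19 cm-1, which suggests the instrument's actual wavelength or rounding differs slightly.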

  11. 27 CFR 555.218 - Table of distances for storage of explosive materials.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... with traffic volume of more than 3,000 vehicles/day Barricaded Unbarricaded Separation of magazines... explosive materials are defined in § 555.11. (2) When two or more storage magazines are located on the same property, each magazine must comply with the minimum distances specified from inhabited buildings, railways...

  12. Ant colony optimization for solving university facility layout problem

    NASA Astrophysics Data System (ADS)

    Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin

    2013-04-01

    The Quadratic Assignment Problem (QAP) is classified as an NP-hard problem. It has been used to model many problems in areas such as operational research, combinatorial data analysis, and parallel and distributed computing, as well as optimization problems such as graph partitioning and the Traveling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristics, and metaheuristic approaches to solve the QAP. The QAP is widely applied to the facility layout problem (FLP). In this paper we use the QAP to model a university facility layout problem in which 8 facilities must be assigned to 8 locations. We therefore model a QAP instance with n ≤ 10 and develop an Ant Colony Optimization (ACO) algorithm to solve it. The objective is to assign n facilities to n locations such that the total product of flows and distances is minimized, where flow is the movement between two facilities and distance is the distance between their locations; in this setting, the goal is to minimize the total walking distance of lecturers between destinations.
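
    The QAP objective described above is min over assignments of Σᵢⱼ flow[i][j]·dist[loc(i)][loc(j)]. A brute-force check on a tiny invented 4-facility instance illustrates the objective; the paper's 8×8 instance and its ACO solver are not reproduced:

```python
from itertools import permutations

# Invented symmetric flow and distance matrices for 4 facilities on a
# line of 4 locations (distances are position differences).
flow = [[0, 3, 0, 2],
        [3, 0, 1, 0],
        [0, 1, 0, 4],
        [2, 0, 4, 0]]
dist = [[0, 1, 2, 3],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [3, 2, 1, 0]]

def qap_cost(assign):
    """Total flow-weighted walking distance for a facility->location map."""
    n = len(assign)
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i in range(n) for j in range(n))

# Exhaustive search is feasible for n = 4 (24 permutations); ACO replaces
# this for larger instances.
best = min(permutations(range(4)), key=qap_cost)
print(best, qap_cost(best))
```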

  13. 14 CFR 91.177 - Minimum altitudes for IFR operations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., an altitude of 2,000 feet above the highest obstacle within a horizontal distance of 4 nautical miles from the course to be flown; or (ii) In any other case, an altitude of 1,000 feet above the highest... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Minimum altitudes for IFR operations. 91...

  14. What Is the Optimal Minimum Penetration Depth for "All-Inside" Meniscal Repairs?

    PubMed

    McCulloch, Patrick C; Jones, Hugh L; Lue, Jeffrey; Parekh, Jesal N; Noble, Philip C

    2016-08-01

To identify the desired minimum depth setting for safe, effective placement of all-inside meniscal suture anchors. Using 16 cadaveric knees and standard arthroscopic techniques, 3-dimensional surfaces of the meniscocapsular junction and posterior capsule were digitized. Using standard anteromedial and anterolateral portals, the distance from the meniscocapsular junction to the outer wall of the posterior capsule was measured at 3 locations along the posterior half of the medial and lateral menisci. Multiple all-inside meniscal repairs were performed on 7 knees to determine an alternate measure of capsular thickness (X2), which was compared with the digitized results. In the digitized group, the distance (X1) from the capsular junction to the posterior capsular wall was averaged in both menisci for 3 regions using the anteromedial and anterolateral portals. Mean distances of 6.4 to 8.8 mm were found for the lateral meniscus and 6.5 to 9.1 mm for the medial meniscus. The actual penetration depth, determined in the repair group, was labeled X2. It showed a regional pattern similar to that of X1, although it exceeded the predicted distances by an average of 1.7 mm in the medial and 1.5 mm in the lateral meniscus, owing to visible deformation of the capsule as it was pierced. Capsular thickness during arthroscopic repair measures approximately 6 to 9 mm (X1), with 1.5 to 2 mm of additional depth needed to ensure penetration rather than bulging of the posterior capsule (X2), resulting in a minimum penetration depth range of 8 to 10 mm. Surgeons can add the desired distance from the meniscocapsular junction (L) at device implantation to find the optimal minimum setting for penetration depth (X2 + L), which for most repairable tears may be as short as 8 mm and is not likely to be greater than 16 mm.
Choosing the minimum depth setting for optimal placement of all-inside meniscal suture anchors when performing all-inside repair of the medial or lateral meniscus reduces the risk of harming adjacent structures secondary to overpenetration and underpenetration of the posterior capsule. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
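The depth arithmetic described in the abstract (device setting = penetration depth X2 plus the offset L from the meniscocapsular junction, within the reported 8-16 mm range) can be sketched as a small helper. This is a hypothetical illustration in Python; the function name, the clamping behaviour, and the example values are assumptions, not part of the study:

```python
def optimal_depth_setting(x2_mm, l_mm, lower=8.0, upper=16.0):
    """Device depth setting = capsular penetration depth (X2) plus the
    distance L from the meniscocapsular junction, clamped to the
    8-16 mm range reported in the abstract."""
    return min(max(x2_mm + l_mm, lower), upper)

# Example: 8 mm penetration depth with the device placed 3 mm from the junction.
print(optimal_depth_setting(8.0, 3.0))  # 11.0
```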

  15. A minimum attention control law for ball catching.

    PubMed

    Jang, Cheongjae; Lee, Jee-eun; Lee, Sohee; Park, F C

    2015-10-06

Digital implementations of control laws typically involve discretization with respect to both time and space, and a control law that can achieve a task at coarser levels of discretization can be said to require less control attention and to incur lower implementation costs. One means of quantitatively capturing the attention of a control law is to measure the rate of change of the control with respect to changes in state and time. In this paper we present an attention-minimizing control law for ball catching and other target tracking tasks based on Brockett's attention criterion. We first highlight the connections between this attention criterion and some well-known principles from human motor control. Under the assumption that the optimal control law is the sum of a linear time-varying feedback term and a time-varying feedforward term, we derive an LQR-based minimum attention tracking control law that is stable and is obtained efficiently via a finite-dimensional optimization over the symmetric positive-definite matrices. Taking ball catching as our primary task, we perform numerical experiments comparing the performance of the various control strategies examined in the paper. Consistent with prevailing theories about human ball catching, our results exhibit several familiar features, e.g., the transition from open-loop to closed-loop control during the catching movement, and improved robustness to spatiotemporal discretization. The presented control laws are applicable to more general tracking problems that are subject to limited communication resources.
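The feedback backbone of such a tracking controller is the standard finite-horizon LQR backward Riccati recursion. A minimal scalar sketch in Python, with hypothetical names and parameters; the paper's attention-minimizing optimization over positive-definite matrices is not reproduced here:

```python
def lqr_feedback_gains(a, b, q, r, qf, horizon):
    """Backward Riccati recursion for a scalar system x_{k+1} = a x_k + b u_k
    with stage cost q x^2 + r u^2 and terminal cost qf x^2.
    Returns the time-varying feedback gains k_t (u_t = -k_t x_t)."""
    p = qf
    gains = []
    for _ in range(horizon):
        k = (a * b * p) / (r + b * b * p)  # optimal gain for this step
        p = q + a * a * p - a * b * p * k  # Riccati update (cost-to-go)
        gains.append(k)
    gains.reverse()                        # gains[0] applies at t = 0
    return gains
```

The recursion is run backward from the terminal cost, so the last gain computed is the first one applied; combining these gains with a feedforward term gives the structure assumed in the paper.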

  16. SU-F-T-272: Patient Specific Quality Assurance of Prostate VMAT Plans with Portal Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darko, J; Osei, E; University of Waterloo, Waterloo, ON

Purpose: To evaluate the effectiveness of using the Portal Dosimetry (PD) method for patient-specific quality assurance of prostate VMAT plans. Methods: As per institutional protocol, all VMAT plans were measured using the Varian Portal Dosimetry (PD) method. A gamma evaluation criterion of 3%-3mm with a minimum area gamma pass rate (gamma <1) of 95% is used clinically for all plans. We retrospectively evaluated the portal dosimetry results for 170 prostate patients treated with the VMAT technique. Three sets of criteria were adopted for re-evaluating the measurements: 3%-3mm, 2%-2mm and 1%-1mm. For all criteria, two areas, Field+1cm and MLC-CIAO, were analysed. To ascertain the effectiveness of the portal dosimetry technique in determining the delivery accuracy of prostate VMAT plans, 10 patients previously measured with portal dosimetry were randomly selected and their measurements repeated using the ArcCHECK method. The same criteria used in the analysis of PD were used for the ArcCHECK measurements. Results: All patient plans reviewed met the institutional criteria for area gamma pass rate. Overall, the gamma pass rate (gamma <1) decreases across the 3%-3mm, 2%-2mm and 1%-1mm criteria. For each criterion, the pass rate was significantly reduced when the MLC-CIAO was used instead of Field+1cm. There was a noticeable change in sensitivity for MLC-CIAO with the 2%-2mm criterion and a much more significant reduction at 1%-1mm. Comparable results were obtained for the ArcCHECK measurements. Although differences were observed between the clockwise and counterclockwise plans in both the PD and ArcCHECK measurements, they were not deemed statistically significant. Conclusion: This work demonstrates that the Portal Dosimetry technique can be effectively used for quality assurance of VMAT plans. Results obtained show sensitivity similar to ArcCHECK.
To reveal certain delivery inaccuracies, a combination of criteria may be an effective way to improve the overall sensitivity of PD. Funding provided in part by the Prostate Ride for Dad, Kitchener-Waterloo, Canada.
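The gamma evaluation behind criteria like "3%-3mm" combines a dose-difference tolerance with a distance-to-agreement (DTA) tolerance. A toy 1-D version in Python to illustrate the idea; the clinical tools operate on 2-D portal images, and the function name and profiles below are invented:

```python
import math

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Toy 1-D global gamma analysis: ref and meas are dose profiles on the
    same grid (spacing_mm between points). The dose tolerance is a fraction
    of the reference maximum (global normalisation)."""
    d_max = max(ref)
    passed = 0
    for i, dm in enumerate(meas):
        best = float("inf")
        for j, dr in enumerate(ref):
            dd = (dm - dr) / (dose_tol * d_max)   # normalised dose difference
            dx = (i - j) * spacing_mm / dta_mm    # normalised spatial distance
            best = min(best, math.hypot(dd, dx))
        passed += best <= 1.0                     # gamma <= 1 counts as a pass
    return 100.0 * passed / len(meas)
```

Tightening the tolerances (e.g. from 3%-3mm to 1%-1mm) shrinks the acceptance ellipse, which is why the abstract's pass rates drop with stricter criteria.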

  17. Spatial analyses for nonoverlapping objects with size variations and their application to coral communities.

    PubMed

    Muko, Soyoka; Shimatani, Ichiro K; Nozawa, Yoko

    2014-07-01

Spatial distributions of individuals are conventionally analysed by representing objects as dimensionless points, whereby spatial statistics are based on centre-to-centre distances. However, if organisms expand without overlapping and show size variations, such as is the case for encrusting corals, interobject spacing is crucial for spatial associations where interactions occur. We introduced new pairwise statistics using minimum distances between objects and demonstrated their utility when examining encrusting coral community data. We also calculated the conventional point process statistics and the grid-based statistics to clarify the advantages and limitations of each spatial statistical method. For simplicity, coral colonies were approximated by disks in these demonstrations. Focusing on short-distance effects, the use of minimum distances revealed that almost all coral genera were aggregated at a scale of 1-25 cm. However, when fragmented colonies (ramets) were treated as a genet, a genet-level analysis indicated weak or no aggregation, suggesting that most corals were randomly distributed and that fragmentation was the primary cause of colony aggregations. In contrast, point process statistics showed larger aggregation scales, presumably because centre-to-centre distances included both intercolony spacing and colony sizes (radii). The grid-based statistics were able to quantify the patch (aggregation) scale of colonies, but the scale was strongly affected by the colony size. Our approach quantitatively showed repulsive effects between an aggressive genus and a competitively weak genus, while the grid-based statistics (covariance function) also showed repulsion, although the spatial scale indicated by these statistics was not directly interpretable in ecological terms.
The use of minimum distances together with previously proposed spatial statistics helped us to extend our understanding of the spatial patterns of nonoverlapping objects that vary in size and the associated specific scales. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
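For disk-approximated colonies, the minimum (edge-to-edge) distance is simply the centre-to-centre distance minus both radii. A small Python sketch; the function name and example values are illustrative, not from the paper:

```python
import math

def min_distance(c1, r1, c2, r2):
    """Minimum (edge-to-edge) distance between two non-overlapping disks:
    centre distance minus both radii, floored at zero for touching disks."""
    gap = math.dist(c1, c2) - r1 - r2
    return max(gap, 0.0)

# Two colonies 10 cm apart (centre to centre) with radii 3 cm and 4 cm:
print(min_distance((0, 0), 3.0, (10, 0), 4.0))  # 3.0
```

This is why centre-to-centre statistics inflate apparent aggregation scales: they fold the colony radii into every pairwise distance.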

  18. [Minimum Standards for the Spatial Accessibility of Primary Care: A Systematic Review].

    PubMed

    Voigtländer, S; Deiters, T

    2015-12-01

Regional disparities in access to primary care are substantial in Germany, especially in terms of spatial accessibility. However, there is no legally or generally binding minimum standard for the spatial accessibility effort that is still acceptable. Our objective is to analyse existing minimum standards, the methods used, as well as their empirical basis. A systematic literature review was undertaken of publications regarding minimum standards for the spatial accessibility of primary care, based on a title word and keyword search using PubMed, SSCI/Web of Science, EMBASE and the Cochrane Library. Eight minimum standards from the USA, Germany and Austria could be identified. All of them specify the acceptable spatial accessibility effort in terms of travel time; almost half also include distance(s). The maximum acceptable travel time is 30 min, and it tends to be lower in urban areas. Primary care is, according to the identified minimum standards, part of the local area (Nahbereich) of so-called central places (Zentrale Orte) providing basic goods and services. The consideration of means of transport, e. g. public transport, is heterogeneous. The standards are based on empirical studies, consultation with service providers, practical experiences, and regional planning/central place theory, as well as on legal or political regulations. The identified minimum standards provide important insights into the effort that is still acceptable regarding spatial accessibility, i. e. travel time, distance and means of transport. It seems reasonable to complement the current planning system for outpatient care, which is based on provider-to-population ratios, with a gravity-model method to identify places as well as populations with insufficient spatial accessibility. Due to the lack of a common minimum standard we propose - subject to further discussion - to begin with a threshold based on the spatial accessibility limit of the local area, i. e. 
30 min to the next primary care provider for at least 90% of the regional population. Exceeding this threshold would necessitate discussion of a health care deficit and, in line with this, a potential need for intervention, e. g. in terms of alternative forms of health care provision. © Georg Thieme Verlag KG Stuttgart · New York.
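The proposed threshold (at least 90% of the regional population within 30 min of the nearest primary care provider) reduces to a simple coverage calculation. A hedged Python sketch with invented settlement data:

```python
def share_within_threshold(travel_times_min, population, threshold=30.0):
    """Fraction of the regional population whose travel time to the nearest
    primary care provider is within the threshold (default 30 min)."""
    covered = sum(p for t, p in zip(travel_times_min, population) if t <= threshold)
    return covered / sum(population)

# Hypothetical region: the proposed standard requires a share of at least 0.9.
times = [10, 25, 35, 40, 15]          # minutes to the nearest provider
pops = [5000, 3000, 500, 200, 1300]   # residents per settlement
print(share_within_threshold(times, pops))
```

A share below 0.9 would, under the proposal, trigger discussion of a health care deficit for that region.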

  19. RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki

Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) scheme based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
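A one-tap RLS channel estimator has a particularly compact form. A sketch in Python with a fixed forgetting factor; the paper's contribution, adapting the factor with an LMS rule, is noted but not implemented here, and all names and values are illustrative:

```python
def rls_one_tap(pilot, received, lam=0.95, delta=1e-2):
    """One-tap RLS estimate of a scalar channel h from pilot chips x_k and
    received samples y_k = h * x_k + noise. lam is the forgetting factor
    (held fixed here; the paper adapts it with an LMS rule)."""
    h, p = 0.0 + 0.0j, 1.0 / delta
    for x, y in zip(pilot, received):
        k = p * x.conjugate() / (lam + abs(x) ** 2 * p)  # RLS gain
        h = h + k * (y - h * x)                          # a priori error update
        p = (p - k * x * p) / lam                        # inverse-correlation update
    return h
```

A smaller forgetting factor tracks fast fading better but amplifies noise, which is exactly the trade-off the adaptive rule in the paper is meant to balance.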

  20. Evaluation of the NASA Ames no. 1 7 by 10 foot wind tunnel as an acoustic test facility

    NASA Technical Reports Server (NTRS)

    Wilby, J. F.; Scharton, T. D.

    1975-01-01

Measurements were made in the no. 1 7'x10' wind tunnel at NASA Ames Research Center, with the objectives of defining the acoustic characteristics and recommending minimum-cost treatments so that the tunnel can be converted into an acoustic research facility. The results indicate that the noise levels in the test section are due to (a) noise generation in the test section, associated with the presence of solid bodies such as the pitot tube, and (b) propagation of acoustic energy from the fan. A criterion for noise levels in the test section is recommended, based on low-noise microphone support systems. Noise control methods required to meet the criterion include removal of hardware items from the test section and diffuser, improved design of microphone supports, and installation of acoustic treatment in the settling chamber and diffuser.

  1. Entanglement-enhanced Neyman-Pearson target detection using quantum illumination

    NASA Astrophysics Data System (ADS)

    Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.

    2017-08-01

Quantum illumination (QI) provides entanglement-based target detection, in an entanglement-breaking environment, whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture, based on sum-frequency generation (SFG) and feedforward (FF) processing, for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic (detection probability versus false-alarm probability) for optimum QI target detection under the Neyman-Pearson criterion.

  2. A reliability and mass perspective of SP-100 Stirling cycle lunar-base powerplant designs

    NASA Technical Reports Server (NTRS)

    Bloomfield, Harvey S.

    1991-01-01

The purpose was to obtain reliability and mass perspectives on the selection of space power system conceptual designs based on SP-100 reactor and Stirling cycle power-generation subsystems. The approach taken was to: (1) develop a criterion for an acceptable overall reliability risk as a function of the expected range of emerging technology subsystem unit reliabilities; (2) conduct reliability and mass analyses for a diverse matrix of 800-kWe lunar-base design configurations employing single and multiple powerplants with both full and partial subsystem redundancy combinations; and (3) derive reliability and mass perspectives on the selection of conceptual design configurations that meet an acceptable reliability criterion with the minimum system mass increase relative to the reference powerplant design. The developed perspectives provided valuable insight into the considerations required to identify and characterize high-reliability, low-mass lunar-base powerplant conceptual designs.
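Redundancy analyses of this kind typically reduce to k-of-n survival probabilities. A hedged Python sketch; the unit reliability and counts below are invented, not from the study:

```python
from math import comb

def k_of_n_reliability(r_unit, n, k):
    """Probability that at least k of n identical, independent units survive,
    each with reliability r_unit -- a standard model for powerplants with
    full or partial subsystem redundancy."""
    return sum(comb(n, m) * r_unit**m * (1 - r_unit)**(n - m)
               for m in range(k, n + 1))

# Hypothetical: four converters of which three must run, vs. a single unit.
print(k_of_n_reliability(0.95, 4, 3))  # exceeds the 0.95 single-unit reliability
```

Adding a spare unit raises system reliability above that of any single unit, at the cost of extra mass, which is the trade-off the study's matrix of configurations explores.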

  3. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
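The two model-selection criteria embed the maximized log-likelihood in the standard way. A minimal Python sketch; the log-likelihood value and parameter counts below are invented for illustration:

```python
import math

def aic(log_likelihood, n_params):
    """Akaike's information criterion: 2k - 2 ln L."""
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k ln n - 2 ln L."""
    return n_params * math.log(n_obs) - 2 * log_likelihood

# An ARMA(p, q) candidate with p + q + 1 parameters (including noise variance):
print(aic(-120.5, 4), bic(-120.5, 4, 200))
```

The MINLP search then treats the (integer) orders p, q and the (real) coefficients jointly, with the lowest criterion value taken as the global minimum.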

  4. Varying the valuating function and the presentable bank in computerized adaptive testing.

    PubMed

    Barrada, Juan Ramón; Abad, Francisco José; Olea, Julio

    2011-05-01

    In computerized adaptive testing, the most commonly used valuating function is the Fisher information function. When the goal is to keep item bank security at a maximum, the valuating function that seems most convenient is the matching criterion, valuating the distance between the estimated trait level and the point where the maximum of the information function is located. Recently, it has been proposed not to keep the same valuating function constant for all the items in the test. In this study we expand the idea of combining the matching criterion with the Fisher information function. We also manipulate the number of strata into which the bank is divided. We find that the manipulation of the number of items administered with each function makes it possible to move from the pole of high accuracy and low security to the opposite pole. It is possible to greatly improve item bank security with much fewer losses in accuracy by selecting several items with the matching criterion. In general, it seems more appropriate not to stratify the bank.
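Under a 2PL model the information maximum of an item lies at its difficulty b, so the matching criterion reduces to picking the item whose b is closest to the current trait estimate, while the Fisher criterion maximises information at that estimate. A hedged Python sketch with invented item parameters:

```python
import math

def fisher_information(theta, a, b):
    """Fisher information of a 2PL item at trait level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def pick_item(theta_hat, items, criterion="matching"):
    """items: list of (a, b) pairs. The matching criterion minimises the
    distance from theta_hat to the item's information maximum (b for a 2PL
    item); the Fisher criterion maximises information at theta_hat."""
    if criterion == "matching":
        return min(items, key=lambda ab: abs(theta_hat - ab[1]))
    return max(items, key=lambda ab: fisher_information(theta_hat, *ab))

bank = [(0.8, -1.0), (2.0, 0.6), (1.2, 0.1)]
print(pick_item(0.0, bank, "matching"))  # (1.2, 0.1): b closest to 0
print(pick_item(0.0, bank, "fisher"))    # (2.0, 0.6): highest information at 0
```

The two criteria can disagree, as here: the Fisher criterion favours the highly discriminating item, which is exactly what causes the overexposure the matching criterion mitigates.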

  5. First DNA Barcode Reference Library for the Identification of South American Freshwater Fish from the Lower Paraná River

    PubMed Central

    Brancolini, Florencia; del Pazo, Felipe; Posner, Victoria Maria; Grimberg, Alexis; Arranz, Silvia Eda

    2016-01-01

Valid fish species identification is essential for biodiversity conservation and fisheries management. Here, we provide a sequence reference library based on mitochondrial cytochrome c oxidase subunit I for the valid identification of 79 freshwater fish species from the Lower Paraná River. Neighbour-joining analysis based on K2P genetic distances formed non-overlapping clusters for almost all species, each with ≥99% bootstrap support. Identification was successful for 97.8% of species, as the minimum genetic distance to the nearest neighbour exceeded the maximum intraspecific distance in all these cases. A barcoding gap of 2.5% was apparent for the whole data set with the exception of four cases. Within-species distances ranged from 0.00% to 7.59%, while interspecific distances varied between 4.06% and 19.98%, without considering Odontesthes species with a minimum genetic distance of 0%. Sequence library validation was performed by applying BOLD's BIN analysis tool, the Poisson Tree Processes model and Automatic Barcode Gap Discovery, along with a reliable taxonomic assignment by experts. Exhaustive revision of vouchers was performed when a conflicting assignment was detected after sequence analysis and BIN discordance evaluation. Thus, the sequence library presented here can be confidently used as a benchmark for identification of half of the fish species recorded for the Lower Paraná River. PMID:27442116
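The K2P (Kimura 2-parameter) distances underlying the neighbour-joining analysis weight transitions and transversions separately. A minimal Python sketch; the sequences below are invented for illustration:

```python
import math

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def k2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned sequences:
    d = -1/2 ln(1 - 2P - Q) - 1/4 ln(1 - 2Q), where P and Q are the observed
    proportions of transitions and transversions."""
    n = len(seq1)
    ts = sum((x, y) in TRANSITIONS for x, y in zip(seq1, seq2))
    tv = sum(x != y and (x, y) not in TRANSITIONS for x, y in zip(seq1, seq2))
    p, q = ts / n, tv / n
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)
```

The barcoding-gap test in the abstract then compares the smallest interspecific K2P distance of each species against its largest intraspecific distance.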

  6. Classification and recognition of dynamical models: the role of phase, independent components, kernels and optimal transport.

    PubMed

    Bissacco, Alessandro; Chiuso, Alessandro; Soatto, Stefano

    2007-11-01

    We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.

  7. Can we observe neutrino flares in coincidence with explosive transients?

    NASA Astrophysics Data System (ADS)

    Guépin, C.; Kotera, K.

    2017-12-01

The new generation of powerful instruments is reaching sensitivities and temporal resolutions that will allow multi-messenger astronomy of explosive transient phenomena, with high-energy neutrinos as a central figure. We derive general criteria for the detectability of neutrinos from powerful transient sources for given instrument sensitivities. In practice, we provide the minimum photon flux necessary for neutrino detection based on two main observables: the bolometric luminosity and the time variability of the emission. This limit can be compared with observations in specified wavelengths in order to target the most promising sources for follow-ups. Our criteria can also help distinguish false associations of neutrino events with a flaring source. We find that relativistic transient sources such as high- and low-luminosity gamma-ray bursts (GRBs), blazar flares, tidal disruption events, and magnetar flares could be observed with IceCube, as they have a good chance to occur within a detectable distance. Of the nonrelativistic transient sources, only luminous supernovae appear as promising candidates. We caution that our criterion should not be directly applied to low-luminosity GRBs and type Ibc supernovae, as these objects could have hosted a choked GRB, leading to neutrino emission without a relevant counterpart radiation. We treat the concrete example of the PKS 1424-418 major outburst and its possible association with the IceCube event IC 35.

  8. Differential segregation in a cell-cell contact interface: the dynamics of the immunological synapse.

    PubMed Central

    Burroughs, Nigel John; Wülfing, Christoph

    2002-01-01

Receptor-ligand couples in the cell-cell contact interface between a T cell and an antigen-presenting cell form distinct geometric patterns and undergo spatial rearrangement within the contact interface. Spatial segregation of the antigen and adhesion receptors occurs within seconds of contact, central aggregation of the antigen receptor then occurring over 1-5 min. This structure, called the immunological synapse, is becoming a paradigm for localized signaling. However, the mechanisms driving its formation, in particular spatial segregation, are currently not understood. With a reaction diffusion model incorporating thermodynamics, elasticity, and reaction kinetics, we examine the hypothesis that differing bond lengths (extracellular domain sizes) are the driving force behind molecular segregation. We derive two key conditions necessary for segregation: a thermodynamic criterion on the effective bond elasticity and a requirement for the seeding/nucleation of domains. Domains have a minimum length scale and will only spontaneously coalesce/aggregate if the contact area is small or the membrane relaxation distance large. Otherwise, differential attachment of receptors to the cytoskeleton is required for central aggregation. Our analysis indicates that differential bond lengths have a significant effect on synapse dynamics, i.e., there is a significant contribution to the free energy of the interaction, suggesting that segregation by differential bond length is important in cell-cell contact interfaces and the immunological synapse. PMID:12324401

  9. Relation between inflammables and ignition sources in aircraft environments

    NASA Technical Reports Server (NTRS)

    Scull, Wilfred E

    1951-01-01

    A literature survey was conducted to determine the relation between aircraft ignition sources and inflammables. Available literature applicable to the problem of aircraft fire hazards is analyzed and discussed. Data pertaining to the effect of many variables on ignition temperatures, minimum ignition pressures, minimum spark-ignition energies of inflammables, quenching distances of electrode configurations, and size of openings through which flame will not propagate are presented and discussed. Ignition temperatures and limits of inflammability of gasoline in air in different test environments, and the minimum ignition pressures and minimum size of opening for flame propagation in gasoline-air mixtures are included; inerting of gasoline-air mixtures is discussed.

  10. Criterion-Validity of Commercially Available Physical Activity Tracker to Estimate Step Count, Covered Distance and Energy Expenditure during Sports Conditions

    PubMed Central

    Wahl, Yvonne; Düking, Peter; Droszez, Anna; Wahl, Patrick; Mester, Joachim

    2017-01-01

    Background: In the past years, there was an increasing development of physical activity tracker (Wearables). For recreational people, testing of these devices under walking or light jogging conditions might be sufficient. For (elite) athletes, however, scientific trustworthiness needs to be given for a broad spectrum of velocities or even fast changes in velocities reflecting the demands of the sport. Therefore, the aim was to evaluate the validity of eleven Wearables for monitoring step count, covered distance and energy expenditure (EE) under laboratory conditions with different constant and varying velocities. Methods: Twenty healthy sport students (10 men, 10 women) performed a running protocol consisting of four 5 min stages of different constant velocities (4.3; 7.2; 10.1; 13.0 km·h−1), a 5 min period of intermittent velocity, and a 2.4 km outdoor run (10.1 km·h−1) while wearing eleven different Wearables (Bodymedia Sensewear, Beurer AS 80, Polar Loop, Garmin Vivofit, Garmin Vivosmart, Garmin Vivoactive, Garmin Forerunner 920XT, Fitbit Charge, Fitbit Charge HR, Xaomi MiBand, Withings Pulse Ox). Step count, covered distance, and EE were evaluated by comparing each Wearable with a criterion method (Optogait system and manual counting for step count, treadmill for covered distance and indirect calorimetry for EE). Results: All Wearables, except Bodymedia Sensewear, Polar Loop, and Beurer AS80, revealed good validity (small MAPE, good ICC) for all constant and varying velocities for monitoring step count. For covered distance, all Wearables showed a very low ICC (<0.1) and high MAPE (up to 50%), revealing no good validity. The measurement of EE was acceptable for the Garmin, Fitbit and Withings Wearables (small to moderate MAPE), while Bodymedia Sensewear, Polar Loop, and Beurer AS80 showed a high MAPE up to 56% for all test conditions. 
Conclusion: In our study, most Wearables provide an acceptable level of validity for step counts at different constant and intermittent running velocities reflecting sports conditions. However, the covered distance, as well as the EE could not be assessed validly with the investigated Wearables. Consequently, covered distance and EE should not be monitored with the presented Wearables, in sport specific conditions. PMID:29018355
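The MAPE (mean absolute percentage error) used to grade the devices against the criterion method is straightforward to compute. A Python sketch; the step counts below are invented for illustration:

```python
def mape(reference, estimates):
    """Mean absolute percentage error of a device's estimates against a
    criterion method, in percent."""
    errors = [abs(e - r) / r for r, e in zip(reference, estimates)]
    return 100.0 * sum(errors) / len(errors)

# Criterion step counts vs. a tracker's readings at four velocities:
ref = [500, 620, 740, 860]
est = [490, 640, 730, 880]
print(round(mape(ref, est), 2))
```

A small MAPE together with a high intraclass correlation coefficient (ICC) is what the study counts as good validity.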

  11. [Pigment dispersion and Artisan implants: crystalline lens rise as a safety criterion].

    PubMed

    Baikoff, G; Bourgeon, G; Jodai, H Jitsuo; Fontaine, A; Vieira Lellis, F; Trinquet, L

    2005-06-01

To validate the theoretical notion of crystalline lens rise as a safety criterion for Artisan implants, in order to prevent the development of pigment dispersion in the implanted eye. Crystalline lens rise is defined as the distance between the crystalline lens's anterior pole and the horizontal plane joining the opposite iridocorneal recesses. We analyzed the biometric measurements of 87 eyes with an Artisan implant. A comparative analysis of the crystalline lens rise was carried out on the nine eyes having developed pigment dispersion and the 78 eyes with no problems. Among the modern anterior segment imaging devices (Artemis, Scheimpflug photography, optical coherence tomography, radiology exploration, magnetic resonance imaging, computed tomography), an anterior chamber optical coherence tomography (AC-OCT) prototype was used. The study confirmed the working hypothesis: the crystalline lens rise must be considered a new safety criterion for implanting Artisan phakic lenses. Indeed, the higher the crystalline lens rise, the greater the risk of developing pigment dispersion in the pupil area. This complication is more frequent in hyperopes than in myopes. We can consider that there is little or no risk of pigment dispersion if the rise is below 600 microm; however, at 600 microm or greater, there is a 67% rate of pupillary pigment dispersion. In certain cases, when the implant was loosely fixed, there was no traction on the iris root; this is a complication that can be avoided or delayed. The crystalline lens rise must be part of the new safety criteria taken into consideration when inserting an Artisan implant. This notion must also be applied to other types of phakic implants. The distance remaining between the crystalline lens rise and the 600-microm theoretical safety level allows one to calculate a safety time interval.

  12. Space availability in confined sheep during pregnancy, effects in movement patterns and use of space.

    PubMed

    Averós, Xavier; Lorea, Areta; Beltrán de Heredia, Ignacia; Arranz, Josune; Ruiz, Roberto; Estevez, Inma

    2014-01-01

Space availability is essential to ensure animal welfare. To determine the effect of space availability on movement and space use in pregnant ewes (Ovis aries), 54 individuals were studied during the last 11 weeks of gestation. Three treatments were tested (1, 2, and 3 m2/ewe; 6 ewes/group). Ewes' positions were collected for 15 minutes using continuous scan samplings two days/week. Total and net distance, net/total distance ratio, maximum and minimum step length, movement activity, angular dispersion, nearest, furthest and mean neighbour distance, peripheral location ratio, and corrected peripheral location ratio were calculated. Restriction in space availability resulted in smaller total travelled distance, net to total distance ratio, maximum step length, and angular dispersion but higher movement activity at 1 m2/ewe as compared to 2 and 3 m2/ewe (P<0.01). On the other hand, nearest and furthest neighbour distances increased from 1 to 3 m2/ewe (P<0.001). The largest total distance, maximum and minimum step lengths, and movement activity, as well as the lowest net/total distance ratio and angular dispersion, were observed during the first weeks (P<0.05), while inter-individual distances increased through gestation. Results indicate that movement patterns and space use in ewes were clearly restricted by limiting space availability to 1 m2/ewe. This was reflected in shorter, more sinuous trajectories composed of shorter steps, lower inter-individual distances and higher movement activity, potentially linked with higher restlessness levels. In contrast, differences between 2 and 3 m2/ewe for most variables indicate that increasing space availability from 2 to 3 m2/ewe would have limited benefits, reflected mostly in a further increase in the inter-individual distances among group members. 
No major variations in spatial requirements were detected through gestation, except for slight increments in inter-individual distances and an initial adaptation period, with ewes being restless and highly motivated to explore their new environment.
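Several of the movement measures above reduce to simple step geometry over the scan-sampled positions. A hedged Python sketch with invented coordinates; a lower net/total ratio indicates a more sinuous trajectory, as reported for the 1 m2/ewe treatment:

```python
import math

def trajectory_metrics(points):
    """Movement measures for a scan-sampled trajectory: total path length,
    net (start-to-end) displacement, their ratio, and the maximum and
    minimum step lengths."""
    steps = [math.dist(a, b) for a, b in zip(points, points[1:])]
    total = sum(steps)
    net = math.dist(points[0], points[-1])
    return {
        "total": total,
        "net": net,
        "net_total_ratio": net / total if total else 0.0,
        "max_step": max(steps),
        "min_step": min(steps),
    }

# A sinuous path covers more ground for the same forward progress:
straight = trajectory_metrics([(0, 0), (1, 0), (2, 0), (3, 0)])
zigzag = trajectory_metrics([(0, 0), (1, 1), (2, 0), (3, 1)])
print(straight["net_total_ratio"], zigzag["net_total_ratio"])
```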

  13. A Classroom Note on: The Average Distance in an Ellipse

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2011-01-01

    This article presents an applied calculus exercise that can be easily shared with students. One of Kepler's greatest discoveries was the fact that the planets move in elliptic orbits with the sun at one focus. Astronomers characterize the orbits of particular planets by their minimum and maximum distances to the sun, known respectively as the…

  14. Estimation of Discontinuous Displacement Vector Fields with the Minimum Description Length Criterion.

    DTIC Science & Technology

    1990-10-01

type of approach for finding a dense displacement vector field has a time complexity that allows a real-time implementation when an appropriate control... hardly vector fields as they appear in stereo or motion. The reason for this is the fact that local displacement vector field (DVF) estimates have... objects' motion, but that the quantitative optical flow is not a reliable measure of the real motion [VP87, SU87]. This applies even more to the

  15. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

    A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.

  16. Planar Steering of a Single Ferrofluid Drop by Optimal Minimum Power Dynamic Feedback Control of Four Electromagnets at a Distance

    PubMed Central

    Probst, R.; Lin, J.; Komaee, A.; Nacev, A.; Cummins, Z.

    2010-01-01

    Any single permanent magnet or electromagnet will always attract a magnetic fluid. For this reason it is difficult to precisely position and manipulate ferrofluid at a distance from magnets. We develop and experimentally demonstrate optimal (minimum electrical power) 2-dimensional manipulation of a single droplet of ferrofluid by feedback control of 4 external electromagnets. The control algorithm we have developed takes into account, and is explicitly designed for, the nonlinear (fast decay in space, quadratic in magnet strength) way in which the magnets actuate the ferrofluid, and it also corrects for electromagnet charging time delays. With this control, we show that dynamic actuation of electromagnets held outside a domain can be used to position a droplet of ferrofluid at any desired location and steer it along any desired path within that domain – an example of precision control of a ferrofluid by magnets acting at a distance. PMID:21218157

  17. MOnthly TEmperature DAtabase of Spain 1951-2010: MOTEDAS (2): The Correlation Decay Distance (CDD) and the spatial variability of maximum and minimum monthly temperature in Spain during 1981-2010.

    NASA Astrophysics Data System (ADS)

    Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos

    2014-05-01

    One of the key points in the development of the MOTEDAS dataset (see Poster 1, MOTEDAS) in the framework of the HIDROCAES project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), using the Correlation Decay Distance (CDD) function, with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOTEDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a monthly correlation matrix for 1981-2010 among the monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (Spanish National Meteorological Agency). Monthly anomalies (differences between the data and the 1981-2010 mean) were used to prevent the annual cycle from dominating the CDD estimation. For each station and time scale, the common variance r2 (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r2 and distance was modelled with the following equation (1): Log(r2ij) = b · dij (1), where r2ij is the common variance between the target series (i) and a neighbouring series (j), dij is the distance between them, and b is the slope of the ordinary least-squares linear regression, fitted using only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required.
Finally, monthly, seasonal and annual CDD values were interpolated using Ordinary Kriging with a spherical variogram over the conterminous land of Spain and mapped on a regular 10-km grid (a resolution similar to the mean distance between stations). In the conterminous land of Spain, the distance at which pairs of stations share a common variance in temperature (both maximum Tmax and minimum Tmin) above the selected threshold (50%, Pearson r ~0.70) does not on average exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coastland-to-inland gradient at annual, seasonal and monthly scales, with the highest spatial variability along coastal areas and lower variability inland. The highest spatial variability coincides particularly with coastal areas surrounded by mountain chains, suggesting that orography is one of the main factors driving inter-station variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than daytime temperature. The results suggest that, in general, local factors affect the spatial variability of monthly Tmin more than that of Tmax, so a denser network would be necessary to capture the higher spatial variability of Tmin. A conservative distance for reference series can be evaluated at 200 km, which we propose for the conterminous land of Spain and use in the development of MOTEDAS.
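
    Equation (1) can be applied per station in a few lines of code. A minimal sketch, assuming the pairwise common variances r2 and inter-station distances are already computed; the 0.5 threshold corresponds to the 50% common variance criterion in the text:

```python
import math

def cdd_threshold(distances_km, r2_values, threshold=0.5):
    """Estimate the Correlation Decay Distance for one target station.

    Fits log(r2) = b * d by least squares through the origin
    (equation (1) in the text) and returns the distance at which
    the modelled common variance falls to `threshold`.
    """
    y = [math.log(r2) for r2 in r2_values]
    b = sum(d * yi for d, yi in zip(distances_km, y)) / \
        sum(d * d for d in distances_km)
    return math.log(threshold) / b

# Synthetic station pairs whose common variance decays with b = -0.005/km:
dists = [50.0, 100.0, 200.0, 300.0]
r2 = [math.exp(-0.005 * d) for d in dists]
d50 = cdd_threshold(dists, r2)  # distance at which r2 drops to 0.5
```

    In the study this would be repeated for every station and month, and the resulting CDD fields interpolated by kriging.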

  18. The Effects of Long Distance Running on Preadolescent Children.

    ERIC Educational Resources Information Center

    Covington, N. Kay

    This study investigated the effects of selected physiological variables on preadolescent male and female long distance runners. The trained group was comprised of 20 children between the ages of 8 and 10 who had been running a minimum of 20 miles per week for two months or longer. The control group was made up of 20 children of the same ages who…

  19. Optimizing the Launch of a Projectile to Hit a Target

    ERIC Educational Resources Information Center

    Mungan, Carl E.

    2017-01-01

    Some teenagers are exploring the outer perimeter of a castle. They notice a spy hole in its wall, across the moat a horizontal distance "x" and vertically up the wall a distance "y." They decide to throw pebbles at the hole. One girl wants to use physics to throw with the minimum speed necessary to hit the hole. What is the…
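
    The minimum-speed answer the article builds toward has a compact closed form for a drag-free projectile. A sketch of the standard result (function names are mine; g = 9.81 m/s2 assumed):

```python
import math

def minimum_launch_speed(x, y, g=9.81):
    """Minimum launch speed (m/s) for a projectile to reach a point a
    horizontal distance x and height y away, neglecting air resistance.
    Standard result: v_min**2 = g * (y + sqrt(x**2 + y**2))."""
    return math.sqrt(g * (y + math.hypot(x, y)))

def optimal_angle(x, y, g=9.81):
    """Launch angle (radians) achieving the minimum speed:
    tan(theta) = (y + sqrt(x**2 + y**2)) / x."""
    return math.atan2(y + math.hypot(x, y), x)

v = minimum_launch_speed(10.0, 5.0)   # hole 10 m away, 5 m up the wall
theta = optimal_angle(10.0, 5.0)
```

    For a target at the launch height (y = 0) this reduces to the familiar maximum-range results v_min = sqrt(g·x) at a 45° launch angle.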

  20. Strain tensor selection and the elastic theory of incompatible thin sheets.

    PubMed

    Oshri, Oz; Diamant, Haim

    2017-05-01

    The existing theory of incompatible elastic sheets uses the deviation of the surface metric from a reference metric to define the strain tensor [Efrati et al., J. Mech. Phys. Solids 57, 762 (2009)JMPSA80022-509610.1016/j.jmps.2008.12.004]. For a class of simple axisymmetric problems we examine an alternative formulation, defining the strain based on deviations of distances (rather than distances squared) from their rest values. While the two formulations converge in the limit of small slopes and in the limit of an incompressible sheet, for other cases they are found not to be equivalent. The alternative formulation offers several features which are absent in the existing theory. (a) In the case of planar deformations of flat incompatible sheets, it yields linear, exactly solvable, equations of equilibrium. (b) When reduced to uniaxial (one-dimensional) deformations, it coincides with the theory of extensible elastica; in particular, for a uniaxially bent sheet it yields an unstrained cylindrical configuration. (c) It gives a simple criterion determining whether an isometric immersion of an incompatible sheet is at mechanical equilibrium with respect to normal forces. For a reference metric of constant positive Gaussian curvature, a spherical cap is found to satisfy this criterion except in an arbitrarily narrow boundary layer.

  1. Strain tensor selection and the elastic theory of incompatible thin sheets

    NASA Astrophysics Data System (ADS)

    Oshri, Oz; Diamant, Haim

    2017-05-01

    The existing theory of incompatible elastic sheets uses the deviation of the surface metric from a reference metric to define the strain tensor [Efrati et al., J. Mech. Phys. Solids 57, 762 (2009), 10.1016/j.jmps.2008.12.004]. For a class of simple axisymmetric problems we examine an alternative formulation, defining the strain based on deviations of distances (rather than distances squared) from their rest values. While the two formulations converge in the limit of small slopes and in the limit of an incompressible sheet, for other cases they are found not to be equivalent. The alternative formulation offers several features which are absent in the existing theory. (a) In the case of planar deformations of flat incompatible sheets, it yields linear, exactly solvable, equations of equilibrium. (b) When reduced to uniaxial (one-dimensional) deformations, it coincides with the theory of extensible elastica; in particular, for a uniaxially bent sheet it yields an unstrained cylindrical configuration. (c) It gives a simple criterion determining whether an isometric immersion of an incompatible sheet is at mechanical equilibrium with respect to normal forces. For a reference metric of constant positive Gaussian curvature, a spherical cap is found to satisfy this criterion except in an arbitrarily narrow boundary layer.

  2. Relation Between Inflammables and Ignition Sources in Aircraft Environments

    NASA Technical Reports Server (NTRS)

    Scull, Wilfred E

    1950-01-01

    A literature survey was conducted to determine the relation between aircraft ignition sources and inflammables. Available literature applicable to the problem of aircraft fire hazards is analyzed and discussed herein. Data pertaining to the effect of many variables on ignition temperatures, minimum ignition pressures, and minimum spark-ignition energies of inflammables, quenching distances of electrode configurations, and sizes of openings incapable of flame propagation are presented and discussed. The ignition temperatures and limits of inflammability of gasoline in air in different test environments, and the minimum ignition pressure and minimum size of openings for flame propagation of gasoline-air mixtures, are included. Inerting of gasoline-air mixtures is discussed.

  3. Optimal Tikhonov regularization for DEER spectroscopy

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α, and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
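
    The core computation is small. A minimal numpy sketch of Tikhonov regularization with a second-derivative operator, one of the operators compared in the paper; the kernel K, signal S, and the α-selection step are left to the user, and this is not the authors' implementation:

```python
import numpy as np

def second_derivative_operator(n):
    """Discrete second-derivative operator L, one common choice of
    regularization operator in DEER analysis."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def tikhonov_solve(K, S, alpha, L):
    """Solve min_P ||K P - S||^2 + alpha^2 ||L P||^2 in closed form:
    P = (K^T K + alpha^2 L^T L)^{-1} K^T S."""
    A = K.T @ K + alpha**2 * (L.T @ L)
    return np.linalg.solve(A, K.T @ S)
```

    With α = 0 this reduces to ordinary least squares; choosing α (e.g. by the Akaike information criterion or generalized cross validation) is the separate model-selection step the paper benchmarks.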

  4. Sparsity of the normal matrix in the refinement of macromolecules at atomic and subatomic resolution.

    PubMed

    Jelsch, C

    2001-09-01

    The normal matrix in the least-squares refinement of macromolecules is very sparse when the resolution reaches atomic and subatomic levels. The elements of the normal matrix, related to coordinates, thermal motion and charge-density parameters, have a global tendency to decrease rapidly with the interatomic distance between the atoms concerned. For instance, in the case of the protein crambin at 0.54 Å resolution, the elements are reduced by two orders of magnitude for distances above 1.5 Å. The neglect a priori of most of the normal-matrix elements according to a distance criterion represents an approximation in the refinement of macromolecules, which is particularly valid at very high resolution. The analytical expressions of the normal-matrix elements, which have been derived for the coordinates and the thermal parameters, show that the degree of matrix sparsity increases with the diffraction resolution and the size of the asymmetric unit.
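
    The distance criterion amounts to a mask over atom pairs: matrix blocks for pairs beyond a cutoff are neglected a priori. A toy sketch (coordinates in Å; the real refinement operates on per-parameter blocks rather than this simplified per-atom mask, and the cutoff value is illustrative):

```python
import math

def sparsity_mask(coords, cutoff=1.5):
    """Boolean mask over atom pairs: True where the normal-matrix block
    is retained (interatomic distance <= cutoff, in Angstroms), False
    where it is neglected a priori under the distance criterion."""
    n = len(coords)
    return [[math.dist(coords[i], coords[j]) <= cutoff for j in range(n)]
            for i in range(n)]

def sparsity_fraction(mask):
    """Fraction of pairwise blocks retained (lower = sparser matrix)."""
    kept = sum(sum(row) for row in mask)
    return kept / len(mask) ** 2
```

    As the structure grows, distant pairs dominate, so the retained fraction shrinks, which is the sense in which sparsity increases with the size of the asymmetric unit.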

  5. Transport of Escherichia coli in 25 m quartz sand columns

    NASA Astrophysics Data System (ADS)

    Lutterodt, G.; Foppen, J. W. A.; Maksoud, A.; Uhlenbrook, S.

    2011-01-01

    To help improve the prediction of bacteria travel distances in aquifers, laboratory experiments were conducted to measure the distance-dependent sticking efficiencies of two low-attaching Escherichia coli strains (UCFL-94 and UCFL-131). The experimental set-up consisted of a 25 m long helical column with a diameter of 3.2 cm, packed with 99.1% pure quartz sand saturated with a solution of magnesium sulfate and calcium chloride. Bacteria mass breakthrough at sampling distances ranging from 6 to 25.65 m was observed to quantify bacteria attachment over total transport distances (αL) and sticking efficiencies in large intra-column segments (αi) (> 5 m). Fractions of cells retained (Fi) in a column segment as a function of αi were fitted with a power-law distribution, from which the minimum sticking efficiency, defined as the sticking efficiency of the 0.001% fraction of the total input mass retained (corresponding to a 5-log removal), was extrapolated. Low values of αL in the order of 10⁻⁴ and 10⁻³ were obtained for UCFL-94 and UCFL-131, respectively, while αi values ranged between 10⁻⁶ and 10⁻³ for UCFL-94 and between 10⁻⁵ and 10⁻⁴ for UCFL-131. In addition, both αL and αi decreased with increasing transport distance, and high coefficients of determination (0.99) were obtained for the power-law distributions of αi for the two strains. Extrapolated minimum sticking efficiencies were 10⁻⁷ and 10⁻⁸ for UCFL-94 and UCFL-131, respectively. Fractions of cells exiting the column were 0.19 and 0.87 for UCFL-94 and UCFL-131, respectively. We conclude that environmentally realistic sticking efficiency values in the order of 10⁻⁴ and 10⁻³, and much lower sticking efficiencies in the order of 10⁻⁵, are measurable in the laboratory. Also, power-law distributions of sticking efficiencies commonly observed over limited intra-column distances (< 2 m) are applicable at large transport distances (> 6 m) in columns packed with quartz grains.
High fractions of bacteria populations may possess the so-called minimum sticking efficiency, expressing their ability to be transported over distances longer than would be predicted using sticking efficiencies measured in experiments with either short (< 1 m) or long (> 25 m) columns. The variable values of sticking efficiencies within and among the strains also reveal heterogeneities, possibly due to variations in cell surface characteristics. The low sticking efficiency values measured underline the importance of the long columns used in the experiments, and the lower extrapolated minimum sticking efficiencies make the method a valuable tool for delineating protection areas in real-world scenarios.
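
    The link between a sticking efficiency and a travel distance can be illustrated with the classical clean-bed filtration equation, ln(C/C0) = -(3/2)·((1-ε)/dc)·η0·α·L. This is a hedged sketch of that textbook model, not necessarily the authors' exact formulation; dc is the grain diameter, ε the porosity, and η0 the single-collector contact efficiency:

```python
import math

def sticking_efficiency(c_ratio, L, d_c, porosity, eta0):
    """Back out the sticking efficiency alpha from a breakthrough ratio
    C/C0 over column length L (m), using the clean-bed filtration model
    ln(C/C0) = -(3/2) * ((1 - porosity) / d_c) * eta0 * alpha * L."""
    return -math.log(c_ratio) * 2 * d_c / (3 * (1 - porosity) * eta0 * L)

def distance_for_log_removal(alpha, d_c, porosity, eta0, logs=5):
    """Travel distance (m) giving `logs` orders of magnitude removal for
    a population with a single sticking efficiency alpha."""
    return logs * math.log(10) * 2 * d_c / (3 * (1 - porosity) * eta0 * alpha)
```

    Under this model a tenfold drop in alpha translates into a tenfold longer 5-log-removal distance, which is why the very low minimum sticking efficiencies matter for delineating protection areas.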

  6. What is an adequate sample size? Operationalising data saturation for theory-based interview studies.

    PubMed

    Francis, Jill J; Johnston, Marie; Robertson, Clare; Glidewell, Liz; Entwistle, Vikki; Eccles, Martin P; Grimshaw, Jeremy M

    2010-12-01

    In interview studies, sample size is often justified by interviewing participants until reaching 'data saturation'. However, there is no agreed method of establishing this. We propose principles for deciding saturation in theory-based interview studies (where conceptual categories are pre-established by existing theory). First, specify a minimum sample size for initial analysis (initial analysis sample). Second, specify how many more interviews will be conducted without new ideas emerging (stopping criterion). We demonstrate these principles in two studies, based on the theory of planned behaviour, designed to identify three belief categories (Behavioural, Normative and Control), using an initial analysis sample of 10 and stopping criterion of 3. Study 1 (retrospective analysis of existing data) identified 84 shared beliefs of 14 general medical practitioners about managing patients with sore throat without prescribing antibiotics. The criterion for saturation was achieved for Normative beliefs but not for other beliefs or studywise saturation. In Study 2 (prospective analysis), 17 relatives of people with Paget's disease of the bone reported 44 shared beliefs about taking genetic testing. Studywise data saturation was achieved at interview 17. We propose specification of these principles for reporting data saturation in theory-based interview studies. The principles may be adaptable for other types of studies.
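
    The two principles (initial analysis sample plus stopping criterion) are easy to operationalise. A minimal sketch, assuming the analyst records how many new ideas each successive interview contributes; the defaults mirror the 10 + 3 values used in the studies:

```python
def saturation_interview(new_ideas_per_interview,
                         initial_sample=10, stopping_criterion=3):
    """Return the interview number at which data saturation is declared,
    or None if it is never reached.

    Saturation: after the initial analysis sample has been analysed,
    `stopping_criterion` consecutive interviews yield no new ideas.
    """
    run = 0
    for i, n_new in enumerate(new_ideas_per_interview, start=1):
        if i <= initial_sample:
            continue  # still within the initial analysis sample
        run = run + 1 if n_new == 0 else 0
        if run == stopping_criterion:
            return i
    return None
```

    This mirrors Study 2, where studywise saturation was declared at interview 17 once three consecutive interviews added nothing new.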

  7. On a stronger-than-best property for best prediction

    NASA Astrophysics Data System (ADS)

    Teunissen, P. J. G.

    2008-03-01

    The minimum mean squared error (MMSE) criterion is a popular criterion for devising best predictors. In the case of linear predictors, it has the advantage that no further distributional assumptions need to be made, other than about the first- and second-order moments. In the spatial and Earth sciences, it is the best linear unbiased predictor (BLUP) that is used most often. Despite the fact that in this case only the first- and second-order moments need to be known, one often still makes statements about the complete distribution, in particular when statistical testing is involved. For such cases, one can do better than the BLUP, as shown in Teunissen (J Geod. doi: 10.1007/s00190-007-0140-6, 2006), and thus devise predictors that have a smaller MMSE than the BLUP. Hence, these predictors are to be preferred over the BLUP, if one really values the MMSE criterion. In the present contribution, we will show, however, that the BLUP has another optimality property than the MMSE property, provided that the distribution is Gaussian. It will be shown that in the Gaussian case, the prediction error of the BLUP has the highest possible probability of all linear unbiased predictors of being bounded in the weighted squared norm sense. This is a stronger property than the often advertised MMSE property of the BLUP.

  8. Ballistics Trajectory and Impact Analysis for Insensitive Munitions and Hazard Classification Project Criteria

    NASA Astrophysics Data System (ADS)

    Baker, Ernest; van der Voort, Martijn; NATO Munitions Safety Information Analysis Centre Team

    2017-06-01

    Ballistic trajectory and impact-condition calculations were conducted in order to investigate the origin of the projection criteria for Insensitive Munitions (IM) and Hazard Classification (HC). The results show that the existing IM and HC projection criteria distance-mass relations are based on launch energy rather than impact conditions. The distance-mass relations were reproduced with TRAJCAN trajectory analysis by using launch energies of 8, 20 and 79 J and calculating the maximum impact distance reached by a natural (steel) fragment launched from 1 m height. The analysis shows that at the maximum throw distances, the impact energy is generally much smaller than the launch energy. Using maximum-distance projections, new distance-mass relations were developed that match criteria based on impact energy at 15 m and beyond, rather than launch energy. Injury analysis was conducted using penetration-injury and blunt-injury models. The smallest projectile masses in the distance-mass relations lie in the transition region from penetration injury to blunt injury; for this reason, blunt injury dominates the assessment of injury or lethality. State-of-the-art blunt injury models predict only minor injury for a 20 J impact, whereas for a 79 J blunt impact major injury is likely to occur. MSIAC recommends changing the distance-mass relation that distinguishes a munition's burning response to a 20 J impact energy criterion at 15 m, and updating the UN Orange Book.
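
    The launch-energy interpretation can be illustrated with a drag-free maximum-range bound. A simplified sketch, not the TRAJCAN analysis: real trajectory codes include aerodynamic drag (which is precisely why impact energies at maximum distance fall below launch energies), whereas this sketch uses the closed-form drag-free optimum R_max = (v/g)·sqrt(v² + 2gh):

```python
import math

def max_throw_distance(mass_kg, launch_energy_J, height_m=1.0, g=9.81):
    """Maximum horizontal distance for a fragment launched from height
    `height_m` with kinetic energy `launch_energy_J`, neglecting drag.
    Drag-free optimum over launch angle: R_max = (v/g) * sqrt(v^2 + 2 g h).
    This is an upper bound; with drag the distance is shorter."""
    v = math.sqrt(2 * launch_energy_J / mass_kg)
    return (v / g) * math.sqrt(v * v + 2 * g * height_m)

# e.g. a 100 g fragment launched with 20 J from 1 m height
d = max_throw_distance(0.1, 20.0)
```

    Sweeping mass at fixed launch energy reproduces the qualitative shape of a launch-energy-based distance-mass relation: lighter fragments fly faster and farther for the same energy.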

  9. Read distance performance and variation of 5 low-frequency radio frequency identification panel transceiver manufacturers.

    PubMed

    Ryan, S E; Blasi, D A; Anglin, C O; Bryant, A M; Rickard, B A; Anderson, M P; Fike, K E

    2010-07-01

    Use of electronic animal identification technologies by livestock managers is increasing, but performance of these technologies can be variable when used in livestock production environments. This study was conducted to determine whether 1) read distance of low-frequency radio frequency identification (RFID) transceivers is affected by the type of transponder being interrogated; 2) read distance variation of low-frequency RFID transceivers is affected by transceiver manufacturer; and 3) read distance of various transponder-transceiver manufacturer combinations meets the 2004 United States Animal Identification Plan (USAIP) bovine standards subcommittee minimum read distance recommendation of 60 cm. Twenty-four transceivers (n = 5 transceivers per manufacturer for Allflex, Boontech, Farnam, and Osborne; n = 4 transceivers for Destron Fearing) were tested with 60 transponders [n = 10 transponders per type for Allflex full duplex B (FDX-B), Allflex half duplex (HDX), Destron Fearing FDX-B, Farnam FDX-B, and Y-Tex FDX-B; n = 6 for Temple FDX-B (EM Microelectronic chip); and n = 4 for Temple FDX-B (HiTag chip)] presented in the parallel orientation. All transceivers and transponders met International Organization for Standardization 11784 and 11785 standards. Transponders represented both half-duplex and full-duplex low-frequency air interface technologies. Use of a mechanical trolley device enabled the transponders to be presented to the center of each transceiver at a constant rate, thereby reducing human error. Transponder and transceiver manufacturer interacted (P < 0.0001) to affect read distance, indicating that transceiver performance was greatly dependent upon the transponder type being interrogated. Twenty-eight of the 30 combinations of transceivers and transponders evaluated met the minimum recommended USAIP read distance. Mean read distances across the 30 combinations ranged from 45.1 to 129.4 cm.
Transceiver manufacturer and transponder type interacted to affect read distance variance (P < 0.05). Maximum read distance performance of low-frequency RFID technologies with low variance can be achieved by selecting specific transponder-transceiver combinations.

  10. Computational fluid dynamics (CFD) investigation of impacts of an obstruction on airflow in underground mines.

    PubMed

    Zhou, L; Goodman, G; Martikainen, A

    2013-01-01

    Continuous airflow monitoring can improve the safety of the underground work force by ensuring the uninterrupted and controlled distribution of mine ventilation to all working areas. Air velocity measurements vary significantly and can change rapidly depending on the exact measurement location and, in particular, due to the presence of obstructions in the air stream. Air velocity must be measured at locations away from obstructions to avoid the vortices and eddies that can produce inaccurate readings. Further, an uninterrupted measurement path cannot always be guaranteed when using continuous airflow monitors due to the presence of nearby equipment, personnel, roof falls and rib rolls. Effective use of these devices requires selection of a minimum distance from an obstacle, such that an air velocity measurement can be made but not affected by the presence of that obstacle. This paper investigates the impacts of an obstruction on the behavior of downstream airflow using a numerical CFD model calibrated with experimental test results from underground testing. Factors including entry size, obstruction size and the inlet or incident velocity are examined for their effects on the distributions of airflow around an obstruction. A relationship is developed between the minimum measurement distance and the hydraulic diameters of the entry and the obstruction. A final analysis considers the impacts of continuous monitor location on the accuracy of velocity measurements and on the application of minimum measurement distance guidelines.
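
    The paper's relationship is expressed in terms of hydraulic diameters, which for a rectangular mine entry reduce to a one-liner. A sketch of that geometric quantity only; the abstract does not give the fitted coefficients of the minimum-distance relationship, so none are assumed here:

```python
def hydraulic_diameter(width_m, height_m):
    """Hydraulic diameter of a rectangular cross-section (e.g. a mine
    entry or an obstruction's frontal area): D_h = 4 * A / P, where A is
    the cross-sectional area and P the wetted perimeter."""
    area = width_m * height_m
    perimeter = 2 * (width_m + height_m)
    return 4 * area / perimeter
```

    The minimum measurement distance downstream of an obstruction would then be expressed as a function of the entry's and the obstruction's hydraulic diameters, with coefficients taken from the calibrated CFD model.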

  11. Computational fluid dynamics (CFD) investigation of impacts of an obstruction on airflow in underground mines

    PubMed Central

    Zhou, L.; Goodman, G.; Martikainen, A.

    2015-01-01

    Continuous airflow monitoring can improve the safety of the underground work force by ensuring the uninterrupted and controlled distribution of mine ventilation to all working areas. Air velocity measurements vary significantly and can change rapidly depending on the exact measurement location and, in particular, due to the presence of obstructions in the air stream. Air velocity must be measured at locations away from obstructions to avoid the vortices and eddies that can produce inaccurate readings. Further, an uninterrupted measurement path cannot always be guaranteed when using continuous airflow monitors due to the presence of nearby equipment, personnel, roof falls and rib rolls. Effective use of these devices requires selection of a minimum distance from an obstacle, such that an air velocity measurement can be made but not affected by the presence of that obstacle. This paper investigates the impacts of an obstruction on the behavior of downstream airflow using a numerical CFD model calibrated with experimental test results from underground testing. Factors including entry size, obstruction size and the inlet or incident velocity are examined for their effects on the distributions of airflow around an obstruction. A relationship is developed between the minimum measurement distance and the hydraulic diameters of the entry and the obstruction. A final analysis considers the impacts of continuous monitor location on the accuracy of velocity measurements and on the application of minimum measurement distance guidelines. PMID:26388684

  12. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
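
    Goodness-of-fit comparison with AIC and BIC, as used here, can be sketched for least-squares fits, where both criteria reduce to functions of the residual sum of squares. The model names below are illustrative placeholders, not results from the paper:

```python
import math

def aic(rss, n, k):
    """Akaike information criterion (up to an additive constant) for a
    least-squares fit: n observations, k parameters, residual sum of
    squares rss. Lower is better."""
    return n * math.log(rss / n) + 2 * k

def bic(rss, n, k):
    """Bayesian information criterion; penalises parameters more
    heavily than AIC for n > 7 or so."""
    return n * math.log(rss / n) + k * math.log(n)

def best_model(fits, criterion=aic):
    """fits: dict mapping model name -> (rss, n, k).
    Returns the name of the model with the lowest criterion value."""
    return min(fits, key=lambda name: criterion(*fits[name]))
```

    In the paper, the same comparison is run per parity across the seven candidate lactation-curve models, fitted as non-linear mixed models.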

  13. Molecular marker to identify radiolarian species -toward establishment of paleo-environmental proxy-

    NASA Astrophysics Data System (ADS)

    Ishitani, Y.

    2017-12-01

    Marine fossilized unicellular plankton are known to harbour many genetically divergent species (biological species) within a single morphological species, and these biological species track species-specific environments much more precisely than morphological species do. Among such plankton, Radiolaria are one of the best candidates for time and environmental indicators in the modern and past oceans, because radiolarians are the only group that spans the entire water column from shallow to deep waters. However, the ecology and evolution of radiolarians have traditionally been studied in paleontology and paleoceanography at the level of morphological species. Although Radiolaria have huge potential as a novel proxy for a wide range of depths and environments, there is no criterion for identifying the biological species. The motivation for this study is to set a quantitative delimitation for establishing the biological species of radiolarians based on molecular data, to guide future ecological and paleo-environmental studies. Identification of biological species from ribosomal DNA sequences mainly follows two approaches: one uses the evolutionary distance of the small subunit (SSU) rDNA, the internal transcribed spacer regions of ribosomal DNA (ITS1 and ITS2), and the large subunit (LSU) rDNA; the other uses the secondary structure of ITS2. In the present study, all four possible genetic markers (SSU, ITS1, ITS2, and LSU rDNA) were amplified from 232 individuals of five radiolarian morphological species and used to examine the evolutionary distance and secondary structure of rDNA. This comprehensive survey clearly shows that the evolutionary distance of ITS1 rDNA and the secondary structure of ITS2 are suitable for identifying species. Notably, the evolutionary distance of ITS1 rDNA makes it possible to set a common delimitation for identifying biological species, at 0.225 substitutions per site. The results show that ITS1 and ITS2 rDNA could serve as the criterion for radiolarian species identification.
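
    A delimitation of this kind is applied by thresholding pairwise distances. A minimal sketch using the simple p-distance (uncorrected proportion of differing sites) as a stand-in; the study's actual distances may be model-corrected, so treat this as illustrative only:

```python
def p_distance(seq_a, seq_b):
    """Proportion of differing sites between two aligned sequences,
    with gaps and ambiguous characters excluded from the comparison."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b)
             if a in "ACGT" and b in "ACGT"]
    if not pairs:
        return float("nan")
    return sum(a != b for a, b in pairs) / len(pairs)

def same_species(seq_a, seq_b, threshold=0.225):
    """Apply an ITS1-style delimitation: below the threshold distance
    (substitutions per site), treat the two individuals as the same
    biological species."""
    return p_distance(seq_a, seq_b) < threshold
```

    In practice the distances would be computed over the aligned ITS1 region for all individuals and the threshold applied to cluster them into biological species.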

  14. Association between mild cognitive impairment and trajectory-based spatial parameters during timed up and go test using a laser range sensor.

    PubMed

    Nishiguchi, Shu; Yorozu, Ayanori; Adachi, Daiki; Takahashi, Masaki; Aoyama, Tomoki

    2017-08-08

    The Timed Up and Go (TUG) test may be a useful tool to detect not only mobility impairment but also possible cognitive impairment. In this cross-sectional study, we used the TUG test to investigate the associations between trajectory-based spatial parameters measured by laser range sensor (LRS) and cognitive impairment in community-dwelling older adults. The participants were 63 community-dwelling older adults (mean age, 73.0 ± 6.3 years). The trajectory-based spatial parameters during the TUG test were measured using an LRS. In each forward and backward phase, we calculated the minimum distance from the marker, the maximum distance from the x-axis (center line), the length of the trajectories, and the area of region surrounded by the trajectory of the center of gravity and the x-axis (center line). We measured mild cognitive impairment using the Mini-Mental State Examination score (26/27 was the cut-off score for defining mild cognitive impairment). Compared with participants with normal cognitive function, those with mild cognitive impairment exhibited the following trajectory-based spatial parameters: short minimum distance from the marker (p = 0.044), narrow area of center of gravity in the forward phase (p = 0.012), and a large forward/whole phase ratio of the area of the center of gravity (p = 0.026) during the TUG test. In multivariate logistic regression analyses, a short minimum distance from the marker (odds ratio [OR]: 0.82, 95% confidence interval [CI]: 0.69-0.98), narrow area of the center of gravity in the forward phase (OR: 0.01, 95% CI: 0.00-0.36), and large forward/whole phase ratio of the area of the center of gravity (OR: 0.94, 95% CI: 0.88-0.99) were independently associated with mild cognitive impairment. In conclusion, our results indicate that some of the trajectory-based spatial parameters measured by LRS during the TUG test were independently associated with cognitive impairment in older adults. 
In particular, older adults with cognitive impairment exhibit shorter minimum distances from the marker and asymmetrical trajectories during the TUG test.
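    The two discriminating parameters above can be sketched numerically. The following is a minimal illustration, assuming a 2D trajectory sampled as (x, y) points with the center line as the x-axis and a known turn-marker position; the function names and sampling are hypothetical, not the authors' LRS pipeline.

```python
import math

def min_distance_to_marker(trajectory, marker):
    """Smallest Euclidean distance from any trajectory sample to the turn marker."""
    return min(math.hypot(x - marker[0], y - marker[1]) for x, y in trajectory)

def area_vs_center_line(trajectory):
    """Area enclosed between the walking trajectory and the x-axis (center line),
    approximated by the trapezoidal rule over |y|."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        area += 0.5 * (abs(y0) + abs(y1)) * abs(x1 - x0)
    return area
```

    The minimum marker distance captures how tightly a walker cuts the turn, while the trapezoidal area approximates the region between the path and the center line used in the phase-wise comparisons.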

  15. Off-Resonant Two-Photon Absorption Cross-Section Enhancement of an Organic Chromophore on Gold Nanorods

    PubMed Central

    Sivapalan, Sean T.; Vella, Jarrett H.; Yang, Timothy K.; Dalton, Matthew J.; Haley, Joy E.; Cooper, Thomas M.; Urbas, Augustine M.; Tan, Loon-Seng; Murphy, Catherine J.

    2013-01-01

    Surface-plasmon-initiated interference effects of polyelectrolyte-coated gold nanorods on the two-photon absorption of an organic chromophore were investigated. With gold nanorods bearing 2, 4, 6, and 8 polyelectrolyte layers, the role of the plasmonic fields as a function of distance in such effects was examined. An unusual distance dependence was found: enhancements in the two-photon cross-section were at a minimum at an intermediate distance, then rose again at a greater distance. The observed enhancement values were compared to theoretical predictions using finite element analysis and showed good agreement, attributed to constructive and destructive interference effects. PMID:23687561

  16. Numerical modeling of injection, stress and permeability enhancement during shear stimulation at the Desert Peak Enhanced Geothermal System

    USGS Publications Warehouse

    Dempsey, David; Kelkar, Sharad; Davatzes, Nick; Hickman, Stephen H.; Moos, Daniel

    2015-01-01

    Creation of an Enhanced Geothermal System relies on stimulation of fracture permeability through self-propping shear failure that creates a complex fracture network with high surface area for efficient heat transfer. In 2010, shear stimulation was carried out in well 27-15 at Desert Peak geothermal field, Nevada, by injecting cold water at a pressure less than the minimum principal stress. An order-of-magnitude improvement in well injectivity was recorded. Here, we describe a numerical model that accounts for injection-induced stress changes and permeability enhancement during this stimulation. In a two-part study, we use the coupled thermo-hydrological-mechanical simulator FEHM to: (i) construct a wellbore model for non-steady bottom-hole temperature and pressure conditions during the injection, and (ii) apply these pressures and temperatures as a source term in a numerical model of the stimulation. In this model, a Mohr-Coulomb failure criterion and an empirical fracture permeability relation are developed to describe the permeability evolution of the fractured rock. The numerical model is calibrated using laboratory measurements of material properties on representative core samples and wellhead records of injection pressure and mass flow during the shear stimulation. The model captures both the absence of stimulation at low wellhead pressure (WHP ≤1.7 and ≤2.4 MPa) and the timing and magnitude of the injectivity rise at medium WHP (3.1 MPa). Results indicate that thermoelastic effects near the wellbore and the associated non-local stresses further from the well combine to propagate a failure front away from the injection well. Elevated WHP promotes failure, increases the injection rate, and cools the wellbore; however, as the overpressure drops off with distance, thermal and non-local stresses play an ongoing role in promoting shear failure at increasing distance from the well.
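    The Mohr-Coulomb criterion invoked above can be written in terms of effective principal stresses. A minimal sketch follows (hypothetical parameter values; this is the textbook criterion, not the calibrated FEHM model):

```python
import math

def mohr_coulomb_fails(sigma1, sigma3, pore_pressure, cohesion, friction_angle_deg):
    """Return True if the Mohr circle of effective stresses touches or exceeds
    the Mohr-Coulomb envelope:
        (s1' - s3')/2 >= c*cos(phi) + ((s1' + s3')/2)*sin(phi)."""
    phi = math.radians(friction_angle_deg)
    s1 = sigma1 - pore_pressure   # effective maximum principal stress
    s3 = sigma3 - pore_pressure   # effective minimum principal stress
    shear = (s1 - s3) / 2.0       # radius of the Mohr circle
    normal = (s1 + s3) / 2.0      # center of the Mohr circle
    return shear >= cohesion * math.cos(phi) + normal * math.sin(phi)
```

    Raising pore pressure shifts the Mohr circle toward the failure envelope, which is why injection below the minimum principal stress can still trigger shear failure.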

  17. SU-F-T-405: Development of a Rapid Cardiac Contouring Tool Using Landmark-Driven Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelletier, C; Jung, J; Mosher, E

    2016-06-15

    Purpose: This study aims to develop a tool to rapidly delineate cardiac substructures for use in dosimetry for large-scale clinical trials or epidemiological investigations. The goal is to produce a system that can semi-automatically delineate nine cardiac structures to a reasonable accuracy within a couple of minutes. Methods: The cardiac contouring tool employs a Most Similar Atlas method, in which a selection criterion is used to pre-select the atlas most similar to the patient from a library of pre-defined atlases. Sixty contrast-enhanced cardiac computed tomography angiography (CTA) scans (30 male and 30 female) were manually contoured to serve as the atlas library. For each CTA, 12 structures were delineated. The Kabsch algorithm was used to compute the optimum rotation and translation matrices between the patient and atlas. The minimum root mean squared distance between the patient and atlas after transformation was used to select the most similar atlas. An initial study using 10 CTA sets was performed to assess system feasibility. A leave-one-patient-out analysis was performed, and fit criteria were calculated to evaluate the fit accuracy compared to manual contours. Results: For the pilot study, mean Dice indices of 0.895 were achieved for the whole heart, 0.867 for the ventricles, and 0.802 for the atria. In addition, mean distance was measured via the chord length distribution (CLD) between ground truth and the atlas structures for the four coronary arteries. The mean CLD for all coronary arteries was below 14 mm, with the left circumflex artery showing the best agreement (7.08 mm). Conclusion: The cardiac contouring tool is able to delineate cardiac structures with reasonable accuracy in less than 90 seconds. Pilot data indicate that the system is able to delineate the whole heart and ventricles within a reasonable accuracy using even a limited library. We are extending the atlas library to 60 adult males and females in total.
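    The atlas-selection step pairs a Kabsch alignment with a minimum-RMSD criterion. A 2D sketch of that idea is below, using the closed-form optimal rotation angle rather than the 3D SVD form of the Kabsch algorithm; the landmark data and function names are hypothetical, not the tool's implementation.

```python
import math

def align_rmsd(patient, atlas):
    """Optimally rotate and translate atlas landmarks onto patient landmarks
    (2D Kabsch/Procrustes) and return the residual RMSD used for selection."""
    n = len(patient)
    pcx = sum(p[0] for p in patient) / n; pcy = sum(p[1] for p in patient) / n
    acx = sum(a[0] for a in atlas) / n;  acy = sum(a[1] for a in atlas) / n
    P = [(x - pcx, y - pcy) for x, y in patient]   # centered patient points
    A = [(x - acx, y - acy) for x, y in atlas]     # centered atlas points
    # Closed-form optimal rotation angle in 2D
    num = sum(ax * py - ay * px for (px, py), (ax, ay) in zip(P, A))
    den = sum(ax * px + ay * py for (px, py), (ax, ay) in zip(P, A))
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    sq = 0.0
    for (px, py), (ax, ay) in zip(P, A):
        rx, ry = c * ax - s * ay, s * ax + c * ay   # rotated atlas point
        sq += (rx - px) ** 2 + (ry - py) ** 2
    return math.sqrt(sq / n)

def most_similar_atlas(patient, atlases):
    """Pick the (name, landmarks) pair with minimum post-alignment RMSD."""
    return min(atlases, key=lambda item: align_rmsd(patient, item[1]))[0]
```

    Because rotation and translation are factored out before the RMSD is computed, the criterion compares anatomical shape rather than pose.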

  18. 'Ready to hit the ground running': Alumni and employer accounts of a unique part-time distance learning pre-registration nurse education programme.

    PubMed

    Draper, Jan; Beretta, Ruth; Kenward, Linda; McDonagh, Lin; Messenger, Julie; Rounce, Jill

    2014-10-01

    This study explored the impact of The Open University's (OU) preregistration nursing programme on students' employability, career progression and its contribution to developing the nursing workforce across the United Kingdom. Designed for healthcare support workers who are sponsored by their employers, the programme is the only part-time supported open/distance learning programme in the UK leading to registration as a nurse. The international literature reveals that relatively little is known about the impact of previous experience as a healthcare support worker on the experience of transition, employability skills and career progression. To identify alumni and employer views of the perceived impact of the programme on employability, career progression and workforce development. A qualitative design using telephone interviews, which were digitally recorded and transcribed verbatim prior to content analysis to identify recurrent themes. Three geographical areas across the UK. Alumni (n=17) and employers (n=7). The inclusion criterion for alumni was a minimum of two years' post-qualifying experience. The inclusion criteria for employers were having responsibility for sponsoring students on the programme and employing them as newly qualified nurses. Four overarching themes were identified: transition, expectations, learning for and in practice, and flexibility. Alumni and employers were of the view that the programme equipped graduates well to meet the competencies and expectations of being a newly qualified nurse. It provided employers with a flexible route to growing their own workforce, and gave alumni the opportunity to achieve their ambition of becoming a qualified nurse when other, more conventional routes would not have been open to them. Some alumni had already demonstrated career progression.
Generalising the results requires caution due to the small, self-selecting sample, but the findings suggest that a widening participation model of pre-registration nurse education for employed healthcare support workers more than adequately prepares them for the realities of professional practice. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Comparison of Vocal Vibration-Dose Measures for Potential-Damage Risk Criteria

    PubMed Central

    Hunter, Eric J.

    2015-01-01

    Purpose Schoolteachers have become a benchmark population for the study of occupational voice use. A decade of vibration-dose studies on the teacher population allows a comparison to be made between specific dose measures for eventual assessment of damage risk. Method Vibration dosimetry is reformulated with the inclusion of collision stress. Two methods of estimating amplitude of vocal-fold vibration are compared to capture variations in vocal intensity. Energy loss from collision is added to the energy-dissipation dose. An equal-energy-dissipation criterion is defined and used on the teacher corpus as a potential-damage risk criterion. Results Comparison of time-, cycle-, distance-, and energy-dose calculations for 57 teachers reveals a progression in information content in the ability to capture variations in duration, speaking pitch, and vocal intensity. The energy-dissipation dose carries the greatest promise in capturing excessive tissue stress and collision but also the greatest liability, due to uncertainty in parameters. Cycle dose is least correlated with the other doses. Conclusion As a first guide to damage risk in excessive voice use, the equal-energy-dissipation dose criterion can be used to structure trade-off relations between loudness, adduction, and duration of speech. PMID:26172434

  20. Self-sustained criterion with photoionization for positive dc corona plasmas between coaxial cylinders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yuesheng, E-mail: yueshengzheng@fzu.edu.cn; Zhang, Bo, E-mail: shizbcn@tsinghua.edu.cn; He, Jinliang, E-mail: hejl@tsinghua.edu.cn

    The positive dc corona plasmas between coaxial cylinders in air under the application of a self-sustained criterion with photoionization are investigated in this paper. A photon absorption function suitable for cylindrical electrodes, which can characterize the total photons within the ionization region, is proposed on the basis of the classic corona onset criteria. Based on the general fluid model with the self-sustained criterion, the role of photoionization in the ionization region is clarified. It is found that the surface electric field remains constant under a relatively low corona current, while it is slightly weakened with increasing corona current. Similar tendencies can be found under different conductor radii and relative air densities. The small change of the surface electric field becomes more significant for the electron density distribution as well as the ionization activity under a high corona current, compared with the results under the assumption of a constant surface field. The assumption that the surface electric field remains constant should be corrected as the corona current increases when energetic electrons at a distance from the conductor surface are of concern.

  1. Triple-decker sandwiches and related compounds of the first-row transition metals containing cyclopentadienyl and benzene rings.

    PubMed

    Liu, Haibo; Li, Qian-shu; Xie, Yaoming; King, R Bruce; Schaefer, Henry F

    2010-08-12

    The triple-decker sandwich compound trans-Cp(2)V(2)(eta(6):eta(6)-mu-C(6)H(6)) has been synthesized, as well as "slipped" sandwich compounds of the type trans-Cp(2)Co(2)(eta(4):eta(4)-mu-arene) and the cis-Cp(2)Fe(2)(eta(4):eta(4)-mu-C(6)R(6)) derivatives with an Fe-Fe bond (Cp = eta(5)-cyclopentadienyl). Theoretical studies show that the symmetrical triple-decker sandwich structures trans-Cp(2)M(2)(eta(6):eta(6)-mu-C(6)H(6)) are the global minima for M = Ti, V, and Mn but lie approximately 10 kcal/mol above the global minimum for M = Cr. The nonbonding M...M distances and spin states in these triple-decker sandwich compounds can be related to the occupancies of the frontier bonding molecular orbitals. The global minimum for the chromium derivative is a singlet spin state cis-Cp(2)Cr(2)(eta(4):eta(4)-mu-C(6)H(6)) structure with a very short Cr-Cr distance of 2.06 A, suggesting a formal quadruple bond. A triplet state cis-Cp(2)Cr(2)(eta(4):eta(4)-mu-C(6)H(6)) structure with a predicted Cr[triple bond]Cr distance of 2.26 A lies only approximately 3 kcal/mol above this global minimum. For the later transition metals the global minima are predicted to be cis-Cp(2)M(2)(eta(6):eta(6)-mu-C(6)H(6)) structures with a metal-metal bond, rather than triple-decker sandwiches. These include singlet cis-Cp(2)Fe(2)(eta(4):eta(4)-mu-C(6)H(6)) with a predicted Fe=Fe double bond distance of 2.43 A, singlet cis-Cp(2)Co(2)(eta(3):eta(3)-mu-C(6)H(6)) with a predicted Co-Co single bond distance of 2.59 A, and triplet cis-Cp(2)Ni(2)(eta(3):eta(3)-mu-C(6)H(6)) with a predicted Ni-Ni distance of 2.71 A.

  2. Hyperspectral feature mapping classification based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli

    2016-03-01

    This paper proposes a hyperspectral feature mapping classification algorithm based on mathematical morphology. Without prior information such as a spectral library, the spectral and spatial information can be used to realize hyperspectral feature mapping classification. Mathematical morphological erosion and dilation operations are performed to extract endmembers. The spectral feature mapping algorithm is then used to perform hyperspectral image classification. A hyperspectral image collected by AVIRIS is used to evaluate the proposed algorithm, which is compared with the minimum Euclidean distance mapping algorithm, the minimum Mahalanobis distance mapping algorithm, the SAM algorithm and the binary encoding mapping algorithm. The experimental results show that the proposed algorithm performs better than the other algorithms under the same conditions and achieves higher classification accuracy.
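    A minimum-Euclidean-distance mapping of the kind used as a baseline here can be sketched in a few lines. The toy spectra and class labels below are hypothetical:

```python
import math

def min_distance_classify(pixel, endmembers):
    """Assign a pixel spectrum to the class whose endmember spectrum is nearest
    in Euclidean distance (minimum-distance mapping)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # endmembers is a list of (label, spectrum) pairs
    return min(endmembers, key=lambda kv: dist(pixel, kv[1]))[0]
```

    The Mahalanobis variant replaces the Euclidean distance with one weighted by the inverse class covariance, and SAM replaces it with the spectral angle.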

  3. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.
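    Protographs are turned into full LDPC codes by "lifting": each base-matrix edge is replaced by a Z x Z circulant permutation block. The sketch below shows only that mechanic for a small hypothetical base matrix; it does not reproduce the specific rate-compatible families described in the patent.

```python
def lift_protograph(base, Z, shifts):
    """Expand a binary base (proto)matrix into a QC-LDPC parity-check matrix:
    each 1 becomes a Z x Z circulant permutation (identity cyclically shifted
    by the corresponding entry of `shifts`), each 0 a Z x Z all-zero block."""
    rows, cols = len(base), len(base[0])
    H = [[0] * (cols * Z) for _ in range(rows * Z)]
    for i in range(rows):
        for j in range(cols):
            if base[i][j]:
                s = shifts[i][j] % Z
                for k in range(Z):
                    H[i * Z + k][j * Z + (k + s) % Z] = 1
    return H
```

    Lifting preserves the protograph's node degrees: each variable node of the lifted code has the column weight of its base-matrix column, which is the degree property the patent's constructions constrain.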

  4. SU-F-J-25: Position Monitoring for Intracranial SRS Using BrainLAB ExacTrac Snap Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, S; McCaw, T; Huq, M

    2016-06-15

    Purpose: To determine the accuracy of position monitoring with BrainLAB ExacTrac snap verification following couch rotations during intracranial SRS. Methods: A CT scan of an anthropomorphic head phantom was acquired using 1.25mm slices. The isocenter was positioned near the centroid of the frontal lobe. The head phantom was initially aligned on the treatment couch using cone-beam CT, then repositioned using ExacTrac x-ray verification with residual errors less than 0.2mm and 0.2°. Snap verification was performed over the full range of couch angles in 15° increments with known positioning offsets of 0–3mm applied to the phantom along each axis. At each couch angle, the smallest tolerance was determined for which no positioning deviation was detected. Results: For couch angles 30°–60° from the center position, where the longitudinal axis of the phantom is approximately aligned with the beam axis of one x-ray tube, snap verification consistently detected positioning errors exceeding the maximum 8mm tolerance. Defining localization error as the difference between the known offset and the minimum tolerance for which no deviation was detected, the RMS error is mostly less than 1mm outside of couch angles 30°–60° from the central couch position. Given separate measurements of patient position from the two imagers, whether to proceed with treatment can be determined by the criterion of a reading within tolerance from just one (OR criterion) or both (AND criterion) imagers. Using a positioning tolerance of 1.5mm, snap verification has sensitivity and specificity of 94% and 75%, respectively, with the AND criterion, and 67% and 93%, respectively, with the OR criterion. If readings exceeding maximum tolerance are excluded, the sensitivity and specificity are 88% and 86%, respectively, with the AND criterion.
Conclusion: With a positioning tolerance of 1.5mm, ExacTrac snap verification can be used during intracranial SRS with sensitivity and specificity between 85% and 90%.

  5. Transverse Stress Decay in a Specially Orthotropic Strip Under Localizing Normal Edge Loading

    NASA Technical Reports Server (NTRS)

    Fichter, W. B.

    2000-01-01

    Solutions are presented for the stresses in a specially orthotropic infinite strip which is subjected to localized uniform normal loading on one edge while the other edge is either restrained against normal displacement only, or completely fixed. The solutions are used to investigate the diffusion of load into the strip and in particular the decay of normal stress across the width of the strip. For orthotropic strips representative of a broad range of balanced and symmetric angle-ply composite laminates, minimum strip widths are found that ensure at least 90% decay of the normal stress across the strip. In addition, in a few cases where, on the fixed edge, the peak shear stress exceeds the normal stress in magnitude, minimum strip widths that ensure 90% decay of both stresses are found. To help put these results into perspective, and to illustrate the influence of material properties on load diffusion in orthotropic materials, closed-form solutions for the stresses in similarly loaded orthotropic half-planes are obtained. These solutions are used to generate illustrative stress contour plots for several representative laminates. Among the laminates, those composed of intermediate-angle plies, i.e., from about 30 degrees to 60 degrees, exhibit marked changes in normal stress contour shape with stress level. The stress contours are also used to find 90% decay distances in the half-planes. In all cases, the minimum strip widths for 90% decay of the normal stress exceed the 90% decay distances in the corresponding half-planes, in amounts ranging from only a few percent to about 50% of the half-plane decay distances. The 90% decay distances depend on both material properties and the boundary conditions on the supported edge.

  6. Space Availability in Confined Sheep during Pregnancy, Effects in Movement Patterns and Use of Space

    PubMed Central

    Averós, Xavier; Lorea, Areta; Beltrán de Heredia, Ignacia; Arranz, Josune; Ruiz, Roberto; Estevez, Inma

    2014-01-01

    Space availability is essential to safeguard animal welfare. To determine the effect of space availability on movement and space use in pregnant ewes (Ovis aries), 54 individuals were studied during the last 11 weeks of gestation. Three treatments were tested (1, 2, and 3 m2/ewe; 6 ewes/group). Ewes' positions were collected for 15 minutes using continuous scan sampling two days/week. Total and net distance, net/total distance ratio, maximum and minimum step length, movement activity, angular dispersion, nearest, furthest and mean neighbour distance, peripheral location ratio, and corrected peripheral location ratio were calculated. Restriction in space availability resulted in smaller total travelled distance, net to total distance ratio, maximum step length, and angular dispersion but higher movement activity at 1 m2/ewe as compared to 2 and 3 m2/ewe (P<0.01). On the other hand, nearest and furthest neighbour distances increased from 1 to 3 m2/ewe (P<0.001). The largest total distance, maximum and minimum step length, and movement activity, as well as the lowest net/total distance ratio and angular dispersion, were observed during the first weeks (P<0.05), while inter-individual distances increased through gestation. Results indicate that movement patterns and space use in ewes were clearly restricted by limiting space availability to 1 m2/ewe. This was reflected in shorter, more sinuous trajectories composed of shorter steps, lower inter-individual distances and higher movement activity, potentially linked with higher restlessness levels. In contrast, differences between 2 and 3 m2/ewe for most variables indicate that increasing space availability from 2 to 3 m2/ewe would have limited benefits, reflected mostly in a further increment in the inter-individual distances among group members.
No major variations in spatial requirements were detected through gestation, except for slight increments in inter-individual distances and an initial adaptation period, with ewes being restless and highly motivated to explore their new environment. PMID:24733027
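    Several of the trajectory measures used in this study (total distance, net distance, net/total ratio, angular dispersion) can be computed directly from sampled positions. A minimal sketch, assuming 2D scan-sample coordinates and taking angular dispersion as the mean resultant length of step headings (one common definition; the paper's exact formula is not reproduced here):

```python
import math

def movement_metrics(track):
    """Total path length, net displacement, net/total ratio, and angular
    dispersion (mean resultant length of step headings) for a 2D track."""
    steps = [(x1 - x0, y1 - y0) for (x0, y0), (x1, y1) in zip(track, track[1:])]
    total = sum(math.hypot(dx, dy) for dx, dy in steps)
    net = math.hypot(track[-1][0] - track[0][0], track[-1][1] - track[0][1])
    angles = [math.atan2(dy, dx) for dx, dy in steps if dx or dy]
    r = math.hypot(sum(math.cos(a) for a in angles),
                   sum(math.sin(a) for a in angles)) / len(angles)
    return {"total": total, "net": net, "ratio": net / total, "dispersion": r}
```

    A straight path gives ratio and dispersion of 1; sinuous paths of the kind reported at 1 m2/ewe drive both toward 0.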

  7. Nucleation theory with delayed interactions: an application to the early stages of the receptor-mediated adhesion/fusion kinetics of lipid vesicles.

    PubMed

    Raudino, Antonio; Pannuzzo, Martina

    2010-01-28

    A semiquantitative theory aimed at describing the adhesion kinetics between soft objects, such as living cells or vesicles, has been developed. When rigid bodies are considered, the adhesion kinetics is successfully described by the classical Derjaguin, Landau, Verwey, and Overbeek (DLVO) picture, where the energy profile of two approaching bodies is given by two asymmetric potential wells separated by a barrier. The transition probability from the long-distance to the short-distance minimum defines the adhesion rate. Conversely, soft bodies might follow a different pathway to reach the short-distance minimum: thermally excited fluctuations give rise to local protrusions connecting the approaching bodies. These transient adhesion sites are stabilized by short-range adhesion forces (e.g., ligand-receptor interactions between membranes brought to contact distance), while they are destabilized both by repulsive forces and by the elastic deformation energy. Above a critical area of the contact site, the adhesion forces prevail: the contact site grows in size until the complete adhesion of the two bodies inside a short-distance minimum is attained. This nucleation mechanism has been developed in the framework of a nonequilibrium Fokker-Planck picture by considering both the adhesive patch growth and dissolution processes. In addition, we also investigated the effect of the ligand-receptor pairing kinetics at the adhesion site on the time course of the patch expansion. The ratio between the ligand-receptor pairing kinetics and the expansion rate of the adhesion site is of paramount relevance in determining the overall nucleation rate. The theory enables one to self-consistently include both thermodynamic (energy barrier height) and dynamic (viscosity) parameters, giving rise in some limiting cases to simple analytical formulas.
The model could be employed to rationalize fusion kinetics between vesicles, provided the short-range adhesion transition is the rate-limiting step of the whole adhesion process. Approximate relationships between the experimental fusion rates reported in the literature and parameters such as membrane elastic bending modulus, repulsion strength, temperature, osmotic forces, ligand-receptor binding energy, and solvent and membrane viscosities are satisfactorily explained by our model. The present results hint at a possible role of the initial long-distance-->short-distance transition in determining the whole fusion kinetics.

  8. How quantitative measures unravel design principles in multi-stage phosphorylation cascades.

    PubMed

    Frey, Simone; Millat, Thomas; Hohmann, Stefan; Wolkenhauer, Olaf

    2008-09-07

    We investigate design principles of linear multi-stage phosphorylation cascades by using quantitative measures for signaling time, signal duration and signal amplitude. We compare alternative pathway structures by varying the number of phosphorylations and the length of the cascade. We show that a model for a weakly activated pathway does not reflect the biological context well, unless it is restricted to certain parameter combinations. Focusing therefore on a more general model, we compare alternative structures with respect to a multivariate optimization criterion. We test the hypothesis that the structure of a linear multi-stage phosphorylation cascade is the result of an optimization process aiming for a fast response, defined by the minimum of the product of signaling time and signal duration. It is then shown that certain pathway structures minimize this criterion. Several popular models of MAPK cascades form the basis of our study. These models represent different levels of approximation, which we compare and discuss with respect to the quantitative measures.
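    The quantitative measures referred to above are commonly defined as moments of the signal time course: signaling time as the centroid of X(t), signal duration as its spread, and signal amplitude as the area divided by twice the duration. A numerical sketch under that assumption (the paper's exact definitions may differ in detail):

```python
def signaling_measures(times, signal):
    """Signaling time tau, signal duration theta, and signal amplitude S for a
    sampled response X(t), using the moment definitions
        tau = I1/I0,  theta = sqrt(I2/I0 - tau^2),  S = I0/(2*theta),
    with In = integral of t^n * X(t) dt (trapezoidal rule)."""
    def trapz(y):
        return sum(0.5 * (y0 + y1) * (t1 - t0)
                   for t0, t1, y0, y1 in zip(times, times[1:], y, y[1:]))
    i0 = trapz(signal)
    i1 = trapz([t * x for t, x in zip(times, signal)])
    i2 = trapz([t * t * x for t, x in zip(times, signal)])
    tau = i1 / i0
    theta = (i2 / i0 - tau * tau) ** 0.5
    return tau, theta, i0 / (2 * theta)
```

    The optimization criterion discussed in the abstract would then be evaluated as the product tau * theta for each candidate cascade structure.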

  9. Use of power analysis to develop detectable significance criteria for sea urchin toxicity tests

    USGS Publications Warehouse

    Carr, R.S.; Biedenbach, J.M.

    1999-01-01

    When sufficient data are available, the statistical power of a test can be determined using power analysis procedures. The term “detectable significance” has been coined to refer to this criterion based on power analysis and past performance of a test. This power analysis procedure has been performed with sea urchin (Arbacia punctulata) fertilization and embryological development data from sediment porewater toxicity tests. Data from 3100 and 2295 tests for the fertilization and embryological development tests, respectively, were used to calculate the criteria and regression equations describing the power curves. Using Dunnett's test, minimum significant differences (MSDs) (β = 0.05) of 15.5% and 19% for the fertilization test, and 16.4% and 20.6% for the embryological development test, for α ≤ 0.05 and α ≤ 0.01, respectively, were determined. The use of this second criterion reduces type I (false positive) errors and helps to establish a critical level of difference based on the past performance of the test.
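    The link between power and a minimum detectable difference can be sketched with a normal approximation. This illustrates the power-analysis idea only; it is not Dunnett's multiple-comparison procedure used in the paper, and the group size and sigma are hypothetical:

```python
import math

def z_quantile(p):
    """Standard normal quantile, found by bisection on the CDF (math.erf)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def minimum_detectable_difference(sigma, n, alpha=0.05, beta=0.05):
    """Smallest true difference between two groups of size n detectable with
    power 1 - beta at one-sided level alpha (normal approximation):
        MDD = (z_{1-alpha} + z_{1-beta}) * sigma * sqrt(2/n)."""
    return (z_quantile(1 - alpha) + z_quantile(1 - beta)) * sigma * math.sqrt(2.0 / n)
```

    Quadrupling the sample size halves the detectable difference, which is why large historical datasets like the 3100-test corpus support tight MSD criteria.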

  10. Cellular and dendritic growth in a binary melt - A marginal stability approach

    NASA Technical Reports Server (NTRS)

    Laxmanan, V.

    1986-01-01

    A simple model for the constrained growth of an array of cells or dendrites in a binary alloy in the presence of an imposed positive temperature gradient in the liquid is proposed, with the dendritic or cell tip radius calculated using the marginal stability criterion of Langer and Muller-Krumbhaar (1977). This approach, an approach adopting the ad hoc assumption of minimum undercooling at the cell or dendrite tip, and an approach based on the stability criterion of Trivedi (1980) all predict tip radii to within 30 percent of each other, and yield a simple relationship between the tip radius and the growth conditions. Good agreement is found between predictions and data obtained in a succinonitrile-acetone system, and under the present experimental conditions, the dendritic tip stability parameter value is found to be twice that obtained previously, possibly due to a transition in morphology from a cellular structure with just a few side branches, to a more fully developed dendritic structure.

  11. Maximum correntropy square-root cubature Kalman filter with application to SINS/GPS integrated systems.

    PubMed

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng

    2018-05-31

    For a nonlinear system, the cubature Kalman filter (CKF) and its square-root version are useful methods to solve state estimation problems, and both can obtain good performance in Gaussian noises. However, their performances often degrade significantly in the face of non-Gaussian noises, particularly when the measurements are contaminated by heavy-tailed impulsive noises. By utilizing the maximum correntropy criterion (MCC) to improve robustness instead of the traditional minimum mean square error (MMSE) criterion, a new square-root nonlinear filter is proposed in this study, named the maximum correntropy square-root cubature Kalman filter (MCSCKF). The new filter not only retains the advantage of the square-root cubature Kalman filter (SCKF), but also exhibits robust performance against heavy-tailed non-Gaussian noises. A judgment condition that avoids numerical problems is also given. The results of two illustrative examples, especially the SINS/GPS integrated systems, demonstrate the desirable performance of the proposed filter. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
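    The contrast between MMSE and MCC can be illustrated on a toy location-estimation problem: the correntropy objective's Gaussian kernel downweights heavy-tailed outliers that dominate a squared-error fit. A minimal fixed-point sketch (not the MCSCKF itself; the kernel bandwidth is a hypothetical choice):

```python
import math

def mcc_mean(samples, bandwidth=1.0, iters=50):
    """Maximum-correntropy location estimate: an iteratively reweighted mean
    with Gaussian kernel weights w_i = exp(-(x_i - m)^2 / (2*sigma^2)), so
    outliers get exponentially small weight (MMSE corresponds to all w_i = 1)."""
    m = sum(samples) / len(samples)          # start from the ordinary mean
    for _ in range(iters):
        w = [math.exp(-(x - m) ** 2 / (2 * bandwidth ** 2)) for x in samples]
        m = sum(wi * x for wi, x in zip(w, samples)) / sum(w)
    return m
```

    The same reweighting idea, applied to the measurement-update residuals, is what gives MCC-based filters their robustness to impulsive measurement noise.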

  12. Efficient graphene saturable absorbers on D-shaped optical fiber for ultrashort pulse generation

    PubMed Central

    Zapata, J. D.; Steinberg, D.; Saito, L. A. M.; de Oliveira, R. E. P.; Cárdenas, A. M.; de Souza, E. A. Thoroh

    2016-01-01

    We demonstrated a method to construct high-efficiency saturable absorbers based on the evanescent light field interaction of CVD monolayer graphene deposited on side-polished D-shaped optical fiber. A set of samples was fabricated with two different core-graphene distances (0 and 1 μm), covered with graphene ranging between 10 and 25 mm in length. Mode-locking was achieved and the best pulse duration was 256 fs, the shortest pulse reported in the literature with CVD monolayer graphene in an EDFL. As a result, we identified a criterion relating the polarization extinction ratio of the samples to the pulse duration: better mode-locking performance corresponds to a higher polarization extinction ratio. This criterion also provides a better understanding of graphene distributed saturable absorbers and their reproducible performance as optoelectronic devices for optical applications. PMID:26856886

  13. A numerical algorithm of tooth profile of non-circular cylindrical gear

    NASA Astrophysics Data System (ADS)

    Wang, Xuan

    2017-08-01

    Non-circular cylindrical gear (NCCG) is a common form of non-circular gear. Unlike that of a circular gear, the tooth profile equation of an NCCG cannot be obtained in closed form, so a numerical algorithm is needed to calculate the tooth profile. For this reason, this paper presents a simple and highly efficient numerical algorithm to obtain the tooth profile of an NCCG. Firstly, the mathematical model of the tooth profile envelope of the NCCG is established based on the principle of gear shaping, and the tooth profile envelope of the NCCG is obtained. Secondly, the polar radius and polar angle of the shaper cutter tooth profile are chosen as the criteria by which the points of the NCCG tooth cogging can be screened out. Finally, the boundary of the tooth cogging points is extracted by a distance criterion, and the tooth profile of the NCCG is correspondingly obtained.

  14. Coexistence Analysis of Civil Unmanned Aircraft Systems at Low Altitudes

    NASA Astrophysics Data System (ADS)

    Zhou, Yuzhe

    2016-11-01

    The demand for unmanned aircraft systems in civil applications is growing. However, ensuring the flight efficiency and safety of unmanned aircraft places critical requirements on wireless communication spectrum resources. Current research mainly focuses on spectrum availability. In this paper, unmanned aircraft system communication models, including a coverage model and a data rate model, and two coexistence analysis procedures, i.e., the interference-to-noise ratio criterion and the frequency-distance-direction criterion, are proposed to analyze the spectrum requirements and interference of civil unmanned aircraft systems at low altitudes. In addition, explicit explanations are provided. The proposed coexistence analysis criteria are applied to assess unmanned aircraft systems' uplink and downlink interference performance and to support corresponding spectrum planning. Numerical results demonstrate that the proposed assessments and analysis procedures satisfy the requirements of flexible spectrum access and safe coexistence among multiple unmanned aircraft systems.

  15. Towards a new tool for the evaluation of the quality of ultrasound compressed images.

    PubMed

    Delgorge, Cécile; Rosenberger, Christophe; Poisson, Gérard; Vieyres, Pierre

    2006-11-01

    This paper presents a new tool for the evaluation of ultrasound image compression. The goal is to measure the image quality as easily as with a statistical criterion, and with the same reliability as the one provided by the medical assessment. An initial experiment is proposed to medical experts and represents our reference value for the comparison of evaluation criteria. Twenty-one statistical criteria are selected from the literature. A cumulative absolute similarity measure is defined as a distance between the criterion to evaluate and the reference value. A first fusion method based on a linear combination of criteria is proposed to improve the results obtained by each of them separately. The second proposed approach combines different statistical criteria and uses the medical assessment in a training phase with a support vector machine. Some experimental results are given and show the benefit of fusion.

  16. On the critical flame radius and minimum ignition energy for spherical flame initiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zheng; Burke, M. P.; Ju, Yiguang

    2011-01-01

    Spherical flame initiation from an ignition kernel is studied theoretically and numerically using different fuel/oxygen/helium/argon mixtures (fuel: hydrogen, methane, and propane). The emphasis is placed on investigating the critical flame radius controlling spherical flame initiation and its correlation with the minimum ignition energy. It is found that the critical flame radius is different from the flame thickness and the flame ball radius and that their relationship depends strongly on the Lewis number. Three different flame regimes in terms of the Lewis number are observed and a new criterion for the critical flame radius is introduced. For mixtures with a Lewis number larger than a critical Lewis number above unity, the critical flame radius is smaller than the flame ball radius but larger than the flame thickness. As a result, the minimum ignition energy can be substantially over-predicted (under-predicted) based on the flame ball radius (the flame thickness). The results also show that the minimum ignition energy for successful spherical flame initiation is proportional to the cube of the critical flame radius. Furthermore, preferential diffusion of heat and mass (i.e., the Lewis number effect) is found to play an important role in both spherical flame initiation and flame kernel evolution after ignition. It is shown that the critical flame radius and the minimum ignition energy increase significantly with the Lewis number. Therefore, for transportation fuels with large Lewis numbers, blending of small-molecule fuels or thermal and catalytic cracking will significantly reduce the minimum ignition energy.
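
The cubic scaling reported above can be expressed directly; the prefactor k below is an illustrative constant, not a mixture-specific value from the study:

```python
def min_ignition_energy(r_crit, k=1.0):
    """Minimum ignition energy scaling: E_min proportional to the cube of
    the critical flame radius. The prefactor k is an illustrative constant."""
    return k * r_crit ** 3

# Doubling the critical radius raises the required ignition energy eightfold.
ratio = min_ignition_energy(2.0) / min_ignition_energy(1.0)
```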

  17. Association between women's empowerment and infant and child feeding practices in sub-Saharan Africa: an analysis of Demographic and Health Surveys.

    PubMed

    Na, Muzi; Jennings, Larissa; Talegawkar, Sameera A; Ahmed, Saifuddin

    2015-12-01

    Objective: To explore the relationship between women's empowerment and WHO-recommended infant and young child feeding (IYCF) practices in sub-Saharan Africa. Design: Analysis was conducted using data from ten Demographic and Health Surveys between 2010 and 2013. Women's empowerment was assessed by nine standard items covering three dimensions: economic, socio-familial and legal empowerment. The three core IYCF practices examined were minimum dietary diversity, minimum meal frequency and minimum acceptable diet. Separate multivariable logistic regression models were fitted for the IYCF practices on dimensional and overall empowerment in each country. Setting: Benin, Burkina Faso, Ethiopia, Mali, Niger, Nigeria, Rwanda, Sierra Leone, Uganda and Zimbabwe. Subjects: Youngest singleton children aged 6-23 months and their mothers (n 15 153). Results: Fewer than 35%, 60% and 18% of children 6-23 months of age met the criterion of minimum dietary diversity, minimum meal frequency and minimum acceptable diet, respectively. In general, the likelihood of meeting the recommended IYCF criteria was positively associated with the economic dimension of women's empowerment. Socio-familial empowerment was negatively associated with the three feeding criteria, except in Zimbabwe. The legal dimension of empowerment did not show any clear pattern in the associations. Greater overall empowerment of women was consistently and positively associated with multiple IYCF practices in Mali, Rwanda and Sierra Leone. However, consistent negative relationships were found in Benin and Niger. Null or mixed results were observed in the remaining countries. Conclusions: The importance of women's empowerment for IYCF practices needs to be discussed by context and by dimension of empowerment.

  18. The production route selection algorithm in virtual manufacturing networks

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2017-08-01

    Increasing requirements and competition in the global market challenge companies' profitability in production and supply chain management. This situation has become the basis for the construction of virtual organizations, which are created in response to temporary needs. The problem of production flow planning in virtual manufacturing networks is considered. The paper proposes an algorithm for selecting, from the set of admissible routes, a production route that meets the technology and resource requirements under the criterion of minimum cost.
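
The selection step described above can be sketched as a filter-then-minimize over the admissible routes; the route fields ("ops", "load", "cost") are hypothetical names chosen for illustration, not taken from the paper:

```python
def select_route(routes, required_ops, capacity):
    """Pick the cheapest admissible route: it must support every required
    operation and fit within the available resource capacity."""
    admissible = [r for r in routes
                  if required_ops <= set(r["ops"]) and r["load"] <= capacity]
    if not admissible:
        return None  # no route satisfies technology/resource requirements
    return min(admissible, key=lambda r: r["cost"])

routes = [
    {"ops": ["mill", "drill"], "load": 3, "cost": 120},
    {"ops": ["mill", "drill", "grind"], "load": 5, "cost": 90},
    {"ops": ["mill"], "load": 1, "cost": 40},  # inadmissible: cannot drill
]
best = select_route(routes, required_ops={"mill", "drill"}, capacity=6)
```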

  19. Post Detection Target State Estimation Using Heuristic Information Processing - A Preliminary Investigation

    DTIC Science & Technology

    1977-09-01

    (Abstract fragmentary in source: an interpolation algorithm applies when the transition boundaries are defined close together and parallel to one another; a goodness-of-fit criterion is applied to variable kernel estimates, with the absolute minimum in the unimodal case occurring at k = 100.)

  20. Shape optimization of the modular press body

    NASA Astrophysics Data System (ADS)

    Pabiszczak, Stanisław

    2016-12-01

    The paper presents an algorithm for optimizing the cross-sectional dimensions of a modular press body under a minimum-mass criterion. The wall thicknesses and the angle of their inclination relative to the base of the section are taken as decision variables, while the overall dimensions are treated as constants. The optimal parameter values were calculated numerically with the Solver tool in Microsoft Excel. The optimization procedure reduced the body mass by 27% while maintaining the required rigidity.
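
A minimum-mass optimization with a rigidity constraint of this general shape can also be run outside Excel, e.g. with SciPy's SLSQP solver. The stiffness model and every constant below are illustrative assumptions, not the paper's press-body model:

```python
from scipy.optimize import minimize

RHO, LENGTH, PERIM = 7850.0, 1.0, 2.0   # steel density, body length, section perimeter (assumed)
K_STIFF, S_REQ = 4.0e9, 1.0e5           # assumed cubic stiffness model and required rigidity

def mass(x):
    t = x[0]
    return RHO * LENGTH * PERIM * t      # wall mass grows linearly with thickness t

def stiffness_margin(x):
    t = x[0]
    return K_STIFF * t**3 - S_REQ        # must stay >= 0 to keep the required rigidity

res = minimize(mass, x0=[0.01], method="SLSQP",
               bounds=[(1e-4, 0.05)],
               constraints=[{"type": "ineq", "fun": stiffness_margin}])
t_opt = res.x[0]  # thinnest wall that still meets the rigidity constraint
```

With this toy model the optimum sits exactly on the constraint, t_opt = (S_REQ / K_STIFF)**(1/3).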

  1. Interpretation of impeller flow calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuzson, J.

    1993-09-01

    Most available computer programs are analysis and not design programs. Therefore the intervention of the designer is indispensable. Guidelines are needed to evaluate the degree of fluid mechanic perfection of a design which is compromised for practical reasons. A new way of plotting the computer output is proposed here which illustrates the energy distribution throughout the flow. The consequence of deviating from optimal flow pattern is discussed and specific cases are reviewed. A criterion is derived for the existence of a jet/wake flow pattern and for the minimum wake mixing loss.

  2. A methodology based on reduced complexity algorithm for system applications using microprocessors

    NASA Technical Reports Server (NTRS)

    Yan, T. Y.; Yao, K.

    1988-01-01

    The paper considers a methodology for the analysis and design of a linear system under a minimum mean-square-error criterion, incorporating a tapped delay line (TDL) in which all the full-precision multiplications are constrained to be powers of two. A linear equalizer based on a dispersive channel with additive noise is presented. This microprocessor implementation with optimized power-of-two TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud.
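
Constraining TDL taps to powers of two replaces each multiplication with a bit shift in fixed-point hardware. A minimal sketch of such quantization (rounding in log2 scale; the exponent range is an assumption, and the paper's joint optimization of the taps is not reproduced here):

```python
import math

def quantize_pow2(c, min_exp=-8, max_exp=0):
    """Round a tap coefficient to the nearest signed power of two in log2
    scale, clamped to an assumed exponent range, so the multiply becomes
    a bit shift."""
    if c == 0.0:
        return 0.0
    sign = 1.0 if c > 0 else -1.0
    exp = round(math.log2(abs(c)))
    exp = max(min_exp, min(max_exp, exp))
    return sign * 2.0 ** exp

taps = [0.97, -0.26, 0.13, -0.031]       # illustrative equalizer taps
q_taps = [quantize_pow2(c) for c in taps]
```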

  3. San Francisco floating STOLport study

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The operational, economic, environmental, social and engineering feasibility of utilizing deactivated maritime vessels as a waterfront quiet short takeoff and landing facility to be located near the central business district of San Francisco was investigated. Criteria were developed to evaluate each site, and minimum standards were established for each criterion. Predicted conditions at the two sites were compared to the requirements for each of the 11 criteria as a means of evaluating site performance. Criteria include land use, community structure, economic impact, access, visual character, noise, air pollution, natural environment, weather, air traffic, and terminal design.

  4. Recognition of In-Vehicle Group Activities (iVGA): Phase-I, Feasibility Study

    DTIC Science & Technology

    2014-08-27

    (Abstract fragmentary in source: the scenarios include a driver adjusting his/her eyeglasses or makeup, or possibly hiding his/her face from recognition; the nearest of two patterns under the Hamming distance determines the best class for a test pattern, the Hamming distance measuring the minimum number of substitutions required to change one string into the other.)

  5. Complex networks in the Euclidean space of communicability distances

    NASA Astrophysics Data System (ADS)

    Estrada, Ernesto

    2012-06-01

    We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q based on communicability and shortest path distances, which allows reinterpreting the “small-world” phenomenon as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiency of spatial uses. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
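
The walk sums in the definition above are captured by the matrix exponential of the adjacency matrix, giving the communicability distance xi_pq = sqrt(G_pp + G_qq - 2*G_pq) with G = e^A. A sketch using SciPy on a three-node path graph:

```python
import numpy as np
from scipy.linalg import expm

def communicability_distance_matrix(A):
    """Communicability distances xi_pq = sqrt(G_pp + G_qq - 2*G_pq),
    where G = expm(A) weights self-returning walks (diagonal) against
    walks between the two nodes (off-diagonal)."""
    G = expm(np.asarray(A, dtype=float))
    g = np.diag(G)
    xi2 = g[:, None] + g[None, :] - 2.0 * G
    return np.sqrt(np.maximum(xi2, 0.0))  # clip tiny negative round-off

# Path graph on three nodes: 0 - 1 - 2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
Xi = communicability_distance_matrix(A)
```

For this path graph the non-adjacent end nodes are farther apart than adjacent ones, as expected.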

  6. The finite body triangulation: algorithms, subgraphs, homogeneity estimation and application.

    PubMed

    Carson, Cantwell G; Levine, Jonathan S

    2016-09-01

    The concept of a finite body Dirichlet tessellation has been extended to that of a finite body Delaunay 'triangulation' to provide a more meaningful description of the spatial distribution of nonspherical secondary phase bodies in 2- and 3-dimensional images. A finite body triangulation (FBT) consists of a network of minimum edge-to-edge distances between adjacent objects in a microstructure. From this is also obtained the characteristic object chords formed by the intersection of the object boundary with the finite body tessellation. These two sets of distances form the basis of a parsimonious homogeneity estimation. The characteristics of the spatial distribution are then evaluated with respect to the distances between objects and the distances within them. Quantitative analysis shows that more physically representative distributions can be obtained by selecting subgraphs, such as the relative neighbourhood graph and the minimum spanning tree, from the finite body tessellation. To demonstrate their potential, we apply these methods to 3-dimensional X-ray computed tomographic images of foamed cement and their 2-dimensional cross sections. The Python computer code used to estimate the FBT is made available. Other applications for the algorithm - such as porous media transport and crack-tip propagation - are also discussed. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  7. Temperature lapse rates at restricted thermodynamic equilibrium. Part II: Saturated air and further discussions

    NASA Astrophysics Data System (ADS)

    Björnbom, Pehr

    2016-03-01

    In the first part of this work equilibrium temperature profiles in fluid columns with ideal gas or ideal liquid were obtained by numerically minimizing the column energy at constant entropy, equivalent to maximizing column entropy at constant energy. A minimum in internal plus potential energy for an isothermal temperature profile was obtained in line with Gibbs' classical equilibrium criterion. However, a minimum in internal energy alone for adiabatic temperature profiles was also obtained. This led to a hypothesis that the adiabatic lapse rate corresponds to a restricted equilibrium state, a type of state in fact discussed already by Gibbs. In this paper similar numerical results for a fluid column with saturated air suggest that also the saturated adiabatic lapse rate corresponds to a restricted equilibrium state. The proposed hypothesis is further discussed and amended based on the previous and the present numerical results and a theoretical analysis based on Gibbs' equilibrium theory.

  8. Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, J. M.; Liu, Y.; Li, W.

    2013-09-15

    An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. The three-dimensional tomography, reduced to a two-dimensional problem by assuming toroidal symmetry of the plasma radiation, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil-beam approximation, and the solid angles viewed by the detector elements are taken into account in defining the chord brightness. The local plasma emission is obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize the regularization parameter using the criterion of generalized cross-validation. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges.

  9. A Decision Processing Algorithm for CDC Location Under Minimum Cost SCM Network

    NASA Astrophysics Data System (ADS)

    Park, N. K.; Kim, J. Y.; Choi, W. Y.; Tian, Z. M.; Kim, D. J.

    The location of a central distribution center (CDC) within a supply chain network has attracted growing attention. Existing approaches to CDC location have relied mainly on manual spreadsheet calculations aimed at minimizing logistics cost. This study develops a new processing algorithm to overcome the limits of present methods and examines its validity through a case study. The suggested algorithm is based on the principle of optimization on a directed graph of the SCM model and makes use of classical techniques such as minimum spanning trees and shortest-path methods. The results help assess the suitability of an existing SCM network and can serve as a criterion in decision-making when building an optimal SCM network for future demand.
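
The shortest-path building block such an algorithm relies on can be sketched with Dijkstra's method on a directed weighted graph; the node names below (factory, candidate CDCs, customer zone) are hypothetical:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from `source` on a directed weighted graph
    given as {node: [(neighbour, cost), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical SCM network: factory -> candidate CDCs -> customer zone.
g = {"factory": [("cdc_a", 4.0), ("cdc_b", 2.0)],
     "cdc_a": [("zone", 1.0)],
     "cdc_b": [("zone", 5.0)]}
dist = dijkstra(g, "factory")
```

Here routing through cdc_a is cheaper overall (4 + 1) even though cdc_b is closer to the factory.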

  10. Pessimistic Determination of Mechanical Conditions and Micro/macroeconomic Evaluation of Mine Pillar Replacement

    NASA Astrophysics Data System (ADS)

    Chen, Qingfa; Zhao, Fuyu

    2017-12-01

    Numerous pillars are left after mining of underground mineral resources using the open stope method or after the first step of the partial filling method. The mineral recovery rate can, however, be improved by replacement recovery of pillars. In the present study, the relationships among the pillar type, minimum pillar width, and micro/macroeconomic factors were investigated from two perspectives, namely mechanical stability and micro/macroeconomic benefit. Based on the mechanical stability formulas for ore and artificial pillars, the minimum width for a specific pillar type was determined using a pessimistic criterion. The microeconomic benefit c of setting an ore pillar, the microeconomic benefit w of artificial pillar replacement, and the economic net present value (ENPV) of the replacement process were calculated. The values of c and w were compared with respect to ENPV, based on which the appropriate pillar type and economical benefit were determined.

  11. Protein-protein interaction site predictions with minimum covariance determinant and Mahalanobis distance.

    PubMed

    Qiu, Zhijun; Zhou, Bo; Yuan, Jiangfeng

    2017-11-21

    Protein-protein interaction site (PPIS) prediction must deal with the diversity of interaction sites, which limits prediction accuracy. Use of proteins with unknown or unidentified interactions can also lead to missing interfaces, and such data errors are often brought into the training dataset. In response to these two problems, we used the minimum covariance determinant (MCD) method, with its ability to remove outliers, to refine the training data and build a predictor with better performance. To predict test data in practice, a method based on the Mahalanobis distance was devised to select proper test data as input for the predictor. With leave-one-out validation and an independent test, our method achieved a higher Matthews correlation coefficient (MCC) after the Mahalanobis distance screening, although only a part of the test data could be predicted. These results indicate that data refinement is an efficient approach to improving protein-protein interaction site prediction. With further optimization, the method may yield predictors with better performance and a wider range of application. Copyright © 2017 Elsevier Ltd. All rights reserved.
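
Mahalanobis-distance screening of test data can be sketched as below. For brevity the plain sample mean and covariance are used; the paper's robust variant would substitute an MCD estimate (e.g. scikit-learn's MinCovDet). The 0.95 cutoff quantile and the synthetic data are assumptions:

```python
import numpy as np

def mahalanobis_screen(X_train, X_test, quantile=0.95):
    """Keep test points whose Mahalanobis distance to the training cloud
    is below a cutoff taken from the training distances. Plain mean and
    covariance here; an MCD fit would make the estimate robust to outliers."""
    mu = X_train.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))
    diff = X_train - mu
    d_train = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
    cutoff = np.quantile(d_train, quantile)
    diff_t = X_test - mu
    d_test = np.sqrt(np.einsum("ij,jk,ik->i", diff_t, cov_inv, diff_t))
    return d_test <= cutoff

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))                  # synthetic feature cloud
X_test = np.vstack([np.zeros(3), np.full(3, 8.0)])   # one inlier, one outlier
keep = mahalanobis_screen(X_train, X_test)
```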

  12. Gender classification in children based on speech characteristics: using fundamental and formant frequencies of Malay vowels.

    PubMed

    Zourmand, Alireza; Ting, Hua-Nong; Mirhassani, Seyed Mostafa

    2013-03-01

    Speech is one of the prevalent communication mediums for humans. Identifying the gender of a child speaker based on his/her speech is crucial in telecommunication and speech therapy. This article investigates the use of fundamental and formant frequencies from sustained vowel phonation to distinguish the gender of Malay children aged between 7 and 12 years. The Euclidean minimum distance and multilayer perceptron were used to classify the gender of 360 Malay children based on different combinations of fundamental and formant frequencies (F0, F1, F2, and F3). The Euclidean minimum distance with normalized frequency data achieved a classification accuracy of 79.44%, which was higher than that of the non-normalized frequency data. Age-dependent modeling was used to improve the accuracy of gender classification; with it, the Euclidean distance method reached an optimal classification accuracy of 84.17% across all age groups. The accuracy was further increased to 99.81% using a multilayer perceptron based on mel-frequency cepstral coefficients. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
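
A minimal Euclidean minimum-distance classifier of the kind described (assign each sample to the class whose centroid is nearest in feature space) can be sketched as follows; the feature values are hypothetical stand-ins, not the study's Malay vowel data:

```python
import math

def train_centroids(samples):
    """Per-class mean feature vectors from {label: [feature_vector, ...]}."""
    return {label: [sum(col) / len(col) for col in zip(*vecs)]
            for label, vecs in samples.items()}

def classify(x, centroids):
    """Minimum-distance rule: pick the class with the nearest centroid."""
    return min(centroids, key=lambda lbl: math.dist(x, centroids[lbl]))

# Hypothetical normalized (F0, F1) features for illustration only.
samples = {"boy": [[0.40, 0.55], [0.44, 0.52]],
           "girl": [[0.60, 0.70], [0.64, 0.74]]}
cents = train_centroids(samples)
label = classify([0.43, 0.50], cents)
```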

  13. Habitat assessment for giant pandas in the Qinling Mountain region of China

    USGS Publications Warehouse

    Feng, Tian-Tian; Van Manen, Frank T.; Zhao, Na-Xun; Li, Ming; Wei, Fu-Wen

    2009-01-01

    Because habitat loss and fragmentation threaten giant pandas (Ailuropoda melanoleuca), habitat protection and restoration are important conservation measures for this endangered species. However, distribution and value of potential habitat to giant pandas on a regional scale are not fully known. Therefore, we identified and ranked giant panda habitat in Foping Nature Reserve, Guanyinshan Nature Reserve, and adjacent areas in the Qinling Mountains of China. We used Mahalanobis distance and 11 digital habitat layers to develop a multivariate habitat signature associated with 247 surveyed giant panda locations, which we then applied to the study region. We identified approximately 128 km2 of giant panda habitat in Foping Nature Reserve (43.6% of the reserve) and 49 km2 in Guanyinshan Nature Reserve (33.6% of the reserve). We defined core habitat areas by incorporating a minimum patch-size criterion (5.5 km2) based on home-range size. Percentage of core habitat area was higher in Foping Nature Reserve (41.8% of the reserve) than Guanyinshan Nature Reserve (26.3% of the reserve). Within the larger analysis region, Foping Nature Reserve contained 32.7% of all core habitat areas we identified, indicating regional importance of the reserve. We observed a negative relationship between distribution of core areas and presence of roads and small villages. Protection of giant panda habitat at lower elevations and improvement of habitat linkages among core habitat areas are important in a regional approach to giant panda conservation.

  14. 49 CFR 192.735 - Compressor stations: Storage of combustible materials.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance... buildings, must be stored a safe distance from the compressor building. (b) Aboveground oil or gasoline...

  15. 27 CFR 555.206 - Location of magazines.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... in the table of distances for storage of explosive materials in § 555.218. (2) Ammonium nitrate and... for the separation of ammonium nitrate and blasting agents in § 555.220. However, the minimum...

  16. 27 CFR 555.206 - Location of magazines.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... in the table of distances for storage of explosive materials in § 555.218. (2) Ammonium nitrate and... for the separation of ammonium nitrate and blasting agents in § 555.220. However, the minimum...

  17. 27 CFR 555.206 - Location of magazines.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... in the table of distances for storage of explosive materials in § 555.218. (2) Ammonium nitrate and... for the separation of ammonium nitrate and blasting agents in § 555.220. However, the minimum...

  18. 27 CFR 555.206 - Location of magazines.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... in the table of distances for storage of explosive materials in § 555.218. (2) Ammonium nitrate and... for the separation of ammonium nitrate and blasting agents in § 555.220. However, the minimum...

  19. Influence of anthropometry on the kinematics of the cervical spine and the risk of injury in sled tests in female volunteers.

    PubMed

    Dehner, Christoph; Schick, Sylvia; Arand, Markus; Elbel, Martin; Hell, Wolfram; Kramer, Michael

    2008-07-01

    The objective of this study was to investigate the influence of anthropometric data on the kinematics of the cervical spine and the risk factors for sustaining a neck injury during rear-end collisions occurring in a sled test. A rear-end collision with a velocity change (DeltaV) of 6.3 km/h was simulated in a sled test with eight healthy female subjects. The study analysed the association of anthropometric data with the initial distance between the head and the head restraint, defined kinematic characteristics, the neck injury criterion (NIC) and the neck injury criterion minor (NICmin). The head circumference is negatively associated (r=-0.598) with the initial distance between the head and the head restraint, the maximal head extension (r=-0.687) and the maximal dorsal angular head acceleration (r=-0.633). The body weight (r=0.800), body height (r=0.949) and thorax circumference (r=0.632) are positively associated with the maximal ventral head translation. The neck length correlates positively with the NIC (r=0.826) and negatively with the NICmin (r=-0.797). Anthropometric factors influence the kinematics of the cervical spine and the risk of injury. A high risk of injury may be assumed for individuals with a small head circumference, long neck, tall body height and high body weight.

  20. Dual tasking negatively impacts obstacle avoidance abilities in post-stroke individuals with visuospatial neglect: Task complexity matters!

    PubMed

    Aravind, Gayatri; Lamontagne, Anouk

    2017-01-01

    Persons with perceptual-attentional deficits due to visuospatial neglect (VSN) after a stroke are at a risk of collisions while walking in the presence of moving obstacles. The attentional burden of performing a dual-task may further compromise their obstacle avoidance performance, putting them at a greater risk of collisions. The objective of this study was to compare the ability of persons with (VSN+) and without VSN (VSN-) to dual task while negotiating moving obstacles. Twenty-six stroke survivors (13 VSN+, 13 VSN-) were assessed on their ability to (a) negotiate moving obstacles while walking (locomotor single task); (b) perform a pitch-discrimination task (cognitive single task) and (c) simultaneously perform the walking and cognitive tasks (dual task). We compared the groups on locomotor (collision rates, minimum distance from obstacle and onset of strategies) and cognitive (error rates) outcomes. For both single and dual task walking, VSN+ individuals showed higher collision rates compared to VSN- individuals. Dual tasking caused deterioration of locomotor (more collisions, delayed onset and smaller minimum distances) and cognitive performances (higher error rate) in VSN+ individuals. Contrastingly, VSN- individuals maintained collision rates, increased minimum distance, but showed more cognitive errors, prioritizing their locomotor performance. Individuals with VSN demonstrate cognitive-locomotor interference under dual task conditions, which could severely compromise safety when ambulating in community environments and may explain the poor recovery of independent community ambulation in these individuals.

  1. Contributions of long-distance dispersal to population growth in colonising Pinus ponderosa populations.

    PubMed

    Lesser, Mark R; Jackson, Stephen T

    2013-03-01

    Long-distance dispersal is an integral part of plant species migration and population development. We aged and genotyped 1125 individuals in four disjunct populations of Pinus ponderosa that were initially established by long-distance dispersal in the 16th and 17th centuries. Parentage analysis was used to determine if individuals were the product of local reproductive events (two parents present), long-distance pollen dispersal (one parent present) or long-distance seed dispersal (no parents present). All individuals established in the first century at each site were the result of long-distance dispersal. Individuals reproduced at younger ages with increasing age of the overall population. These results suggest Allee effects, where populations were initially unable to expand on their own, and were dependent on long-distance dispersal to overcome a minimum-size threshold. Our results demonstrate that long-distance dispersal was not only necessary for initial colonisation but also to sustain subsequent population growth during early phases of expansion. © 2012 Blackwell Publishing Ltd/CNRS.

  2. Setting Priorities in Global Child Health Research Investments: Addressing Values of Stakeholders

    PubMed Central

    Kapiriri, Lydia; Tomlinson, Mark; Gibson, Jennifer; Chopra, Mickey; El Arifeen, Shams; Black, Robert E.; Rudan, Igor

    2007-01-01

    Aim To identify main groups of stakeholders in the process of health research priority setting and propose strategies for addressing their systems of values. Methods In three separate exercises that took place between March and June 2006 we interviewed three different groups of stakeholders: 1) members of the global research priority setting network; 2) a diverse group of national-level stakeholders from South Africa; and 3) participants at the conference related to international child health held in Washington, DC, USA. Each of the groups was administered different version of the questionnaire in which they were asked to set weights to criteria (and also minimum required thresholds, where applicable) that were a priori defined as relevant to health research priority setting by the consultants of the Child Health and Nutrition Research initiative (CHNRI). Results At the global level, the wide and diverse group of respondents placed the greatest importance (weight) to the criterion of maximum potential for disease burden reduction, while the most stringent threshold was placed on the criterion of answerability in an ethical way. Among the stakeholders’ representatives attending the international conference, the criterion of deliverability, answerability, and sustainability of health research results was proposed as the most important one. At the national level in South Africa, the greatest weight was placed on the criterion addressing the predicted impact on equity of the proposed health research. Conclusions Involving a large group of stakeholders when setting priorities in health research investments is important because the criteria of relevance to scientists and technical experts, whose knowledge and technical expertise is usually central to the process, may not be appropriate to specific contexts and in accordance with the views and values of those who invest in health research, those who benefit from it, or wider society as a whole. PMID:17948948

  3. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

    An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.
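
The approach can be sketched end-to-end: sample minimum separations by Monte Carlo, fit an extreme-value law for minima (a Weibull form here, consistent with the asymptotic theory of minima, though the paper's exact density is not reproduced), and read off a close-approach probability. The synthetic uniform separations below stand in for a real orbit propagator:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Monte Carlo sketch: at each random epoch, take the smallest of the
# separations between the spacecraft and n_debris objects. The uniform
# draws are synthetic stand-ins for propagated relative distances.
n_epochs, n_debris = 2000, 50
seps = rng.uniform(0.0, 100.0, size=(n_epochs, n_debris))
min_seps = seps.min(axis=1)

# Fit a Weibull law (location pinned at zero) to the empirical minima and
# read off the probability that the nearest object comes within 1 unit.
c, loc, scale = stats.weibull_min.fit(min_seps, floc=0.0)
p_close = stats.weibull_min.cdf(1.0, c, loc=loc, scale=scale)
```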

  4. Application of up-sampling and resolution scaling to Fresnel reconstruction of digital holograms.

    PubMed

    Williams, Logan A; Nehmetallah, Georges; Aylo, Rola; Banerjee, Partha P

    2015-02-20

    Fresnel transform implementation methods using numerical preprocessing techniques are investigated in this paper. First, it is shown that up-sampling dramatically reduces the minimum reconstruction distance requirements and allows maximal signal recovery by eliminating aliasing artifacts which typically occur at distances much less than the Rayleigh range of the object. Second, zero-padding is employed to arbitrarily scale numerical resolution for the purpose of resolution matching multiple holograms, where each hologram is recorded using dissimilar geometric or illumination parameters. Such preprocessing yields numerical resolution scaling at any distance. Both techniques are extensively illustrated using experimental results.

  5. Optimization of deformation monitoring networks using finite element strain analysis

    NASA Astrophysics Data System (ADS)

    Alizadeh-Khameneh, M. Amin; Eshagh, Mehdi; Jensen, Anna B. O.

    2018-04-01

    An optimal design of a geodetic network can fulfill the requested precision and reliability of the network, and decrease the expenses of its execution by removing unnecessary observations. The role of optimal design is highlighted in deformation monitoring networks because these networks are observed repeatedly. The core design problem is how to define the precision and reliability criteria. This paper proposes a solution in which the precision criterion is defined based on the precision of the deformation parameters, i.e. the precision of strain and differential rotations. A strain analysis can be performed to obtain information about the possible deformation of a deformable object. In this study, we split an area into a number of three-dimensional finite elements with the help of the Delaunay triangulation and performed the strain analysis on each element. According to the obtained precision of the deformation parameters in each element, the precision criterion for displacement detection at each network point is then determined. The developed criterion is applied to optimize the observations from the Global Positioning System (GPS) in the Skåne monitoring network in Sweden. The network was established in 1989 and straddled the Tornquist zone, one of the most active faults in southern Sweden. The numerical results show that 17 out of all 21 possible GPS baseline observations are sufficient to detect a minimum displacement of 3 mm at each network point.
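
    The element-splitting step can be illustrated with scipy's Delaunay triangulation. The coordinates below are hypothetical (not the actual Skåne stations), and for brevity the sketch uses planar elements, only enumerating the triangles and their areas that a per-element strain analysis would loop over.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical station coordinates (east, north) in km.
points = np.array([
    [0.0, 0.0], [4.0, 0.5], [8.0, 0.0],
    [1.0, 3.0], [5.0, 4.0], [9.0, 3.5],
    [3.0, 7.0], [7.0, 7.5],
])

tri = Delaunay(points)  # each simplex is one triangular finite element

areas = []
for simplex in tri.simplices:
    a, b, c = points[simplex]
    # Twice the signed area from the cross product of two edge vectors.
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                     - (b[1] - a[1]) * (c[0] - a[0]))
    areas.append(area)
    # an element-wise strain computation would hook in here
```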

  6. Generalized Bohm’s criterion and negative anode voltage fall in electric discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Londer, Ya. I.; Ul’yanov, K. N., E-mail: kulyanov@vei.ru

    2013-10-15

    The value of the voltage fall across the anode sheath is found as a function of the current density. Analytic solutions are obtained in a wide range of the ratio of the directed velocity of plasma electrons v_0 to their thermal velocity v_T. It is shown that the voltage fall in a one-dimensional collisionless anode sheath is always negative. At small values of v_0/v_T, the obtained expression asymptotically transforms into the Langmuir formula. A generalized Bohm’s criterion for an electric discharge is formulated with allowance for the space charge density ρ(0), electric field E(0), ion velocity v_i(0), and ratio v_0/v_T at the plasma-sheath interface. It is shown that the minimum value of the ion velocity v_i*(0) corresponds to the vanishing of the electric field at one point inside the sheath. The dependence of v_i*(0) on ρ(0), E(0), and v_0/v_T determines the boundary of the existence domain of stationary solutions in the sheath. Using this criterion, the maximum possible degree of contraction of the electron current at the anode is determined for a short high-current vacuum arc discharge.

  7. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for the substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small-sample bias, coverage of confidence intervals, and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and by a modified estimation procedure known as Firth's correction are compared. The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance on sample size considerations for binary logistic regression analysis.

  8. The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles

    NASA Technical Reports Server (NTRS)

    Roberts, D. A.; Goldstein, M. L.

    2006-01-01

    We study the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvénicity. The study resolves the question of why different studies have arrived at widely differing values for the ratio of maximum to minimum power (from approximately 3:1 up to approximately 20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves, which follows from the near constancy of |B|. The reason for the magnitude-preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvénic regions over a wide range of heliocentric distances. The fact that non-Alfvénic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.
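
    The minimum variance direction referred to above is conventionally obtained by eigen-decomposing the covariance matrix of the field components (minimum variance analysis). A small synthetic sketch, with fluctuation amplitudes chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic magnetic-field samples: large fluctuations in the x-y plane,
# small fluctuations along z, mimicking a mean field along z.
n = 5000
B = np.column_stack([
    1.0 * rng.standard_normal(n),
    0.6 * rng.standard_normal(n),
    5.0 + 0.2 * rng.standard_normal(n),
])

# Eigen-decompose the covariance of the components; the eigenvector with
# the smallest eigenvalue is the minimum variance direction, and the
# eigenvalue ratio quantifies the power anisotropy discussed above.
cov = np.cov(B, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
min_var_dir = eigvecs[:, 0]
anisotropy = eigvals[-1] / eigvals[0]    # max-to-min power ratio
```

Here the minimum variance direction comes out along z, the synthetic mean-field direction, as the abstract's "minimum variance along the mean field" tendency would predict.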

  9. Do school classrooms meet the visual requirements of children and recommended vision standards?

    PubMed

    Negiloni, Kalpa; Ramani, Krishna Kumar; Sudhir, Rachapalle Reddi

    2017-01-01

    Visual demands of school children vary with diverse classroom environments. The study aimed to evaluate the distance and near visual acuity (VA) demand in Indian school classrooms and to compare them with the recommended vision standards. The distance and near VA demands were assessed in 33 classrooms (grades 4 to 12) of eight schools. The VA threshold demand relied on the smallest size of distance and near visual task material and the viewing distance. The logMAR equivalents of minimum VA demand at specific seating positions (desks) and among different grades were evaluated. The near threshold was converted into the actual near VA demand by including an acuity reserve. The existing dimensions of chalkboard and classroom, gross area per student, and class size in all the measured classrooms were compared to the government-recommended standards. In the 33 classrooms assessed (35±10 students per room), the average distance and near logMAR VA threshold demands were 0.31±0.17 and 0.44±0.14 respectively. The mean distance VA demand (minimum) at the front desk position was 0.56±0.18 logMAR. Increased distance threshold demand (logMAR range -0.06 to 0.19) was noted in 7 classrooms (21%). The mean VA demand in grades 4 to 8 and grades 9 to 12 was 0.35±0.16 and 0.24±0.16 logMAR respectively, and the difference was not statistically significant (p = 0.055). The distance from board to front desk was greater than the recommended standard of 2.2 m in 27 classrooms (82%). The other measured parameters differed from the proposed standards in the majority of the classrooms. The study suggests including task demand assessment in school vision screening protocols to provide relevant guidance to school authorities. These findings can serve as evidence to accommodate children with mild to moderate visual impairment in regular classrooms.
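
    The logMAR demand computation behind such surveys follows directly from the geometry: the critical detail of a letter is conventionally one fifth of its height, and logMAR is the base-10 logarithm of that detail's visual angle in arcminutes. A sketch (the 1/5 convention and the example numbers are illustrative, not the study's data):

```python
import math

def logmar_demand(letter_height_m, viewing_distance_m):
    """logMAR demand of a letter: visual angle of its critical detail
    (assumed to be 1/5 of letter height) in arcminutes, log10-transformed.
    logMAR 0.0 corresponds to a 1-arcminute detail."""
    detail = letter_height_m / 5.0
    angle_arcmin = math.degrees(math.atan2(detail, viewing_distance_m)) * 60.0
    return math.log10(angle_arcmin)

# A 40 mm chalkboard letter viewed from 6 m:
demand = logmar_demand(0.040, 6.0)   # about 0.66 logMAR
```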

  10. Optimizing the Launch of a Projectile to Hit a Target

    NASA Astrophysics Data System (ADS)

    Mungan, Carl E.

    2017-12-01

    Some teenagers are exploring the outer perimeter of a castle. They notice a spy hole in its wall, across the moat a horizontal distance x and vertically up the wall a distance y. They decide to throw pebbles at the hole. One girl wants to use physics to throw with the minimum speed necessary to hit the hole. What is the required launch speed v and launch angle θ above the horizontal?
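
    The textbook result for this minimum-speed problem is closed-form: with r = √(x² + y²), the minimum launch speed is v = √(g(y + r)) and the optimal angle satisfies tan θ = (y + r)/x. A quick numerical check that the resulting trajectory does pass through the target:

```python
import math

def min_launch(x, y, g=9.81):
    """Minimum launch speed v and angle theta (rad) to hit a target at
    horizontal distance x and height y, from the classic result
    v^2 = g*(y + r), tan(theta) = (y + r)/x with r = sqrt(x^2 + y^2)."""
    r = math.hypot(x, y)
    v = math.sqrt(g * (y + r))
    theta = math.atan2(y + r, x)
    return v, theta

v, theta = min_launch(x=10.0, y=5.0)

# Follow the parabolic trajectory out to x = 10 m and check the height.
t = 10.0 / (v * math.cos(theta))
y_at_x = v * math.sin(theta) * t - 0.5 * 9.81 * t**2
```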

  11. A latent class distance association model for cross-classified data with a categorical response variable.

    PubMed

    Vera, José Fernando; de Rooij, Mark; Heiser, Willem J

    2014-11-01

    In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters, and the adjusted Bayesian information criterion statistic is employed to select the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented. © 2014 The British Psychological Society.

  12. Kinematic measurement from panned cinematography.

    PubMed

    Gervais, P; Bedingfield, E W; Wronko, C; Kollias, I; Marchiori, G; Kuntz, J; Way, N; Kuiper, D

    1989-06-01

    Traditional 2-D cinematography has used a stationary camera with its optical axis perpendicular to the plane of motion. This method has constrained the size of the object plane or has introduced potential errors from a small subject image size with large object field widths. The purpose of this study was to assess a panning technique that could overcome the inherent limitations of small object field widths, small object image sizes and limited movement samples. The proposed technique used a series of reference targets in the object field that provided the necessary scales and origin translations. A 102 m object field was panned. Comparisons between criterion distances and film measured distances for field widths of 46 m and 22 m resulted in absolute mean differences that were comparable to that of the traditional method.

  13. Criterion-Related Validity of Sit-and-Reach Tests for Estimating Hamstring and Lumbar Extensibility: a Meta-Analysis

    PubMed Central

    Mayorga-Vega, Daniel; Merino-Marban, Rafael; Viciana, Jesús

    2014-01-01

    The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson's zero-order correlation coefficients (r) between sit-and-reach tests and hamstring and/or lumbar extensibility criterion measures. Then, from the included studies, the Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of sit-and-reach tests. Firstly, the corrected correlation mean (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlation values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67), but a low mean validity for estimating lumbar extensibility (rp = 0.16-0.35). Generally, females, adults and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited, such as in a school setting or in large-scale studies, scientists and practitioners could use the sit-and-reach tests as a useful alternative for hamstring extensibility estimation, but not for estimating lumbar extensibility.
Key Points: Overall, sit-and-reach tests have a moderate mean criterion-related validity for estimating hamstring extensibility, but a low mean validity for estimating lumbar extensibility. Among all the sit-and-reach test protocols, the Classic sit-and-reach test seems to be the best option to estimate hamstring extensibility. End scores (e.g., the Classic sit-and-reach test) are a better indicator of hamstring extensibility than the modifications that incorporate fingers-to-box distance (e.g., the Modified sit-and-reach test). When angular tests such as straight leg raise or knee extension tests cannot be used, sit-and-reach tests seem to be a useful field-test alternative to estimate hamstring extensibility, but not lumbar extensibility. PMID:24570599

  14. The minimum test battery to screen for binocular vision anomalies: report 3 of the BAND study.

    PubMed

    Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; Swaminathan, Meenakshi; George, Ronnie; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar

    2018-03-01

    This study aims to report the minimum test battery needed to screen for non-strabismic binocular vision anomalies (NSBVAs) in a community set-up. When large numbers are to be screened, we aim to identify the most useful test battery when there is no opportunity for a more comprehensive and time-consuming clinical examination. The prevalence estimates and normative data for binocular vision parameters were taken from the Binocular Vision Anomalies and Normative Data (BAND) study, following which cut-off estimates and receiver operating characteristic (ROC) curves were used to identify the minimum test battery. In the ROC phase of the study, children between nine and 17 years of age were screened in two schools in the rural arm using the minimum test battery, and the prevalence estimates with this battery were found. ROC analyses revealed that near point of convergence with penlight and red filter (> 7.5 cm), monocular accommodative facility (< 10 cycles per minute), and the difference between near and distance phoria (> 1.25 prism dioptres) were significant factors with cut-off values for best sensitivity and specificity. This minimum test battery was applied to a cohort of 305 children. The mean (standard deviation) age of the subjects was 12.7 (2) years, with 121 males and 184 females. Using the minimum battery of tests obtained through the ROC analyses, the prevalence of NSBVAs was found to be 26 per cent. Near point of convergence with penlight and red filter > 10 cm was found to have the highest sensitivity (80 per cent) and specificity (73 per cent) for the diagnosis of convergence insufficiency. For the diagnosis of accommodative infacility, monocular accommodative facility with a cut-off of less than seven cycles per minute was the best predictor for screening (92 per cent sensitivity and 90 per cent specificity). 
The minimum test battery of near point of convergence with penlight and red filter, difference between distance and near phoria, and monocular accommodative facility yield good sensitivity and specificity for diagnosis of NSBVAs in a community set-up. © 2017 Optometry Australia.

  15. Observation of a Coulomb flux tube

    NASA Astrophysics Data System (ADS)

    Greensite, Jeff; Chung, Kristian

    2018-03-01

    In Coulomb gauge there is a longitudinal color electric field associated with a static quark-antiquark pair. We have measured the spatial distribution of this field, and find that it falls off exponentially with transverse distance from a line joining the two quarks. In other words there is a Coulomb flux tube, with a width that is somewhat smaller than that of the minimal energy flux tube associated with the asymptotic string tension. A confinement criterion for gauge theories with matter fields is also proposed.

  16. Nucleation theory with delayed interactions: An application to the early stages of the receptor-mediated adhesion/fusion kinetics of lipid vesicles

    NASA Astrophysics Data System (ADS)

    Raudino, Antonio; Pannuzzo, Martina

    2010-01-01

    A semiquantitative theory aimed at describing the adhesion kinetics between soft objects, such as living cells or vesicles, has been developed. When rigid bodies are considered, the adhesion kinetics is successfully described by the classical Derjaguin, Landau, Verwey, and Overbeek (DLVO) picture, where the energy profile of two approaching bodies is given by two asymmetric potential wells separated by a barrier. The transition probability from the long-distance to the short-distance minimum defines the adhesion rate. Conversely, soft bodies might follow a different pathway to reach the short-distance minimum: thermally excited fluctuations give rise to local protrusions connecting the approaching bodies. These transient adhesion sites are stabilized by short-range adhesion forces (e.g., ligand-receptor interactions between membranes brought to contact distance), while they are destabilized both by repulsive forces and by the elastic deformation energy. Above a critical area of the contact site, the adhesion forces prevail: the contact site grows in size until the complete adhesion of the two bodies inside a short-distance minimum is attained. This nucleation mechanism has been developed in the framework of a nonequilibrium Fokker-Planck picture by considering both the adhesive patch growth and dissolution processes. In addition, we also investigated the effect of the ligand-receptor pairing kinetics at the adhesion site on the time course of the patch expansion. The ratio between the ligand-receptor pairing kinetics and the expansion rate of the adhesion site is of paramount relevance in determining the overall nucleation rate. The theory enables one to self-consistently include both thermodynamic (energy barrier height) and dynamic (viscosity) parameters, giving rise in some limiting cases to simple analytical formulas. 
The model could be employed to rationalize fusion kinetics between vesicles, provided the short-range adhesion transition is the rate-limiting step of the whole adhesion process. Approximate relationships between the experimental fusion rates reported in the literature and parameters such as membrane elastic bending modulus, repulsion strength, temperature, osmotic forces, ligand-receptor binding energy, and solvent and membrane viscosities are satisfactorily explained by our model. The present results hint at a possible role of the initial long-distance→short-distance transition in determining the whole fusion kinetics.

  17. DD-HDS: A method for visualization and exploration of high-dimensional data.

    PubMed

    Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard

    2007-09-01

    Mapping high-dimensional data in a low-dimensional space, for example for visualization, is a problem of increasing concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method in the line of the multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves on existing methods for the representation of high-dimensional data in two ways. It introduces (1) a specific weighting of distances between data points that takes into account the concentration of measure phenomenon and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by force-directed placement (FDP). Mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can easily be incorporated in most distance-preservation-based nonlinear dimensionality reduction methods.
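
    DD-HDS minimises a weighted stress by force-directed placement; as a baseline illustration of the distance-preservation objective it refines, here is classical (Torgerson) MDS on a toy data set, with the raw stress used to compare the embedding against a random configuration. This is a sketch of the general MDS idea, not the DD-HDS algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 30 points in 5-D whose variance lives mostly in two directions,
# so a 2-D map can preserve the pairwise distances well.
X = rng.standard_normal((30, 5)) * np.array([3.0, 2.0, 0.3, 0.3, 0.3])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

def stress(Y):
    """Raw (unweighted) distance-preservation stress of a configuration."""
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    return ((D - d) ** 2).sum() / 2.0

# Classical MDS: double-centre the squared distances and embed along the
# two leading eigenvectors.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)          # ascending eigenvalues
Y = eigvecs[:, -2:] * np.sqrt(np.maximum(eigvals[-2:], 0.0))

random_stress = stress(rng.standard_normal((n, 2)))
mds_stress = stress(Y)                         # far lower than random
```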

  18. The provision of clearances accuracy in piston - cylinder mating

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Shalay, V. V.

    2017-08-01

    The paper is aimed at increasing the quality of pumping equipment in the oil and gas industry. The main purpose of the study is to stabilize maximum values of productivity and durability of the pumping equipment through selective assembly of the cylinder-piston kinematic mating according to an optimization criterion. It is shown that the minimum clearance in the piston-cylinder mating is formed by the maximum-material dimensions. It is proved that the maximum-material dimensions follow their own laws of distribution within the tolerance limits for the diameters of the cylinder internal mirror and the outer cylindrical surface of the piston. Accordingly, their dispersion zones should be divided into size groups with a group tolerance equal to half the tolerance for the minimum clearance. Techniques for measuring the material dimensions - the smallest cylinder diameter and the largest piston diameter according to the envelope condition - are developed for sorting parts into size groups. Reliable control of dimensional precision ensures optimal minimum clearances of the piston-cylinder mating in all size groups of the pumping equipment, which is necessary for increasing the equipment's productivity and durability during production, operation and repair.
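
    The sorting rule described above (group tolerance equal to half the minimum-clearance tolerance) is easy to sketch; the nominal size and tolerances below are hypothetical, not values from the paper:

```python
def size_group(diameter_mm, lower_limit_mm, group_tol_mm):
    """Index of the size group a measured diameter falls into, counting
    groups of width group_tol_mm up from the lower tolerance limit."""
    return int((diameter_mm - lower_limit_mm) // group_tol_mm)

lower_limit = 100.000            # hypothetical lower limit of the bore, mm
clearance_tol = 0.040            # hypothetical tolerance on min clearance, mm
group_tol = clearance_tol / 2.0  # group tolerance = half the clearance tolerance

bores = [100.012, 100.031, 100.055]
groups = [size_group(d, lower_limit, group_tol) for d in bores]   # [0, 1, 2]
```

Pistons measured the same way are then mated with cylinders from the matching size group, keeping every assembled clearance inside its half-tolerance band.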

  19. U S Navy Diving Manual. Volume 2. Mixed-Gas Diving. Revision 1.

    DTIC Science & Technology

    1981-07-01

    …has been soaked in a solution of caustic potash. This chemical absorbed the carbon… important aspects of underwater physics and physiology… the volume between the diver’s breathing passages and the circuit must be of minimum volume to preclude deadspace… a minimum of caustic fumes. Water produced by the… absorbents that strongly react with water to produce caustic fumes cannot be used in UBAs… space around the absorbent bed to reduce the gas flow distance.

  20. Maximum and minimum return losses from a passive two-port network terminated with a mismatched load

    NASA Technical Reports Server (NTRS)

    Otoshi, T. Y.

    1993-01-01

    This article presents an analytical method for determining the exact distance a load is required to be offset from a passive two-port network to obtain maximum or minimum return losses from the terminated two-port network. Equations are derived in terms of two-port network S-parameters and load reflection coefficient. The equations are useful for predicting worst-case performances of some types of networks that are terminated with offset short-circuit loads.

  1. Geometric characterization of separability and entanglement in pure Gaussian states by single-mode unitary operations

    NASA Astrophysics Data System (ADS)

    Adesso, Gerardo; Giampaolo, Salvatore M.; Illuminati, Fabrizio

    2007-10-01

    We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed by adapting to continuous variables a formalism based on single-subsystem unitary transformations that was recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1×M bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.

  2. Analysis of the minimum swerving distance for the development of a motorcycle autonomous braking system.

    PubMed

    Giovannini, Federico; Savino, Giovanni; Pierini, Marco; Baldanzini, Niccolò

    2013-10-01

    In recent years the autonomous emergency brake (AEB) was introduced in the automotive field to mitigate injury severity in unavoidable collisions. A crucial element for the activation of the AEB is establishing when the obstacle is no longer avoidable by a lateral evasive maneuver (swerving). In the present paper a model to compute the minimum swerving distance needed by a powered two-wheeler (PTW) to avoid a collision with a fixed obstacle, named the last-second swerving model (Lsw), is proposed. The effectiveness of the model was investigated in an experimental campaign involving 12 volunteers riding a scooter equipped with a prototype autonomous emergency braking system, named the motorcycle autonomous emergency braking (MAEB) system. The tests showed the performance of the model in evasive trajectory computation for different riding styles and fixed obstacles. Copyright © 2013 Elsevier Ltd. All rights reserved.
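
    A simplified kinematic version of the idea (constant lateral acceleration; not the paper's Lsw model, which accounts for rider and vehicle dynamics) gives the flavour of the computation:

```python
import math

def min_swerve_distance(speed_ms, lateral_offset_m, lateral_accel_ms2):
    """Distance ahead at which an obstacle is still avoidable by swerving:
    shifting sideways by lateral_offset_m under constant lateral
    acceleration takes t = sqrt(2*y/a), during which the PTW covers v*t."""
    t = math.sqrt(2.0 * lateral_offset_m / lateral_accel_ms2)
    return speed_ms * t

# E.g. at 15 m/s (54 km/h), a 1 m lateral shift at 4 m/s^2:
d = min_swerve_distance(15.0, 1.0, 4.0)   # about 10.6 m
```

The distance grows linearly with speed, which is why the swerve-based "point of no return" moves much farther ahead of the obstacle than the braking distance at high speed.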

  3. System and method employing a minimum distance and a load feature database to identify electric load types of different electric loads

    DOEpatents

    Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A

    2014-12-23

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.
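
    The minimum-distance identification step reduces to a nearest-neighbour rule over the feature database. A sketch with made-up feature names and values (illustrative only; the abstract does not publish the actual feature set):

```python
import numpy as np

# Hypothetical load feature database: four features per load type (the
# names -- THD, power factor, peak ratio, crest factor -- are assumptions
# for illustration, not the patent's actual features).
database = {
    "resistive":    np.array([0.02, 0.99, 1.41, 1.42]),
    "motor-driven": np.array([0.08, 0.85, 1.35, 1.50]),
    "electronic":   np.array([0.45, 0.60, 2.10, 2.60]),
}

def classify(feature_vector):
    """Return the load type whose stored feature vector is nearest in
    Euclidean distance to the measured one (the minimum-distance rule)."""
    return min(database, key=lambda k: np.linalg.norm(database[k] - feature_vector))

# Feature vector extracted from a sensed voltage/current pair:
measured = np.array([0.43, 0.62, 2.05, 2.55])
label = classify(measured)
```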

  4. Distance telescopes: a survey of user success.

    PubMed

    Lowe, J B; Rubinstein, M P

    2000-05-01

    The distance telescope has a historical reputation for causing difficulties in prescribing and adaptation. Hence, we considered that a retrospective survey of patients at Nottingham Low Vision Clinic might elucidate specific attributes that influence an individual patient's success in using a distance telescope. From 142 patients prescribed distance telescopes since the Clinic's inception, 133 apparently remained users and were mailed a preliminary three-question enquiry about usage of their distance telescopes. The 87 respondents were followed up with questionnaire 2, requesting explicit information about usage, namely frequency, degree of ease or difficulty, and purpose. Older patients required higher magnification (p < 0.025). Seventeen of 74 respondents to questionnaire 2 had various adaptational problems, which are discussed; 57 of 74 patients found their distance telescopes easy to use, and 49 of 57 were frequent users. Thus, ease and frequency are linked (p < 0.05). People tended to use their distance telescopes outdoors and indoors with similar frequency (p ≥ 0.29). Adaptation was found to be unrelated to visual acuity, binocularity/monocularity, ocular pathology, or restricted mobility; magnification seemed to be influential, although not significantly. Aging did not significantly impede adaptation. We infer that the universal criterion for selecting treatable patients seems to be personality type. We conclude that adaptation to a device is dependent upon active recognition of its benefits, paralleled with a tolerance of its constraints, which combine to make usage easy and regular on at least one common task.

  5. DNA Fingerprinting Validates Seed Dispersal Curves from Observational Studies in the Neotropical Legume Parkia

    PubMed Central

    Heymann, Eckhard W.; Lüttmann, Kathrin; Michalczyk, Inga M.; Saboya, Pedro Pablo Pinedo; Ziegenhagen, Birgit; Bialozyt, Ronald

    2012-01-01

    Background Determining the distances over which seeds are dispersed is a crucial component for examining spatial patterns of seed dispersal and their consequences for plant reproductive success and population structure. However, following the fate of individual seeds after removal from the source tree until deposition at a distant place is generally extremely difficult. Here we provide a comparison of observationally and genetically determined seed dispersal distances and dispersal curves in a Neotropical animal-plant system. Methodology/Principal Findings In a field study on the dispersal of seeds of three Parkia (Fabaceae) species by two Neotropical primate species, Saguinus fuscicollis and Saguinus mystax, in Peruvian Amazonia, we observationally determined dispersal distances. These dispersal distances were then validated through DNA fingerprinting, by matching DNA from the maternally derived seed coat to DNA from potential source trees. We found that dispersal distances are strongly right-skewed, and that distributions obtained through observational and genetic methods and fitted distributions do not differ significantly from each other. Conclusions/Significance Our study showed that seed dispersal distances can be reliably estimated through observational methods when a strict criterion for inclusion of seeds is observed. Furthermore, dispersal distances produced by the two primate species indicated that these primates fulfil one of the criteria for efficient seed dispersers. Finally, our study demonstrated that DNA extraction methods so far employed for temperate plant species can be successfully used for hard-seeded tropical plants. PMID:22514748

  6. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and mean square error are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. An extension to line-transect sampling is given.

  7. Variation of z-height of the molecular clouds on the Galactic Plane

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Stark, A. A.

    2002-12-01

    Using the Bell Laboratories Galactic plane survey in the J=1-0 transition of 13CO, (l, b) = (-5° to 117°, -1° to +1°), and a cloud identification code, 13CO clouds have been identified and cataloged as a function of threshold temperature. Distance estimates to the identified clouds have been made with several criteria. Minimum and maximum distances to each identified cloud are determined from the set of all possible distances of a cloud. Several physical parameters can then be determined from the distances, such as z-height [D sin(b)], CO luminosity, virial mass and so forth. We select the clouds with a ratio of maximum to minimum CO luminosity less than 3. The number of selected clouds is 281 out of 1400 identified clouds at the 1 K threshold temperature. These clouds are mostly located at tangential positions in the inner Galaxy, and some are in the outer Galaxy. It is found that the z-height of lower-luminosity clouds (less massive clouds) is systematically larger than that of high-luminosity clouds (more massive clouds). We claim that this is the first observational evidence of z-height variation depending on the luminosities (or masses) of molecular clouds on the Galactic plane. Our results could provide a basis for explaining the formation mechanism of massive clouds, such as giant molecular clouds.

  8. Minimization of municipal solid waste transportation route in West Jakarta using Tabu Search method

    NASA Astrophysics Data System (ADS)

    Chaerul, M.; Mulananda, A. M.

    2018-04-01

    Indonesia still adopts the collect-haul-dispose concept for municipal solid waste handling, which leads to queues of waste trucks at the final disposal site (TPA). The study aims to minimize the total distance of the waste transportation system by applying a transshipment model. In this case, the transshipment point is a compaction facility (SPA). Small-capacity trucks collect the waste from temporary collection points (TPS) and deliver it to the compaction facility, which is located near the waste generators. After compaction, the waste is transported by large-capacity trucks to the final disposal site, located far from the city. Problems of this kind can be formulated as a Vehicle Routing Problem (VRP). In this study, the shortest routes from truck pool to TPS, TPS to SPA, and SPA to TPA were determined using a meta-heuristic method, namely two-phase Tabu Search. The TPS studied are of the container type, 43 units in total throughout West Jakarta City, served by 38 Armroll trucks with a capacity of 10 m3 each. The result assigns each truck from the pool to selected TPS, SPA and TPA with a total minimum distance of 2,675.3 km. Minimizing the distance also minimizes the total waste transportation cost borne by the government.
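
    The two-phase Tabu Search used in the study is not reproduced here, but the core tabu-search loop for route minimization can be sketched as follows (a minimal single-vehicle tour over a made-up distance matrix; the matrix, tenure and iteration count are illustrative assumptions, not the study's data):

```python
import itertools
import random

# Illustrative symmetric distance matrix between 5 stops (hypothetical data).
dist = [
    [0, 12, 19, 8, 15],
    [12, 0, 9, 17, 6],
    [19, 9, 0, 11, 14],
    [8, 17, 11, 0, 10],
    [15, 6, 14, 10, 0],
]

def tour_length(tour):
    """Total length of a closed tour visiting every stop once."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def tabu_search(n, iters=200, tenure=5, seed=0):
    rng = random.Random(seed)
    current = list(range(n))
    rng.shuffle(current)
    best = current[:]
    tabu = {}  # swapped pair of stops -> iteration until which it stays tabu
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(n), 2):
            neighbour = current[:]
            neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
            move = (min(current[i], current[j]), max(current[i], current[j]))
            length = tour_length(neighbour)
            # Aspiration: a tabu move is allowed if it beats the best tour so far.
            if tabu.get(move, -1) < it or length < tour_length(best):
                candidates.append((length, neighbour, move))
        length, current, move = min(candidates)  # best admissible move, even if worsening
        tabu[move] = it + tenure
        if length < tour_length(best):
            best = current[:]
    return best, tour_length(best)

tour, total = tabu_search(len(dist))
print(tour, total)
```

    The tabu list forbids recently used swaps, forcing the search out of local minima while the best tour found is retained.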

  9. Development of unauthorized airborne emission source identification procedure

    NASA Astrophysics Data System (ADS)

    Shtripling, L. O.; Bazhenov, V. V.; Varakina, N. S.; Kupriyanova, N. P.

    2018-01-01

    The paper presents a procedure for searching for sources of unauthorized airborne emissions. To make reasonable regulation decisions on airborne pollutant emissions and to ensure the environmental safety of the population, the procedure provides for the determination of the pollutant mass emission value from the source causing a high pollution level and for the search of a previously unrecognized contamination source in a specified area. To determine the true value of mass emission from the source, the minimum of the root-mean-square mismatch criterion between the computed and measured pollutant concentrations at given locations is used.
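
    The mass-emission estimate described above can be illustrated with a toy least-squares version (assuming, as a sketch, a linear dispersion model in which the concentration at each receptor scales with the emission rate; all numbers are hypothetical, not the paper's):

```python
import numpy as np

# Hypothetical dispersion factors a_i: concentration at receptor i per unit
# mass emission rate, as produced by any linear dispersion model.
a = np.array([0.8, 1.3, 0.5, 2.1])          # (ug/m^3) per (g/s), assumed
measured = np.array([4.1, 6.4, 2.3, 10.8])  # measured concentrations, ug/m^3

def rms_mismatch(q, a, c):
    """Root-mean-square mismatch between computed (q*a) and measured c."""
    return np.sqrt(np.mean((q * a - c) ** 2))

# For a linear model the emission rate minimizing the RMS mismatch has the
# ordinary least-squares closed form; a grid search gives the same answer.
q_best = float(a @ measured / (a @ a))
print(q_best, rms_mismatch(q_best, a, measured))
```

    With nonlinear dispersion models the same criterion is minimized numerically instead of in closed form.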

  10. Dynamics of ultralight aircraft: Dive recovery of hang gliders

    NASA Technical Reports Server (NTRS)

    Jones, R. T.

    1977-01-01

    Longitudinal control of a hang glider by weight shift is not always adequate for recovery from a vertical dive. According to Lanchester's phugoid theory, recovery from rest to horizontal flight ought to be possible within a distance equal to three times the height of fall needed to acquire level flight velocity. A hang glider having a wing loading of 5 kg/sq m and capable of developing a lift coefficient of 1.0 should recover to horizontal flight within a vertical distance of about 12 m. The minimum recovery distance can be closely approached if the glider is equipped with a small all-moveable tail surface having sufficient upward deflection.
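
    The ~12 m figure can be checked from the stated numbers (a back-of-envelope sketch assuming sea-level air density):

```python
import math

g = 9.81            # m/s^2
rho = 1.225         # kg/m^3, sea-level air density (assumed)
wing_loading = 5.0  # kg/m^2, from the abstract
cl = 1.0            # lift coefficient, from the abstract

# Level-flight speed from L = W:  0.5*rho*v^2*CL = (W/S)*g
v = math.sqrt(2 * wing_loading * g / (rho * cl))

# Height of free fall needed to acquire that speed: h = v^2 / (2g)
h = v ** 2 / (2 * g)

# Lanchester's phugoid result: recovery within about three times that height.
print(round(v, 1), round(3 * h, 1))  # ≈8.9 m/s and ≈12.2 m
```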

  11. Pairwise Trajectory Management (PTM): Concept Overview

    NASA Technical Reports Server (NTRS)

    Jones, Kenneth M.; Graff, Thomas J.; Chartrand, Ryan C.; Carreno, Victor; Kibler, Jennifer L.

    2017-01-01

    Pairwise Trajectory Management (PTM) is an Interval Management (IM) concept that utilizes airborne and ground-based capabilities to enable the implementation of airborne pairwise spacing capabilities in oceanic regions. The goal of PTM is to use airborne surveillance and tools to manage an "at or greater than" inter-aircraft spacing. Due to the precision of Automatic Dependent Surveillance-Broadcast (ADS-B) information and the use of airborne spacing guidance, the PTM minimum spacing distance will be less than distances a controller can support with current automation systems that support oceanic operations. Ground tools assist the controller in evaluating the traffic picture and determining appropriate PTM clearances to be issued. Avionics systems provide guidance information that allows the flight crew to conform to the PTM clearance issued by the controller. The combination of a reduced minimum distance and airborne spacing management will increase the capacity and efficiency of aircraft operations at a given altitude or volume of airspace. This paper provides an overview of the proposed application, description of a few key scenarios, high level discussion of expected air and ground equipment and procedure changes, overview of a potential flight crew human-machine interface that would support PTM operations and some initial PTM benefits results.

  12. 11. VIEW OF ROCK OUTCROPPING, CONCRETE GRAVITY DAM FACE AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. VIEW OF ROCK OUTCROPPING, CONCRETE GRAVITY DAM FACE AND LAKE WITH TUNNEL INLET STRUCTURE IN DISTANCE, SHOWN AT MINIMUM WATER FLOW, LOOKING SOUTHEAST (UPSTREAM) - Van Arsdale Dam, South Fork of Eel River, Ukiah, Mendocino County, CA

  13. Minimum depth of soil cover above long-span soil-steel railway bridges

    NASA Astrophysics Data System (ADS)

    Esmaeili, Morteza; Zakeri, Jabbar Ali; Abdulrazagh, Parisa Haji

    2013-12-01

    Recently, soil-steel bridges have become more commonly used as railway-highway crossings because of their economical advantages and short construction period compared with traditional bridges. The formulas currently given by existing codes for determining the minimum depth of cover are typically based on vehicle loads and non-stiffened panels and take into consideration the geometrical shape of the metal structure to avoid failure of the soil cover above a soil-steel bridge. The effects of spans larger than 8 m, or of more stiffened panels under railway loads that maintain a safe railway track, have not been accounted for in the minimum cover formulas and are the subject of this paper. For this study, two-dimensional finite element (FE) analyses of four low-profile arches and four box culverts with spans larger than 8 m were performed to develop new patterns for the minimum depth of soil cover by considering the serviceability criterion of the railway track. Using the least-squares method, new formulas were then developed for low-profile arches and box culverts and were compared with Canadian Highway Bridge Design Code formulas. Finally, a series of three-dimensional (3D) FE analyses was carried out to check out-of-plane buckling in the steel plates due to the 3D pattern of train loads. The results show that out-of-plane bending does not control the buckling behavior of the steel plates, so the proposed equations for minimum depth of cover can be appropriately used for practical purposes.

  14. Estimation of representative elementary volume for DNAPL saturation and DNAPL-water interfacial areas in 2D heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Wu, Ming; Cheng, Zhou; Wu, Jianfeng; Wu, Jichun

    2017-06-01

    Representative elementary volume (REV) is important for determining properties of porous media and of the contaminants migrating through them, especially dense nonaqueous phase liquids (DNAPLs) in the subsurface environment. In this study, an experiment on the long-term migration of a commonly used DNAPL, perchloroethylene (PCE), is performed in a two-dimensional (2D) sandbox where several system variables, including porosity, PCE saturation (Soil) and PCE-water interfacial area (AOW), are accurately quantified by light transmission techniques over the entire PCE migration process. The REVs for these system variables are estimated by a criterion of relative gradient error (εgi), and the results indicate that the frequency of the minimum porosity-REV size closely follows a Gaussian distribution in the range of 2.0 mm to 8.0 mm. As the experiment proceeds through the PCE infiltration process, the frequency and cumulative frequency of both the minimum Soil-REV and minimum AOW-REV sizes change their shapes from irregular and random to regular and smooth. When the experiment enters the redistribution process, the cumulative frequency of the minimum Soil-REV size reveals a linear positive correlation, while the frequency of the minimum AOW-REV size tends to a Gaussian distribution in the range of 2.0 mm-7.0 mm and shows a peak value at 13.0 mm-14.0 mm. This study facilitates the quantification of REVs for material and fluid properties in a rapid, handy and economical manner, which helps enhance our understanding of porous media and DNAPL properties at the micro scale, as well as the accuracy of DNAPL contamination modeling at the field scale.
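
    The idea behind a relative-gradient-error REV criterion can be sketched as follows: average the property over growing windows and take the REV as the smallest window whose relative change per unit of size growth drops below a threshold. This is a simplified illustration on a synthetic porosity field; the threshold, window scheme and field are assumptions, not the study's exact definition of εgi:

```python
import numpy as np

# Synthetic 2D porosity map (illustrative, uniform random values).
rng = np.random.default_rng(1)
field = rng.uniform(0.25, 0.45, size=(200, 200))

def rev_size(field, center, half_sizes, threshold=0.05):
    """Smallest window edge length at which the windowed mean stabilizes."""
    ci, cj = center
    means = [field[ci - s:ci + s, cj - s:cj + s].mean() for s in half_sizes]
    for k in range(1, len(half_sizes)):
        # Relative gradient error between successive window sizes.
        eps = abs(means[k] - means[k - 1]) / (
            means[k - 1] * (half_sizes[k] - half_sizes[k - 1]))
        if eps < threshold:
            return 2 * half_sizes[k]
    return None  # no window in the tested range qualifies as an REV

half_sizes = list(range(5, 60, 5))
rev = rev_size(field, center=(100, 100), half_sizes=half_sizes)
print(rev)
```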

  15. Potential energy function for CH3+CH3 ⇆ C2H6: Attributes of the minimum energy path

    NASA Astrophysics Data System (ADS)

    Robertson, S. H.; Wardlaw, D. M.; Hirst, D. M.

    1993-11-01

    The region of the potential energy surface for the title reaction in the vicinity of its minimum energy path has been predicted from the analysis of ab initio electronic energy calculations. The ab initio procedure employs a 6-31G** basis set and a configuration interaction calculation which uses the orbitals obtained in a generalized valence bond calculation. Calculated equilibrium properties of ethane and of the isolated methyl radical are compared to existing theoretical and experimental results. The reaction coordinate is represented by the carbon-carbon interatomic distance. The following attributes are reported as a function of this distance and fit to functional forms which smoothly interpolate between reactant and product values of each attribute: the minimum energy path potential, the minimum energy path geometry, normal mode frequencies for vibrational motion orthogonal to the reaction coordinate, a torsional potential, and a fundamental anharmonic frequency for local mode, out-of-plane CH3 bending (umbrella motion). The best representation is provided by a three-parameter modified Morse function for the minimum energy path potential and a two-parameter hyperbolic tangent switching function for all other attributes. A poorer but simpler representation, which may be satisfactory for selected applications, is provided by a standard Morse function and a one-parameter exponential switching function. Previous applications of the exponential switching function to estimate the reaction coordinate dependence of the frequencies and geometry of this system have assumed the same value of the range parameter α for each property and have taken α to be less than or equal to the "standard" value of 1.0 Å-1. Based on the present analysis, this is incorrect: the α values depend on the property and range from ~1.2 to ~1.8 Å-1.
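
    A two-parameter hyperbolic tangent switching function of the kind described can be sketched as follows (the switching parameters and example frequencies are illustrative, not the fitted values from the paper):

```python
import math

def switch(r, r0=2.2, w=0.7):
    """Smoothly goes from ~0 (product side) to ~1 (reactant side) around r0.
    r0 (center, in angstroms) and w (width) are the two fit parameters."""
    return 0.5 * (1.0 + math.tanh((r - r0) / w))

def attribute(r, f_product, f_reactant, r0=2.2, w=0.7):
    """Interpolate attribute F along the C-C reaction coordinate r."""
    s = switch(r, r0, w)
    return (1.0 - s) * f_product + s * f_reactant

# Hypothetical example: a CH3 umbrella-bend frequency dropping from an
# ethane-like value toward the free-methyl value as the C-C bond stretches.
print(attribute(1.53, 1400.0, 600.0), attribute(6.0, 1400.0, 600.0))
```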

  16. An expert system for planning and scheduling in a telerobotic environment

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.; Park, Eui H.

    1991-01-01

    A knowledge based approach to assigning tasks to multi-agents working cooperatively in jobs that require a telerobot in the loop was developed. The generality of the approach allows for such a concept to be applied in a nonteleoperational domain. The planning architecture known as the task oriented planner (TOP) uses the principle of flow mechanism and the concept of planning by deliberation to preserve and use knowledge about a particular task. The TOP is an open ended architecture developed with a NEXPERT expert system shell and its knowledge organization allows for indirect consultation at various levels of task abstraction. Considering that a telerobot operates in a hostile and nonstructured environment, task scheduling should respond to environmental changes. A general heuristic was developed for scheduling jobs with the TOP system. The technique is not to optimize a given scheduling criterion as in classical job and/or flow shop problems. For a teleoperation job schedule, criteria are situation dependent. A criterion selection is fuzzily embedded in the task-skill matrix computation. However, goal achievement with minimum expected risk to the human operator is emphasized.

  17. Damage Propagation Modeling for Aircraft Engine Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Goebel, Kai; Simon, Don; Eklund, Neil

    2008-01-01

    This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are generated via a thermo-dynamical simulation model for the engine as a function of variations of flow and efficiency of the modules of interest. An exponential rate of change for flow and efficiency loss was imposed for each data set, starting at a randomly chosen initial deterioration set point. The rate of change of the flow and efficiency denotes an otherwise unspecified fault with increasingly worsening effect. The rates of change of the faults were constrained to an upper threshold but were otherwise chosen randomly. Damage propagation was allowed to continue until a failure criterion was reached. A health index was defined as the minimum of several superimposed operational margins at any given time instant and the failure criterion is reached when health index reaches zero. Output of the model was the time series (cycles) of sensed measurements typically available from aircraft gas turbine engines. The data generated were used as challenge data for the Prognostics and Health Management (PHM) data competition at PHM 08.
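
    The degradation scheme described, exponential fault growth from a random onset with a health index equal to the minimum of several operational margins, can be sketched as follows (all constants and margin definitions are illustrative, not the values used to generate the PHM 08 data):

```python
import numpy as np

rng = np.random.default_rng(42)

def run_to_failure(n_cycles=400):
    """Simulate one unit until its health index reaches zero."""
    onset = rng.integers(50, 150)    # randomly chosen initial deterioration point
    rate = rng.uniform(0.005, 0.02)  # fault growth rate, bounded above
    health = []
    for t in range(n_cycles):
        # Exponential loss of flow/efficiency after the onset cycle.
        wear = np.exp(rate * max(0, t - onset)) - 1.0
        # Several operational margins shrink as the wear grows.
        margins = np.array([1.0 - 1.2 * wear, 1.0 - 0.8 * wear, 1.0 - 1.0 * wear])
        h = max(0.0, margins.min())  # health index = minimum margin
        health.append(h)
        if h == 0.0:                 # failure criterion reached
            return t, health
    return None, health

eol, trace = run_to_failure()
print("failure at cycle:", eol)
```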

  18. A parsimonious tree-grow method for haplotype inference.

    PubMed

    Li, Zhenping; Zhou, Wenfeng; Zhang, Xiang-Sun; Chen, Luonan

    2005-09-01

    Haplotype information has become increasingly important in analyzing fine-scale molecular genetics data, such as disease gene mapping and drug design. Parsimony haplotyping is an NP-hard haplotype inference problem. In this paper, we aim to develop a novel algorithm for the haplotype inference problem with the parsimony criterion, based on a parsimonious tree-grow method (PTG). PTG is a heuristic algorithm that can find the minimum number of distinct haplotypes based on the criterion of keeping all genotypes resolved during the tree-grow process. In addition, a block-partitioning method is also proposed to improve the computational efficiency. We show that the proposed approach is not only effective, with a high accuracy, but also very efficient, with computational complexity of the order of O(m²n) time for n single nucleotide polymorphism sites in m individual genotypes. The software is available upon request from the authors (chen@elec.osaka-sandai.ac.jp) or from http://zhangroup.aporc.org/bioinfo/ptg/. Supporting materials are available from http://zhangroup.aporc.org/bioinfo/ptg/bti572supplementary.pdf

  19. Design for minimum energy in interstellar communication

    NASA Astrophysics Data System (ADS)

    Messerschmitt, David G.

    2015-02-01

    Microwave digital communication at interstellar distances is the foundation of extraterrestrial civilization (SETI and METI) communication of information-bearing signals. Large distances demand large transmitted power and/or large antennas, while the propagation is transparent over a wide bandwidth. Recognizing a fundamental tradeoff, reducing the energy delivered to the receiver at the expense of wide bandwidth (the opposite of terrestrial objectives) is advantageous. Wide bandwidth also results in simpler design and implementation, allowing circumvention of dispersion and scattering arising in the interstellar medium and of motion effects, and obviating any related processing. The minimum energy delivered to the receiver per bit of information is determined by the cosmic microwave background alone. By mapping a single bit onto a carrier burst, the Morse code invented for the telegraph in 1836 comes closer to this minimum energy than approaches used in modern terrestrial radio. Whereas the terrestrial approach of adding phases and amplitudes increases information capacity while minimizing bandwidth, adding multiple time-frequency locations for carrier bursts increases capacity while minimizing energy per information bit. The resulting location code is simple and yet can approach the minimum energy as bandwidth is expanded. It is consistent with easy discovery, since carrier bursts are energetic and straightforward modifications to post-detection pattern recognition can identify burst patterns. Time and frequency coherence constraints leading to simple signal discovery are addressed, and observations of the interstellar medium by transmitter and receiver constrain the burst parameters and limit the search scope.
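
    The CMB-limited floor mentioned above follows from the wideband Shannon limit, which requires a received energy per bit of at least k·T·ln 2 when the noise temperature T is that of the cosmic microwave background. A quick check:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T_cmb = 2.725        # cosmic microwave background temperature, K

# Shannon limit as bandwidth -> infinity: E_b >= N_0 * ln 2 with N_0 = k*T.
e_bit = k_B * T_cmb * math.log(2)
print(f"{e_bit:.2e} J per bit")  # ~2.6e-23 J
```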

  20. Elastohydrodynamic lubrication of elliptical contacts

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.

    1981-01-01

    The determination of the minimum film thickness within the contact is considered for both fully flooded and starved conditions. A fully flooded conjunction is one in which the film thickness is not significantly changed when the amount of lubricant is increased. The fully flooded results presented show the influence of contact geometry on minimum film thickness as expressed by the ellipticity parameter and the dimensionless speed, load, and materials parameters. These results are applied to materials of high elastic modulus (hard EHL), such as metal, and to materials of low elastic modulus (soft EHL), such as rubber. In addition to the film thickness equations that are developed, contour plots of pressure and film thickness are given which show the essential features of elastohydrodynamically lubricated conjunctions. The crescent-shaped region of minimum film thickness, with its side lobes in which the separation between the solids is a minimum, clearly emerges in the numerical solutions. In addition to the 3 cases presented for the fully flooded results, 15 more cases are used for hard EHL contacts and 18 cases for soft EHL contacts in a theoretical study of the influence of lubricant starvation on film thickness and pressure. From the starved results for both hard and soft EHL contacts, a simple and important dimensionless inlet boundary distance is specified. This inlet boundary distance defines whether a fully flooded or a starved condition exists in the contact. Contour plots of pressure and film thickness in and around the contact are shown for these conditions.

  1. Controlling the impact of the managed honeybee on wild bees in protected areas.

    PubMed

    Henry, Mickaël; Rodet, Guy

    2018-06-18

    In recent years, conservation biologists have raised awareness about the risk of ecological interference between massively introduced managed honeybees and the native wild bee fauna in protected natural areas. In this study, we surveyed wild bees and quantified their nectar and pollen foraging success in a rosemary Mediterranean scrubland in southern France, under different conditions of apiary size and proximity. We found that high-density beekeeping triggers foraging competition which depresses not only the occurrence (-55%) and nectar foraging success (-50%) of local wild bees but also nectar (-44%) and pollen (-36%) harvesting by the honeybees themselves. Overall, those competition effects spanned distances of 600-1,100 m around apiaries, i.e. covering areas of 1.1-3.8 km2. Regardless of the competition criterion considered, setting distance thresholds among apiaries appeared more tractable than setting colony density thresholds for beekeeping regulation. Moreover, the intraspecific competition among the honeybees has practical implications for beekeepers. It shows that the local carrying capacity has been exceeded and raises concerns for honey yields and colony sustainability. It also offers an effective ecological criterion for pragmatic decision-making whenever conservation practitioners envision progressively reducing beekeeping in protected areas. Although specific to the studied area, the recommendations provided here may help raise awareness about the threat that high-density beekeeping may pose to local nature conservation initiatives, especially in areas with sensitive or endangered plant or bee species, such as small oceanic islands with high levels of endemism.

  2. The generalized fracture criteria based on the multi-parameter representation of the crack tip stress field

    NASA Astrophysics Data System (ADS)

    Stepanova, L. V.

    2017-12-01

    The paper is devoted to the multi-parameter asymptotic description of the stress field near the tip of a finite crack in an infinite isotropic elastic plane medium subject to 1) tensile stress; 2) in-plane shear; 3) mixed mode loading for a wide range of mode-mixity situations (Mode I and Mode II). The multi-parameter series expansion of the stress tensor components containing higher-order terms is obtained, and all of its coefficients are given. The main focus is on the influence of the higher-order terms of the Williams expansion. The analysis of these higher-order terms shows that the larger the distance from the crack tip, the more terms must be kept in the asymptotic series expansion. Therefore, several more higher-order terms of the Williams expansion should be used for the stress field description when the distance from the crack tip is not small. The crack propagation direction angle is calculated using two fracture criteria, the maximum tangential stress criterion and the strain energy density criterion. The multi-parameter form of these two commonly used fracture criteria is introduced and tested. Thirty or more terms of the Williams series expansion for the near-crack-tip stress field enable the angle to be calculated more precisely.
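
    For reference, the Williams eigenfunction expansion underlying the abstract has the standard form (written here for a single mode; mixed-mode fields superpose a second set of angular functions), with coefficients $a_k$ determined by geometry and loading:

```latex
\sigma_{ij}(r,\theta) \;=\; \sum_{k=1}^{\infty} a_k \,\frac{k}{2}\, r^{\frac{k}{2}-1}\, f_{ij}^{(k)}(\theta)
```

    The k = 1 term carries the inverse-square-root singularity (its coefficient is proportional to the stress intensity factor), the k = 2 term is the constant T-stress, and the higher-order terms supply the finite-distance corrections discussed above.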

  3. INTEGRATION OF RELIABILITY WITH MECHANISTIC THERMALHYDRAULICS: REPORT ON APPROACH AND TEST PROBLEM RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. S. Schroeder; R. W. Youngblood

    The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature) and a point-value acceptance criterion defined for that parameter (such as 2200 F). The present perspective on margin is that it relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and spectra probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
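
    The probability that load exceeds capacity can be estimated directly from the two spectra by Monte Carlo sampling (a minimal sketch with illustrative independent normal distributions, not RISMC's models; in practice load and capacity may be correlated, as the abstract notes):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Illustrative probabilistic spectra (hypothetical parameters).
load = rng.normal(1800.0, 120.0, n)     # e.g. peak clad temperature, F
capacity = rng.normal(2200.0, 60.0, n)  # e.g. temperature at which failure occurs, F

# Margin as a failure probability rather than a point-to-point distance.
p_fail = np.mean(load > capacity)
print(f"estimated failure probability: {p_fail:.1e}")
```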

  4. A distance-independent calibration of the luminosity of type Ia supernovae and the Hubble constant

    NASA Technical Reports Server (NTRS)

    Leibundgut, Bruno; Pinto, Philip A.

    1992-01-01

    The absolute magnitude of SNe Ia at maximum is calibrated here using radioactive decay models for the light curve and a minimum of assumptions. The absolute magnitude parameter space is studied using explosion models and a range of rise times, and absolute B magnitudes at maximum are used to derive a range for H0 and the distance to the Virgo Cluster from SNe Ia. Rigorous limits on H0 of 45 and 105 km/s/Mpc are derived.

  5. Natural migration rates of trees: Global terrestrial carbon cycle implications. Book chapter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, A.M.

    The paper discusses the forest-ecological processes which constrain the rate of response by forests to rapid future environmental change. It establishes a minimum response time by natural tree populations which invade alien landscapes and reach the status of a mature, closed canopy forest when maximum carbon storage is realized. It considers rare long-distance and frequent short-distance seed transport, seedling and tree establishment, sequential tree and stand maturation, and spread between newly established colonies.

  6. Supernova bangs as a tool to study big bang

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blinnikov, S. I., E-mail: Sergei.Blinnikov@itep.ru

    Supernovae and gamma-ray bursts are the most powerful explosions in the observed Universe. This educational review tells about supernovae and their applications in cosmology. It explains how the production of light in the most luminous events can be understood with the minimum required explosion energy. These most luminous phenomena can serve as primary cosmological distance indicators. Comparing the observed dependence of distance on redshift with theoretical models, one can extract information on the evolution of the Universe from the Big Bang until our epoch.

  7. Effect of geometric and process variables on the performance of inclined plate settlers in treating aquacultural waste.

    PubMed

    Sarkar, Sudipto; Kamilya, Dibyendu; Mal, B C

    2007-03-01

    Inclined plate settlers are used in treating wastewater due to their low space requirement and high removal rates. The prediction of the sedimentation efficiency of these settlers is essential for their performance evaluation. In the present study, the technique of dimensional analysis was applied to predict the sedimentation efficiency of inclined plate settlers. The effect of various geometric parameters, namely distance between plates (w(p)), plate angle (alpha), length of plate (l(p)), plate roughness (epsilon(p)), number of plates (n(p)) and particle diameter (d(s)), on the dynamic conditions influencing the sedimentation process was studied. From the study it was established that neither the Reynolds criterion nor the Froude criterion was singularly valid to simulate the sedimentation efficiency (E) for different values of w(p) and flow velocity (v(f)). Considering the prevalent scale effect, simulation equations were developed to predict E at different dynamic conditions. The optimum dynamic condition producing the maximum E is also discussed.

  8. A Very Efficient Transfer Function Bounding Technique on Bit Error Rate for Viterbi Decoded, Rate 1/N Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage, since all unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 0.000001. This criterion for determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used in searches for longer constraint length codes.

  9. Clinimetrics of ultrasound pathologies in osteoarthritis: systematic literature review and meta-analysis.

    PubMed

    Oo, W M; Linklater, J M; Daniel, M; Saarakkala, S; Samuels, J; Conaghan, P G; Keen, H I; Deveza, L A; Hunter, D J

    2018-05-01

    The aims of this study were to systematically review clinimetrics of commonly assessed ultrasound pathologies in knee, hip and hand osteoarthritis (OA), and to conduct a meta-analysis for each clinimetric. Medline, Embase, and Cochrane Library databases were searched from their inceptions to September 2016. According to the Outcome Measures in Rheumatology (OMERACT) Instrument Selection Algorithm, data extraction focused on ultrasound technical features and performance metrics. Methodological quality was assessed with the modified 19-item Downs and Black score and the 11-item Quality Appraisal of Diagnostic Reliability (QAREL) score. Separate meta-analyses were performed for each clinimetric: (1) inter-rater/intra-rater reliability; (2) construct validity; (3) criterion validity; and (4) internal/external responsiveness. Statistical Package for the Social Sciences (SPSS), Excel and Comprehensive Meta-analysis were used. Our search identified 1126 records; of these, 100 were eligible, including a total of 8542 patients and 32,373 joints. The average Downs and Black score was 13.01, and the average QAREL was 5.93. The stratified meta-analysis was performed only for knee OA, which demonstrated moderate to substantial reliability [minimum kappa > 0.44 (0.15, 0.74), minimum intraclass correlation coefficient (ICC) > 0.82 (0.73-0.89)], weak construct validity against pain (r = 0.12 to 0.27), function (r = 0.15 to 0.23), and blood biomarkers (r = 0.01 to 0.21), but weak to strong correlation with plain radiography (r = 0.13 to 0.60), strong association with Magnetic Resonance Imaging (MRI) [minimum r = 0.60 (0.52, 0.67)] and strong discrimination against symptomatic patients (OR = 3.08 to 7.46). There was strong criterion validity against cartilage histology [r = 0.66 (-0.05, 0.93)], and small to moderate internal [standardized mean difference (SMD) = 0.20 to 0.58] and external (r = 0.35 to 0.43) responsiveness to interventions.
Ultrasound demonstrated strong criterion validity with cartilage histology, poor to strong correlation with patient findings and MRI, moderate reliability, and low responsiveness to interventions. CRD42016039954. Copyright © 2018 Osteoarthritis Research Society International. All rights reserved.

  10. Progress Report on Alloy 617 Time Dependent Allowables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, Julie Knibloe

    2015-06-01

    Time dependent allowable stresses are required in the ASME Boiler and Pressure Vessel Code for design of components in the temperature range where time dependent deformation (i.e., creep) is expected to become significant. There are time dependent allowable stresses in Section IID of the Code for use in the non-nuclear construction codes; however, there are additional criteria that must be considered in developing time dependent allowables for nuclear components. These criteria are specified in Section III NH. St is defined as the lesser of three quantities: 100% of the average stress required to obtain a total (elastic, plastic, primary and secondary creep) strain of 1%; 67% of the minimum stress to cause rupture; and 80% of the minimum stress to cause the initiation of tertiary creep. The values are reported for a range of temperatures and for time increments up to 100,000 hours. These values are determined from uniaxial creep tests, which involve applying a relatively small constant load at elevated temperature, resulting in deformation over a long time period prior to rupture. The minimum stress resulting from these criteria is the time dependent allowable stress St. In this report, data from a large number of creep and creep-rupture tests on Alloy 617 are analyzed using the ASME Section III NH criteria. Data used in the analysis are from the ongoing DOE sponsored high temperature materials program, from the Korea Atomic Energy Institute through the Generation IV VHTR Materials Program, and from historical data generated in previous HTR research and by vendors in developing the alloy. It is found that the tertiary creep criterion determines St at the highest temperatures, while the stress to cause 1% total strain controls at low temperatures.
    The ASME Section III Working Group on Allowable Stress Criteria has recommended, given the uncertainties associated with determining the onset of tertiary creep and the lack of significant cavitation associated with early tertiary creep strain, that the tertiary creep criterion is not appropriate for this material. If the tertiary creep criterion is dropped from consideration, the stress-to-rupture criterion determines St at all but the lowest temperatures.
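    The three Section III NH limits lend themselves to a one-line computation. A minimal sketch (the stress inputs are hypothetical placeholders; in practice each quantity comes from statistical fits to the creep and rupture data):

```python
def time_dependent_allowable(s_avg_1pct_strain, s_min_rupture, s_min_tertiary):
    """ASME Section III NH St: the lesser of 100% of the average stress to 1%
    total strain, 67% of the minimum stress to rupture, and 80% of the
    minimum stress to the onset of tertiary creep (all in the same units)."""
    return min(1.00 * s_avg_1pct_strain,
               0.67 * s_min_rupture,
               0.80 * s_min_tertiary)

# Hypothetical values (MPa) for one temperature/time bin; tertiary creep controls here.
st = time_dependent_allowable(100.0, 90.0, 70.0)
```

Dropping the tertiary term, as the Working Group recommends, amounts to taking the minimum over the first two quantities only.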

  11. Commentary: legal minimum tread depth for passenger car tires in the U.S.A.--a survey.

    PubMed

    Blythe, William; Seguin, Debra E

    2006-06-01

    Available tire traction is a significant highway safety issue, particularly on wet roads. Tire-roadway friction on dry, clean roads is essentially independent of tread depth and depends primarily on roadway surface texture. However, tire-wet-roadway friction, both for longitudinal braking and lateral cornering forces, depends on several variables, most importantly water depth, speed, tire tread depth, and roadway surface texture. The car owner-operator has control over speed and tire condition, but not over water depth or road surface texture. Minimum tire tread depth is legislated throughout most of the United States and Europe. Speed reduction for wet road conditions is not. A survey of state requirements for legal minimum tread depth for passenger vehicle tires in the United States is presented. Most states require a minimum of 2/32 of an inch (approximately 1.6 mm) of tread, but two require less, some have no requirements, and some defer to the federal criterion for commercial vehicle safety inspections. The requirement of 2/32 of an inch is consistent with the height of the tread-wear bars built into passenger car tires sold in the United States, but the rationale for that requirement, or other existing requirements, is not clear. Recent research indicates that a minimum tread depth of 2/32 of an inch does not prevent significant loss of friction at highway speeds, even for minimally wet roadways. The research suggests that tires with less than 4/32 of an inch tread depth may lose approximately 50 percent of available friction in those circumstances, even before hydroplaning occurs. It is concluded that the present requirements for minimum passenger car tire tread depth are not based upon rational safety considerations, and that an increase in the minimum tread depth requirements would have a beneficial effect on highway safety.

  12. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process

    PubMed Central

    Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.

    2013-01-01

    Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians.
We suggest the use of calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
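    The minimum detectable difference recommended above can be sketched with standard power-analysis quantities. A hedged illustration (the significance level, power, and standard errors are placeholders, not values from any recovery plan):

```python
import math
from statistics import NormalDist

def minimum_detectable_difference(se1, se2, alpha=0.05, power=0.80):
    """Smallest change between two independent population estimates that a
    two-sided z-test would detect at the given significance and power."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * math.sqrt(se1 ** 2 + se2 ** 2)

# Two surveys, each with a standard error of 10 animals:
mdd = minimum_detectable_difference(10.0, 10.0)
```

A decline smaller than `mdd` between the two surveys could not be distinguished from sampling noise, which is exactly why reporting the variance of each estimate matters for delisting decisions.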

  13. On the validity of the amphoteric-defect model in gallium arsenide and a criterion for Fermi-level pinning by defects

    NASA Astrophysics Data System (ADS)

    Chen, C.-H.; Tan, T. Y.

    1995-10-01

    Using the theoretically calculated point-defect total-energy values of Baraff and Schlüter in GaAs, an amphoteric-defect model has been proposed by Walukiewicz to explain a large number of experimental results. The suggested amphoteric-defect system consists of two point-defect species capable of transforming into each other: the doubly negatively charged Ga vacancy V_Ga^2- and the triply positively charged defect complex (As_Ga V_As)^3+, with As_Ga being the antisite defect of an As atom occupying a Ga site and V_As being an As vacancy. When present in sufficiently high concentrations, the amphoteric defect system V_Ga^2-/(As_Ga V_As)^3+ is supposed to be able to pin the GaAs Fermi level at approximately the E_v + 0.6 eV level position, which requires the net free energy of the V_Ga/(As_Ga V_As) defect system to be a minimum at the same Fermi-level position. We have carried out a quantitative study of the net energy of this defect system in accordance with the individual point-defect total-energy results of Baraff and Schlüter, and found that the minimum net defect-system-energy position is located at about E_v + 1.2 eV instead of the needed E_v + 0.6 eV position. Therefore, the validity of the amphoteric-defect model is in doubt. We have proposed a simple criterion for determining the Fermi-level pinning position in the deeper part of the GaAs band gap due to two oppositely charged point-defect species, which should be useful in the future.

  14. Extremal values on Zagreb indices of trees with given distance k-domination number.

    PubMed

    Pei, Lidan; Pan, Xiangfeng

    2018-01-01

    Let G be a graph with vertex set V(G) and edge set E(G). A set D ⊆ V(G) is a distance k-dominating set of G if for every vertex u ∈ V(G)\D, d(u, v) ≤ k for some vertex v ∈ D, where k is a positive integer. The distance k-domination number γ_k(G) of G is the minimum cardinality among all distance k-dominating sets of G. The first Zagreb index of G is defined as M_1(G) = Σ_{v ∈ V(G)} d(v)^2, the sum of the squares of the vertex degrees, and the second Zagreb index is M_2(G) = Σ_{uv ∈ E(G)} d(u)d(v), the sum over all edges of the products of the end-vertex degrees. In this paper, we obtain upper bounds for the Zagreb indices of n-vertex trees with given distance k-domination number and characterize the extremal trees, which generalizes the results of Borovićanin and Furtula (Appl. Math. Comput. 276:208-218, 2016). Notably, a sharp upper bound on the distance k-domination number γ_k(T) of an n-vertex tree T is also determined.
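    The two Zagreb indices are straightforward to compute from an edge list. A minimal sketch:

```python
def zagreb_indices(edges):
    """First and second Zagreb indices of a simple graph given as an edge list.
    M1 = sum of squared vertex degrees; M2 = sum over edges of degree products."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    m1 = sum(d * d for d in deg.values())
    m2 = sum(deg[u] * deg[v] for u, v in edges)
    return m1, m2

# Path tree on 4 vertices (degrees 1, 2, 2, 1):
print(zagreb_indices([(1, 2), (2, 3), (3, 4)]))  # -> (10, 8)
```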

  15. Tank Investigation of a Powered Dynamic Model of a Large Long-Range Flying Boat

    NASA Technical Reports Server (NTRS)

    Parkinson, John B; Olson, Roland E; Harr, Marvin I

    1947-01-01

    Principles for designing the optimum hull for a large long-range flying boat to meet the requirements of seaworthiness, minimum drag, and ability to take off and land at all operational gross loads were incorporated in a 1/12-size powered dynamic model of a four-engine transport flying boat having a design gross load of 165,000 pounds. These design principles included the selection of a moderate beam loading, ample forebody length, sufficient depth of step, and close adherence to the form of a streamline body. The aerodynamic and hydrodynamic characteristics of the model were investigated in Langley tank no. 1. Tests were made to determine the minimum allowable depth of step for adequate landing stability, the suitability of the fore-and-aft location of the step, the take-off performance, the spray characteristics, and the effects of simple spray-control devices. The design criteria used and the test results should be useful in the preliminary design of similar large flying boats.

  16. Potential Seasonal Terrestrial Water Storage Monitoring from GPS Vertical Displacements: A Case Study in the Lower Three-Rivers Headwater Region, China.

    PubMed

    Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang

    2016-09-19

    This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to the hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual equivalent water height (EWH) changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2-3.9 cm and 4.8-5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8-24.7 cm and a minimum of 3.1-6.9 cm.

  17. On the optimization of discrete structures with aeroelastic constraints

    NASA Technical Reports Server (NTRS)

    Mcintosh, S. C., Jr.; Ashley, H.

    1978-01-01

    The paper deals with the problem of dynamic structural optimization where constraints relating to flutter of a wing (or other dynamic aeroelastic performance) are imposed along with conditions of a more conventional nature such as those relating to stress under load, deflection, minimum dimensions of structural elements, etc. The discussion is limited to a flutter problem for a linear system with a finite number of degrees of freedom and a single constraint involving aeroelastic stability, and the structure motion is assumed to be a simple harmonic time function. Three search schemes are applied to the minimum-weight redesign of a particular wing: the first scheme relies on the method of feasible directions, while the other two are derived from necessary conditions for a local optimum so that they can be referred to as optimality-criteria schemes. The results suggest that a heuristic redesign algorithm involving an optimality criterion may be best suited for treating multiple constraints with large numbers of design variables.

  18. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    NASA Astrophysics Data System (ADS)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emissions. In this work, an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and machining parameters is adequately modeled. This model is used for formulation of a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on the energy consumption has been found using analysis of variance. The validity of the developed empirical model is demonstrated using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
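    The desirability-function step used to turn a fitted response model into a minimization criterion can be sketched generically (Derringer-Suich smaller-is-better form; the bounds and weight below are placeholders, not the paper's fitted values):

```python
def desirability_smaller_is_better(y, y_min, y_max, r=1.0):
    """Derringer-Suich one-sided desirability for a response to be minimized:
    1 at or below y_min, 0 at or above y_max, power-law ramp in between."""
    if y <= y_min:
        return 1.0
    if y >= y_max:
        return 0.0
    return ((y_max - y) / (y_max - y_min)) ** r

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities; the optimizer searches
    the machining-parameter space for the settings that maximize this value."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```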

  19. Use of the Collaborative Optimization Architecture for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Moore, A. A.; Kroo, I. M.

    1996-01-01

    Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum weight and minimum cost concepts. The operational advantages of the collaborative optimization

  20. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
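    As a rough sketch of the baseline minimum-variance step only (a simple linear shrinkage toward a scaled identity stands in here for the paper's Tyler-plus-Ledoit-Wolf hybrid estimator), the global minimum-variance weights are proportional to the inverse covariance applied to the all-ones vector:

```python
def solve_linear(a, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def min_variance_weights(cov, shrinkage=0.0):
    """Global minimum-variance weights w ~ inv(Sigma) 1, with optional linear
    shrinkage of the sample covariance toward a scaled identity."""
    n = len(cov)
    mu = sum(cov[i][i] for i in range(n)) / n  # average variance
    sigma = [[(1.0 - shrinkage) * cov[i][j] + (shrinkage * mu if i == j else 0.0)
              for j in range(n)] for i in range(n)]
    w = solve_linear(sigma, [1.0] * n)
    s = sum(w)
    return [wi / s for wi in w]
```

The paper's contribution is precisely in choosing the covariance estimate and tuning the shrinkage intensity online from random-matrix-theory results; the allocation formula itself is the standard one above.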

  1. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    NASA Astrophysics Data System (ADS)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes in which the sum of the weights of all its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the well-known Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. The selection of an appropriate algorithm is essential; otherwise it will be very hard to obtain an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper addresses the MST problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is often used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems, location-allocation problems, etc. Therefore, in this study we have developed a customized GIS tool using a Python script in ArcGIS software for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and gives access to varied information adapted to their needs. This GIS tool for MST can be applied to a nationwide plan called Prime Minister Gram Sadak Yojana in India to provide optimal all-weather road connectivity to unconnected villages (points).
This tool is also useful for constructing highways or railways spanning several cities optimally or connecting all cities with minimum total road length.
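    The weight-matrix formulation described above maps directly onto Prim's algorithm. A minimal sketch (a plain O(n^3) scan for clarity, not the ArcGIS tool itself; `None` marks a missing edge):

```python
def prim_mst(w):
    """Prim's algorithm on a weight (adjacency) matrix.
    Returns the MST edge list as (u, v, weight) tuples and the total weight."""
    n = len(w)
    in_tree = [False] * n
    in_tree[0] = True  # grow the tree from node 0
    edges, total = [], 0.0
    for _ in range(n - 1):
        best = None
        for u in range(n):
            if not in_tree[u]:
                continue
            for v in range(n):
                if in_tree[v] or w[u][v] is None:
                    continue
                if best is None or w[u][v] < best[2]:
                    best = (u, v, w[u][v])
        u, v, wt = best  # cheapest edge leaving the current tree
        in_tree[v] = True
        edges.append((u, v, wt))
        total += wt
    return edges, total
```

On a road network, `w[u][v]` would hold the chosen impedance (travel time or distance) between junctions u and v.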

  2. Optimization of self-study room open problem based on green and low-carbon campus construction

    NASA Astrophysics Data System (ADS)

    Liu, Baoyou

    2017-04-01

    The optimization of the self-study room opening arrangement in colleges and universities is conducive to accelerating the fine management of the campus and promoting green and low-carbon campus construction. Firstly, combined with the actual survey data, the self-study area and living area were divided into different blocks, and the electricity consumption in each self-study room and the distances between the different living and studying areas were normalized. Secondly, the minimum of the total satisfaction index and the minimum of the total electricity consumption were selected as the optimization targets, respectively. The mathematical models of linear programming were established and solved with LINGO software. The results showed that the minimum total satisfaction index was 4055.533 and the minimum total electricity consumption was 137216 W. Finally, some advice is put forward on how to realize efficient administration of the self-study rooms.

  3. Elastohydrodynamic lubrication of point contacts. Ph.D. Thesis - Leeds Univ.

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.

    1976-01-01

    A procedure for the numerical solution of the complete, isothermal, elastohydrodynamic lubrication problem for point contacts is given. This procedure calls for the simultaneous solution of the elasticity and Reynolds equations. By using this theory the influence of the ellipticity parameter and the dimensionless speed, load, and material parameters on the minimum and central film thicknesses was investigated. Thirty-four different cases were used in obtaining the fully flooded minimum- and central-film-thickness formulas. Lubricant starvation was also studied. From the results it was possible to express the minimum film thickness for a starved condition in terms of the minimum film thickness for a fully flooded condition, the speed parameter, and the inlet distance. Fifteen additional cases plus three fully flooded cases were used in obtaining this formula. Contour plots of pressure and film thickness in and around the contact have been presented for both fully flooded and starved lubrication conditions.
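    The fully flooded minimum-film-thickness result from this line of work is usually quoted in the dimensionless Hamrock-Dowson form. A hedged sketch (verify the constants against the original report before relying on them):

```python
import math

def h_min_fully_flooded(U, G, W, k):
    """Dimensionless minimum film thickness H_min = h_min / R_x for fully
    flooded elliptical EHL point contacts, in the Hamrock-Dowson form:
    H_min = 3.63 * U^0.68 * G^0.49 * W^-0.073 * (1 - exp(-0.68*k)),
    where U, G, W are the speed, material, and load parameters and k is
    the ellipticity parameter."""
    return 3.63 * U ** 0.68 * G ** 0.49 * W ** -0.073 * (1.0 - math.exp(-0.68 * k))
```

The exponents encode the behaviour the abstract describes: film thickness is strongly sensitive to speed (0.68) and only weakly sensitive to load (-0.073).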

  4. Kinematic Distances: A Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Wenger, Trey V.; Balser, Dana S.; Anderson, L. D.; Bania, T. M.

    2018-03-01

    Distances to high-mass star-forming regions (HMSFRs) in the Milky Way are a crucial constraint on the structure of the Galaxy. Only kinematic distances are available for a majority of the HMSFRs in the Milky Way. Here, we compare the kinematic and parallax distances of 75 Galactic HMSFRs to assess the accuracy of kinematic distances. We derive the kinematic distances using three different methods: the traditional method using the Brand & Blitz rotation curve (Method A), the traditional method using the Reid et al. rotation curve and updated solar motion parameters (Method B), and a Monte Carlo technique (Method C). Methods B and C produce kinematic distances closest to the parallax distances, with median differences of 13% (0.43 {kpc}) and 17% (0.42 {kpc}), respectively. Except in the vicinity of the tangent point, the kinematic distance uncertainties derived by Method C are smaller than those of Methods A and B. In a large region of the Galaxy, the Method C kinematic distances constrain both the distances and the Galactocentric positions of HMSFRs more accurately than parallax distances. Beyond the tangent point along ℓ = 30°, for example, the Method C kinematic distance uncertainties reach a minimum of 10% of the parallax distance uncertainty at a distance of 14 {kpc}. We develop a prescription for deriving and applying the Method C kinematic distances and distance uncertainties. The code to generate the Method C kinematic distances is publicly available and may be utilized through an online tool.
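    The Monte Carlo idea behind Method C can be caricatured with a flat rotation curve (the published method uses a fitted rotation curve and full error propagation; the R0, V0, and velocity uncertainty below are illustrative assumptions, not the paper's values):

```python
import math
import random
import statistics

def kinematic_distance(glong_deg, vlsr, R0=8.34, V0=240.0, far=False):
    """Near/far kinematic distance (kpc) for a flat rotation curve:
    Vlsr = V0*sin(l)*(R0/R - 1), then d = R0*cos(l) -/+ sqrt(R^2 - (R0*sin(l))^2)."""
    l = math.radians(glong_deg)
    R = R0 * V0 * math.sin(l) / (V0 * math.sin(l) + vlsr)
    disc = R * R - (R0 * math.sin(l)) ** 2
    root = math.sqrt(max(disc, 0.0))  # clamp at the tangent point
    return R0 * math.cos(l) + (root if far else -root)

def monte_carlo_distance(glong_deg, vlsr, sigma_v=7.0, n=20000, far=False, seed=1):
    """Propagate a Gaussian LSR-velocity uncertainty through the distance
    formula by resampling; returns the median distance and its spread."""
    rng = random.Random(seed)
    d = [kinematic_distance(glong_deg, vlsr + rng.gauss(0.0, sigma_v), far=far)
         for _ in range(n)]
    return statistics.median(d), statistics.pstdev(d)
```

Resampling like this yields asymmetric, non-Gaussian distance distributions near the tangent point, which is one motivation for the Monte Carlo approach over simple analytic error bars.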

  5. A machine-learned computational functional genomics-based approach to drug classification.

    PubMed

    Lötsch, Jörn; Ultsch, Alfred

    2016-12-01

    The public accessibility of "big data" about the molecular targets of drugs and the biological functions of genes allows novel data science-based approaches to pharmacology that link drugs directly with their effects on pathophysiologic processes. This provides a phenotypic path to drug discovery and repurposing. This paper compares the performance of a functional genomics-based criterion to the traditional drug target-based classification. Knowledge discovery in the DrugBank and Gene Ontology databases allowed the construction of a "drug target versus biological process" matrix as a combination of "drug versus genes" and "genes versus biological processes" matrices. As a canonical example, such matrices were constructed for classical analgesic drugs. These matrices were projected onto a toroid grid of 50 × 82 artificial neurons using a self-organizing map (SOM). The distance, respectively, cluster structure of the high-dimensional feature space of the matrices was visualized on top of this SOM using a U-matrix. The cluster structure emerging on the U-matrix provided a correct classification of the analgesics into two main classes of opioid and non-opioid analgesics. The classification was flawless with both the functional genomics and the traditional target-based criterion. The functional genomics approach inherently included the drugs' modulatory effects on biological processes. The main pharmacological actions known from pharmacological science were captured, e.g., actions on lipid signaling for non-opioid analgesics that comprised many NSAIDs and actions on neuronal signal transmission for opioid analgesics. Using machine-learning techniques for computational drug classification in a comparative assessment, a functional genomics-based criterion was found to be similarly suitable for drug classification as the traditional target-based criterion.
This supports a utility of functional genomics-based approaches to computational system pharmacology for drug discovery and repurposing.
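    The matrix combination described above is just a boolean product of the two underlying relations. A toy sketch with hypothetical drug, gene, and process names (not data from DrugBank or Gene Ontology):

```python
def drug_process_matrix(drug_genes, gene_processes):
    """Combine a 'drug vs genes' relation and a 'genes vs biological processes'
    relation into a binary 'drug vs biological process' matrix."""
    processes = sorted({p for ps in gene_processes.values() for p in ps})
    matrix = {}
    for drug, genes in drug_genes.items():
        hit = set()
        for g in genes:
            hit |= gene_processes.get(g, set())
        matrix[drug] = {p: int(p in hit) for p in processes}
    return matrix
```

Rows of such a matrix (one per drug) are the feature vectors that the paper projects onto the SOM for U-matrix clustering.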

  6. Decohesion models informed by first-principles calculations: The ab initio tensile test

    NASA Astrophysics Data System (ADS)

    Enrique, Raúl A.; Van der Ven, Anton

    2017-10-01

    Extreme deformation and homogeneous fracture can be readily studied via ab initio methods by subjecting crystals to numerical "tensile tests", where the energy of locally stable crystal configurations corresponding to elongated and fractured states are evaluated by means of density functional method calculations. The information obtained can then be used to construct traction curves of cohesive zone models in order to address fracture at the macroscopic scale. In this work, we perform an in-depth analysis of traction curves and how ab initio calculations must be interpreted to rigorously parameterize an atomic scale cohesive zone model, using crystalline Ag as an example. Our analysis of traction curves reveals the existence of two qualitatively distinct decohesion criteria: (i) an energy criterion whereby the released elastic energy equals the energy cost of creating two new surfaces and (ii) an instability criterion that occurs at a higher and size independent stress than that of the energy criterion. We find that increasing the size of the simulation cell renders parts of the traction curve inaccessible to ab initio calculations involving the uniform decohesion of the crystal. We also find that the separation distance below which a crack heals is not a material parameter as has been proposed in the past. Finally, we show that a large energy barrier separates the uniformly stressed crystal from the decohered crystal, resolving a paradox predicted by a scaling law based on the energy criterion that implies that large crystals will decohere under vanishingly small stresses. This work clarifies confusion in the literature as to how a cohesive zone model is to be parameterized with ab initio "tensile tests" in the presence of internal relaxations.
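    The energy criterion (i) equates the elastic energy released per unit area of crack face with the cost of the two new surfaces: for a uniformly stressed cell of height L and modulus E, sigma^2 * L / (2E) = 2 * gamma_s, so the critical stress scales as 1/sqrt(L). That scaling is the source of the paradox the paper resolves. A small numeric sketch (illustrative values only):

```python
import math

def energy_criterion_stress(gamma_s, E, L):
    """Stress at which the released elastic energy per unit area,
    sigma^2 * L / (2E), equals the cost 2*gamma_s of the two new surfaces."""
    return math.sqrt(4.0 * gamma_s * E / L)

# Illustrative: gamma_s = 1 J/m^2, E = 100 GPa; critical stress falls as 1/sqrt(L)
s_small = energy_criterion_stress(1.0, 100e9, 1e-9)  # nm-scale simulation cell
s_large = energy_criterion_stress(1.0, 100e9, 1e-3)  # macroscopic sample
```

Taken literally, this law predicts vanishing decohesion stress for macroscopic L; the paper's point is that the size-independent instability criterion (ii), not this energy balance, governs uniform decohesion.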

  7. About neighborhood counting measure metric and minimum risk metric.

    PubMed

    Argentini, Andrea; Blanzieri, Enrico

    2010-04-01

    In a 2006 TPAMI paper, Wang proposed the Neighborhood Counting Measure, a similarity measure for the k-NN algorithm. In his paper, Wang mentioned the Minimum Risk Metric (MRM), an early distance measure based on the minimization of the risk of misclassification. Wang did not compare NCM to MRM because of its allegedly excessive computational load. In this comment paper, we complete the comparison that was missing in Wang's paper and, from our empirical evaluation, we show that MRM outperforms NCM and that its running time is not prohibitive as Wang suggested.

  8. Rate-Compatible Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.
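    The protograph construction can be illustrated by the standard copy-and-permute lifting step (a generic circulant lifting, not the specific patented design procedure; the base matrix and shifts below are hypothetical):

```python
def lift_protograph(base, Z, shifts):
    """Expand a 0/1 protograph base matrix into a Z-times-larger parity-check
    matrix by replacing each 1-entry with a Z x Z circulant permutation
    (cyclic shift) and each 0-entry with a Z x Z zero block."""
    m, n = len(base), len(base[0])
    H = [[0] * (n * Z) for _ in range(m * Z)]
    for i in range(m):
        for j in range(n):
            if base[i][j]:
                s = shifts[i][j]
                for r in range(Z):
                    H[i * Z + r][j * Z + (r + s) % Z] = 1
    return H

# Hypothetical 1 x 2 protograph lifted by Z = 3:
H = lift_protograph([[1, 1]], 3, [[0, 1]])
```

The described design method searches over such structures for the lowest iterative decoding threshold subject to the linear minimum-distance-growth constraint; the lifting above is only the final expansion step.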

  9. Keep at bay!--Abnormal personal space regulation as marker of paranoia in schizophrenia.

    PubMed

    Schoretsanitis, G; Kutynia, A; Stegmayer, K; Strik, W; Walther, S

    2016-01-01

    During threat, interpersonal distance is deliberately increased. Personal space regulation is related to amygdala function and altered in schizophrenia, but it remains unknown whether it is particularly associated with paranoid threat. We compared performance in two tests on personal space between 64 patients with schizophrenia spectrum disorders and 24 matched controls. Patients were stratified in those with paranoid threat, neutral affect or paranoid experience of power. In the stop-distance paradigm, participants indicated the minimum tolerable interpersonal distance. In the fixed-distance paradigm, they indicated the level of comfort at fixed interpersonal distances. Paranoid threat increased interpersonal distance two-fold in the stop-distance paradigm, and reduced comfort ratings in the fixed-distance paradigm. In contrast, patients experiencing paranoid power had high comfort ratings at any distance. Patients with neutral affect did not differ from controls in the stop-distance paradigm. Differences between groups remained when controlling for gender and positive symptom severity. Among schizophrenia patients, the stop-distance paradigm detected paranoid threat with 93% sensitivity and 83% specificity. Personal space regulation is not generally altered in schizophrenia. However, state paranoid experience has distinct contributions to personal space regulation. Subjects experiencing current paranoid threat share increased safety-seeking behavior. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  10. Judgement bias in predicting the success of one's own basketball free throws but not those of others.

    PubMed

    Cañal-Bruland, Rouwen; Balch, Lars; Niesert, Loet

    2015-07-01

    Skilled basketball players are supposed to hit more often from the free throw distance than would be predicted by their shooting performances at adjacent distances. This is dubbed an especial skill. In the current study, we examined whether especial skills in free throw performance in basketball map onto especial skills in visually judging the success of basketball free throws. In addition, we tested whether this effect would be present in those who predict their own shots but absent in those who judge shots performed by another person. Eight skilled basketball players were coupled with eight equally skilled players, and performed 150 set shots from five different distances (including the free throw distance) while the yoked partner observed the shots. At the moment of ball release, the performers' and the observers' vision were synchronously occluded using liquid-crystal occlusion goggles, and both independently judged whether the shot was successful or not. Results did not replicate an especial skill effect in shooting performance. Based on signal detection theory (SDT) measures (d' and criterion c), results also revealed no especial skill for visually discriminating successful from unsuccessful shots at the foul line when compared to other distances. However, players showed an especial skill judgement bias towards judging balls 'in' at the foul line, but not at other distances. Importantly, this bias was only present in those who judged the success of their own shots, but not in those who judged the shots performed by someone else.
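    The SDT quantities named above follow directly from the hit and false-alarm rates. A sketch using the common log-linear correction (that correction is an assumption here, not necessarily the study's exact procedure):

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal detection theory: d' = z(H) - z(F), criterion c = -(z(H) + z(F))/2.
    The +0.5 log-linear correction keeps the z-scores finite when a hit or
    false-alarm rate would otherwise be exactly 0 or 1."""
    H = (hits + 0.5) / (hits + misses + 1.0)
    F = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(H) - z(F), -(z(H) + z(F)) / 2.0
```

A negative c corresponds to a liberal bias toward responding "in", which is the direction of the judgement bias the study reports at the foul line for self-judged shots.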

  11. Estimating the brain pathological age of Alzheimer’s disease patients from MR image data based on the separability distance criterion

    NASA Astrophysics Data System (ADS)

    Li, Yongming; Li, Fan; Wang, Pin; Zhu, Xueru; Liu, Shujun; Qiu, Mingguo; Zhang, Jingna; Zeng, Xiaoping

    2016-10-01

Traditional age estimation methods share the same idea of using the real age as the training label. However, these methods ignore that there is a deviation between the real age and the brain age due to accelerated brain aging. This paper accounts for this deviation and searches for it by maximizing the separability distance value rather than by minimizing the difference between the estimated brain age and the real age. First, the search range of the deviation is set as the set of deviation candidates according to prior knowledge. Second, support vector regression (SVR) is used as the age estimation model, minimizing the difference between the estimated age and the real age plus the deviation rather than the real age itself. Third, the fitness function is designed based on the separability distance criterion. Fourth, age estimation is conducted on the validation dataset using the trained age estimation model, and the estimated ages are fed into the fitness function to obtain the fitness value of the deviation candidate. Fifth, the process is repeated until all the deviation candidates have been evaluated, and the optimal deviation is the one with the maximum fitness value. The real age plus the optimal deviation is taken as the brain pathological age. The experimental results showed that the separability was appreciably improved. For normal control-Alzheimer’s disease (NC-AD), normal control-mild cognitive impairment (NC-MCI), and MCI-AD, the average improvements were 0.178 (35.11%), 0.033 (14.47%), and 0.017 (39.53%), respectively. For NC-MCI-AD, the average improvement was 0.2287 (64.22%). The estimated brain pathological age is not only more helpful for the classification of AD but also reflects accelerated brain aging more precisely. In conclusion, this paper offers a new method for brain age estimation that can distinguish different states of AD and better reflect the extent of accelerated aging.
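    The search over deviation candidates can be condensed into a toy sketch. Everything below is illustrative, not the paper's pipeline: ordinary least squares stands in for the SVR, a single scalar "image feature" stands in for the MR data, goodness of fit stands in for the separability-based fitness, and the +8-year offset is an invented ground truth that the search must recover:

    ```python
    import random

    random.seed(7)

    # Synthetic cohort: feature tracks brain age, with AD brains aged 8 years
    # beyond their chronological age (the hidden deviation).
    ages = list(range(60, 80))
    nc = [(a, 0.5 * a + random.gauss(0, 0.3)) for a in ages]
    ad = [(a, 0.5 * (a + 8) + random.gauss(0, 0.3)) for a in ages]

    def fit_error(dev):
        """Shift AD labels by a candidate deviation, fit one line, return SSE."""
        pts = [(f, a) for a, f in nc] + [(f, a + dev) for a, f in ad]
        n = len(pts)
        mx = sum(x for x, _ in pts) / n
        my = sum(y for _, y in pts) / n
        b = sum((x - mx) * (y - my) for x, y in pts) / sum((x - mx) ** 2 for x, _ in pts)
        return sum((y - (my + b * (x - mx))) ** 2 for x, y in pts)

    candidates = range(0, 17, 2)              # prior search range of the deviation
    best_dev = min(candidates, key=fit_error)
    print("recovered deviation:", best_dev)
    ```

    The candidate that best reconciles one global model with both groups is the hidden offset, mirroring the idea that the optimal deviation maximizes the fitness over the candidate grid.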

  12. Overcoming Species Boundaries in Peptide Identification with Bayesian Information Criterion-driven Error-tolerant Peptide Search (BICEPS)*

    PubMed Central

    Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno

    2012-01-01

    Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179
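    The information criterion at the heart of BICEPS trades fit quality against model complexity (BIC = k·ln n − 2·ln L, lower is better). The sketch below is generic, with invented numbers, and is not the BICEPS scoring itself: an error-tolerant match that adds modification parameters must improve the likelihood enough to pay its complexity penalty:

    ```python
    import math

    def bic(log_likelihood, n_params, n_obs):
        """Bayesian information criterion; lower values are preferred."""
        return n_params * math.log(n_obs) - 2.0 * log_likelihood

    n_peaks = 50   # hypothetical number of matched spectrum peaks
    plain = bic(log_likelihood=-120.0, n_params=2, n_obs=n_peaks)
    modified = bic(log_likelihood=-115.0, n_params=5, n_obs=n_peaks)  # error-tolerant variant

    # The modified match fits better but carries three extra parameters;
    # here the penalty outweighs the gain, so the plain match is kept.
    print(plain < modified)
    ```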

  13. 29 CFR 29.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the Administrator. Apprentice means a worker at least 16 years of age, except where a higher minimum age standard is otherwise fixed by law, who is employed to learn an apprenticeable occupation as... physical movement of removable/transportable electronic media and/or interactive distance learning...

  14. Tandem steerable running gear

    NASA Technical Reports Server (NTRS)

    Fincannon, O. J.; Glenn, D. L.

    1972-01-01

    Characteristics of steering assembly for vehicle designed to move large components of space flight vehicles are presented. Design makes it possible to move heavy and bulky items through narrow passageways with tight turns. Typical configuration is illustrated to show dimensions of turning radius and minimum distances involved.

  15. Simulation of Collision of Arbitrary Shape Particles with Wall in a Viscous Fluid

    NASA Astrophysics Data System (ADS)

    Mohaghegh, Fazlolah; Udaykumar, H. S.

    2016-11-01

Collision of finite-size, arbitrarily shaped particles with a wall in a viscous flow is modeled using the immersed boundary method. A potential function indicating the distance from the interface is introduced for the particles and the wall. The potential can be defined either by an analytical expression or by the level set method. The collision starts when the indicator potentials of the particle and wall overlap beyond a minimum cutoff. A simplified mass-spring model is used to apply the collision forces. Instead of using a dashpot to damp the energy, the spring stiffness is adjusted during the bounce. The results for the collision of a falling sphere with the bottom wall agree well with experiments. Moreover, it is shown that the results are independent of the minimum collision cutoff distance. Finally, when the particle is ellipsoidal, its rotation after the collision becomes important and noticeable: at low Stokes numbers, the particle almost adheres to the wall on one side and rotates until it reaches the minimum gravitational potential. At high Stokes numbers, the particle bounces and loses energy until it reaches a low-Stokes-number situation.
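    A one-dimensional sketch of the penalty-spring idea described above (illustrative constants, unit particle mass, no fluid; not the paper's immersed-boundary implementation): when the particle-wall gap drops below the cutoff, a repulsive spring acts, and making the spring stiffer during compression than during rebound dissipates energy in place of a dashpot:

    ```python
    dt, g = 1e-5, 9.81          # time step (s), gravity (m/s^2)
    cutoff = 0.01               # minimum cutoff distance that triggers the collision (m)
    k_in, k_out = 5e4, 2e4      # spring stiffness: compression vs. rebound (assumed)
    y, v = 0.5, 0.0             # height above the wall (m) and velocity (m/s)

    impact_speed = rebound_speed = None
    while rebound_speed is None:
        gap = y - cutoff
        if gap < 0:                           # inside the collision zone
            if impact_speed is None:
                impact_speed = -v             # speed at first contact
            k = k_in if v < 0 else k_out      # stiffness adjusted during the bounce
            a = -g - gap * k                  # repulsive penalty force on a unit mass
        else:
            a = -g
            if impact_speed is not None and v > 0:
                rebound_speed = v             # speed on leaving the collision zone
        v += a * dt                           # semi-implicit Euler update
        y += v * dt

    print(round(impact_speed, 2), round(rebound_speed, 2))
    ```

    The rebound speed is lower than the impact speed by roughly the factor sqrt(k_out / k_in), which is how the asymmetric stiffness mimics an effective coefficient of restitution.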

  16. Occurrence and distribution of fecal indicator bacteria, and physical and chemical indicators of water quality in streams receiving discharge from Dallas/Fort Worth International Airport and vicinity, North-Central Texas, 2008

    USGS Publications Warehouse

    Harwell, Glenn R.; Mobley, Craig A.

    2009-01-01

    This report, done by the U.S. Geological Survey in cooperation with Dallas/Fort Worth International (DFW) Airport in 2008, describes the occurrence and distribution of fecal indicator bacteria (fecal coliform and Escherichia [E.] coli), and the physical and chemical indicators of water quality (relative to Texas Surface Water Quality Standards), in streams receiving discharge from DFW Airport and vicinity. At sampling sites in the lower West Fork Trinity River watershed during low-flow conditions, geometric mean E. coli counts for five of the eight West Fork Trinity River watershed sampling sites exceeded the Texas Commission on Environmental Quality E. coli criterion, thus not fully supporting contact recreation. Two of the five sites with geometric means that exceeded the contact recreation criterion are airport discharge sites, which here means that the major fraction of discharge at those sites is from DFW Airport. At sampling sites in the Elm Fork Trinity River watershed during low-flow conditions, geometric mean E. coli counts exceeded the geometric mean contact recreation criterion for seven (four airport, three non-airport) of 13 sampling sites. Under low-flow conditions in the lower West Fork Trinity River watershed, E. coli counts for airport discharge sites were significantly different from (lower than) E. coli counts for non-airport sites. Under low-flow conditions in the Elm Fork Trinity River watershed, there was no significant difference between E. coli counts for airport sites and non-airport sites. During stormflow conditions, fecal indicator bacteria counts at the most downstream (integrator) sites in each watershed were considerably higher than counts at those two sites during low-flow conditions. 
When stormflow sample counts are included with low-flow sample counts to compute a geometric mean for each site, the classification at those sites changes from fully supporting to not fully supporting contact recreation on the basis of the geometric mean contact recreation criterion. All water temperature measurements at sampling sites in the lower West Fork Trinity River watershed were less than the maximum criterion for water temperature for the lower West Fork Trinity segment. Of the measurements at sampling sites in the Elm Fork Trinity River watershed, 95 percent were less than the maximum criterion for water temperature for the Elm Fork Trinity River segment. All dissolved oxygen concentrations were greater than the minimum criterion for stream segments classified as exceptional aquatic life use. Nearly all pH measurements were within the pH criterion range for the classified segments in both watersheds, except for those at one airport site. For sampling sites in the lower West Fork Trinity River watershed, all annual average dissolved solids concentrations were less than the maximum criterion for the lower West Fork Trinity segment. For sampling sites in the Elm Fork Trinity River, nine of the 13 sites (six airport, three non-airport) had annual averages that exceeded the maximum criterion for that segment. Twenty-three samples from 12 different sites had concentrations that exceeded the screening level for ammonia. Of these 12 sites, only one non-airport site had more than the required number of exceedances to indicate a screening level concern. Stormflow total suspended solids concentrations were significantly higher than low-flow concentrations at the two integrator sites. For sampling sites in the lower West Fork Trinity River watershed, all annual average chloride concentrations were less than the maximum annual average chloride concentration criterion for that segment.
For the 13 sampling sites in the Elm Fork Trinity River watershed, one non-airport site had an annual average concentration that exceeded the maximum annual average chloride concentration criterion for that segment.
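    The geometric-mean comparison behind these classifications is simple to reproduce. The counts and the 126 cfu/100 mL criterion value below are illustrative assumptions, not data from the report; the point is how a few high stormflow samples can flip a site's classification:

    ```python
    from math import exp, log

    def geometric_mean(counts):
        """Geometric mean of bacteria counts (cfu/100 mL)."""
        return exp(sum(log(c) for c in counts) / len(counts))

    CRITERION = 126.0                       # assumed E. coli geometric-mean criterion

    low_flow = [45, 80, 150, 60, 95]        # hypothetical low-flow E. coli counts
    stormflow = [2400, 5600]                # hypothetical stormflow counts

    gm_low = geometric_mean(low_flow)
    gm_all = geometric_mean(low_flow + stormflow)
    print(round(gm_low), round(gm_all))     # only the combined mean exceeds the criterion
    ```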

  17. Fuzzy scalar and vector median filters based on fuzzy distances.

    PubMed

    Chatzis, V; Pitas, I

    1999-01-01

In this paper, the fuzzy scalar median (FSM) is proposed, defined by ordering fuzzy numbers by means of fuzzy minimum and maximum operations obtained through the extension principle. Alternatively, the FSM is defined from the minimization of a fuzzy distance measure, and the equivalence of the two definitions is proven. Then, the fuzzy vector median (FVM) is proposed as an extension of the vector median, based on a novel distance definition for fuzzy vectors that satisfies the property of angle decomposition. By properly defining the fuzziness of a value, the basic properties of the classical scalar median and vector median (VM) filters can be combined with other desirable characteristics.
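    For crisp (non-fuzzy) numbers, the distance-based definition reduces to a familiar fact: the scalar median minimizes the total absolute distance to all samples. A quick check of that crisp special case (not the fuzzy construction itself):

    ```python
    import statistics

    data = [3, 1, 4, 1, 5, 9, 2, 6]

    def total_distance(m, xs):
        """Sum of absolute distances from candidate m to every sample."""
        return sum(abs(x - m) for x in xs)

    # Restricting candidates to the samples themselves, the minimizer of the
    # total absolute distance coincides with a sample median.
    minimizer = min(data, key=lambda m: total_distance(m, data))
    print(minimizer, statistics.median_low(data))
    ```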

  18. Edit distance for marked point processes revisited: An implementation by binary integer programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, Yoshito; Aihara, Kazuyuki

    2015-12-15

We implement the edit distance for marked point processes [Suzuki et al., Int. J. Bifurcation Chaos 20, 3699–3708 (2010)] as a binary integer program. Compared with the previous implementation using minimum cost perfect matching, the proposed implementation has two advantages: first, we can apply a wide variety of software and hardware, even spin glasses and coherent Ising machines, to calculate the edit distance for marked point processes; second, the proposed implementation runs faster than the previous one when the difference between the numbers of events in two time windows for a marked point process is large.
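    The paper's integer-program formulation is not reproduced here, but the flavor of an edit distance between point processes can be shown with a classic Victor-Purpura-style metric, computable by dynamic programming: deleting or inserting an event costs 1, and shifting an event by dt costs q·|dt|:

    ```python
    def edit_distance(a, b, q=1.0):
        """Victor-Purpura-style edit distance between two sorted event-time lists."""
        n, m = len(a), len(b)
        D = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            D[i][0] = float(i)                    # delete all events of a
        for j in range(1, m + 1):
            D[0][j] = float(j)                    # insert all events of b
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i][j] = min(D[i - 1][j] + 1,                              # delete
                              D[i][j - 1] + 1,                              # insert
                              D[i - 1][j - 1] + q * abs(a[i - 1] - b[j - 1]))  # shift
        return D[n][m]

    # Shift 1.0 -> 1.1 (cost 0.1), keep 2.0 (cost 0), delete 3.0 (cost 1).
    print(edit_distance([1.0, 2.0, 3.0], [1.1, 2.0]))
    ```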

  19. How far and how fast can mushroom spores fly? Physical limits on ballistospore size and discharge distance in the Basidiomycota

    PubMed Central

    Fischer, Mark W. F.; Stolze-Rybczynski, Jessica L.; Cui, Yunluan; Money, Nicholas P.

    2010-01-01

    Active discharge of basidiospores in most species of Basidiomycota is powered by the rapid movement of a droplet of fluid, called Buller’s drop, over the spore surface. This paper is concerned with the operation of the launch mechanism in species with the largest and smallest ballistospores. Aleurodiscus gigasporus (Russulales) produces the largest basidiospores on record. The maximum dimensions of the spores, 34 × 28 µm, correspond to a volume of 14 pL and to an estimated mass of 17 ng. The smallest recorded basidiospores are produced by Hyphodontia latitans (Hymenochaetales). Minimum spore dimensions in this species, 3.5 × 0.5 µm, correspond to a volume of 0.5 fL and mass of 0.6 pg. Neither species has been studied using high-speed video microscopy, but this technique was used to examine ballistospore discharge in species with spores of similar sizes (slightly smaller than A. gigasporus and slightly larger than those of H. latitans). Extrapolation of velocity measurements from these fungi provided estimates of discharge distances ranging from a maximum of almost 2 mm in A. gigasporus to a minimum of 4 µm in H. latitans. These are, respectively, the longest and shortest predicted discharge distances for ballistospores. Limitations to the distances traveled by basidiospores are discussed in relation to the mechanics of the discharge process and the types of fruit-bodies from which the spores are released. PMID:20835365

  20. Using traveling salesman problem algorithms for evolutionary tree construction.

    PubMed

    Korostensky, C; Gonnet, G H

    2000-07-01

The construction of evolutionary trees is one of the major problems in computational biology, mainly due to its complexity. We present a new tree construction method that constructs a tree with minimum score for a given set of sequences, where the score is the amount of evolution measured in PAM distances. To do this, the problem of tree construction is reduced to the Traveling Salesman Problem (TSP). The input for the TSP algorithm is the set of pairwise distances of the sequences, and the output is a circular tour through the optimal, unknown tree plus the minimum score of the tree. The circular order and the score can be used to construct the topology of the optimal tree. Our method can be used for any scoring function that correlates with the amount of change along the branches of an evolutionary tree; for instance, it could also be used for parsimony scores, but it cannot be used for a least squares fit of distances. A TSP solution reduces the space of all possible trees to 2n. Using this order, we can guarantee reconstruction of a correct evolutionary tree if the absolute value of the error for each distance measurement is smaller than half the length of the shortest edge in the tree. For data sets with large errors, a dynamic programming approach is used to reconstruct the tree. Finally, simulations and experiments with real data are shown.
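    The reduction can be illustrated on a toy example. The four-taxon distances below are invented but additive on the tree ((A,B),(C,D)), and brute force replaces a real TSP solver, which is fine at this size. The shortest circular tour visits the leaves in an order compatible with the tree, and its length is twice the total branch length:

    ```python
    from itertools import permutations

    names = ["A", "B", "C", "D"]
    # Hypothetical PAM-style distances, additive on the tree ((A,B),(C,D))
    # with branch lengths A=1, B=1, C=2, D=1 and internal edge 2 (total 7).
    d = {("A", "B"): 2, ("A", "C"): 5, ("A", "D"): 4,
         ("B", "C"): 5, ("B", "D"): 4, ("C", "D"): 3}

    def dist(x, y):
        return d.get((x, y)) or d.get((y, x))

    def tour_length(order):
        return sum(dist(order[i], order[(i + 1) % len(order)]) for i in range(len(order)))

    # Fix the first taxon to remove rotational symmetry, then brute-force the rest.
    best = min(permutations(names[1:]), key=lambda p: tour_length(("A",) + p))
    print(("A",) + best, tour_length(("A",) + best))
    ```

    The optimal tour length is 2 × 7 = 14, and its circular order keeps the sibling pairs {A,B} and {C,D} adjacent, which is the information the tree-building step exploits.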

  1. Experimental realization of spatially separated entanglement with continuous variables using laser pulse trains

    PubMed Central

    Zhang, Yun; Okubo, Ryuhi; Hirano, Mayumi; Eto, Yujiro; Hirano, Takuya

    2015-01-01

Spatially separated entanglement is demonstrated by interfering two high-repetition-rate squeezed pulse trains. The entanglement correlation of the quadrature amplitudes between individual pulses is interrogated. It is characterized in terms of the sufficient inseparability criterion, with optimal results obtained in both the frequency domain and the time domain. The quantum correlation is also observed when the two measurement stations are separated by a physical distance of 4.5 m, which is sufficiently large to demonstrate space-like separation after accounting for the measurement time. PMID:26278478

  2. Planning Training Workload in Football Using Small-Sided Games' Density.

    PubMed

    Sangnier, Sebastien; Cotte, Thierry; Brachet, Olivier; Coquart, Jeremy; Tourny, Claire

    2018-05-08

Sangnier, S, Cotte, T, Brachet, O, Coquart, J, and Tourny, C. Planning training workload in football using small-sided games' density. J Strength Cond Res XX(X): 000-000, 2018-The density of small-sided games (SSGs) may be essential for developing physical qualities in soccer. Small-sided games are games in which the pitch size, number of players, and rules differ from those of traditional soccer matches. The purpose was to assess the relation between training workload and SSGs' density. The 33 densities (41 practice games and 3 full games) were analyzed through global positioning system (GPS) data collected from 25 professional soccer players (80.7 ± 7.0 kg; 1.83 ± 0.05 m; 26.4 ± 4.9 years). The GPS data (total distance, metabolic power distance, sprint distance, and acceleration distance) were divided into 4 categories: endurance, power, speed, and strength. Statistical analysis compared the relation between GPS values and SSGs' densities, and 3 methods were applied to assess the models (R-squared, root-mean-square error, and Akaike information criterion). The results suggest that all the GPS data match the players' essential athletic skills. They were all correlated with the game's density. Acceleration distance, deceleration distance, metabolic power, and total distance followed a logarithmic regression model, whereas sprint distance and number of sprints followed a linear regression model. The research reveals options for monitoring the training workload. Coaches could anticipate the load resulting from the SSGs and adjust the field size to the number of players. Taking the field size into account during SSGs enables coaches to target the most favorable density for developing the expected physical qualities. Calibrating intensity during SSGs would allow coaches to assess each athletic skill under the same intensity conditions as in competition.
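    The model comparison described above can be sketched in a few lines. The density/distance pairs are invented (generated from a logarithmic trend plus a small alternating perturbation), and the Akaike information criterion is used in its common least-squares form, n·ln(SSE/n) + 2k; the criterion then prefers the logarithmic model over the linear one, as reported for total distance:

    ```python
    import math

    dens = [50, 100, 150, 200, 250, 300]                # hypothetical SSG densities
    dist = [400 + 120 * math.log(x) + 2 * (-1) ** i     # log trend + perturbation
            for i, x in enumerate(dens)]

    def sse_of_linear_fit(xs, ys):
        """Sum of squared errors of the least-squares line through (xs, ys)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
        return sum((y - (my + b * (x - mx))) ** 2 for x, y in zip(xs, ys))

    def aic(sse, n, k=2):
        """Least-squares form of the Akaike information criterion (lower is better)."""
        return n * math.log(sse / n) + 2 * k

    n = len(dens)
    aic_linear = aic(sse_of_linear_fit(dens, dist), n)
    aic_log = aic(sse_of_linear_fit([math.log(x) for x in dens], dist), n)
    print(aic_log < aic_linear)
    ```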

  3. Study of a New CPM Pair 2Mass 14515781-1619034

    NASA Astrophysics Data System (ADS)

    Falcon, Israel Tejera

    2013-04-01

In this paper I present the results of a study of 2Mass 14515781-1619034 as components of a common proper motion pair. Because the PPMXL catalog's proper motion data do not provide any information about the secondary star, I deduced its proper motion independently, obtaining similar proper motions for both components. Halbwachs' criteria indicate that this is a CPM system. The criterion of Francisco Rica, which is based on the compatibility of the kinematic function of the equatorial coordinates, indicates that this pair has a 99% probability of being a physical one (Rica, 2007). Other important criteria (Dommanget, 1956; Peter Van De Kamp, 1961; Sinachopoulus, 1992; Close, 2003) also indicate a physical system. With the absolute visual magnitudes of both components, I obtained distance moduli of 7.29 and 7.59, which put the components of the system at distances of 287.1 and 329.6 parsecs. Taking into account the errors in determining the magnitudes, the probability that both components are situated at the same distance is 96%. I suggest that this pair be included in the WDS catalog.
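    The step from distance modulus to parsecs uses d = 10^((μ + 5) / 5). A quick check that reproduces the two distances quoted above:

    ```python
    def distance_parsecs(mu):
        """Invert the distance modulus mu = 5 * log10(d) - 5 to get d in parsecs."""
        return 10 ** ((mu + 5) / 5)

    print(round(distance_parsecs(7.29), 1))   # primary component
    print(round(distance_parsecs(7.59), 1))   # secondary component
    ```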

  4. Do Formal Inspections Ensure that British Zoos Meet and Improve on Minimum Animal Welfare Standards?

    PubMed

    Draper, Chris; Browne, William; Harris, Stephen

    2013-11-08

    We analysed two consecutive inspection reports for each of 136 British zoos made by government-appointed inspectors between 2005 and 2011 to assess how well British zoos were complying with minimum animal welfare standards; median interval between inspections was 1,107 days. There was no conclusive evidence for overall improvements in the levels of compliance by British zoos. Having the same zoo inspector at both inspections affected the outcome of an inspection; animal welfare criteria were more likely to be assessed as unchanged if the same inspector was present on both inspections. This, and erratic decisions as to whether a criterion applied to a particular zoo, suggest inconsistency in assessments between inspectors. Zoos that were members of a professional association (BIAZA) did not differ significantly from non-members in the overall number of criteria assessed as substandard at the second inspection but were more likely to meet the standards on both inspections and less likely to have criteria remaining substandard. Lack of consistency between inspectors, and the high proportion of zoos failing to meet minimum animal welfare standards nearly thirty years after the Zoo Licensing Act came into force, suggest that the current system of licensing and inspection is not meeting key objectives and requires revision.

  5. Separation Potential for Multicomponent Mixtures: State-of-the Art of the Problem

    NASA Astrophysics Data System (ADS)

    Sulaberidze, G. A.; Borisevich, V. D.; Smirnov, A. Yu.

    2017-03-01

    Various approaches used in introducing a separation potential (value function) for multicomponent mixtures have been analyzed. It has been shown that all known potentials do not satisfy the Dirac-Peierls axioms for a binary mixture of uranium isotopes, which makes their practical application difficult. This is mainly due to the impossibility of constructing a "standard" cascade, whose role in the case of separation of binary mixtures is played by the ideal cascade. As a result, the only universal search method for optimal parameters of the separation cascade is their numerical optimization by the criterion of the minimum number of separation elements in it.

  6. Predicting propagation limits of laser-supported detonation by Hugoniot analysis

    NASA Astrophysics Data System (ADS)

    Shimamura, Kohei; Ofosu, Joseph A.; Komurasaki, Kimiya; Koizumi, Hiroyuki

    2015-01-01

    Termination conditions of a laser-supported detonation (LSD) wave were investigated using control volume analysis with a Shimada-Hugoniot curve and a Rayleigh line. Because the geometric configurations strongly affect the termination condition, a rectangular tube was used to create the quasi-one-dimensional configuration. The LSD wave propagation velocity and the pressure behind LSD were measured. Results reveal that the detonation states during detonation and at the propagation limit are overdriven detonation and Chapman-Jouguet detonation, respectively. The termination condition is the minimum velocity criterion for the possible detonation solution. Results were verified using pressure measurements of the stagnation pressure behind the LSD wave.

  7. Did the American Academy of Orthopaedic Surgeons osteoarthritis guidelines miss the mark?

    PubMed

    Bannuru, Raveendhara R; Vaysbrot, Elizaveta E; McIntyre, Louis F

    2014-01-01

    The American Academy of Orthopaedic Surgeons (AAOS) 2013 guidelines for knee osteoarthritis recommended against the use of viscosupplementation for failing to meet the criterion of minimum clinically important improvement (MCII). However, the AAOS's methodology contained numerous flaws in obtaining, displaying, and interpreting MCII-based results. The current state of research on MCII allows it to be used only as a supplementary instrument, not a basis for clinical decision making. The AAOS guidelines should reflect this consideration in their recommendations to avoid condemning potentially viable treatments in the context of limited available alternatives. Copyright © 2014 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  8. Online clustering algorithms for radar emitter classification.

    PubMed

Liu, Jun; Lee, Jim P Y; Li, Lingjie; Luo, Zhi-Quan; Wong, K Max

    2005-08-01

    Radar emitter classification is a special application of data clustering for classifying unknown radar emitters from received radar pulse samples. The main challenges of this task are the high dimensionality of radar pulse samples, small sample group size, and closely located radar pulse clusters. In this paper, two new online clustering algorithms are developed for radar emitter classification: One is model-based using the Minimum Description Length (MDL) criterion and the other is based on competitive learning. Computational complexity is analyzed for each algorithm and then compared. Simulation results show the superior performance of the model-based algorithm over competitive learning in terms of better classification accuracy, flexibility, and stability.
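    The competitive-learning alternative is easy to sketch in one dimension. The data below are invented (two well-separated "pulse parameter" clusters, far simpler than the high-dimensional, closely spaced clusters the paper addresses); each arriving sample moves the nearest prototype toward it, the winner-take-all update used in basic online competitive learning:

    ```python
    import random

    random.seed(5)

    prototypes = [0.0, 1.0]       # naive initialization of two cluster prototypes
    rate = 0.1                    # learning rate

    for _ in range(2000):         # pulses arrive one at a time (online setting)
        x = random.gauss(0, 1) if random.random() < 0.5 else random.gauss(10, 1)
        winner = min(range(len(prototypes)), key=lambda i: abs(x - prototypes[i]))
        prototypes[winner] += rate * (x - prototypes[winner])   # move winner only

    print(sorted(round(p, 1) for p in prototypes))   # ends near the true centers 0 and 10
    ```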

  9. 77 FR 75946 - Radio Broadcasting Services; Dove Creek, CO

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-26

    ...]. Radio Broadcasting Services; Dove Creek, CO AGENCY: Federal Communications Commission. ACTION: Proposed... service at Dove Creek, Colorado. Channel 229C3 can be allotted at Dove Creek, Colorado, in compliance with the Commission's minimum distance separation requirements, at the proposed reference coordinates: 37...

  10. 40 CFR 258.55 - Assessment monitoring program.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... upgradient edge of the MSWLF unit and downgradient monitoring well screen (minimum distance of travel); (5... effects during a lifetime. For purposes of this subpart, systemic toxicants include toxic chemicals that cause effects other than cancer or mutation. (ii) [Reserved] (j) In establishing ground-water protection...

  11. 40 CFR 258.55 - Assessment monitoring program.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... upgradient edge of the MSWLF unit and downgradient monitoring well screen (minimum distance of travel); (5... effects during a lifetime. For purposes of this subpart, systemic toxicants include toxic chemicals that cause effects other than cancer or mutation. (ii) [Reserved] (j) In establishing ground-water protection...

  12. 40 CFR 258.55 - Assessment monitoring program.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... upgradient edge of the MSWLF unit and downgradient monitoring well screen (minimum distance of travel); (5... effects during a lifetime. For purposes of this subpart, systemic toxicants include toxic chemicals that cause effects other than cancer or mutation. (ii) [Reserved] (j) In establishing ground-water protection...

  13. Rate-Compatible LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either a fixed input block size or a fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes; these are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of the affected communication systems. An example of such a system is one that conforms to one of the many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.

  14. Study of hopping type conduction from AC conductivity in multiferroic composite

    NASA Astrophysics Data System (ADS)

    Pandey, Rabichandra; Guha, Shampa; Pradhan, Lagen Kumar; Kumar, Sunil; Supriya, Sweety; Kar, Manoranjan

    2018-05-01

0.5BiFe0.80Ti0.20O3-0.5Co0.5Ni0.5Fe2O4 (BFTO-CNFO) multiferroic composite was prepared by the planetary ball mill method. X-ray diffraction analysis confirms the formation of the compound with the simultaneous presence of the spinel Co0.5Ni0.5Fe2O4 (CNFO) and perovskite BiFe0.80Ti0.20O3 (BFTO) phases. Temperature-dependent dielectric permittivity and loss tangent were studied over the frequency range 100 Hz to 1 MHz. An AC conductivity study was performed to analyze the electrical conduction behaviour in the composite. Jonscher's power law was applied to the AC conductivity data to understand the hopping of localized charge carriers in the compound. The binding energy, minimum hopping distance, and density of states of the charge carriers in the composite were evaluated from the AC conductivity data. The minimum hopping distance is found to be on the order of an angstrom (Å).
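    Jonscher's power law models the AC conductivity as σ(ω) = σ_dc + A·ω^n. With the DC plateau known, the exponent is the slope of log(σ − σ_dc) versus log ω. The constants below are invented for illustration, not values from this study:

    ```python
    import math

    # Synthetic Jonscher-law data: sigma(w) = sigma_dc + A * w**n (assumed constants).
    sigma_dc, A, n_true = 1e-6, 1e-9, 0.7
    freqs = [10.0 ** k for k in range(2, 7)]              # 100 Hz .. 1 MHz
    sigma = [sigma_dc + A * w ** n_true for w in freqs]

    # Slope of log(sigma - sigma_dc) against log(w) recovers the exponent n.
    xs = [math.log(w) for w in freqs]
    ys = [math.log(s - sigma_dc) for s in sigma]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    n_est = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    print(round(n_est, 3))
    ```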

  15. Geometric characterization of separability and entanglement in pure Gaussian states by single-mode unitary operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; CNR-INFM Coherentia, Naples; CNISM, Unita di Salerno, Salerno

    2007-10-15

We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed by adapting to continuous variables a formalism based on single-subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1xM bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.

  16. The Simplified Aircraft-Based Paired Approach With the ALAS Alerting Algorithm

    NASA Technical Reports Server (NTRS)

    Perry, Raleigh B.; Madden, Michael M.; Torres-Pomales, Wilfredo; Butler, Ricky W.

    2013-01-01

    This paper presents the results of an investigation of a proposed concept for closely spaced parallel runways called the Simplified Aircraft-based Paired Approach (SAPA). This procedure depends upon a new alerting algorithm called the Adjacent Landing Alerting System (ALAS). This study used both low fidelity and high fidelity simulations to validate the SAPA procedure and test the performance of the new alerting algorithm. The low fidelity simulation enabled a determination of minimum approach distance for the worst case over millions of scenarios. The high fidelity simulation enabled an accurate determination of timings and minimum approach distance in the presence of realistic trajectories, communication latencies, and total system error for 108 test cases. The SAPA procedure and the ALAS alerting algorithm were applied to the 750-ft parallel spacing (e.g., SFO 28L/28R) approach problem. With the SAPA procedure as defined in this paper, this study concludes that a 750-ft application does not appear to be feasible, but preliminary results for 1000-ft parallel runways look promising.

  17. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of the mixing proportions p_1, p_2, ..., p_M in the mixture density f(x) = sum_{i=1}^{M} p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i's usually represent crop proportions. In these remote sensing applications, the component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when the component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When the component distributions are not symmetric, however, neither of these normal-based techniques provides satisfactory results.
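    To make the comparison concrete, the following sketch (illustrative only, not the authors' procedure; it assumes two known normal components and estimates only the proportion) performs MD estimation by minimizing the Cramér-von Mises distance between the empirical CDF and the mixture CDF over a grid of candidate proportions:

    ```python
    # Minimum-distance (MD) estimation of the mixing proportion p in
    # f(x) = p*f1(x) + (1-p)*f2(x), with f1, f2 known normal densities.
    # The objective is the Cramer-von Mises distance between the empirical
    # CDF and the mixture CDF. All parameter values below are hypothetical.
    import math
    import random

    def norm_cdf(x, mu, sigma):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    def md_estimate(sample, comp1, comp2, grid=201):
        """Grid search for the proportion p minimizing the CvM statistic."""
        xs = sorted(sample)
        n = len(xs)
        best_p, best_d = 0.0, float("inf")
        for k in range(grid):
            p = k / (grid - 1)
            # CvM statistic: sum over order statistics of
            # (F_mix(x_(i)) - (2i-1)/(2n))^2
            d = sum((p * norm_cdf(x, *comp1) + (1.0 - p) * norm_cdf(x, *comp2)
                     - (2 * i + 1) / (2 * n)) ** 2
                    for i, x in enumerate(xs))
            if d < best_d:
                best_p, best_d = p, d
        return best_p

    random.seed(1)
    true_p = 0.7
    sample = [random.gauss(0.0, 1.0) if random.random() < true_p
              else random.gauss(4.0, 1.0) for _ in range(500)]
    p_hat = md_estimate(sample, (0.0, 1.0), (4.0, 1.0))
    ```

    In the full problem the component parameters would also be estimated; the grid search is merely the simplest way to show the distance-minimization idea.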

  18. Optical design of microlens array for CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Zhang, Rongzhu; Lai, Liping

    2016-10-01

    Optical crosstalk between pixel units can degrade the image quality of a CMOS image sensor. At the same time, the fill factor (duty ratio) of CMOS pixels is low because of the pixel structure. These two factors cause the low detection sensitivity of CMOS. In order to reduce the optical crosstalk and improve the fill factor of the CMOS image sensor, a microlens array has been designed and integrated with the CMOS. The initial parameters of the microlens array were calculated according to the structure of a CMOS sensor. The parameters were then optimized using ZEMAX, and microlens arrays with different substrate thicknesses were compared. The results show that, to obtain the best imaging quality with minimum optical crosstalk, the best distance between the microlens array and the CMOS is about 19.3 μm: when incident light passes through the microlens array and this distance, a minimum spot size of about 0.347 μm is obtained in the active area. In addition, for incident angles of 0°-22°, the microlens array has an obvious inhibitory effect on the optical crosstalk, and the anti-crosstalk distance between the microlens array and the CMOS ranges from 0 μm to 162 μm.

  19. A holistic framework for design of cost-effective minimum water utilization network.

    PubMed

    Wan Alwi, S R; Manan, Z A; Samingin, M H; Misran, N

    2008-07-01

    Water pinch analysis (WPA) is a well-established tool for the design of a maximum water recovery (MWR) network. MWR, which is primarily concerned with water recovery and regeneration, only partly addresses water minimization problem. Strictly speaking, WPA can only lead to maximum water recovery targets as opposed to the minimum water targets as widely claimed by researchers over the years. The minimum water targets can be achieved when all water minimization options including elimination, reduction, reuse/recycling, outsourcing and regeneration have been holistically applied. Even though WPA has been well established for synthesis of MWR network, research towards holistic water minimization has lagged behind. This paper describes a new holistic framework for designing a cost-effective minimum water network (CEMWN) for industry and urban systems. The framework consists of five key steps, i.e. (1) Specify the limiting water data, (2) Determine MWR targets, (3) Screen process changes using water management hierarchy (WMH), (4) Apply Systematic Hierarchical Approach for Resilient Process Screening (SHARPS) strategy, and (5) Design water network. Three key contributions have emerged from this work. First is a hierarchical approach for systematic screening of process changes guided by the WMH. Second is a set of four new heuristics for implementing process changes that considers the interactions among process changes options as well as among equipment and the implications of applying each process change on utility targets. Third is the SHARPS cost-screening technique to customize process changes and ultimately generate a minimum water utilization network that is cost-effective and affordable. The CEMWN holistic framework has been successfully implemented on semiconductor and mosque case studies and yielded results within the designer payback period criterion.

  20. MINIMUM CORE MASSES FOR GIANT PLANET FORMATION WITH REALISTIC EQUATIONS OF STATE AND OPACITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piso, Ana-Maria A.; Murray-Clay, Ruth A.; Youdin, Andrew N., E-mail: apiso@cfa.harvard.edu

    2015-02-20

    Giant planet formation by core accretion requires a core that is sufficiently massive to trigger runaway gas accretion in less than the typical lifetime of protoplanetary disks. We explore how the minimum required core mass, M_crit, depends on a non-ideal equation of state (EOS) and on opacity changes due to grain growth across a range of stellocentric distances from 5-100 AU. This minimum M_crit applies when planetesimal accretion does not substantially heat the atmosphere. Compared to an ideal gas polytrope, the inclusion of molecular hydrogen (H₂) dissociation and variable occupation of H₂ rotational states increases M_crit. Specifically, M_crit increases by a factor of ∼2 if the H₂ spin isomers, ortho- and parahydrogen, are in thermal equilibrium, and by a factor of ∼2-4 if the ortho-to-para ratio is fixed at 3:1. Lower opacities due to grain growth reduce M_crit. For a standard disk model around a Solar mass star, we calculate M_crit ∼ 8 M_⊕ at 5 AU, decreasing to ∼5 M_⊕ at 100 AU, for a realistic EOS with an equilibrium ortho-to-para ratio and for grain growth to centimeter sizes. If grain coagulation is taken into account, M_crit may be reduced further by up to one order of magnitude. These results for the minimum critical core mass are useful for the interpretation of surveys that find exoplanets at a range of orbital distances.

  1. Packed Planetary Systems

    NASA Astrophysics Data System (ADS)

    Barnes, R.; Greenberg, R.

    2005-08-01

    Planetary systems display a wide range of appearances, with apparently arbitrary values of semi-major axis, eccentricity, etc. We reduce the complexity of orbital configurations to a single value, δ, which is a measure of how close, over secular timescales (˜10,000 orbits), two consecutive planets come to each other. We measure this distance relative to the sum of the radii of their Hill spheres, sometimes referred to as mutual Hill radii (MHR). We determine the closest approach distance by numerically integrating the entire system on coplanar orbits, using minimum masses. For non-resonant systems, close approach occurs during apsidal alignment, either parallel or anti-parallel. For resonant pairs, the distance at conjunction determines the closest approach distance. Previous analytic work found that planets on circular orbits were assuredly unstable if they came within 3.5 MHR (e.g., Gladman 1993; Chambers, Wetherill & Boss 1996). We find that most known pairs of jovian planets (including those in our solar system) come within 3.5-7 MHR of each other. We also find that several systems are unstable (their closest approach distance is less than 3.5 MHR). These systems, if they are real, probably exist in an observationally permitted location somewhat different from the current best fit; in these cases, the planets' closest approach distance will most likely also be slightly larger than 3.5 MHR. Most pairs beyond 7 MHR probably experienced post-formation migration (e.g., tidal circularization, inward scattering of small bodies) that moved them further apart. This result is even more remarkable since we have used the minimum masses; most likely the systems are inclined to the line of sight, making the Hill spheres larger and shrinking δ. This dense packing may reflect a tendency for planets to form as close together as they can without being dynamically unstable. This result further implies there may be a large number of smaller, currently undetectable companions packed in orbits around stars with known planets.
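    The δ statistic above can be sketched under strong simplifying assumptions (circular, coplanar orbits, so the closest approach is just the difference of semi-major axes; the actual study integrates the full secular dynamics). The mutual Hill radius formula used here, R_H = [(m1+m2)/(3 M_*)]^(1/3) (a1+a2)/2, is the standard definition and is assumed, not taken from the abstract:

    ```python
    # Separation of a planet pair in mutual Hill radii (MHR), assuming circular,
    # coplanar orbits so that the closest approach is simply a2 - a1.
    # Masses in solar masses, semi-major axes in AU.
    def separation_in_mhr(a1, a2, m1, m2, m_star=1.0):
        r_mutual_hill = ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0) * (a1 + a2) / 2.0
        return (a2 - a1) / r_mutual_hill

    # Jupiter and Saturn (approximate values): the pair lands just outside the
    # 3.5-7 MHR band where most known jovian pairs fall.
    delta = separation_in_mhr(5.20, 9.54, 9.55e-4, 2.86e-4)  # ~7.9
    ```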

  2. Covalence of atoms in the heavier transition metals*

    PubMed Central

    Pauling, Linus

    1977-01-01

    The observed magnetic properties of the heavier transition metals permit them to have larger metallic valences than their iron-group congeners. With 0.72 metallic orbital, as found for the iron-group metals, the maximum metallic valence and minimum interatomic distance would occur for 8.28 transargononic electrons. The curves of observed interatomic distances for the close-packed metals of the second and third long periods have minima at this point, supporting the assignment of high valences to these metals. Values of the single-bond radii corresponding to these valences are calculated. PMID:16592407

  3. Designing the optimal shutter sequences for the flutter shutter imaging method

    NASA Astrophysics Data System (ADS)

    Jelinek, Jan

    2010-04-01

    Acquiring iris or face images of moving subjects at larger distances using a flash to prevent motion blur quickly runs into eye safety concerns as the acquisition distance is increased. For that reason, the flutter shutter method recently proposed by Raskar et al. has generated considerable interest in the biometrics community. This paper concerns the design of shutter sequences that produce the best images. The number of possible sequences grows exponentially in both the subject's motion velocity and the desired exposure value, with the majority of them being useless. Because the exact solution leads to an intractable mixed integer programming problem, we propose an approximate solution based on pre-screening the sequences according to the distribution of roots in their Fourier transform. A very fast algorithm utilizing Jury's criterion allows the testing to be done without explicitly computing the roots, making the approach practical for moderately long sequences.

  4. Determination of Fracture Parameters for Multiple Cracks of Laminated Composite Finite Plate

    NASA Astrophysics Data System (ADS)

    Srivastava, Amit Kumar; Arora, P. K.; Srivastava, Sharad Chandra; Kumar, Harish; Lohumi, M. K.

    2018-04-01

    A predictive method for estimating the stress state at the crack tip zone and assessing the remaining component lifetime depends on the stress intensity factor (SIF). This paper discusses a numerical approach for predicting the first ply failure load (FL), progressive failure load, SIF and critical SIF for multiple crack configurations of a laminated composite finite plate using the finite element method (FEM). The Hashin and Chang failure criteria are incorporated in ABAQUS using a user-defined field variable (USDFLD) subroutine approach for prediction of the progressive fracture response of the laminated composite finite plate, which is not directly available in the software. A tensile experiment on a laminated composite finite plate with stress concentration was performed to validate the numerically predicted subroutine results, showing excellent agreement. Typical results are presented to examine the effect of changing the crack tip distance (S), crack offset distance (H), and stacking fiber angle (θ) on FL and SIF.

  5. An Experimental Study of Incremental Surface Loading of an Elastic Plate: Application to Volcano Tectonics

    NASA Technical Reports Server (NTRS)

    Williams, K. K.; Zuber, M. T.

    1995-01-01

    Models of surface fractures due to volcanic loading of an elastic plate are commonly used to constrain the thickness of planetary lithospheres, but discrepancies exist in predictions of the style of initial failure and in the nature of subsequent fracture evolution. In this study, we perform an experiment to determine the mode of initial failure due to the incremental addition of a conical load to the surface of an elastic plate and compare the location of initial failure with that predicted by elastic theory. In all experiments, the mode of initial failure was tension cracking at the surface of the plate, with cracks oriented circumferentially to the load. The cracks nucleated at a distance from the load center that corresponds to the maximum radial stress predicted by analytical solutions, so a tensile failure criterion is appropriate for predictions of initial failure. With continued loading of the plate, migration of tensional cracks was observed: in the same azimuthal direction as the initial crack, subsequent cracks formed at a smaller radial distance than the initial crack, whereas in a different azimuthal direction, subsequent cracks formed at a distance greater than that of the initial crack. The observed fracture pattern may explain the distribution of extensional structures in annular bands around many large-scale, circular volcanic features.

  6. Automated Statistical Forecast Method to 36-48H ahead of Storm Wind and Dangerous Precipitation at the Mediterranean Region

    NASA Astrophysics Data System (ADS)

    Perekhodtseva, E. V.

    2009-09-01

    Development of a successful method for forecasting storm winds, including squalls and tornadoes, and heavy rainfall, which often cause human and material losses, would allow proper measures to be taken against the destruction of buildings and to protect people. A successful forecast well in advance (from 12 to 48 hours) makes it possible to reduce the losses. Until recently, prediction of these phenomena was a very difficult problem for forecasters. The existing graphical and calculation methods still depend on the subjective decision of an operator. At present there is no hydrodynamic model in Russia for forecasting maximal precipitation and wind velocities V > 25 m/s, so the main tools of objective forecasting are statistical methods that use the dependence of the phenomena on a number of atmospheric parameters (predictors). A statistical decision rule for the alternative and probabilistic forecast of these events was obtained in accordance with the "perfect prognosis" concept, using data from objective analysis. For this purpose, teaching samples of cases with and without storm wind and heavy rainfall were assembled automatically, including the values of forty physically substantiated potential predictors. An empirical statistical method was then applied that involves diagonalization of the mean correlation matrix R of the predictors and extraction of diagonal blocks of strongly correlated predictors. In this way the most informative predictors for these phenomena were selected without loss of information. Statistical decision rules U(X) for diagnosis and prognosis of the phenomena were calculated to choose an informative vector-predictor. The Mahalanobis distance criterion and the Vapnik-Chervonenkis minimum-entropy criterion were used for predictor selection.
The successful development of hydrodynamic models for short-term forecasting and the improvement of 36-48 h forecasts of pressure, temperature and other parameters allowed us to use the prognostic fields of those models to calculate the discriminant functions at the nodes of a 150x150 km grid and the probabilities P of dangerous wind, and thus to obtain fully automated forecasts. To convert to an alternative forecast, the author proposes empirical threshold values specified for this phenomenon and a lead time of 36 hours. According to the Peirce-Obukhov criterion T, the skill of these automated statistical forecasts of squalls and tornadoes 36-48 hours ahead, and of heavy rainfall in the warm season, for the territories of Italy, Spain and the Balkan countries is T = 1 - a - b = 0.54-0.78 in the author's experiments. Many examples of very successful forecasts of summer storm wind and heavy rainfall over Italy and Spain are presented in this report. The same decision rules were also applied to forecasting these phenomena during the cold period this year, when heavy snowfalls and storm winds were observed very often in Spain and Italy, and our forecasts were successful.
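    As a minimal illustration of the Mahalanobis distance criterion mentioned above (a generic two-predictor sketch with made-up numbers, not the operational scheme), the distance between the class means of "event" and "no event" samples can be used to rank candidate predictor pairs:

    ```python
    # Mahalanobis distance between two class means for a pair of candidate
    # predictors (all numbers hypothetical). Larger distances indicate better
    # separation of the "event" and "no event" classes.
    def mahalanobis_2d(mu1, mu2, cov):
        (a, b), (c, d) = cov                  # pooled 2x2 covariance matrix
        det = a * d - b * c
        inv = ((d / det, -b / det), (-c / det, a / det))
        dx = (mu1[0] - mu2[0], mu1[1] - mu2[1])
        # y = cov^-1 * dx, then distance = sqrt(dx . y)
        y0 = inv[0][0] * dx[0] + inv[0][1] * dx[1]
        y1 = inv[1][0] * dx[0] + inv[1][1] * dx[1]
        return (dx[0] * y0 + dx[1] * y1) ** 0.5

    # Hypothetical predictor means for storm vs. non-storm cases:
    d = mahalanobis_2d((10.0, 3.0), (6.0, 1.0), ((4.0, 1.0), (1.0, 2.0)))
    ```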

  7. Do Formal Inspections Ensure that British Zoos Meet and Improve on Minimum Animal Welfare Standards?

    PubMed Central

    Draper, Chris; Browne, William; Harris, Stephen

    2013-01-01

    Simple Summary Key aims of the formal inspections of British zoos are to assess compliance with minimum standards of animal welfare and promote improvements in animal care and husbandry. We compared reports from two consecutive inspections of 136 British zoos to see whether these goals were being achieved. Most zoos did not meet all the minimum animal welfare standards and there was no clear evidence of improving levels of compliance with standards associated with the Zoo Licensing Act 1981. The current system of licensing and inspection does not ensure that British zoos meet and maintain, let alone exceed, the minimum animal welfare standards. Abstract We analysed two consecutive inspection reports for each of 136 British zoos made by government-appointed inspectors between 2005 and 2011 to assess how well British zoos were complying with minimum animal welfare standards; median interval between inspections was 1,107 days. There was no conclusive evidence for overall improvements in the levels of compliance by British zoos. Having the same zoo inspector at both inspections affected the outcome of an inspection; animal welfare criteria were more likely to be assessed as unchanged if the same inspector was present on both inspections. This, and erratic decisions as to whether a criterion applied to a particular zoo, suggest inconsistency in assessments between inspectors. Zoos that were members of a professional association (BIAZA) did not differ significantly from non-members in the overall number of criteria assessed as substandard at the second inspection but were more likely to meet the standards on both inspections and less likely to have criteria remaining substandard. 
Lack of consistency between inspectors, and the high proportion of zoos failing to meet minimum animal welfare standards nearly thirty years after the Zoo Licensing Act came into force, suggest that the current system of licensing and inspection is not meeting key objectives and requires revision. PMID:26479752

  8. Sensitivity analysis of respiratory parameter uncertainties: impact of criterion function form and constraints.

    PubMed

    Lutchen, K R

    1990-08-01

    A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces data acquisition requirements from a 16- to a 5.33- to 8-s breath holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.

  9. Classification of feeding and eating disorders: review of evidence and proposals for ICD-11

    PubMed Central

    UHER, RUDOLF; RUTTER, MICHAEL

    2012-01-01

    Current classification of eating disorders is failing to classify most clinical presentations; ignores continuities between child, adolescent and adult manifestations; and requires frequent changes of diagnosis to accommodate the natural course of these disorders. The classification is divorced from clinical practice, and investigators of clinical trials have felt compelled to introduce unsystematic modifications. Classification of feeding and eating disorders in ICD-11 requires substantial changes to remediate the shortcomings. We review evidence on the developmental and cross-cultural differences and continuities, course and distinctive features of feeding and eating disorders. We make the following recommendations: a) feeding and eating disorders should be merged into a single grouping with categories applicable across age groups; b) the category of anorexia nervosa should be broadened through dropping the requirement for amenorrhoea, extending the weight criterion to any significant underweight, and extending the cognitive criterion to include developmentally and culturally relevant presentations; c) a severity qualifier “with dangerously low body weight” should distinguish the severe cases of anorexia nervosa that carry the riskiest prognosis; d) bulimia nervosa should be extended to include subjective binge eating; e) binge eating disorder should be included as a specific category defined by subjective or objective binge eating in the absence of regular compensatory behaviour; f) combined eating disorder should classify subjects who sequentially or concurrently fulfil criteria for both anorexia and bulimia nervosa; g) avoidant/restrictive food intake disorder should classify restricted food intake in children or adults that is not accompanied by body weight and shape related psychopathology; h) a uniform minimum duration criterion of four weeks should apply. PMID:22654933

  10. SU-E-T-20: A Correlation Study of 2D and 3D Gamma Passing Rates for Prostate IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, D; Sun Yat-sen University Cancer Center, Guangzhou, Guangdong; Wang, B

    2015-06-15

    Purpose: To investigate the correlation between the two-dimensional gamma passing rate (2D %GP) and three-dimensional gamma passing rate (3D %GP) in prostate IMRT quality assurance. Methods: Eleven prostate IMRT plans were randomly selected from the clinical database and were used to obtain dose distributions in the phantom and patient. Three types of delivery errors (MLC bank sag errors, central MLC errors and monitor unit errors) were intentionally introduced to modify the clinical plans through an in-house Matlab program. This resulted in 187 modified plans. The 2D %GP and 3D %GP were analyzed using different dose-difference and distance-to-agreement criteria (1%/1mm, 2%/2mm and 3%/3mm) and a 20% dose threshold. The 2D %GP and 3D %GP were then compared not only for the whole region, but also for the PTVs and critical structures, using Pearson's correlation coefficient (γ). Results: For the different delivery errors, the average comparison of 2D %GP and 3D %GP led to different conclusions. The correlation coefficients between 2D %GP and 3D %GP for the whole dose distribution showed that, except for the 3%/3mm criterion, the 2D %GP and 3D %GP of the 1%/1mm and 2%/2mm criteria had strong correlations (Pearson's γ > 0.8). Compared with the whole region, the correlations of 2D %GP and 3D %GP for the PTV were better (the γ value for the 1%/1mm, 2%/2mm and 3%/3mm criteria was 0.959, 0.931 and 0.855, respectively). For the rectum, however, there was no correlation between 2D %GP and 3D %GP. Conclusion: For prostate IMRT, the correlation between 2D %GP and 3D %GP for the PTV is better than that for normal structures. Lower dose-difference and DTA criteria show less difference between 2D %GP and 3D %GP. Other factors such as dosimeter characteristics and TPS algorithm bias may also influence the correlation between 2D %GP and 3D %GP.
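    A much-simplified 1-D version of the gamma analysis behind the %GP figures can be sketched as follows (the clinical analysis is 2-D/3-D with interpolation and global/local normalization; this sketch uses an absolute dose tolerance, no interpolation, and made-up profiles):

    ```python
    # 1-D gamma analysis sketch: for each evaluated point, gamma^2 is the
    # minimum over reference points of the combined squared distance-to-agreement
    # and dose-difference terms; the passing rate is the fraction with gamma <= 1.
    import math

    def gamma_passing_rate(ref, meas, dx, dta, dose_tol):
        """ref/meas: dose samples on a grid with spacing dx (mm);
        dta: distance-to-agreement tolerance (mm);
        dose_tol: absolute dose tolerance (same units as doses)."""
        n_pass = 0
        for i, dm in enumerate(meas):
            g2 = min(((i - j) * dx / dta) ** 2 + ((dm - dr) / dose_tol) ** 2
                     for j, dr in enumerate(ref))
            if g2 <= 1.0:
                n_pass += 1
        return n_pass / len(meas)

    # Gaussian-like reference profile, doses in percent of maximum:
    ref = [100.0 * math.exp(-((i - 10) * 0.2) ** 2) for i in range(21)]
    rate = gamma_passing_rate(ref, ref, dx=1.0, dta=3.0, dose_tol=3.0)  # identical: 1.0
    ```

    With a 3%/3 mm-style criterion, identical profiles pass everywhere; any dose offset far exceeding the tolerance drives the rate below 1.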

  11. 13CO Survey of Northern Intermediate-Mass Star-Forming Regions

    NASA Astrophysics Data System (ADS)

    Lundquist, Michael J.; Kobulnicky, H. A.; Kerton, C. R.

    2014-01-01

    We conducted a survey of 13CO with the OSO 20-m telescope toward 68 intermediate-mass star-forming regions (IM SFRs) visible in the northern hemisphere. These regions have mostly been excluded from previous CO surveys and were selected from IRAS colors that specify cool dust and large PAH contribution. These regions are known to host stars up to, but not exceeding, about 8 solar masses. We detect 13CO in 57 of the 68 IM SFRs down to a typical RMS of ~50 mK. We present kinematic distances, minimum column densities, and minimum masses for these IM SFRs.

  12. Tracking of white-tailed deer migration by Global Positioning System

    USGS Publications Warehouse

    Nelson, M.E.; Mech, L.D.; Frame, P.F.

    2004-01-01

    Based on global positioning system (GPS) radiocollars in northeastern Minnesota, deer migrated 23-45 km in spring during 31-356 h, deviating a maximum 1.6-4.0 km perpendicular from a straight line of travel between their seasonal ranges. They migrated a minimum of 2.1-18.6 km/day over 11-56 h during 2-14 periods of travel. Minimum travel during 1-h intervals averaged 1.5 km/h. Deer paused 1-12 times, averaging 24 h/pause. Deer migrated similar distances in autumn with comparable rates and patterns of travel.

  13. Effects of Phasor Measurement Uncertainty on Power Line Outage Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chen; Wang, Jianhui; Zhu, Hao

    2014-12-01

    Phasor measurement unit (PMU) technology provides an effective tool to enhance the wide-area monitoring systems (WAMSs) in power grids. Although extensive studies have been conducted to develop several PMU applications in power systems (e.g., state estimation, oscillation detection and control, voltage stability analysis, and line outage detection), the uncertainty aspects of PMUs have not been adequately investigated. This paper focuses on quantifying the impact of PMU uncertainty on power line outage detection and identification, in which a limited number of PMUs installed at a subset of buses are utilized to detect and identify line outage events. Specifically, the line outage detection problem is formulated as a multi-hypothesis test, and a general Bayesian criterion is used for the detection procedure, in which the PMU uncertainty is analytically characterized. We further apply the minimum detection error criterion for the multi-hypothesis test and derive the expected detection error probability in terms of PMU uncertainty. The proposed framework provides fundamental guidance for quantifying the effects of PMU uncertainty on power line outage detection. Case studies are provided to validate our analysis and show how PMU uncertainty influences power line outage detection.

  14. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.

    PubMed

    Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L

    2013-08-13

    United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria that provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to those for reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.

  15. Enhancing phonon flow through one-dimensional interfaces by impedance matching

    NASA Astrophysics Data System (ADS)

    Polanco, Carlos A.; Ghosh, Avik W.

    2014-08-01

    We extend concepts from microwave engineering to thermal interfaces and explore the principles of impedance matching in 1D. The extension is based on the generalization of acoustic impedance to nonlinear dispersions using the contact broadening matrix Γ(ω), extracted from the phonon self energy. For a single junction, we find that for coherent and incoherent phonons, the optimal thermal conductance occurs when the matching Γ(ω) equals the Geometric Mean of the contact broadenings. This criterion favors the transmission of both low and high frequency phonons by requiring that (1) the low frequency acoustic impedance of the junction matches that of the two contacts by minimizing the sum of interfacial resistances and (2) the cut-off frequency is near the minimum of the two contacts, thereby reducing the spillage of the states into the tunneling regime. For an ultimately scaled single atom/spring junction, the matching criterion transforms to the arithmetic mean for mass and the harmonic mean for spring constant. The matching can be further improved using a composite graded junction with an exponential varying broadening that functions like a broadband antireflection coating. There is, however, a trade off as the increased length of the interface brings in additional intrinsic sources of scattering.
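    For the ultimately scaled single atom/spring junction, the matching criterion quoted above reduces to simple means. A sketch restating that rule (the numerical values are hypothetical):

    ```python
    # Matching rules for a single atom/spring junction between two 1-D contacts,
    # as stated in the abstract: arithmetic mean for the junction mass, harmonic
    # mean for the junction spring constant. Parameter values are hypothetical.
    def matched_junction(m1, k1, m2, k2):
        m_junction = (m1 + m2) / 2.0            # arithmetic mean of contact masses
        k_junction = 2.0 * k1 * k2 / (k1 + k2)  # harmonic mean of spring constants
        return m_junction, k_junction

    m_j, k_j = matched_junction(1.0, 1.0, 3.0, 3.0)  # -> (2.0, 1.5)
    ```

    The harmonic mean for the spring constant mirrors the series combination of compliances, consistent with the impedance-matching picture in the abstract.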

  16. A Semi-analytic Criterion for the Spontaneous Initiation of Carbon Detonations in White Dwarfs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garg, Uma; Chang, Philip, E-mail: umagarg@uwm.edu, E-mail: chang65@uwm.edu

    Despite over 40 years of active research, the nature of the white dwarf progenitors of SNe Ia remains unclear. However, in the last decade, various progenitor scenarios have highlighted the need for detonations to be the primary mechanism by which these white dwarfs are consumed, but it is unclear how these detonations are triggered. In this paper we study how detonations are spontaneously initiated due to temperature inhomogeneities, e.g., hotspots, in burning nuclear fuel in a simplified physical scenario. Following the earlier work by Zel’Dovich, we describe the physics of detonation initiation in terms of the comparison between the spontaneous wave speed and the Chapman–Jouguet speed. We develop an analytic expression for the spontaneous wave speed and utilize it to determine a semi-analytic criterion for the minimum size of a hotspot with a linear temperature gradient between a peak and base temperature for which detonations in burning carbon–oxygen material can occur. Our results suggest that spontaneous detonations may easily form under a diverse range of conditions, likely allowing a number of progenitor scenarios to initiate detonations that burn up the star.

  17. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
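
The reliability conclusion in this abstract (shrinking the scatter of the most sensitive random variable lowers failure probability the most) can be demonstrated with a toy Monte Carlo limit state. The response function and all numbers below are illustrative assumptions, not the paper's smart-wing analysis:

```python
import numpy as np

def failure_probability(sigma_a, sigma_b, limit=10.0, n=200_000, seed=0):
    """Monte Carlo failure probability for an illustrative limit state
    r = 2*a + b > limit; the factor of 2 makes `a` the variable with
    the larger sensitivity factor."""
    rng = np.random.default_rng(seed)
    a = rng.normal(3.0, sigma_a, n)   # mean response contribution 6
    b = rng.normal(2.0, sigma_b, n)   # mean response contribution 2
    return float(np.mean(2.0 * a + b > limit))
```

Halving the scatter of the high-sensitivity variable `a` cuts the failure probability far more than halving the scatter of `b`, matching the qualitative claim.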

  18. A Semi-analytic Criterion for the Spontaneous Initiation of Carbon Detonations in White Dwarfs

    NASA Astrophysics Data System (ADS)

    Garg, Uma; Chang, Philip

    2017-02-01

    Despite over 40 years of active research, the nature of the white dwarf progenitors of SNe Ia remains unclear. However, in the last decade, various progenitor scenarios have highlighted the need for detonations to be the primary mechanism by which these white dwarfs are consumed, but it is unclear how these detonations are triggered. In this paper we study how detonations are spontaneously initiated due to temperature inhomogeneities, e.g., hotspots, in burning nuclear fuel in a simplified physical scenario. Following the earlier work by Zel’Dovich, we describe the physics of detonation initiation in terms of the comparison between the spontaneous wave speed and the Chapman-Jouguet speed. We develop an analytic expression for the spontaneous wave speed and utilize it to determine a semi-analytic criterion for the minimum size of a hotspot with a linear temperature gradient between a peak and base temperature for which detonations in burning carbon-oxygen material can occur. Our results suggest that spontaneous detonations may easily form under a diverse range of conditions, likely allowing a number of progenitor scenarios to initiate detonations that burn up the star.
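
The Zel'dovich gradient mechanism invoked here compares the spontaneous wave speed, set by the spatial gradient of the ignition-delay time, with the Chapman-Jouguet speed. A minimal numerical sketch (the induction-time profile and speeds are illustrative, not the paper's carbon-oxygen calculation):

```python
import numpy as np

def spontaneous_wave_speed(x, tau):
    """Zel'dovich spontaneous wave speed D_sp = |d(tau)/dx|^(-1), where
    tau(x) is the ignition-delay profile set by the temperature gradient."""
    return 1.0 / np.abs(np.gradient(tau, x))

def crosses_chapman_jouguet(x, tau, d_cj):
    """Simplified check: initiation is possible roughly where the phase
    speed of burning decays through the Chapman-Jouguet speed."""
    d_sp = spontaneous_wave_speed(x, tau)
    return bool(d_sp.max() >= d_cj >= d_sp.min())
```

Near the hotspot peak the gradient of tau vanishes and the phase wave is very fast; moving outward it slows, and a detonation can form where it passes through the CJ speed.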

  19. 3D finite element modeling of epiretinal stimulation: Impact of prosthetic electrode size and distance from the retina.

    PubMed

    Sui, Xiaohong; Huang, Yu; Feng, Fuchen; Huang, Chenhui; Chan, Leanne Lai Hang; Wang, Guoxing

    2015-05-01

    A novel 3-dimensional (3D) finite element model was established to systematically investigate the impact of the diameter (Φ) of disc electrodes and the electrode-to-retina distance on the effectiveness of stimulation. The 3D finite element model was established based on a disc platinum stimulating electrode and a 6-layered retinal structure. The ground electrode was placed in the extraocular space in direct attachment with the sclera and treated as a distant return electrode. An established criterion of electric-field strength of 1000 V m⁻¹ was adopted as the activation threshold for RGCs. The threshold current (TC) increased linearly with increasing Φ and electrode-to-retina distance, and remained almost unchanged with further increases in diameter. However, the threshold charge density (TCD) increased dramatically with decreasing electrode diameter, and exceeded the electrode safety limit for an electrode diameter of 50 µm at electrode-to-retina distances of 50 to 200 μm. The electric field distributions illustrated that smaller electrode diameters and shorter electrode-to-retina distances were preferable because they produced more localized excitation of the RGC area at the corresponding threshold currents. Under same-amplitude current stimulation, a large electrode exhibited improved spatial selectivity of the potential at large electrode-to-retina distances. Modeling results were consistent with those reported in animal electrophysiological experiments and clinical trials, validating the 3D finite element model of epiretinal stimulation. The computational model proved to be useful in optimizing the design of an epiretinal stimulating electrode for prostheses.
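
The charge-density scaling behind the safety-limit result follows directly from the disc geometry: TCD = Q/A with Q = I·t_pulse and A = π(d/2)². A quick sketch (the 100 µA current and 1 ms pulse are illustrative; the commonly cited ~0.35 mC/cm² safe limit for platinum is an assumption, not from this paper):

```python
import math

def charge_density_mC_cm2(current_uA, pulse_ms, diameter_um):
    """Charge density Q/A on a disc electrode (inputs in uA, ms, um;
    output in mC/cm^2)."""
    charge_mC = current_uA * pulse_ms * 1e-6      # uA*ms -> mC
    radius_cm = diameter_um * 1e-4 / 2.0          # um -> cm
    return charge_mC / (math.pi * radius_cm ** 2)
```

For the same threshold current, shrinking the disc from 200 µm to 50 µm raises the charge density 16-fold, which is why small electrodes exceed the safety limit first.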

  20. Real-time stop sign detection and distance estimation using a single camera

    NASA Astrophysics Data System (ADS)

    Wang, Wenpeng; Su, Yuxuan; Cheng, Ming

    2018-04-01

    In the modern world, the rapid development of driver assistance systems has made driving much easier than before. To increase onboard safety, a method is proposed to detect stop signs and estimate their distance using a single camera. For stop sign detection, an LBP-cascade classifier is applied to identify the sign in the image, and distance estimation is based on the principle of pinhole imaging. A road test was conducted using a detection system built with a CMOS camera and software developed in Python with the OpenCV library. Results show that the proposed system reaches a maximum detection accuracy of 97.6% at 10 m, a minimum of 95.0% at 20 m, and at most 5% error in distance estimation. These results indicate that the system is effective and has the potential to be used in both autonomous driving and advanced driver assistance systems.
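
The pinhole-imaging distance estimate reduces to similar triangles: Z = f·W/w, where f is the focal length in pixels, W the physical sign width, and w its detected width in pixels. A minimal sketch (the 800 px focal length and 0.75 m sign width are illustrative assumptions):

```python
def pinhole_distance_m(focal_length_px, real_width_m, width_px):
    """Pinhole-camera similar triangles: Z = f * W / w, with the focal
    length expressed in pixels and the detected width in pixels."""
    return focal_length_px * real_width_m / width_px
```

A sign of known width that appears half as wide in the image is twice as far away; the calibration constant f is obtained once from standard camera calibration.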

  1. A simplified flight-test method for determining aircraft takeoff performance that includes effects of pilot technique

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Schweikhard, W. G.

    1974-01-01

    A method for evaluating aircraft takeoff performance from brake release to air-phase height that requires fewer tests than conventionally required is evaluated with data for the XB-70 airplane. The method defines the effects of pilot technique on takeoff performance quantitatively, including the decrease in acceleration from drag due to lift. For a given takeoff weight and throttle setting, a single takeoff provides enough data to establish a standardizing relationship for the distance from brake release to any point where velocity is appropriate to rotation. The lower rotation rates penalized takeoff performance in terms of ground roll distance; the lowest observed rotation rate required a ground roll distance that was 19 percent longer than the highest. Rotations at the minimum rate also resulted in lift-off velocities that were approximately 5 knots lower than the highest rotation rate at any given lift-off distance.

  2. Ranging algebraically with more observations than unknowns

    NASA Astrophysics Data System (ADS)

    Awange, J. L.; Fukuda, Y.; Takemoto, S.; Ateya, I. L.; Grafarend, E. W.

    2003-07-01

    In the recently developed Spatial Reference System designed to check and control the accuracy of three-dimensional coordinate measuring machines and tooling equipment (Metronom US., Inc., Ann Arbor: http://www.metronomus.com), the coordinates of the edges of the instrument are computed from distances of the bars. The use of distances in industrial applications is fast gaining momentum, just as in geodetic and geophysical applications, necessitating efficient algorithms to solve the nonlinear distance equations. Whereas the ranging problem with the minimum number of known stations was considered in our previous contribution in the same journal, the present contribution extends to the case where one is faced with more distance observations than unknowns (the overdetermined case), as is usually the case in practice. Using the Gauss-Jacobi combinatorial approach, we demonstrate how one can proceed to position without reverting to iterative and linearizing procedures such as Newton's method or least squares.
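
The combinatorial idea can be sketched in 2D: each minimal subset of stations admits an exact algebraic solution (differencing squared range equations removes the quadratic terms, so no Taylor linearization or iteration is needed), and the subset solutions are then combined. Averaging the subset solutions below is a simplified stand-in for the paper's Gauss-Jacobi adjustment, which weights them rigorously:

```python
import itertools
import numpy as np

def position_from_three(stations, ranges):
    """Closed-form 2D position from 3 stations: differencing the squared
    range equations leaves a linear system in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = stations
    r1, r2, r3 = ranges
    A = np.array([[2.0 * (x2 - x1), 2.0 * (y2 - y1)],
                  [2.0 * (x3 - x1), 2.0 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

def combinatorial_position(stations, ranges):
    """Solve every minimal 3-station subset in closed form and average
    the solutions (simplified combinatorial adjustment)."""
    combos = itertools.combinations(range(len(stations)), 3)
    sols = [position_from_three([stations[i] for i in c],
                                [ranges[i] for i in c]) for c in combos]
    return np.mean(sols, axis=0)
```

With exact ranges every subset returns the same point; with noisy ranges the combination damps the scatter of the individual subset solutions.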

  3. The Metabolic Demands of Kayaking: A Review

    PubMed Central

    Michael, Jacob S.; Rooney, Kieron B.; Smith, Richard

    2008-01-01

    Flat-water kayaking is one of the best-known competitive canoeing disciplines in Australia and across the European countries. From a stationary start, paddlers are required to paddle their kayaks with maximal effort along the length of the competition distance. The ultimate criterion of kayak performance is the time taken to paddle a designated competition distance. In flat-water racing, events are contested over 500 and 1000 metres. To approximate the ultimate criterion over these distances, the velocity of the kayak should be measured. Furthermore, other factors that affect performance, such as force, power, technique and aerobic fitness, all provide valuable insight into the success of the kayak paddler. Research examining the physiological demands on kayak paddlers demonstrates high levels of both aerobic power and anaerobic capacity. It is the purpose of this review to present the published physiological data relating to men’s and women’s kayaking. A number of recent publications have made an updated review necessary. The present review summarises recent data on anthropometrics, physiological characteristics of successful and unsuccessful kayak athletes, and methods of physiological testing. Because more data have been reported for male competitors than for their female counterparts, the demands of kayaking on male athletes are the main focus of this review. The review also suggests areas for future research into flat-water kayaking performance. Understanding the physiological requirements of kayaking can assist coaches and athletes in a number of ways. During competition or training, such information is helpful in the selection of appropriate protocols and metabolic indices to monitor an athlete’s performance improvements and assess an athlete’s suitability for a particular race distance. Furthermore, it may aid the coach in the development of more specific training programs for their athletes.
Key points:
- Flat-water kayaking is characterised by exceptional demands on upper-body performance.
- Although high oxygen consumption values are attainable, they are not quite as high as in sports where the lower body is dominant, such as road cycling, rowing or running.
- Elite kayakers demonstrate superior aerobic and anaerobic capacities, with reported maximal oxygen consumptions of around 58 ml·kg-1·min-1 (4.7 L·min-1) and lactate values of around 12 mM during laboratory and on-water testing. PMID:24150127

  4. Floating and Tether-Coupled Adhesion of Bacteria to Hydrophobic and Hydrophilic Surfaces

    PubMed Central

    2018-01-01

    Models for bacterial adhesion to substratum surfaces all include uncertainty with respect to the (ir)reversibility of adhesion. In a model based on vibrations exhibited by adhering bacteria parallel to a surface, adhesion was described as a result of reversible binding of multiple bacterial tethers that detach from and successively reattach to a surface, eventually making bacterial adhesion irreversible. Here, we use total internal reflection microscopy to determine whether adhering bacteria also exhibit variations over time in their perpendicular distance above surfaces. Streptococci with fibrillar surface tethers showed perpendicular vibrations with amplitudes of around 5 nm, regardless of surface hydrophobicity. Adhering, nonfibrillated streptococci vibrated with amplitudes around 20 nm above a hydrophobic surface. Amplitudes did not depend on ionic strength for either strain. Calculations of bacterial energies from their distances above the surfaces using the Boltzmann equation showed that bacteria with fibrillar tethers vibrated as a harmonic oscillator. The energy of bacteria without fibrillar tethers varied with distance in a fashion comparable to the DLVO (Derjaguin, Landau, Verwey, and Overbeek) interaction energy. Distance variations above the surface over time of bacteria with fibrillar tethers are suggested to be governed by harmonic oscillations, allowed by the elasticity of the tethers, piercing through the potential energy barrier. Bacteria without fibrillar tethers “float” above a surface in the secondary energy minimum, with their perpendicular displacement restricted by their thermal energy and the width of the secondary minimum. The distinction between “tether-coupled” and “floating” adhesion is new, and may have implications for bacterial detachment strategies. PMID:29649869
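
The conversion from a measured distance distribution to an energy profile is a Boltzmann inversion: E(h)/kT = -ln p(h) + const. A minimal sketch (the Gaussian test data stand in for a harmonic-oscillator-like strain; no real measurement data are used):

```python
import numpy as np

def energy_profile_kT(distances, bins=30):
    """Boltzmann-invert a sampled distance distribution into a relative
    potential-energy profile in units of kT, zeroed at the most
    probable distance."""
    p, edges = np.histogram(distances, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = p > 0                      # avoid log(0) in empty bins
    e = -np.log(p[keep])
    return centers[keep], e - e.min()
```

For a harmonic oscillator the sampled distances are Gaussian, so the inverted profile is parabolic with its minimum at the most probable height above the surface.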

  5. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.
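
The core task (finding the lightest pair of diverging/remerging trellis paths) can be illustrated on a simpler relative: the free Hamming distance of a binary convolutional code, searched with Dijkstra's algorithm over the trellis states. This is an analog only; the paper's algorithm works with the Euclidean distances of CPM signals, not Hamming weights:

```python
import heapq

def free_distance_7_5():
    """Free Hamming distance of the rate-1/2, K=3 convolutional code
    with octal generators (7, 5): the lightest path that diverges from
    and remerges with the all-zero trellis state."""
    def step(state, u):
        s1, s2 = state
        weight = (u ^ s1 ^ s2) + (u ^ s2)   # weight of the 2 output bits
        return (u, s1), weight
    start, w0 = step((0, 0), 1)             # must diverge with input 1
    dist = {start: w0}
    heap = [(w0, start)]
    best = float('inf')
    while heap:
        w, st = heapq.heappop(heap)
        if w > dist.get(st, float('inf')):
            continue
        for u in (0, 1):
            nxt, dw = step(st, u)
            if nxt == (0, 0):               # remerged: candidate distance
                best = min(best, w + dw)
            elif w + dw < dist.get(nxt, float('inf')):
                dist[nxt] = w + dw
                heapq.heappush(heap, (w + dw, nxt))
    return best
```

The shortest-path formulation is what makes such searches fast: states are expanded in order of accumulated distance, so unpromising paths are never enumerated.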

  6. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  7. 78 FR 61251 - Radio Broadcasting Services; Heber Springs, Arkansas.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-03

    ...] Radio Broadcasting Services; Heber Springs, Arkansas. AGENCY: Federal Communications Commission. ACTION... third local service. Channel 270C3 can be allotted to Heber Springs consistent with the minimum distance... community. The reference coordinates are 35-34-12 NL and 91-55-41 WL. DATES: Comments must be filed on or...

  8. Some Evidence of Continuing Linguistic Acquisitions in Learning Adolescents.

    ERIC Educational Resources Information Center

    Thomas, Elizabeth K.; Walmsley, Sean A.

    The linguistic development of 42 learning disabled students 10-16 years old was examined. Responses were elicited to five linguistic structures, including the distinction between "ask" and "tell", pronominal restriction, and the minimum distance principle. Data were analyzed in terms of three groups based on Verbal and Performance differentials on…

  9. Locating sources within a dense sensor array using graph clustering

    NASA Astrophysics Data System (ADS)

    Gerstoft, P.; Riahi, N.

    2017-12-01

    We develop a model-free technique to identify weak sources within dense sensor arrays using graph clustering. No knowledge about the propagation medium is needed except that signal strengths decay to insignificant levels within a scale that is shorter than the aperture. We then reinterpret the spatial coherence matrix of a wave field as a matrix whose support is the connectivity matrix of a graph with sensors as vertices. In a dense network, well-separated sources induce clusters in this graph. The geographic spread of these clusters can serve to localize the sources. The support of the covariance matrix is estimated from limited-time data using a hypothesis test with a robust phase-only coherence test statistic combined with a physical distance criterion. The latter criterion ensures graph sparsity and thus prevents clusters from forming by chance. We verify the approach and quantify its reliability on a simulated dataset. The method is then applied to data from a dense 5200-element geophone array that blanketed the city of Long Beach, CA. The analysis exposes a helicopter traversing the array and oil production facilities.
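
The pipeline (threshold the coherence matrix, apply the physical distance criterion, then read off graph clusters) can be sketched as follows. The thresholds and the plain connected-components clustering are illustrative simplifications of the paper's hypothesis-test-based support estimate:

```python
import numpy as np

def source_clusters(coherence, positions, coh_min=0.7, dist_max=1.0):
    """Edges require BOTH high pairwise coherence and short physical
    separation (the distance criterion that keeps the graph sparse);
    connected components are returned as candidate source regions."""
    positions = np.asarray(positions, float)
    sep = np.linalg.norm(positions[:, None, :] - positions[None, :, :],
                         axis=-1)
    adj = (coherence >= coh_min) & (sep <= dist_max)
    np.fill_diagonal(adj, False)
    seen, components = set(), []
    for start in range(len(positions)):      # depth-first search
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                comp.append(v)
                stack.extend(np.flatnonzero(adj[v]).tolist())
        components.append(sorted(comp))
    return components
```

Note how a spuriously high coherence between two distant sensors is discarded by the distance mask, which is exactly the role the abstract assigns to the physical distance criterion.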

  10. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that, compared with the classical method, our technique yielded a significant improvement in the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest, as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a controlled computation time.
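
For context, the MLEM iteration that the stopping rule governs is x ← x · Aᵀ(y/Ax) / Aᵀ1. A toy sketch of just the reconstruction loop (the paper's heuristic statistical stopping rule itself is not reproduced here, and the 3×2 system is illustrative):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Plain MLEM updates x <- x * A^T(y / Ax) / A^T 1 on a toy system;
    a stopping rule would decide when to break out of this loop."""
    x = np.ones(A.shape[1])
    sensitivity = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        projection = np.maximum(A @ x, 1e-12)   # guard against divide-by-zero
        x = x * (A.T @ (y / projection)) / sensitivity
    return x
```

With noiseless, consistent data the iterates converge to the exact object; with noisy data the image degrades after an optimal iteration number, which is why a stopping criterion matters.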

  11. Planetary Taxonomy: Label Round Bodies "Worlds"

    NASA Astrophysics Data System (ADS)

    Margot, Jean-Luc; Levison, H. F.

    2009-05-01

    The classification of planetary bodies is as important to Astronomy as taxonomy is to other sciences. The etymological, historical, and IAU definitions of planet rely on a dynamical criterion, but some authors prefer a geophysical criterion based on "roundness". Although the former criterion is superior when it comes to classifying newly discovered objects, the conflict need not exist if we agree to identify the subset of "round" planetary objects as "worlds". This addition to the taxonomy would conveniently recognize that "round" objects such as Earth, Europa, Titan, Triton, and Pluto share some common planetary-type processes regardless of their distance from the host star. Some of these worlds are planets, others are not. Defining how round is round and handling the inevitable transition objects are non-trivial tasks. Because images at sufficient resolution are not available for the overwhelming majority of newly discovered objects, the degree of roundness is not a directly observable property and is inherently problematic as a basis for classification. We can tolerate some uncertainty in establishing the "world" status of a newly discovered object, and still establish its planet or satellite status with existing dynamical criteria. Because orbital parameters are directly observable, and because mass can often be measured either from orbital perturbations or from the presence of companions, the dynamics provide a robust and practical planet classification scheme. It may also be possible to determine which bodies are dynamically dominant from observations of the population magnitude/size distribution.

  12. Simple analytical relations for ship bow waves

    NASA Astrophysics Data System (ADS)

    Noblesse, Francis; Delhommeau, Gérard; Guilbaud, Michel; Hendrix, Dane; Yang, Chi

    Simple analytical relations for the bow wave generated by a ship in steady motion are given. Specifically, simple expressions are given that define the height of a ship bow wave, the distance between the ship stem and the crest of the bow wave, the rise of water at the stem, and the bow wave profile, explicitly and without calculations, in terms of the ship speed, draught, and waterline entrance angle. Another result is a simple criterion that predicts, also directly and without calculations, when a ship in steady motion cannot generate a steady bow wave. This unsteady-flow criterion predicts that a ship with a sufficiently fine waterline, specifically with waterline entrance angle 2αE < 25°, may generate a steady bow wave at any speed. However, a ship with a fuller waterline (2αE > 25°) can only generate a steady bow wave if the ship speed is higher than a critical speed, defined in terms of αE by a simple relation. No alternative criterion for predicting when a ship in steady motion does not generate a steady bow wave appears to exist. A simple expression for the height of an unsteady ship bow wave is also given. In spite of their remarkable simplicity, the relations for ship bow waves obtained in the study (using only rudimentary physical and mathematical considerations) are consistent with experimental measurements for a number of hull forms having non-bulbous wedge-shaped bows with small flare angle, and with the authors' measurements and observations for a rectangular flat plate towed at a yaw angle.

  13. Diffusion modelling of metamorphic layered coronas with stability criterion and consideration of affinity

    NASA Astrophysics Data System (ADS)

    Ashworth, J. R.; Sheplev, V. S.

    1997-09-01

    Layered coronas between two reactant minerals can, in many cases, be attributed to diffusion-controlled growth with local equilibrium. This paper clarifies and unifies the previous approaches of various authors to the simplest form of modelling, which uses no assumed values for thermochemical quantities. A realistic overall reaction must be estimated from measured overall proportions of minerals and their major element compositions. Modelling is not restricted to a particular number of components S, relative to the number of phases Φ. If Φ > S + 1, the overall reaction is a combination of simultaneous reactions. The stepwise method, solving for the local reaction at each boundary in turn, is extended to allow for recurrence of a mineral (its presence in two parts of the layer structure separated by a gap). The equations are also given in matrix form. A thermodynamic stability criterion is derived, determining which layer sequence is truly stable if several are computable from the same inputs. A layer structure satisfying the stability criterion has a greater growth rate (and greater rate of entropy production) than the other computable layer sequences. This criterion of greatest entropy production is distinct from Prigogine's theorem of minimum entropy production, which distinguishes the stationary or quasi-stationary state from other states of the same layer sequence. The criterion leads to modification of previous results for coronas comprising hornblende, spinel, and orthopyroxene between olivine (Ol) and plagioclase (Pl). The outcome supports the previous inference that Si, and particularly Al, commonly behave as immobile relative to other cation-forming major elements. The affinity (-ΔG) of a corona-forming reaction is estimated, using previous estimates of the diffusion coefficient and the duration t of reaction, together with a new model quantity (-ΔG)*.
For an example of the Ol + Pl reaction, a rough calculation gives (-ΔG) > 1.7RT (per mole of Pl consumed, based on a 24-oxygen formula for Pl). At 600–700 °C, this represents (-ΔG) > 10 kJ mol⁻¹ and departure from the equilibrium temperature by at least ~100 °C. The lower end of this range is petrologically reasonable and, for t < 100 Ma, corresponds to a Fick's-law diffusion coefficient for Al of D_Al > 10⁻²⁵ m² s⁻¹, larger than expected for lattice diffusion but consistent with fluid-absent grain-boundary diffusion and small concentration gradients.

  14. Study on the measuring distance for blood glucose infrared spectral measuring by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Li, Xiang

    2016-10-01

    Blood glucose monitoring is of great importance for controlling the progression of diabetes and preventing complications. At present, clinical blood glucose concentration measurement is invasive, and could be replaced by noninvasive spectroscopic analytical techniques. Among the various parameters of the optical fiber probe used in spectral measurement, the measurement distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional method for determining the optimal distance between the transmitting fiber and the detector is to use Monte Carlo simulation to find the point where most photons emerge. This has a shortcoming, however: the epidermal layer contains no arteries, veins or capillary vessels, so photons that propagate and interact only with tissue in the epidermal layer carry no glucose information. A new criterion, named the effective path length in this paper, is therefore proposed to determine the optimal distance. The path length each photon travels in the dermis is recorded while running the Monte Carlo simulation; this is the effective path length defined above. The sum of the effective path lengths of all photons at each point is calculated, and the detector should be placed at the point with the greatest effective path length. The optimal measuring distance between the transmitting fiber and the detector is thereby determined.
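
The effective-path-length bookkeeping can be sketched with a deliberately crude 1D random-walk Monte Carlo; only path travelled inside the dermis is accumulated. All layer thicknesses, step size, and absorption probability below are illustrative assumptions, and a real simulation would use 3D scattering phase functions:

```python
import random

def mean_effective_path_length(n_photons=5000, epidermis=0.1, dermis=1.0,
                               step=0.02, absorb=0.01, seed=1):
    """Toy 1D random-walk Monte Carlo: accumulate only the path length
    each photon travels inside the dermis (where the glucose signal
    lives), mimicking the 'effective path length' criterion."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_photons):
        z = 0.0                                 # depth below the surface
        while z >= 0.0 and rng.random() > absorb:
            z += step if rng.random() < 0.5 else -step
            if epidermis <= z <= epidermis + dermis:
                total += step                   # effective path increment
            if z > epidermis + dermis:
                break                           # lost through the bottom
    return total / n_photons
```

In the paper's scheme this quantity would be tallied per detector position, and the detector placed where the total effective path length peaks.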

  15. Effect of available space and previous contact in the social integration of Saint Croix and Suffolk ewes.

    PubMed

    Orihuela, A; Averós, X; Solano, J; Clemente, N; Estevez, I

    2016-03-01

    Reproduction in tropical sheep is not affected by season, whereas the reproductive cycle of temperate-climate breeds such as Suffolk depends on the photoperiod. Close contact with tropical ewes during the anestrous period might induce Suffolk ewes to cycle, making the use of artificial light or hormonal treatments unnecessary. However, the integration of both breeds within the social group would be necessary to trigger this effect, and so the aim of the experiment was to determine the speed of integration of 2 groups of Saint Croix and Suffolk ewes into a single flock, according to space allowance and previous experience. For this, 6 groups of 10 ewes (5 of each breed), housed at 2 or 4 m²/ewe (3 groups/treatment) and with or without previous contact with the other breed, were monitored for 3 d. Each observation day, the behavior, movement, and use of space of the ewes were recorded for 10 min at 1-h intervals between 0900 and 1400 h. Generalized linear mixed models were used to test the effects of breed, space allowance, and previous experience on behavior, movement, and use of space. Net distances, interbreed farthest neighbor distance, mean interbreed distance, and walking frequencies were greater at 4 m²/ewe (P < 0.05). Intrabreed nearest neighbor, mean intrabreed neighbor, and interbreed nearest neighbor distances and minimum convex polygons at 4 m²/ewe were greatest for Saint Croix ewes, whereas the opposite was found for lying down (P < 0.05). Experienced ewes showed larger intrabreed nearest neighbor distances, minimum convex polygons, and home range overlapping (P < 0.05). Experienced ewes at 4 m²/ewe showed the longest total distances and step lengths and the greatest movement activity (P < 0.05). Experienced ewes walked longer total distances during Days 1 and 2 (P < 0.05). Lying down frequency was greater on Day 3 than Day 1 (P < 0.05), and Suffolk ewes kept longer interindividual distances during Day 1 (P < 0.05).
After 3 d of cohabitation, Suffolk and Saint Croix ewes had not fully integrated into a cohesive flock, with each breed displaying specific behavioral patterns. Decreasing space allowance and previous experience resulted in limited benefits for successful group cohesion. Longer cohabitation periods might result in complete integration, although practical implementation might be difficult.

  16. Node Deployment with k-Connectivity in Sensor Networks for Crop Information Full Coverage Monitoring

    PubMed Central

    Liu, Naisen; Cao, Weixing; Zhu, Yan; Zhang, Jingchao; Pang, Fangrong; Ni, Jun

    2016-01-01

    Wireless sensor networks (WSNs) are suitable for the continuous monitoring of crop information in large-scale farmland. The information obtained is valuable for regulating crop growth and achieving high yields in precision agriculture (PA). In order to realize full-coverage, k-connected WSN deployment for monitoring crop growth information of farmland on a large scale, and to ensure the accuracy of the monitored data, a new WSN deployment method using a genetic algorithm (GA) is proposed here. The fitness function of the GA was constructed based on the following WSN deployment criteria: (1) nodes must be located in the corresponding plots; (2) the WSN must have k-connectivity; (3) the WSN must have no communication silos; (4) the minimum distance between node and plot boundary must be greater than a specific value to prevent each node from being affected by the farmland edge effect. The deployment experiments were performed on natural farmland and on irregular farmland divided based on spatial differences of soil nutrients. Results showed that both WSNs gave full coverage, had no communication silos, and the minimum connectivity of nodes was equal to k. The deployment was tested for different values of k and transmission distance (d) of the nodes. The results showed that, when d was set to 200 m, the minimum connectivity of nodes increased with k and remained equal to k as k increased from 2 to 4. When k was set to 2, the average connectivity of all nodes increased linearly as d increased from 140 m to 250 m, while the minimum connectivity did not change. PMID:27941704
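
Two of the abstract's fitness-function criteria can be sketched directly. Note the simplification: minimum node degree is used below as a cheap proxy for criterion (2); true k-connectivity is a stronger, whole-graph property that a full implementation would verify separately. All geometry and thresholds are illustrative:

```python
import numpy as np

def min_degree(nodes, radio_range):
    """Minimum number of neighbors over all nodes for a given radio
    range (degree >= k is a necessary condition for k-connectivity)."""
    nodes = np.asarray(nodes, float)
    d = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    degree = ((d <= radio_range) & (d > 0)).sum(axis=1)
    return int(degree.min())

def deployment_ok(nodes, radio_range, k, boundary_margins, margin_min):
    """Criteria (2) and (4) from the abstract: degree >= k everywhere,
    and every node at least margin_min from its plot boundary
    (boundary_margins are assumed precomputed per node)."""
    return (min_degree(nodes, radio_range) >= k
            and min(boundary_margins) >= margin_min)
```

In a GA these checks would be folded into the fitness score rather than used as a hard pass/fail, so partially valid layouts can still evolve toward feasibility.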

  17. Method for selecting minimum width of leaf in multileaf adjustable collimator while inhibiting passage of particle beams of radiation through sawtooth joints between collimator leaves

    DOEpatents

    Ludewigt, Bernhard; Bercovitz, John; Nyman, Mark; Chu, William

    1995-01-01

    A method is disclosed for selecting the minimum width of individual leaves of a multileaf adjustable collimator having sawtooth top and bottom surfaces between adjacent leaves of a first stack of leaves and sawtooth end edges which are capable of intermeshing with the corresponding sawtooth end edges of leaves in a second stack of leaves of the collimator. The minimum width of individual leaves in the collimator, each having a sawtooth configuration in the surface facing another leaf in the same stack and a sawtooth end edge, is selected to comprise the sum of the penetration depth or range of the particular type of radiation comprising the beam in the particular material used for forming the leaf; plus the total path length across all the air gaps in the area of the joint at the edges between two leaves defined between lines drawn across the peaks of adjacent sawtooth edges; plus at least one half of the length or period of a single sawtooth. To accomplish this, in accordance with the method of the invention, the penetration depth of the particular type of radiation in the particular material to be used for the collimator leaf is first measured. Then the distance or gap between adjoining or abutting leaves is selected, and the ratio of this distance to the height of the sawteeth is selected. Finally the number of air gaps through which the radiation will pass between sawteeth is determined by selecting the number of sawteeth to be formed in the joint. The measurement and/or selection of these parameters will permit one to determine the minimum width of the leaf which is required to prevent passage of the beam through the sawtooth joint.
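
The selection rule in the claim is a simple sum of three lengths. As a sketch (the numeric values are illustrative, not from the patent; all lengths must share one unit):

```python
def minimum_leaf_width(penetration_depth, gap_width, n_gaps,
                       sawtooth_period):
    """Minimum leaf width per the disclosed method: beam penetration
    depth in the leaf material, plus the total air-gap path across the
    sawtooth joint, plus at least half of one sawtooth period."""
    return penetration_depth + n_gaps * gap_width + 0.5 * sawtooth_period
```

For example, a 2.0 mm penetration depth, four 0.1 mm air gaps, and a 1.0 mm sawtooth period give a minimum leaf width of 2.9 mm.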

  18. Minimum viewing angle for visually guided ground speed control in bumblebees.

    PubMed

    Baird, Emily; Kornfeldt, Torill; Dacke, Marie

    2010-05-01

    To control flight, flying insects extract information from the pattern of visual motion generated during flight, known as optic flow. To regulate their ground speed, insects such as honeybees and Drosophila hold the rate of optic flow in the axial direction (front-to-back) constant. A consequence of this strategy is that its performance varies with the minimum viewing angle (the deviation from the frontal direction of the longitudinal axis of the insect) at which changes in axial optic flow are detected. The greater this angle, the later changes in the rate of optic flow, caused by changes in the density of the environment, will be detected. The aim of the present study is to examine the mechanisms of ground speed control in bumblebees and to identify the extent of the visual range over which optic flow for ground speed control is measured. Bumblebees were trained to fly through an experimental tunnel consisting of parallel vertical walls. Flights were recorded when (1) the distance between the tunnel walls was either 15 or 30 cm, (2) the visual texture on the tunnel walls provided either strong or weak optic flow cues and (3) the distance between the walls changed abruptly halfway along the tunnel's length. The results reveal that bumblebees regulate ground speed using optic flow cues and that changes in the rate of optic flow are detected at a minimum viewing angle of 23-30 deg., with a visual field that extends to approximately 155 deg. By measuring optic flow over a visual field that has a low minimum viewing angle, bumblebees are able to detect and respond to changes in the proximity of the environment well before they are encountered.

  19. REDSHIFT-INDEPENDENT DISTANCES IN THE NASA/IPAC EXTRAGALACTIC DATABASE: METHODOLOGY, CONTENT, AND USE OF NED-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steer, Ian; Madore, Barry F.; Mazzarella, Joseph M.

    Estimates of galaxy distances based on indicators that are independent of cosmological redshift are fundamental to astrophysics. Researchers use them to establish the extragalactic distance scale, to underpin estimates of the Hubble constant, and to study peculiar velocities induced by gravitational attractions that perturb the motions of galaxies with respect to the “Hubble flow” of universal expansion. In 2006 the NASA/IPAC Extragalactic Database (NED) began making available a comprehensive compilation of redshift-independent extragalactic distance estimates. A decade later, this compendium of distances (NED-D) now contains more than 100,000 individual estimates based on primary and secondary indicators, available for more than 28,000 galaxies, and compiled from over 2000 references in the refereed astronomical literature. This paper describes the methodology, content, and use of NED-D, and addresses challenges to be overcome in compiling such distances. Currently, 75 different distance indicators are in use. We include a figure that facilitates comparison of the indicators with significant numbers of estimates in terms of the minimum, 25th percentile, median, 75th percentile, and maximum distances spanned. Brief descriptions of the indicators, including examples of their use in the database, are given in an appendix.
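The five statistics reported in the NED-D comparison figure (minimum, 25th percentile, median, 75th percentile, maximum) form a standard five-number summary; a minimal sketch with hypothetical distance values, not NED-D's actual pipeline:

```python
import numpy as np

def five_number_summary(distances_mpc):
    """Minimum, 25th percentile, median, 75th percentile, and maximum of a
    set of distance estimates (e.g. all estimates for one indicator)."""
    d = np.asarray(distances_mpc, dtype=float)
    return (d.min(), np.percentile(d, 25), np.median(d),
            np.percentile(d, 75), d.max())
```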

  20. Potential Seasonal Terrestrial Water Storage Monitoring from GPS Vertical Displacements: A Case Study in the Lower Three-Rivers Headwater Region, China

    PubMed Central

    Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang

    2016-01-01

    This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual equivalent water height (EWH) changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2–3.9 cm and 4.8–5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8–24.7 cm and a minimum of 3.1–6.9 cm. PMID:27657064

  1. Classification of resistance to passive motion using minimum probability of error criterion.

    PubMed

    Chan, H C; Manry, M T; Kondraske, G V

    1987-01-01

    Neurologists diagnose many muscular and nerve disorders by classifying the resistance to passive motion of patients' limbs. Over the past several years, a computer-based instrument has been developed for automated measurement and parameterization of this resistance. In the device, a voluntarily relaxed lower extremity is moved at constant velocity by a motorized driver. The torque exerted on the extremity by the machine is sampled, along with the angle of the extremity. In this paper a computerized technique is described for classifying a patient's condition as 'Normal' or 'Parkinson disease' (rigidity), from the torque versus angle curve for the knee joint. A Legendre polynomial, fit to the curve, is used to calculate a set of eight normally distributed features of the curve. The minimum probability of error approach is used to classify the curve as being from a normal or Parkinson disease patient. Data collected from 44 different subjects were processed and the results were compared with an independent physician's subjective assessment of rigidity. There is agreement in better than 95% of the cases when all of the features are used.
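The decision rule the abstract describes, minimum probability of error over normally distributed features, is the classical Bayes classifier. The sketch below assumes independent Gaussian features and uses hypothetical class parameters; it is not the authors' implementation.

```python
import math

def gaussian_log_pdf(x, mean, var):
    """Log density of a univariate normal distribution."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def classify(features, class_params, priors):
    """Minimum-probability-of-error rule: choose the class with the highest
    posterior score (log prior + summed log likelihoods).
    class_params[c] is a list of (mean, variance) pairs, one per feature;
    features are treated as conditionally independent for simplicity."""
    best_class, best_score = None, -math.inf
    for c, params in class_params.items():
        score = math.log(priors[c]) + sum(
            gaussian_log_pdf(x, m, v) for x, (m, v) in zip(features, params))
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```

With equal misclassification costs, picking the maximum-posterior class in this way minimizes the probability of error.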

  2. Methodological basis for the optimization of a marine sea-urchin embryo test (SET) for the ecological assessment of coastal water quality.

    PubMed

    Saco-Alvarez, Liliana; Durán, Iria; Ignacio Lorenzo, J; Beiras, Ricardo

    2010-05-01

    The sea-urchin embryo test (SET) has frequently been used as a rapid, sensitive, and cost-effective biological tool for marine monitoring worldwide, but its widespread routine use has been hampered by the need for a sensitive, objective, and automatically readable endpoint, for stricter quality control to guarantee optimum handling and biological material, and for the identification of confounding factors that interfere with the response. Size increase in a minimum of n=30 individuals per replicate, whether normal larvae or earlier developmental stages, was preferred as the test endpoint over observer-dependent, discontinuous responses. Control size increase after 48 h of incubation at 20 degrees C must meet an acceptability criterion of 218 microm. To avoid false positives, minimum values of 32 per thousand salinity, pH 7, and 2 mg/L oxygen, and a maximum of 40 microg/L NH(3) (NOEC), are required in the incubation media. For in situ testing, size increase rates must be corrected on a degree-day basis using 12 degrees C as the developmental threshold. Copyright 2010 Elsevier Inc. All rights reserved.
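The degree-day correction mentioned for in situ testing can be sketched as follows. The function signature and units are illustrative assumptions; only the 12 degrees C developmental threshold comes from the protocol above.

```python
def degree_day_corrected_rate(size_increase_um, mean_temp_c, hours,
                              threshold_c=12.0):
    """Size-increase rate normalized per degree-day above the 12 degC
    developmental threshold.  Returns micrometres per degree-day, or
    None when the mean temperature does not exceed the threshold."""
    if mean_temp_c <= threshold_c:
        return None  # no development expected at or below the threshold
    degree_days = (mean_temp_c - threshold_c) * hours / 24.0
    return size_increase_um / degree_days
```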

  3. The "critical limits for crystallinity" in nanoparticles of the elements: A combined thermodynamic and crystallographic critique

    NASA Astrophysics Data System (ADS)

    Pelegrina, J. L.; Guillermet, A. Fernández

    2018-03-01

    The theme of the present work is the procedure for evaluating the minimum size for the stability of a crystalline particle with respect to the same group of atoms but in the amorphous state. A key goal of the study is the critical analysis of an extensively quoted paper by F.G. Shi [J. Mater. Res. 9 (1994) 1307-1313], who presented a criterion for evaluating a "crystallinity distance" (h) through its relation with the "critical diameter" (dC) of a particle, i.e., the diameter below which no particles with the crystalline structure are expected to exist at finite temperatures. Key assumptions of Shi's model are a direct proportionality relation between h and dC, and a prescription for estimating h from crystallographic information. In the present work the accuracy of the Shi model is assessed with particular reference to nanoparticles of the elements. To this end, an alternative way to obtain h that better realizes Shi's idea of this quantity as "the height of a monolayer of atoms on the bulk crystal surface" is explored. Moreover, a thermodynamic calculation of dC, which involves a description of the bulk and surface contributions to the crystalline/amorphous relative phase stability for nanoparticles, is performed. It is shown that the Shi equation does not account for the key features of the h vs. dC relation established in the current work. Consequently, it is concluded that the parameter h, obtained only from information about the structure of the crystalline phase, does not provide an accurate route to estimate the quantity dC. In fact, a key result of the current study is that dC crucially depends on the relation between the bulk and surface contributions to the crystalline/amorphous relative thermodynamic stability.

  4. Population genetic structure and phylogeographical pattern of a relict tree fern, Alsophila spinulosa (Cyatheaceae), inferred from cpDNA atpB-rbcL intergenic spacers.

    PubMed

    Su, Yingjuan; Wang, Ting; Zheng, Bo; Jiang, Yu; Chen, Guopei; Gu, Hongya

    2004-11-01

    Sequences of chloroplast DNA (cpDNA) atpB-rbcL intergenic spacers of individuals of a tree fern species, Alsophila spinulosa, collected from ten relict populations distributed in the Hainan and Guangdong provinces, and the Guangxi Zhuang region in southern China, were determined. Sequence length varied from 724 bp to 731 bp, showing length polymorphism, and base composition showed a high A+T content, between 63.17% and 63.95%. Sequences were neutral in terms of evolution (Tajima's criterion D=-1.01899, P>0.10 and Fu and Li's test D*=-1.39008, P>0.10; F*=-1.49775, P>0.10). A total of 19 haplotypes were identified based on nucleotide variation. High levels of haplotype diversity (h=0.744) and nucleotide diversity (Dij=0.01130) were detected in A. spinulosa, probably associated with its long evolutionary history, which has allowed the accumulation of genetic variation within lineages. Both the minimum spanning network and neighbor-joining trees generated for haplotypes demonstrated that current populations of A. spinulosa existing in Hainan, Guangdong, and Guangxi were subdivided into two geographical groups. An analysis of molecular variance indicated that most of the genetic variation (93.49%, P<0.001) was partitioned among regions. Wright's isolation by distance model was not supported across extant populations. Reduced gene flow by the Qiongzhou Strait and inbreeding may result in the geographical subdivision between the Hainan and Guangdong + Guangxi populations (FST=0.95, Nm=0.03). Within each region, the star-like pattern of phylogeography of haplotypes implied a population expansion process during evolutionary history. Gene genealogies together with coalescent theory provided significant information for uncovering the phylogeography of A. spinulosa.
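The haplotype diversity h reported above is conventionally Nei's gene diversity. A minimal sketch with hypothetical haplotype counts follows; the formula is standard, but treating it as the authors' exact estimator is an assumption.

```python
def haplotype_diversity(counts):
    """Nei's haplotype (gene) diversity:
    h = n / (n - 1) * (1 - sum of squared haplotype frequencies),
    for a sample of n sequences with the given per-haplotype counts."""
    n = sum(counts)
    if n < 2:
        return 0.0
    freq_sq = sum((c / n) ** 2 for c in counts)
    return n / (n - 1) * (1.0 - freq_sq)
```

Four sequences that are all distinct haplotypes give h = 1.0; four copies of a single haplotype give h = 0.0.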

  5. Limited variance control in statistical low thrust guidance analysis. [stochastic algorithm for SEP comet Encke flyby mission

    NASA Technical Reports Server (NTRS)

    Jacobson, R. A.

    1975-01-01

    Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.

  6. Computer program GRADE 2 for the design and analysis of heat-pipe wicks

    NASA Technical Reports Server (NTRS)

    Eninger, J. E.; Edwards, D. K.

    1976-01-01

    This user's manual describes the revised version of the computer program GRADE(1), which designs and analyzes heat pipes with graded-porosity fibrous slab wicks. The revisions are: (1) automatic calculation of the minimum condenser-end stress that will not result in an excess-liquid puddle or a liquid slug in the vapor space; (2) numerical solution of the equations describing flow in the circumferential grooves to assess the burnout criterion; (3) calculation of the contribution of excess liquid in fillets and puddles to the heat transport; (4) calculation of the effect of partial saturation on the wick performance; and (5) calculation of the effect of vapor flow, which includes viscous-inertial interactions.

  7. Transition of planar Couette flow at infinite Reynolds numbers.

    PubMed

    Itano, Tomoaki; Akinaga, Takeshi; Generalis, Sotos C; Sugihara-Seki, Masako

    2013-11-01

    An outline of the state space of planar Couette flow at high Reynolds numbers (Re < 10^5) is investigated via a variety of efficient numerical techniques. It is verified from nonlinear analysis that the lower branch of the hairpin vortex state (HVS) asymptotically approaches the primary (laminar) state with increasing Re. It is also predicted that the lower branch of the HVS at high Re belongs to the stability boundary that initiates a transition to turbulence, and that one of the unstable manifolds of the lower branch of the HVS lies on the boundary. These facts suggest that the HVS may provide a criterion for estimating the minimum perturbation giving rise to a transition to turbulent states in the infinite-Re limit.

  8. Scaling laws for ignition at the National Ignition Facility from first principles.

    PubMed

    Cheng, Baolian; Kwan, Thomas J T; Wang, Yi-Ming; Batha, Steven H

    2013-10-01

    We have developed an analytical physics model from fundamental physics principles and used the reduced one-dimensional model to derive a thermonuclear ignition criterion and implosion energy scaling laws applicable to inertial confinement fusion capsules. The scaling laws relate the fuel pressure and the minimum implosion energy required for ignition to the peak implosion velocity and the equation of state of the pusher and the hot fuel. When a specific low-entropy adiabat path is used for the cold fuel, our scaling laws recover the ignition threshold factor dependence on the implosion velocity, but when a high-entropy adiabat path is chosen, the model agrees with recent measurements.

  9. Large space structure damping design

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Haviland, J. K.

    1983-01-01

    Several FORTRAN subroutines and programs were developed which compute complex eigenvalues of a damped system using different approaches, and which rescale mode shapes to unit generalized mass and make rigid-body modes orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spillover for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control quasi-synthesis. Methods to control vibrations in large space structures are reviewed and analyzed. The resulting prototype, using an electromagnetic actuator, is described.

  10. Creep rupture of polymer-matrix composites

    NASA Technical Reports Server (NTRS)

    Brinson, H. F.; Morris, D. H.; Griffith, W. I.

    1981-01-01

    The time-dependent creep-rupture process in graphite-epoxy laminates is examined as a function of temperature and stress level. Moisture effects are not considered. An accelerated characterization method of composite-laminate viscoelastic modulus and strength properties is reviewed. It is shown that lamina-modulus master curves can be obtained using a minimum of normally performed quality-control-type testing. Lamina-strength master curves, obtained by assuming a constant-strain-failure criterion, are presented along with experimental data, and reasonably good agreement is shown to exist between the two. Various phenomenological delayed failure models are reviewed and two (the modified rate equation and the Larson-Miller parameter method) are compared to creep-rupture data with poor results.

  11. Optimal design of gas adsorption refrigerators for cryogenic cooling

    NASA Technical Reports Server (NTRS)

    Chan, C. K.

    1983-01-01

    The design of gas adsorption refrigerators used for cryogenic cooling in the temperature range of 4K to 120K was examined. The functional relationships among the power requirement for the refrigerator, the system mass, the cycle time and the operating conditions were derived. It was found that the precool temperature, the temperature dependent heat capacities and thermal conductivities, and pressure and temperature variations in the compressors have important impacts on the cooling performance. Optimal designs based on a minimum power criterion were performed for four different gas adsorption refrigerators and a multistage system. It is concluded that the estimates of the power required and the system mass are within manageable limits in various spacecraft environments.

  12. The origin of facet selectivity and alignment in anatase TiO2 nanoparticles in electrolyte solutions: implications for oriented attachment in metal oxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sushko, M. L.; Rosso, K. M.

    Atomic-to-mesoscale simulations were used to reveal the origin of oriented attachment between anatase TiO2 nanoparticles in aqueous HCl solutions. Analysis of the distance and pH dependence of interparticle interactions demonstrates that ion correlation forces are responsible for facet-specific attraction and rotation into lattice co-alignment at long-range. These forces give rise to a metastable solvent separated capture minimum on the disjoining pressure-distance curve, with the barrier to attachment largely due to steric hydration forces from structured intervening solvent.

  13. A study of polaritonic transparency in couplers made from excitonic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Mahi R.; Racknor, Chris

    2015-03-14

    We have studied light-matter interaction in hybrid systems of quantum dots and an exciton-polaritonic coupler. The coupler is made by embedding two slabs of an excitonic material (CdS) into a host excitonic material (ZnO). An ensemble of non-interacting quantum dots is doped into the coupler. The bound exciton-polariton states in the coupler are calculated using the transfer matrix method in the presence of the coupling between the external light (photons) and excitons. These bound exciton-polaritons interact with the excitons present in the quantum dots, with the coupler acting as a reservoir. The Schrödinger equation method has been used to calculate the absorption coefficient in the quantum dots. It is found that when the distance between the two slabs (CdS) is greater than the decay length of the evanescent waves, the absorption spectrum has two peaks and one minimum. The minimum corresponds to a transparent state in the system. However, when the distance between the slabs is smaller than the decay length of the evanescent waves, the absorption spectrum has three peaks and two transparent states. In other words, one transparent state can be switched to two transparent states when the distance between the two layers is modified. This could be achieved by applying stress and strain fields. It is also found that the transparent states can be switched on and off by applying an external control laser field.

  14. Benthic macroinvertebrate field sampling effort required to ...

    EPA Pesticide Factsheets

    This multi-year pilot study evaluated a proposed field method for its effectiveness in the collection of a benthic macroinvertebrate sample adequate for use in the condition assessment of streams and rivers in the Neuquén Province, Argentina. A total of 13 sites, distributed across three rivers, were sampled. At each site, benthic macroinvertebrates were collected at 11 transects. Each sample was processed independently in the field and laboratory. Based on a literature review and resource considerations, the collection of a minimum of 300 organisms at each site was determined to be necessary to support a robust condition assessment, and was therefore selected as the criterion for judging the adequacy of the method. This targeted number of organisms was collected at all sites, at a minimum, when collections from all 11 transects were combined. Subsequent bootstrapping analysis of the data was used to estimate whether collecting at fewer transects would reach the minimum target number of organisms for all sites. In a subset of sites, the total number of organisms frequently fell below the target when fewer than 11 transect collections were combined. Site conditions where <300 organisms might be collected are discussed. These preliminary results suggest that the proposed field method results in a sample that is adequate for robust condition assessment of the rivers and streams of interest. When data become available from a broader range of sites, the adequacy of the field

  15. Maximum likelihood estimation for the double-count method with independent observers

    USGS Publications Warehouse

    Manly, Bryan F.J.; McDonald, Lyman L.; Garner, Gerald W.

    1996-01-01

    Data collected under a double-count protocol during line transect surveys were analyzed using new maximum likelihood methods combined with Akaike's information criterion to provide estimates of the abundance of polar bears (Ursus maritimus Phipps) in a pilot study off the coast of Alaska. Visibility biases were corrected by modeling the detection probabilities using logistic regression functions. Independent variables that influenced the detection probabilities included the perpendicular distance of bear groups from the flight line and the number of individuals in the groups. A series of models was considered, varying from (1) the simplest, in which the probability of detection was the same for both observers and was not affected by either distance from the flight line or group size, to (2) models in which the probability of detection differs between the two observers and depends on both distance from the transect and group size. Estimation procedures are developed for the case when additional variables may affect detection probabilities. The methods are illustrated using data from the pilot polar bear survey, and some recommendations are given for the design of a survey over the larger Chukchi Sea between Russia and the United States.
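A hedged sketch of the detection model described above: a logistic regression in perpendicular distance and group size for each observer, with the two observers combined under the independence assumption. Coefficient names and values are illustrative, not estimates from the survey.

```python
import math

def detection_prob(distance, group_size, beta0, beta_dist, beta_group):
    """Single-observer detection probability from a logistic model."""
    eta = beta0 + beta_dist * distance + beta_group * group_size
    return 1.0 / (1.0 + math.exp(-eta))

def combined_detection_prob(distance, group_size, obs1, obs2):
    """Probability that at least one of two independent observers detects
    a group -- the double-count correction for visibility bias."""
    p1 = detection_prob(distance, group_size, *obs1)
    p2 = detection_prob(distance, group_size, *obs2)
    return 1.0 - (1.0 - p1) * (1.0 - p2)
```

In the full maximum likelihood analysis, the beta coefficients are estimated from the counts of groups seen by one observer, the other, or both, and candidate models are compared with Akaike's information criterion.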

  16. 75 FR 71148 - Solicitation for a Cooperative Agreement-Production of Seven Live Satellite/Internet Broadcasts

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... defined as training/ education transpiring between trainers and facilitators at one location and... NIC's distance learning administrator (DLA) on program design, program coordination, design and field... activities that support each broadcast. A minimum of one face-to-face planning session will be held for each...

  17. 76 FR 295 - Proposed Amendments to the Water Quality Regulations, Water Code and Comprehensive Plan To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-04

    ... endangered (T&E) species. Minimum setbacks from water bodies, wetlands, surface water supply intakes and water supply reservoirs at distances specified in the regulations, and from occupied homes, public buildings, public roads, public water supply wells, and domestic water supply wells as provided by...

  18. 24 CFR 3280.611 - Vents and venting.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... drain piping for each toilet shall be vented by a 1-1/2 inch minimum diameter vent or rectangular vent of..., connected to the toilet drain by one of the following methods: (i) A 1-1/2 inch diameter (min.) individual vent pipe or equivalent directly connected to the toilet drain within the distance allowed in § 3280...

  19. 75 FR 70854 - Harmonization of Various Airworthiness Standards for Transport Category Airplanes-Flight Rules

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-19

    ..., deploy speed brakes) to stop the airplane within the accelerate stop distance. It also means the minimum... flight diving speed. List of Subjects in 14 CFR Part 25 Aircraft, Aviation safety, Reporting and... transport category airplanes. This action would harmonize the requirements for takeoff speeds, static...

  20. 40 CFR 86.436-78 - Additional service accumulation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... minimum test distance and at the useful life, and, (3) The results of the half life emission tests, when... Regulations for 1978 and Later New Motorcycles, General Provisions § 86.436-78 Additional service accumulation. (a) Additional service up to the useful life will be accumulated under the same conditions as the...
