1978-03-01
Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials. Thesis, AFIT/GAE. Derives an equation for the risk of rupture of a unidirectionally laminated composite subjected to pure bending, and notes that the equation can be simplified further.
Ordonez, Alejandro; Svenning, Jens-Christian
2017-02-23
Current and historical environmental conditions are known to jointly determine contemporary species distributions and richness patterns. However, whether historical dynamics in species distributions and richness translate to functional diversity patterns remains, for the most part, unknown. The geographic patterns of plant functional space size (richness) and packing (dispersion) for six widely distributed orders of European angiosperms were estimated using atlas distribution data and trait information. Then the relative importance of late-Quaternary glacial-interglacial climate change and contemporary environmental factors (climate, productivity, and topography) as determinants of the functional diversity of the evaluated orders was assessed. Functional diversity patterns of all evaluated orders exhibited prominent glacial-interglacial climate change imprints, complementing the influence of contemporary environmental conditions. The importance of Quaternary glacial-interglacial climate change factors was comparable to that of contemporary environmental factors across the evaluated orders. Therefore, high long-term paleoclimate variability has imposed consistent supplementary constraints on the functional diversity of multiple plant groups, a legacy that may permeate to ecosystem functioning and resilience. These findings suggest that strong near-future anthropogenic climate change may elicit long-term disequilibria in plant functional diversity.
Renormalizability of quasiparton distribution functions
Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei; ...
2017-11-21
Quasi-parton distribution functions have received much attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions but also can be evaluated by lattice QCD simulations. However, unlike the parton distribution functions, the quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. In this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space, and demonstrate that the power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.
NASA Astrophysics Data System (ADS)
Attard, Phil
The second moment of the Lennard-Jones local field distribution in a hard-sphere fluid is evaluated using the PY3 three-particle distribution function. An approximation due to Lado that avoids the explicit calculation of the latter is shown to be accurate. Partial results are also given for certain cavity-hard-sphere radial distribution functions that occur in a closest particle expansion for the local field.
Brown, Jeffrey S; Holmes, John H; Shah, Kiran; Hall, Ken; Lazarus, Ross; Platt, Richard
2010-06-01
Comparative effectiveness research, medical product safety evaluation, and quality measurement will require the ability to use electronic health data held by multiple organizations. There is no consensus about whether to create regional or national combined (eg, "all payer") databases for these purposes, or distributed data networks that leave most Protected Health Information and proprietary data in the possession of the original data holders. Our objective was to demonstrate functions of a distributed research network that support research needs and also address data holders' concerns about participation. Key design functions included strong local control of data uses and a centralized web-based querying interface. We implemented a pilot distributed research network and evaluated the design considerations, utility for research, and acceptability to data holders of methods for menu-driven querying. We developed and tested a central, web-based interface with supporting network software. Specific functions assessed included query formation and distribution, query execution and review, and aggregation of results. This pilot successfully evaluated temporal trends in medication use and diagnoses at 5 separate sites, demonstrating some of the possibilities of using a distributed research network. The pilot demonstrated the potential utility of the design, which addressed the major concerns of both users and data holders. No serious obstacles were identified that would prevent development of a fully functional, scalable network. Distributed networks are capable of addressing nearly all anticipated uses of routinely collected electronic healthcare data. Distributed networks would obviate the need for centralized databases, thus avoiding numerous obstacles.
2008-01-01
A second objective is to characterize variability in the volume scattering function (VSF) and particle size distribution (PSD) for various optical water types, through analysis of in situ optical measurements and particle size distributions. Approved for public release; distribution unlimited.
Automated generation of influence functions for planar crack problems
NASA Technical Reports Server (NTRS)
Sire, Robert A.; Harris, David O.; Eason, Ernest D.
1989-01-01
A numerical procedure for the generation of influence functions for Mode I planar problems is described. The resulting influence functions are in a form for convenient evaluation of stress-intensity factors for complex stress distributions. Crack surface displacements are obtained by a least-squares solution of the Williams eigenfunction expansion for displacements in a cracked body. Discrete values of the influence function, evaluated using the crack surface displacements, are curve fit using an assumed functional form. The assumed functional form includes appropriate limit-behavior terms for very deep and very shallow cracks. Continuous representation of the influence function provides a convenient means for evaluating stress-intensity factors for arbitrary stress distributions by numerical integration. The procedure is demonstrated for an edge-cracked strip and a radially cracked disk. Comparisons with available published results demonstrate the accuracy of the procedure.
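Once the influence function h(x, a) is available in continuous form, the stress-intensity factor for an arbitrary stress profile reduces to a single quadrature, K_I = ∫₀ᵃ h(x, a) σ(x) dx. A minimal sketch of that evaluation step, using a placeholder influence function with the generic inverse-square-root crack-tip behavior (not the paper's fitted edge-crack form):

```python
import numpy as np
from scipy.integrate import quad

def stress_intensity(h, sigma, a):
    """K_I = integral_0^a h(x, a) * sigma(x) dx by adaptive quadrature."""
    val, _ = quad(lambda x: h(x, a) * sigma(x), 0.0, a, limit=200)
    return val

# Placeholder (hypothetical) influence function, singular at the crack tip
h = lambda x, a: 2.0 / np.sqrt(np.pi * (a - x))
# Linearly decreasing stress distribution over the crack faces (MPa)
sigma = lambda x: 100.0 * (1.0 - x / 0.01)

print(stress_intensity(h, sigma, a=0.01))  # K_I in MPa*sqrt(m) for these units
```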
NASA Astrophysics Data System (ADS)
Dunn, S. M.; Colohan, R. J. E.
1999-09-01
A snow component has been developed for the distributed hydrological model, DIY, using an approach that sequentially evaluates the behaviour of different functions as they are implemented in the model. The evaluation is performed using multi-objective functions to ensure that the internal structure of the model is correct. The development of the model, using a sub-catchment in the Cairngorm Mountains in Scotland, demonstrated that the degree-day model can be enhanced for hydroclimatic conditions typical of those found in Scotland, without increasing meteorological data requirements. An important element of the snow model is a function to account for wind re-distribution. This causes large accumulations of snow in small pockets, which are shown to be important in sustaining baseflows in the rivers during the late spring and early summer, long after the snowpack has melted from the bulk of the catchment. The importance of the wind function would not have been identified using a single objective function of total streamflow to evaluate the model behaviour.
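The degree-day idea underlying the snow component is compact enough to state in a few lines: melt is proportional to the excess of air temperature over a threshold. A minimal sketch with illustrative parameter values (not those calibrated for the Cairngorm sub-catchment):

```python
import numpy as np

def degree_day_melt(temp_c, ddf=3.0, threshold_c=0.0):
    """Daily snowmelt (mm water equivalent per day) from a classic
    degree-day model: melt = DDF * max(T - T0, 0). The factor `ddf`
    (mm/degC/day) and the threshold are illustrative values, not the
    calibrated parameters of the DIY snow component."""
    return ddf * np.maximum(np.asarray(temp_c, dtype=float) - threshold_c, 0.0)

# A week of daily mean air temperatures (degC); melt is zero below threshold
print(degree_day_melt([-2.0, 0.5, 3.2, 5.1, 1.0, -1.5, 4.0]))
```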
Quang V. Cao; Shanna M. McCarty
2006-01-01
Diameter distributions in a forest stand have been successfully characterized by use of the Weibull function. Of special interest are cases where parameters of a Weibull distribution that models a future stand are predicted, either directly or indirectly, from current stand density and dominant height. This study evaluated four methods of predicting the Weibull...
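As an illustration of the underlying technique, fitting a Weibull distribution to stand diameters and reading class proportions off its CDF can be sketched as follows; the data and the fixed location parameter are synthetic stand-ins, not the study's stand tables:

```python
import numpy as np
from scipy import stats

# Synthetic stand: diameters (cm) drawn from a shifted Weibull, standing in
# for field data; the 5 cm location is an assumed merchantability limit.
rng = np.random.default_rng(1)
diameters = 5.0 + 18.0 * rng.weibull(2.2, size=500)

c, loc, scale = stats.weibull_min.fit(diameters, floc=5.0)
print(f"fitted shape={c:.2f}, scale={scale:.2f}")

# Expected proportion of trees in the 10-20 cm diameter class
p = stats.weibull_min.cdf(20.0, c, loc, scale) - stats.weibull_min.cdf(10.0, c, loc, scale)
print(f"P(10 <= D <= 20 cm) = {p:.3f}")
```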
Voltage stress effects on microcircuit accelerated life test failure rates
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1976-01-01
The applicability of Arrhenius and Eyring reaction rate models for describing microcircuit aging characteristics as a function of junction temperature and applied voltage was evaluated. The results of a matrix of accelerated life tests with a single metal oxide semiconductor microcircuit operated at six different combinations of temperature and voltage were used to evaluate the models. A total of 450 devices from two different lots were tested at ambient temperatures between 200 C and 250 C and applied voltages between 5 Vdc and 15 Vdc. A statistical analysis of the surface-related failure data resulted in bimodal failure distributions comprising two lognormal distributions: a 'freak' distribution observed early in time, and a 'main' distribution observed later in time. The Arrhenius model was shown to provide a good description of device aging as a function of temperature at a fixed voltage. The Eyring model also appeared to provide a reasonable description of main-distribution device aging as a function of temperature and voltage. Circuit diagrams are shown.
Evaluation model of distribution network development based on ANP and grey correlation analysis
NASA Astrophysics Data System (ADS)
Ma, Kaiqiang; Zhan, Zhihong; Zhou, Ming; Wu, Qiang; Yan, Jun; Chen, Genyong
2018-06-01
The existing distribution network evaluation system cannot scientifically and comprehensively reflect the development status of the distribution network. Furthermore, the existing evaluation model is monotonous and unsuitable for horizontal comparison of many regional power grids. For these reasons, this paper constructs a universally adaptable evaluation index system and model for distribution network development. First, the evaluation system is organized around power supply capability, power grid structure, technical equipment, intelligence level, power grid efficiency, and power grid development benefit. Then the comprehensive weights of the indices are calculated by combining the AHP with grey correlation analysis. Finally, the index scoring function is obtained by fitting curves to the index evaluation criteria, and a multiply-add operator yields the evaluation result for each sample. An example analysis shows that the model can reflect the development of a distribution network and identify the strengths and weaknesses of that development, and it provides suggestions for the development and construction of distribution networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Nagarajan, Adarsh; Baggu, Murali
This paper evaluated the impact of the smart inverter Volt-VAR function on voltage reduction energy savings and power quality in electric power distribution systems. A methodology to implement voltage reduction optimization was developed by controlling the substation LTC and capacitor banks, and having smart inverters participate through their autonomous Volt-VAR control. In addition, a power quality scoring methodology was proposed and utilized to quantify the effect on power distribution system power quality. All of these methodologies were applied to a utility distribution system model to evaluate the voltage reduction energy savings and power quality under various PV penetrations and smart inverter densities.
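For reference, an autonomous Volt-VAR function is typically a piecewise-linear droop from measured voltage to reactive power. A sketch with illustrative breakpoints (the study's actual curve settings are not reproduced here):

```python
import numpy as np

def volt_var_q(v_pu, v1=0.92, v2=0.98, v3=1.02, v4=1.08, q_max=0.44):
    """Piecewise-linear autonomous Volt-VAR droop: inject q_max (per unit)
    below v1, deadband between v2 and v3, absorb q_max above v4.
    Breakpoints and q_max are illustrative, not the study's settings."""
    return np.interp(v_pu, [v1, v2, v3, v4], [q_max, 0.0, 0.0, -q_max])

for v in (0.90, 0.95, 1.00, 1.05, 1.10):
    print(f"V = {v:.2f} pu -> Q = {volt_var_q(v):+.3f} pu")
```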
Probabilistic structural analysis of a truss typical for space station
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.
1990-01-01
A three-bay space cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties and respective sensitivities associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described by one of several available distributions, such as the Weibull, exponential, normal, or log-normal. The cumulative distribution functions (CDFs) for the response functions considered, and the sensitivities associated with the primitive variables for a given response, are investigated. These sensitivities help in determining the dominant primitive variables for that response.
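NESSUS itself uses fast probability integration rather than brute-force sampling, but the idea of propagating primitive-variable distributions into a response CDF and ranking sensitivities can be sketched with plain Monte Carlo; all variables and distributions below are hypothetical, not the paper's truss model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical primitive variables of a single axial member (not NESSUS inputs)
E = rng.normal(200e9, 10e9, n)                                       # modulus (Pa)
A = rng.lognormal(np.log(1e-3), 0.05, n)                             # area (m^2)
L = rng.normal(2.0, 0.01, n)                                         # length (m)
P = stats.weibull_min.rvs(2.0, scale=1e4, size=n, random_state=rng)  # load (N)

delta = P * L / (E * A)   # response function: axial elongation (m)

# Empirical CDF value and a crude sensitivity ranking via rank correlation
print("P(delta > 0.12 mm) =", np.mean(delta > 0.12e-3))
for name, v in (("E", E), ("A", A), ("L", L), ("P", P)):
    print(name, round(stats.spearmanr(v, delta)[0], 3))
```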
TMD parton distributions based on three-body decay functions in NLL order of QCD
NASA Astrophysics Data System (ADS)
Tanaka, Hidekazu
2015-04-01
Three-body decay functions in space-like parton branches are implemented to evaluate transverse-momentum-dependent (TMD) parton distribution functions in the next-to-leading logarithmic (NLL) order of quantum chromodynamics (QCD). Interference contributions due to the next-to-leading-order terms are taken into account in the evaluation of the transverse momenta in initial-state parton radiations. Some properties of the decay functions are also examined. As an example, the calculated results are compared with those evaluated by an algorithm proposed in [M. A. Kimber, A. D. Martin, and M. G. Ryskin, Eur. Phys. J. C 12, 655 (2000)], [M. A. Kimber, A. D. Martin, and M. G. Ryskin, Phys. Rev. D 63, 114027 (2001)], [G. Watt, A. D. Martin, and M. G. Ryskin, Eur. Phys. J. C 31, 73 (2003)], and [A. D. Martin, M. G. Ryskin, and G. Watt, Eur. Phys. J. C 66, 167 (2010)], in which the TMD parton distributions are defined based on the k_t-factorization method with angular ordering conditions due to interference effects.
A Transparent Framework for Evaluating the Effects of DGPV on Distribution System Costs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horowitz, Kelsey A; Mather, Barry A; Ding, Fei
Assessing the costs and benefits of distributed photovoltaic generators (DGPV) to the power system and electricity consumers is key to determining appropriate policies, tariff designs, and power system upgrades for the modern grid. We advance understanding of this topic by providing a transparent framework, terminology, and data set for evaluating distribution system upgrade costs, line losses, and interconnection costs as a function of DGPV penetration level.
1976-03-01
RESEARCH IN FUNCTIONALLY DISTRIBUTED COMPUTER SYSTEMS DEVELOPMENT. Kansas State University; Virgil Wallentine, Principal Investigator; P. S. Fisher, F. Maryanski. U.S. Army Computer Systems Command, Ft. Belvoir, VA. Approved for public release; distribution unlimited.
NASA Technical Reports Server (NTRS)
Watkins, Charles E; Berman, Julian H
1956-01-01
This report treats the Kernel function of the integral equation that relates a known or prescribed downwash distribution to an unknown lift distribution for harmonically oscillating wings in supersonic flow. The treatment is essentially an extension to supersonic flow of the treatment given in NACA report 1234 for subsonic flow. For the supersonic case the Kernel function is derived by use of a suitable form of acoustic doublet potential which employs a cutoff or Heaviside unit function. The Kernel functions are reduced to forms that can be accurately evaluated by considering the functions in two parts: a part in which the singularities are isolated and analytically expressed, and a nonsingular part which can be tabulated.
A test of the cross-scale resilience model: Functional richness in Mediterranean-climate ecosystems
Wardwell, D.A.; Allen, Craig R.; Peterson, G.D.; Tyre, A.J.
2008-01-01
Ecological resilience has been proposed to be generated, in part, in the discontinuous structure of complex systems. Environmental discontinuities are reflected in discontinuous, aggregated animal body mass distributions. Diversity of functional groups within body mass aggregations (scales) and redundancy of functional groups across body mass aggregations (scales) have been proposed to increase resilience. We evaluate that proposition by analyzing mammalian and avian communities of Mediterranean-climate ecosystems. We first determined that body mass distributions for each animal community were discontinuous. We then calculated the variance in richness of function across aggregations in each community, and compared observed values with distributions created by 1000 simulations using a null of random distribution of function, with the same n, number of discontinuities, and number of functional groups as the observed data. Variance in the richness of functional groups across scales was significantly lower in real communities than in simulations in eight of nine sites. The distribution of function across body mass aggregations in the animal communities we analyzed was non-random, and supports the contentions of the cross-scale resilience model. © 2007 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Gregory; Mistrick, Ph.D., Richard; Lee, Eleanor
2011-01-21
We describe two methods which rely on bidirectional scattering distribution functions (BSDFs) to model the daylighting performance of complex fenestration systems (CFS), enabling greater flexibility and accuracy in evaluating arbitrary assemblies of glazing, shading, and other optically-complex coplanar window systems. Two tools within Radiance enable a) efficient annual performance evaluations of CFS, and b) accurate renderings of CFS despite the loss of spatial resolution associated with low-resolution BSDF datasets for inhomogeneous systems. Validation, accuracy, and limitations of the methods are discussed.
Linking Health Concepts in the Assessment and Evaluation of Water Distribution Systems
ERIC Educational Resources Information Center
Karney, Bryan W.; Filion, Yves R.
2005-01-01
The concept of health is not only a specific criterion for evaluation of water quality delivered by a distribution system but also a suitable paradigm for overall functioning of the hydraulic and structural components of the system. This article views health, despite its complexities, as the only criterion with suitable depth and breadth to allow…
lsjk—a C++ library for arbitrary-precision numeric evaluation of the generalized log-sine functions
NASA Astrophysics Data System (ADS)
Kalmykov, M. Yu.; Sheplyakov, A.
2005-10-01
Generalized log-sine functions Ls_j^(k)(θ) appear in higher-order ε-expansions of different Feynman diagrams. We present an algorithm for the numerical evaluation of these functions for real arguments. This algorithm is implemented as a C++ library with arbitrary-precision arithmetic for integer 0 ⩽ k ⩽ 9 and j ⩾ 2. Some new relations and representations of the generalized log-sine functions are given. Program summary: Title of program: lsjk. Catalogue number: ADVS. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVS. Program obtained from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing terms: GNU General Public License. Computers: all. Operating systems: POSIX. Programming language: C++. Memory required to execute: depending on the complexity of the problem, at least 32 MB RAM recommended. No. of lines in distributed program, including testing data, etc.: 41,975. No. of bytes in distributed program, including testing data, etc.: 309,156. Distribution format: tar.gz. Other programs called: the CLN library for arbitrary-precision arithmetic, version 1.1.5 or greater. External files needed: none. Nature of the physical problem: numerical evaluation of the generalized log-sine functions for real argument in the region 0 < θ < π; these functions appear in Feynman integrals. Method of solution: series representation for real argument in the region 0 < θ < π. Restrictions on the complexity of the problem: limited up to Ls_j^(9)(θ), where j is an arbitrary integer; thus all functions up to weight 12 in the region 0 < θ < π can be evaluated. The algorithm can be extended to higher values of k (k > 9) without modification. Typical running time: depends on the complexity of the problem; see text.
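For orientation, the generalized log-sine functions can be spot-checked in double precision directly from their defining integral, Ls_j^(k)(θ) = −∫₀^θ x^k ln^(j−1−k)|2 sin(x/2)| dx (this assumes the standard definition; it is no substitute for the library's arbitrary-precision series):

```python
import numpy as np
from scipy.integrate import quad

def ls(j, k, theta):
    """Ls_j^(k)(theta) = -int_0^theta x^k * ln^(j-1-k)|2 sin(x/2)| dx,
    evaluated by adaptive quadrature in double precision."""
    f = lambda x: x**k * np.log(abs(2.0 * np.sin(x / 2.0)))**(j - 1 - k)
    val, _ = quad(f, 0.0, theta, limit=200)
    return -val

# Check: Ls_2(theta) is the Clausen function Cl_2, and Cl_2(pi) = 0
print(ls(2, 0, np.pi))   # should be ~0 up to quadrature error
```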
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Saumyadip; Abraham, John
2012-07-01
The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds-averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus of this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and conserved scalars is evaluated. For unimodal distributions, it is observed that functions that use two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher-moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
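The presumed β-PDF step mentioned above is a two-moment closure: given the mean and variance of a scalar bounded on [0, 1], the β parameters are fixed and any source term can be averaged against the resulting PDF. A sketch with a toy source term (the actual chemistry table is far more involved):

```python
import numpy as np
from scipy import stats

def beta_params(mean, var):
    """Two-moment beta-PDF closure for a [0,1]-bounded scalar.
    Requires 0 < var < mean * (1 - mean)."""
    factor = mean * (1.0 - mean) / var - 1.0
    return mean * factor, (1.0 - mean) * factor

a, b = beta_params(mean=0.3, var=0.02)

# Average a toy source term S(Z) over the presumed PDF of mixture fraction Z
source = lambda z: np.exp(-((z - 0.35) / 0.1) ** 2)   # hypothetical profile
s_avg = stats.beta(a, b).expect(source)
print(f"a={a:.2f}, b={b:.2f}, <S> = {s_avg:.4f}")
```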
Influence of emphysema distribution on pulmonary function parameters in COPD patients
Bastos, Helder Novais e; Neves, Inês; Redondo, Margarida; Cunha, Rui; Pereira, José Miguel; Magalhães, Adriana; Fernandes, Gabriela
2015-01-01
OBJECTIVE: To evaluate the impact that the distribution of emphysema has on clinical and functional severity in patients with COPD. METHODS: The distribution of the emphysema was analyzed in COPD patients, who were classified according to a 5-point visual classification system of lung CT findings. We assessed the influence of emphysema distribution type on the clinical and functional presentation of COPD. We also evaluated hypoxemia after the six-minute walk test (6MWT) and determined the six-minute walk distance (6MWD). RESULTS: Eighty-six patients were included. The mean age was 65.2 ± 12.2 years, 91.9% were male, and all but one were smokers (mean smoking history, 62.7 ± 38.4 pack-years). The emphysema distribution was categorized as obviously upper lung-predominant (type 1), in 36.0% of the patients; slightly upper lung-predominant (type 2), in 25.6%; homogeneous between the upper and lower lung (type 3), in 16.3%; and slightly lower lung-predominant (type 4), in 22.1%. Type 2 emphysema distribution was associated with lower FEV1, FVC, FEV1/FVC ratio, and DLCO. In comparison with the type 1 patients, the type 4 patients were more likely to have an FEV1 < 65% of the predicted value (OR = 6.91, 95% CI: 1.43-33.45; p = 0.016), a 6MWD < 350 m (OR = 6.36, 95% CI: 1.26-32.18; p = 0.025), and post-6MWT hypoxemia (OR = 32.66, 95% CI: 3.26-326.84; p = 0.003). The type 3 patients had a higher RV/TLC ratio, although the difference was not significant. CONCLUSIONS: The severity of COPD appears to be greater in type 4 patients, and type 3 patients tend to have greater hyperinflation. The distribution of emphysema could have a major impact on functional parameters and should be considered in the evaluation of COPD patients. PMID:26785956
NASA Astrophysics Data System (ADS)
Prapavat, Viravuth; Schuetz, Rijk; Runge, Wolfram; Beuthan, Juergen; Mueller, Gerhard J.
1995-12-01
This paper presents in vitro studies using the scattered intensity distribution obtained by cw transillumination to examine the condition of rheumatic disorders of interphalangeal joints. Inflammation of joints due to rheumatic diseases leads to changes in the synovial membrane, synovia composition and content, and anatomic geometrical variations. Measurements have shown that these rheumatism-induced inflammation processes result in a variation in the optical properties of joint systems. With a scanning system, the interphalangeal joint is transilluminated with diode lasers (670 nm, 905 nm) perpendicular to the joint cavity. The entire distribution of the transmitted radiation intensity was detected with a CCD camera. As a function of the structure and optical properties of the transilluminated volume, we obtained distributions of scattered radiation that show characteristic variations in intensity and shape. Using signal and image processing procedures, we evaluated the measured scattered distributions with regard to their information weight, shape, and scale features. Mathematical methods were used to find classification criteria for determining variations in the joint condition.
ERIC Educational Resources Information Center
Moses, Tim; Liu, Jinghua
2011-01-01
In equating research and practice, equating functions that are smooth are typically assumed to be more accurate than equating functions with irregularities. This assumption presumes that population test score distributions are relatively smooth. In this study, two examples were used to reconsider common beliefs about smoothing and equating. The…
A descriptive model of resting-state networks using Markov chains.
Xie, H; Pal, R; Mitra, S
2016-08-01
Resting-state functional connectivity (RSFC) studies considering pairwise linear correlations have attracted great interest, while the underlying functional network structure still remains poorly understood. To further our understanding of RSFC, this paper presents an analysis of resting-state networks (RSNs) based on steady-state distributions and provides a novel angle from which to investigate the RSFC of multiple functional nodes. This paper evaluates the consistency of two networks based on the Hellinger distance between the steady-state distributions of the inferred Markov chain models. The results show that the generated steady-state distributions of the default mode network have higher consistency across subjects than random nodes drawn from various RSNs.
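The two ingredients of that comparison are easy to state concretely: the steady-state distribution of each inferred chain, and the Hellinger distance between two such distributions. A minimal sketch with hypothetical 3-state transition matrices:

```python
import numpy as np

def steady_state(P):
    """Stationary distribution of a row-stochastic transition matrix P,
    taken as the normalized left eigenvector with eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def hellinger(p, q):
    """Hellinger distance between two discrete distributions."""
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)

# Two hypothetical 3-state chains, e.g., inferred from different subjects
P1 = np.array([[0.8, 0.1, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]])
P2 = np.array([[0.7, 0.2, 0.1], [0.3, 0.5, 0.2], [0.1, 0.2, 0.7]])
print(hellinger(steady_state(P1), steady_state(P2)))
```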
An evaluation of procedures to estimate monthly precipitation probabilities
NASA Astrophysics Data System (ADS)
Legates, David R.
1991-01-01
Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme value, and transform-normal probability density functions) are comparatively examined to determine their ability to accurately represent variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution describes the 'true' precipitation distribution more adequately than any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
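A sketch of the winning approach's two steps, Box-Cox transformation to near-normality followed by a normal fit, applied to synthetic monthly totals (not the 253-station data):

```python
import numpy as np
from scipy import special, stats

# Synthetic monthly precipitation totals (mm) standing in for station data
rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=40.0, size=100)

transformed, lam = stats.boxcox(precip)          # transform to near-normality
mu, sigma = transformed.mean(), transformed.std(ddof=1)

# Non-exceedance probability of a 150 mm month under the fitted model
z = special.boxcox(150.0, lam)
print(f"lambda = {lam:.3f}, P(X <= 150 mm) = {stats.norm.cdf(z, mu, sigma):.3f}")
```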
Gothe, Emma; Sandin, Leonard; Allen, Craig R.; Angeler, David G.
2014-01-01
The distribution of functional traits within and across spatiotemporal scales has been used to quantify and infer the relative resilience across ecosystems. We use explicit spatial modeling to evaluate within- and cross-scale redundancy in headwater streams, an ecosystem type with a hierarchical and dendritic network structure. We assessed the cross-scale distribution of functional feeding groups of benthic invertebrates in Swedish headwater streams during two seasons. We evaluated functional metrics, i.e., Shannon diversity, richness, and evenness, and the degree of redundancy within and across modeled spatial scales for individual feeding groups. We also estimated the correlates of environmental versus spatial factors of both functional composition and the taxonomic composition of functional groups for each spatial scale identified. Measures of functional diversity and within-scale redundancy of functions were similar during both seasons, but both within- and cross-scale redundancy were low. This apparent low redundancy was partly attributable to a few dominant taxa explaining the spatial models. However, rare taxa with stochastic spatial distributions might provide additional information and should therefore be considered explicitly for complementing future resilience assessments. Otherwise, resilience may be underestimated. Finally, both environmental and spatial factors correlated with the scale-specific functional and taxonomic composition. This finding suggests that resilience in stream networks emerges as a function of not only local conditions but also regional factors such as habitat connectivity and invertebrate dispersal.
Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels
NASA Astrophysics Data System (ADS)
Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan
2017-12-01
This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with an amplify-and-forward relaying scheme. The RF link undergoes Nakagami-m fading, and the Exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF), and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. For various modulation techniques, we derive the average bit error rate (BER) based on the Meijer G-function. Evaluations and simulations of the system performance are provided, and the aperture-averaging effect is discussed as well.
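Since the FSO hop's outage contribution is just the Exponentiated Weibull CDF evaluated at the SNR threshold, that piece is simple to state; the parameter values below are illustrative, not fitted turbulence parameters:

```python
import numpy as np

def exp_weibull_cdf(x, alpha, beta, eta):
    """Exponentiated Weibull CDF, F(x) = [1 - exp(-(x/eta)^beta)]^alpha,
    the turbulence model adopted above for the FSO link."""
    return (1.0 - np.exp(-(np.asarray(x, dtype=float) / eta) ** beta)) ** alpha

# Outage probability of the FSO link alone: P(SNR < threshold)
snr_threshold = 2.0
print(exp_weibull_cdf(snr_threshold, alpha=2.5, beta=1.8, eta=5.0))
```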
Distributed Evaluation Functions for Fault Tolerant Multi-Rover Systems
NASA Technical Reports Server (NTRS)
Agogino, Adrian; Turner, Kagan
2005-01-01
The ability to evolve fault-tolerant control strategies for large collections of agents is critical to the successful application of evolutionary strategies to domains where failures are common. Furthermore, while evolutionary algorithms have been highly successful in discovering single-agent control strategies, extending such algorithms to multiagent domains has proven to be difficult. In this paper we present a method for shaping evaluation functions for agents that yields control strategies that both tolerate different types of failures and lead to coordinated behavior in a multi-agent setting. This method relies neither on a centralized strategy (susceptible to single points of failure) nor on a distributed strategy in which each agent uses a system-wide evaluation function (severe credit assignment problem). In a multi-rover problem, we show that agents using our agent-specific evaluation perform up to 500% better than agents using the system evaluation. In addition we show that agents are still able to maintain a high level of performance when up to 60% of the agents fail due to actuator, communication, or controller faults.
Cumulative Poisson Distribution Program
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert
1990-01-01
Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-squared (χ²) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
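CUMPOIS itself is written in C and its exact overflow-avoidance scheme is not described here, but the same concern can be illustrated by summing the Poisson terms in log space, together with the chi-squared identity the abstract mentions:

```python
import math

def cum_poisson(k, lam):
    """P(X <= k) for X ~ Poisson(lam), summed in log space so that large
    lam never forms exp(lam) directly (same concern as CUMPOIS, though
    this sketch is not the NPO-17714 algorithm itself)."""
    log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1) for i in range(k + 1)]
    m = max(log_terms)                      # log-sum-exp stabilization
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

# Chi-squared cdf with even dof 2n at x equals 1 - P(Y <= n-1), Y ~ Poisson(x/2)
x, n = 10.0, 3                              # chi-squared with 6 degrees of freedom
print(1.0 - cum_poisson(n - 1, x / 2.0))    # ~0.8753
```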
Flight Crew Workload Evaluation Based on the Workload Function Distribution Method.
Zheng, Yiyuan; Lu, Yanyu; Jie, Yuwen; Fu, Shan
2017-05-01
The minimum flight crew on the flight deck should be established according to the workload of individual crewmembers. Typical workload measures are of three types: subjective rating scales, task performance, and psychophysiological measures. However, all of these measures have their own limitations. To reflect flight crew workload more specifically and comprehensively within the flight environment, and to comply more directly with airworthiness regulations, the Workload Function Distribution Method, which combines six basic workload functions, was proposed. The analysis was based on conditions differing in the number of workload functions involved, and each condition was analyzed from two aspects: overall proportion and effective proportion. Three types of approach tasks were used in this study, and the NASA-TLX scale was administered for comparison. Neither the one-function condition nor the two-function condition produced the same results as NASA-TLX, whereas both the three-function and the four- to six-function conditions agreed with NASA-TLX. Moreover, within the four- to six-function conditions the two proportions differed: the overall proportion was not significant, while the effective proportions were. The results show that conditions with one or two functions seemed to have no influence on workload, while executing three functions or four to six functions had an impact on workload. Furthermore, effective proportions of workload functions indicated workload more precisely than the overall proportions, especially in conditions with multiple functions. Zheng Y, Lu Y, Jie Y, Fu S. Flight crew workload evaluation based on the workload function distribution method. Aerosp Med Hum Perform. 2017; 88(5):481-486.
Kratzer, Markus; Lasnik, Michael; Röhrig, Sören; Teichert, Christian; Deluca, Marco
2018-01-11
Lead zirconate titanate (PZT) is one of the prominent materials used in polycrystalline piezoelectric devices. Since the ferroelectric domain orientation is the most important parameter affecting electromechanical performance, analyzing the domain orientation distribution is of great importance for the development and understanding of improved piezoceramic devices. Here, vector piezoresponse force microscopy (vector-PFM) has been applied to reconstruct the ferroelectric domain orientation distribution function of polished sections of device-ready polycrystalline PZT material. A measurement procedure and a computer program based on the software Mathematica have been developed to automatically evaluate the vector-PFM data and reconstruct the domain orientation function. The method is tested on differently in-plane and out-of-plane poled PZT samples, and the results reveal the expected domain patterns and allow determination of the polarization orientation distribution function with high accuracy.
Research Governance and the Role of Evaluation: A Comparative Study
ERIC Educational Resources Information Center
Molas-Gallart, Jordi
2012-01-01
Through a comparative study of the United Kingdom and Spain, this article addresses the effect of different research governance structures on the functioning and uses of research evaluation. It distinguishes three main evaluation uses: distributive, improvement, and controlling. Research evaluation in the United Kingdom plays important…
Standardization of Broadband UV Measurements for 365 nm LED Sources
Eppeldauer, George P.
2012-01-01
Broadband UV measurements are evaluated for the case where UV-A irradiance meters measure optical radiation from 365 nm UV sources. The CIE standardized rectangular-shape UV-A function can be realized only with large spectral mismatch errors. The spectral power distribution of the 365 nm excitation source is not standardized. Accordingly, the readings made with different types of UV meters, even when they measure the same UV source, can be very different. Available UV detectors and UV meters were measured and evaluated for spectral responsivity. The spectral product of the source distribution and the meter's spectral responsivity was calculated for different combinations to estimate broadband signal-measurement errors. Standardization of both the UV source distribution and the meter spectral responsivity is recommended here so that uniform broadband measurements can be performed with low uncertainty. It is shown which spectral responsivity function(s) new and existing UV irradiance meters need in order to perform low-uncertainty broadband 365 nm measurements. PMID:26900516
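The core issue is that a broadband reading is the integral of the source distribution weighted by the meter's spectral responsivity, so two meters with different responsivities disagree on the same LED. A sketch with purely illustrative Gaussian and rectangular shapes (not measured instrument data):

```python
import numpy as np
from scipy.integrate import trapezoid

wl = np.linspace(300.0, 420.0, 1201)                      # wavelength (nm)
led = np.exp(-0.5 * ((wl - 365.0) / 4.0) ** 2)            # 365 nm LED (assumed shape)
meter_a = np.exp(-0.5 * ((wl - 360.0) / 20.0) ** 2)       # Gaussian responsivity
meter_b = ((wl >= 315.0) & (wl <= 400.0)).astype(float)   # rectangular UV-A band

# Relative readings of the two meters for the same source
ratio = trapezoid(led * meter_a, wl) / trapezoid(led * meter_b, wl)
print(f"meter A / meter B for the same LED: {ratio:.3f}")
```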
Belmaati, Esther Okeke; Iversen, Martin; Kofoed, Klaus F; Nielsen, Michael B; Mortensen, Jann
2012-06-01
Scintigraphy has been used as a tool to detect dysfunction of the lung before and after transplantation. The aims of this study were to evaluate the development of ventilation-perfusion relationships in single lung transplant recipients over the first year, starting at 3 months after transplantation, and to investigate whether scintigraphic findings at 3 months were predictive of the outcome at 12 months in relation to primary graft dysfunction (PGD) and lung function. A retrospective study was carried out on all patients who were prospectively and consecutively referred for a routine lung scintigraphy procedure 3 months after single lung transplantation (SLTX). A total of 41 patients were included in the study: 20 women and 21 men, with ages at transplantation spanning 38-66 years (mean ± SD: 54.2 ± 6.0). Patient records also included lung function tests and chest X-ray images. We found no significant correlation between lung function distribution at 3 months and PGD at 72 h. There was also no significant correlation between PGD scores at 72 h and lung function at 6 and 12 months. The same applied to scintigraphic scores for heterogeneity at 3 months compared with lung function at 6 and 12 months. Fifty-five percent of all patients had decreased ventilation function measured in the period from 6 to 12 months. Forty-nine percent of the patients had normal perfusion evaluations, and 51% had abnormal perfusion evaluations at 3 months. For ventilation evaluations, 72% were normal and 28% were abnormal. There was a significant difference between the normal-versus-abnormal perfusion and ventilation scintigraphic images evaluated in the same patients: ventilation was distributed more homogeneously in the transplanted lung than perfusion in the same lung. The relative distribution of perfusion and ventilation to the transplanted lung of patients with and without a primary diagnosis of fibrosis did not differ significantly. We conclude that PGD defined at 72 h does not lead to recognizable changes in ventilation-perfusion scintigraphy at 3 months, and scintigraphic findings do not correlate with development of lung function in the first 12 months.
NASA Astrophysics Data System (ADS)
Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel
2018-05-01
We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
Efficient Evaluation Functions for Multi-Rover Systems
NASA Technical Reports Server (NTRS)
Agogino, Adrian; Tumer, Kagan
2004-01-01
Evolutionary computation can be a powerful tool in creating a control policy for a single agent receiving local continuous input. This paper extends single-agent evolutionary computation to multi-agent systems, where a collection of agents strives to maximize a global fitness evaluation function that rates the performance of the entire system. This problem is solved in a distributed manner, where each agent evolves its own population of neural networks that are used as the control policies for the agent. Each agent evolves its population using its own agent-specific fitness evaluation function. We propose to create these agent-specific evaluation functions using the theory of collectives to avoid the coordination problem in which each agent evolves a population that maximizes its own fitness function, yet the system as a whole achieves low values of the global fitness function. Instead, we ensure that each fitness evaluation function is both "aligned" with the global evaluation function and "learnable," i.e., the agents can readily see how their behavior affects their evaluation function. We then show how these agent-specific evaluation functions outperform global evaluation methods by up to 600% in a domain where a set of rovers attempts to maximize the amount of information observed while navigating through a simulated environment.
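A minimal sketch of the collectives-style construction in a toy rover/POI domain: the difference evaluation D_i = G(z) − G(z with agent i removed) stays aligned with the global evaluation G while removing terms agent i cannot influence (a generic illustration of the idea, not the authors' implementation):

```python
import math

# Toy domain: each rover observes one point of interest (POI); a POI's value
# grows with diminishing returns in the number of rovers observing it.
def global_eval(choices, n_pois):
    counts = [0] * n_pois
    for poi in choices.values():
        counts[poi] += 1
    return sum(math.sqrt(c) for c in counts)

def difference_eval(rover, choices, n_pois):
    """D_i = G(z) - G(z_-i): aligned with G by construction, and more
    'learnable' because terms untouched by rover i cancel out."""
    without_i = {r: p for r, p in choices.items() if r != rover}
    return global_eval(choices, n_pois) - global_eval(without_i, n_pois)

choices = {"r1": 0, "r2": 0, "r3": 1}   # hypothetical rover-to-POI assignment
print(global_eval(choices, 2), difference_eval("r2", choices, 2))
```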
Evaluation of probabilistic forecasts with the scoringRules package
NASA Astrophysics Data System (ADS)
Jordan, Alexander; Krüger, Fabian; Lerch, Sebastian
2017-04-01
Over the last decades, probabilistic forecasts in the form of predictive distributions have become popular in many scientific disciplines. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way, in order to better understand sources of prediction errors and to improve the models. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. In coherence with decision-theoretical principles, they allow comparison of alternative models, a crucial ability given the variety of theories, data sources, and statistical specifications that are available in many situations. This contribution presents the software package scoringRules for the statistical programming language R, which provides functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. For univariate variables, two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically but are indirectly described through a sample of simulation draws. For example, ensemble weather forecasts take this form. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices. Recent developments include the addition of scoring rules to evaluate multivariate forecast distributions. The use of the scoringRules package is illustrated in an example on post-processing ensemble forecasts of temperature.
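scoringRules is an R package, so as a language-neutral illustration, here is the standard sample estimator of the continuous ranked probability score for an ensemble forecast, CRPS = mean|x_i − y| − ½ mean|x_i − x_j|:

```python
import numpy as np

def crps_ensemble(ensemble, y):
    """Sample CRPS for an ensemble x_1..x_m against a scalar outcome y.
    Lower is better; this is the standard estimator, re-sketched in Python."""
    x = np.asarray(ensemble, dtype=float)
    term1 = np.mean(np.abs(x - y))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# Toy ensemble temperature forecast (degC) against an observed value
print(crps_ensemble([19.2, 20.1, 21.5, 20.8, 19.9], 20.4))
```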
Fan, Yuting; Li, Jianqiang; Xu, Kun; Chen, Hao; Lu, Xun; Dai, Yitang; Yin, Feifei; Ji, Yuefeng; Lin, Jintong
2013-09-09
In this paper, we analyze the performance of the IEEE 802.11 distributed coordination function in simulcast radio-over-fiber-based distributed antenna systems (RoF-DASs), where multiple remote antenna units (RAUs) are connected to one wireless local-area network (WLAN) access point (AP) with different-length fiber links. We also present an analytical model to evaluate the throughput of such systems in the presence of both the inter-RAU hidden-node problem and the fiber-length difference effect. In the model, the unequal delay induced by the different fiber lengths enters both the backoff stage and the calculation of Ts and Tc, the periods of time during which the channel is sensed busy due to a successful transmission or a collision, respectively. The throughput performance of WLAN RoF-DAS in both basic access and request-to-send/clear-to-send (RTS/CTS) exchange modes is evaluated with the help of the derived model.
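The baseline being extended here is Bianchi's saturation-throughput expression for DCF, in which Ts and Tc enter exactly as described. A sketch of the unmodified formula with illustrative numbers (the paper's contribution, folding fiber delays into the backoff and into Ts/Tc, is not reproduced):

```python
def dcf_throughput(n, tau, payload, slot, Ts, Tc):
    """Bianchi saturation throughput of 802.11 DCF:
    S = Ps*Ptr*E[P] / ((1-Ptr)*slot + Ptr*Ps*Ts + Ptr*(1-Ps)*Tc),
    where tau is each station's per-slot transmission probability."""
    Ptr = 1.0 - (1.0 - tau) ** n                   # P(at least one transmission)
    Ps = n * tau * (1.0 - tau) ** (n - 1) / Ptr    # P(success | transmission)
    busy = (1 - Ptr) * slot + Ptr * Ps * Ts + Ptr * (1 - Ps) * Tc
    return Ps * Ptr * payload / busy

# Illustrative values: times in microseconds, payload in bits -> bits/us
print(dcf_throughput(n=10, tau=0.05, payload=8184, slot=20, Ts=1300, Tc=1200))
```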
Pasternak, Amy; Sideridis, Georgios; Fragala-Pinkham, Maria; Glanzman, Allan M; Montes, Jacqueline; Dunaway, Sally; Salazar, Rachel; Quigley, Janet; Pandya, Shree; O'Riley, Susan; Greenwood, Jonathan; Chiriboga, Claudia; Finkel, Richard; Tennekoon, Gihan; Martens, William B; McDermott, Michael P; Fournier, Heather Szelag; Madabusi, Lavanya; Harrington, Timothy; Cruz, Rosangel E; LaMarca, Nicole M; Videon, Nancy M; Vivo, Darryl C De; Darras, Basil T
2016-12-01
In this study we evaluated the suitability of a caregiver-reported functional measure, the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test (PEDI-CAT), for children and young adults with spinal muscular atrophy (SMA). PEDI-CAT Mobility and Daily Activities domain item banks were administered to 58 caregivers of children and young adults with SMA. Rasch analysis was used to evaluate test properties across SMA types. Unidimensional content for each domain was confirmed. The PEDI-CAT was most informative for type III SMA, with ability levels distributed close to 0.0 logits in both domains. It was less informative for types I and II SMA, especially for mobility skills. Item and person abilities were not distributed evenly across all types. The PEDI-CAT may be used to measure functional performance in SMA, but additional items are needed to identify small changes in function and best represent the abilities of all types of SMA. Muscle Nerve 54: 1097-1107, 2016. © 2016 Wiley Periodicals, Inc.
Chord-length and free-path distribution functions for many-body systems
NASA Astrophysics Data System (ADS)
Lu, Binglin; Torquato, S.
1993-04-01
We study fundamental morphological descriptors of disordered media (e.g., heterogeneous materials, liquids, and amorphous solids): the chord-length distribution function p(z) and the free-path distribution function p(z,a). For concreteness, we will speak in the language of heterogeneous materials composed of two different materials or "phases." The probability density function p(z) describes the distribution of chord lengths in the sample and is of great interest in stereology. For example, the first moment of p(z) is the "mean intercept length" or "mean chord length." The chord-length distribution function is of importance in transport phenomena and problems involving "discrete free paths" of point particles (e.g., Knudsen diffusion and radiative transport). The free-path distribution function p(z,a) takes into account the finite size of a simple particle of radius a undergoing discrete free-path motion in the heterogeneous material, and we show that it is actually the chord-length distribution function for the system in which the "pore space" is the space available to a finite-sized particle of radius a. Thus it is shown that p(z)=p(z,0). We demonstrate that the functions p(z) and p(z,a) are related to another fundamentally important morphological descriptor of disordered media, namely, the so-called lineal-path function L(z) studied by us in previous work [Phys. Rev. A 45, 922 (1992)]. The lineal-path function gives the probability of finding a line segment of length z wholly in one of the "phases" when randomly thrown into the sample. We derive exact series representations of the chord-length and free-path distribution functions for systems of spheres with a polydispersivity in size in arbitrary dimension D. For the special case of spatially uncorrelated spheres (i.e., fully penetrable spheres) we evaluate exactly the aforementioned functions, the mean chord length, and the mean free path. We also obtain corresponding analytical formulas for the case of mutually impenetrable (i.e., spatially correlated) polydispersed spheres.
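The fully-penetrable-sphere case admits a quick numerical check: along a line through such a system, matrix-phase chords are exponentially distributed with rate ρπa², so the mean free path of a point particle is 1/(ρπa²). A Monte Carlo sketch of that comparison (parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
rho, a, L = 0.05, 1.0, 2e5     # sphere density, radius, line length (arbitrary)

lam = rho * np.pi * a**2       # 1D intensity of spheres intersecting the line
centers = np.sort(rng.uniform(0.0, L, rng.poisson(lam * L)))
r = a * np.sqrt(rng.uniform(0.0, 1.0, centers.size))  # perpendicular distances
half = np.sqrt(a**2 - r**2)    # half-length of each sphere's covered interval

# Merge covered intervals; the uncovered gaps are the matrix-phase chords
gaps, end = [], 0.0
for c, h in zip(centers, half):
    if c - h > end:
        gaps.append(c - h - end)
    end = max(end, c + h)

print(np.mean(gaps), 1.0 / lam)   # empirical vs. theoretical mean free path
```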
A simulator for evaluating methods for the detection of lesion-deficit associations
NASA Technical Reports Server (NTRS)
Megalooikonomou, V.; Davatzikos, C.; Herskovits, E. H.
2000-01-01
Although much has been learned about the functional organization of the human brain through lesion-deficit analysis, the variety of statistical and image-processing methods developed for this purpose precludes a closed-form analysis of the statistical power of these systems. Therefore, we developed a lesion-deficit simulator (LDS), which generates artificial subjects, each of which consists of a set of functional deficits, and a brain image with lesions; the deficits and lesions conform to predefined distributions. We used probability distributions to model the number, sizes, and spatial distribution of lesions, to model the structure-function associations, and to model registration error. We used the LDS to evaluate, as examples, the effects of the complexities and strengths of lesion-deficit associations, and of registration error, on the power of lesion-deficit analysis. We measured the numbers of recovered associations from these simulated data, as a function of the number of subjects analyzed, the strengths and number of associations in the statistical model, the number of structures associated with a particular function, and the prior probabilities of structures being abnormal. The number of subjects required to recover the simulated lesion-deficit associations was found to have an inverse relationship to the strength of associations, and to the smallest probability in the structure-function model. The number of structures associated with a particular function (i.e., the complexity of associations) had a much greater effect on the performance of the analysis method than did the total number of associations. We also found that registration error of 5 mm or less reduces the number of associations discovered by approximately 13% compared to perfect registration. The LDS provides a flexible framework for evaluating many aspects of lesion-deficit analysis.
Spatiotemporal reconstruction of list-mode PET data.
Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M
2002-04-01
We describe a method for computing a continuous-time estimate of tracer density using list-mode positron emission tomography data. The emission process in each voxel is modeled as an inhomogeneous Poisson process whose rate function is represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data, and the method is also demonstrated in a human study using 11C-raclopride.
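The likelihood being maximized has the generic inhomogeneous-Poisson form log L = Σᵢ log λ(tᵢ) − ∫ λ(t) dt, with λ expanded in a cubic B-spline basis; a single-voxel sketch without the paper's penalties or nonnegativity constraint:

```python
import numpy as np
from scipy.interpolate import BSpline

# Rate lambda(t) = sum_k c_k B_k(t) with clamped cubic B-splines on [0, T]
T, k = 60.0, 3
knots = np.concatenate(([0.0] * k, np.linspace(0.0, T, 8), [T] * k))
n_coef = len(knots) - k - 1

def log_likelihood(coefs, arrival_times):
    """Inhomogeneous-Poisson log-likelihood:
    sum_i log lambda(t_i) - integral_0^T lambda(t) dt."""
    rate = BSpline(knots, coefs, k)
    return np.sum(np.log(rate(arrival_times))) - float(rate.integrate(0.0, T))

coefs = np.full(n_coef, 2.0)    # a flat rate of ~2 events per time unit
events = np.sort(np.random.default_rng(7).uniform(0.0, T, 120))
print(log_likelihood(coefs, events))
```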
Ferguson, Sue A.; Allread, W. Gary; Burr, Deborah L.; Heaney, Catherine; Marras, William S.
2013-01-01
Background: Biomechanical, psychosocial, and individual risk factors for low back disorder have been studied extensively; however, few researchers have examined all three together. The objective of this study was to develop a low back disorder risk model for furniture distribution workers using biomechanical, psychosocial, and individual risk factors. Methods: This was a prospective study with a six-month follow-up time. There were 454 subjects at 9 furniture distribution facilities enrolled in the study. Biomechanical exposure was evaluated using the American Conference of Governmental Industrial Hygienists (2001) lifting threshold limit values for low back injury risk. Psychosocial and individual risk factors were evaluated via questionnaires. Low back health functional status was measured using the lumbar motion monitor. Low back disorder cases were defined as a loss of low back functional performance of −0.14 or more. Findings: There were 92 cases of meaningful loss in low back functional performance and 185 non-cases. A multivariate logistic regression model that included baseline functional performance probability, facility, perceived workload, intermediate reach distance, number of exertions above threshold limit values, job tenure, manual material handling, and age provided a model sensitivity of 68.5% and specificity of 71.9%. Interpretation: The results of this study indicate which biomechanical, individual, and psychosocial risk factors are important, as well as how much of each risk factor is too much, resulting in increased risk of low back disorder among furniture distribution workers. PMID:21955915
Enhanced production of ψ(2S) mesons in heavy ion collisions
NASA Astrophysics Data System (ADS)
Cho, Sungtae
2015-05-01
I study the production of ψ(2S) mesons in heavy ion collisions. I evaluate Wigner functions for the ψ(2S) meson using both Gaussian and Coulomb wave functions, and investigate the wave function dependence of ψ(2S) meson production by recombination of charm and anticharm quarks. The enhanced transverse momentum distribution of ψ(2S) mesons relative to that of J/ψ mesons, which originates from the wave function distributions of the ψ(2S) and J/ψ mesons in momentum space, provides a plausible explanation for the recent measurement of the nuclear modification factor ratio between the ψ(2S) and J/ψ meson.
Alternative Approaches to Evaluation in Empirical Microeconomics
ERIC Educational Resources Information Center
Blundell, Richard; Dias, Monica Costa
2009-01-01
This paper reviews some of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching, instrumental variables, discontinuity design, and control functions. It discusses identification of traditionally used average parameters and more complex distributional parameters. The adequacy,…
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
Probability distribution for the Gaussian curvature of the zero level surface of a random function
NASA Astrophysics Data System (ADS)
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
Automated Power Systems Management (APSM)
NASA Technical Reports Server (NTRS)
Bridgeforth, A. O.
1981-01-01
A breadboard power system incorporating autonomous functions of monitoring, fault detection and recovery, command and control was developed, tested and evaluated to demonstrate technology feasibility. Autonomous functions including switching of redundant power processing elements, individual load fault removal, and battery charge/discharge control were implemented by means of a distributed microcomputer system within the power subsystem. Three local microcomputers provide the monitoring, control and command function interfaces between the central power subsystem microcomputer and the power sources, power processing and power distribution elements. The central microcomputer is the interface between the local microcomputers and the spacecraft central computer or ground test equipment.
NASA Astrophysics Data System (ADS)
Riandry, M. A.; Ismet, I.; Akhsan, H.
2017-09-01
This study aims to produce a valid and practical statistical physics course handout on distribution function materials based on STEM. The Rowntree development model is used to produce this handout. The model consists of three stages: planning, development, and evaluation. In this study, the evaluation stage used Tessmer formative evaluation, which consists of five stages: self-evaluation, expert review, one-to-one evaluation, small group evaluation, and field test. However, the handout is tested only on validity and practicality aspects, so the field test stage was not implemented. The data collection technique used walkthroughs and questionnaires. Subjects of this study are students of the 6th and 8th semesters of academic year 2016/2017 in the Physics Education Study Program of Sriwijaya University. The average result of expert review is 87.31% (very valid category). The average result of one-to-one evaluation is 89.42%, and the result of small group evaluation is 85.92%. From the one-to-one and small group evaluation stages, the average student response to this handout is 87.67% (very practical category). Based on the results of the study, it can be concluded that the handout is valid and practical.
Analysis of shifts in the spatial distribution of vegetation due to climate change
NASA Astrophysics Data System (ADS)
del Jesus, Manuel; Díez-Sierra, Javier; Rinaldo, Andrea; Rodríguez-Iturbe, Ignacio
2017-04-01
Climate change will modify the statistical regime of most climatological variables, inducing changes in average values and in the natural variability of environmental variables. These environmental variables may be used to explain the spatial distribution of functional types of vegetation in arid and semiarid watersheds through the use of plant optimization theories. Therefore, plant optimization theories may be used to approximate the response of the spatial distribution of vegetation to a changing climate. Predicting changes in these spatial distributions is important for understanding how climate change may affect vegetated ecosystems, but it is also important for hydrological engineering applications where climate change effects on water availability are assessed. In this work, Maximum Entropy Production (MEP) is used as the plant optimization theory that describes the spatial distribution of functional types of vegetation. Current climatological conditions are obtained from direct observations at meteorological stations. Climate change effects are evaluated for different temporal horizons and different climate change scenarios using numerical model outputs from CMIP5. Rainfall estimates are downscaled by means of a stochastic point process used to model rainfall. The study is carried out for the Rio Salado watershed, located within the Sevilleta LTER site in New Mexico (USA). Results show the expected changes in the spatial distribution of vegetation and allow evaluation of the expected variability of those changes. The updated spatial distributions allow evaluation of vegetated ecosystem health and its updated resilience. These results can then be used to inform the hydrological modeling part of climate change assessments analyzing water availability in arid and semiarid watersheds.
Qian, Yishan; Huang, Jia; Zhou, Xingtao; Hanna, Rewais Benjamin
2015-08-01
To evaluate corneal power distribution using the ray tracing method (corneal power) in eyes undergoing small incision lenticule extraction (SMILE) surgery and compare the functional optical zone with two lenticular sizes. This retrospective study evaluated 128 patients who underwent SMILE for the correction of myopia and astigmatism with a lenticular diameter of 6.5 mm (the 6.5-mm group) and 6.2 mm (the 6.2-mm group). The data include refraction, correction, and corneal power obtained via a Scheimpflug camera from the pupil center to 8 mm. The surgically induced changes in corneal power (Δcorneal power) were compared to correction and Δrefraction. The functional optical zone was defined as the largest ring diameter when the difference between the ring power and the pupil center power was 1.50 diopters or less. The functional optical zone was compared between the two lenticular diameter groups. Corneal power distribution was measured by the ray tracing method. In the 6.5-mm group (n=100), Δcorneal power at 5 mm showed the smallest difference from Δrefraction and Δcorneal power at 0 mm exhibited the smallest difference from correction. In the 6.2-mm group (n=28), Δcorneal power at 2 mm displayed the lowest dissimilarity from Δrefraction and Δcorneal power at 4 mm demonstrated the lowest dissimilarity from correction. There was no significant difference between the mean postoperative functional optical zones in either group when their spherical equivalents were matched. Total corneal refractive power can be used in the evaluation of surgically induced changes following SMILE. A lenticular diameter of 6.2 mm should be recommended for patients with high myopia because there is no functional difference in the optical zone. Copyright 2015, SLACK Incorporated.
scoringRules - A software package for probabilistic model evaluation
NASA Astrophysics Data System (ADS)
Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian
2016-04-01
Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they make it possible to compare alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. Two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, Bayesian forecasts produced via Markov Chain Monte Carlo take this form. The scoringRules package thereby provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state of the art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices.
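scoringRules itself is an R package; as a language-neutral illustration of what such a function computes, here is the closed-form CRPS of a normal forecast (the Gneiting-Raftery expression), sketched in Python rather than through the package's API.

```python
import numpy as np
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a normal forecast N(mu, sigma^2) for outcome y;
    lower scores indicate better forecasts."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

# A sharp, well-centered forecast scores better than a diffuse one.
print(crps_normal(0.0, 1.0, 0.5))   # ~0.33
print(crps_normal(0.0, 3.0, 0.5))   # ~0.73
```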
A probabilistic approach to photovoltaic generator performance prediction
NASA Astrophysics Data System (ADS)
Khallat, M. A.; Rahman, S.
1986-09-01
A method for predicting the performance of a photovoltaic (PV) generator based on long term climatological data and expected cell performance is described. The equations for cell model formulation are provided. Use of the statistical model for characterizing the insolation level is discussed. The insolation data is fitted to appropriate probability distribution functions (Weibull, beta, normal). The probability distribution functions are utilized to evaluate the capacity factors of PV panels or arrays. An example is presented revealing the applicability of the procedure.
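A minimal sketch of the described procedure, assuming a synthetic insolation sample and a simple output model in which PV power scales linearly with insolation up to a rated level g_rated (both assumptions of this example): the candidate distributions are fitted by maximum likelihood and the best one is integrated to a capacity factor.

```python
import numpy as np
from scipy import stats

# Synthetic hourly insolation sample (kW/m^2), standing in for long-term
# climatological records.
rng = np.random.default_rng(1)
data = rng.weibull(2.0, 5000) * 0.6

candidates = {
    "weibull_min": {},
    "beta": {"floc": 0.0, "fscale": float(data.max()) * 1.001},  # fix beta support
    "norm": {},
}
fits = {}
for name, fixed in candidates.items():
    dist = getattr(stats, name)
    params = dist.fit(data, **fixed)
    ll = np.sum(dist.logpdf(data, *params))
    fits[name] = (params, ll)
    print(f"{name:12s} log-likelihood = {ll:.1f}")

# Capacity factor: expected output over rated output, with output assumed
# proportional to insolation up to the rated level g_rated.
best = max(fits, key=lambda k: fits[k][1])
params, _ = fits[best]
dist = getattr(stats, best)
g = np.linspace(0.0, float(data.max()), 2000)
g_rated = 0.8
cf = np.trapz(np.minimum(g / g_rated, 1.0) * dist.pdf(g, *params), g)
print(f"best fit: {best}, capacity factor ~ {cf:.3f}")
```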
Representations and uses of light distribution functions
NASA Astrophysics Data System (ADS)
Lalonde, Paul Albert
1998-11-01
At their lowest level, all rendering algorithms depend on models of local illumination to define the interplay of light with the surfaces being rendered. These models depend both on the representations of light scattering at a surface due to reflection and, to an equal extent, on the representation of light sources and light fields. Emission and reflection have in common that they describe how light leaves a surface as a function of direction. Reflection also depends on an incident light direction, and emission can depend on the position on the light source. We call the functions representing emission and reflection light distribution functions (LDFs). There are some difficulties in using measured light distribution functions. The data sets are very large: the size of the data grows with the fourth power of the sampling resolution. For example, a bidirectional reflectance distribution function (BRDF) sampled at five degrees angular resolution, which is arguably insufficient to capture highlights and other high frequency effects in the reflection, can easily require one and a half million samples. Once acquired, this data requires some form of interpolation to be used. Any compression method used must be efficient, both in space and in the time required to evaluate the function at a point or over a range of points. This dissertation examines a wavelet representation of light distribution functions that addresses these issues. A data structure is presented that allows efficient reconstruction of LDFs for a given set of parameters, making the wavelet representation feasible for rendering tasks. Texture mapping methods that take advantage of our LDF representations are examined, as well as techniques for filtering LDFs and methods for using wavelet-compressed bidirectional reflectance distribution functions (BRDFs) and light sources with Monte Carlo path tracing algorithms. The wavelet representation effectively compresses BRDF and emission data while inducing only a small error in the reconstructed signal. The representation can be used to evaluate efficiently some integrals that appear in shading computation, which allows fast, accurate computation of local shading. The representation can be used to represent light fields and is used to reconstruct views of environments interactively from a precomputed set of views. The representation of the BRDF also allows the efficient generation of reflected directions for Monte Carlo ray tracing applications. The method can be integrated into many different global illumination algorithms, including ray tracers and wavelet radiosity systems.
NASA Astrophysics Data System (ADS)
Kitada, N.; Inoue, N.; Tonagi, M.
2016-12-01
The purpose of Probabilistic Fault Displacement Hazard Analysis (PFDHA) is to estimate fault displacement values and the extent of their impact. There are two types of fault displacement related to an earthquake fault: principal fault displacement and distributed fault displacement. Distributed fault displacement should be evaluated for important facilities, such as nuclear installations. PFDHA estimates both principal and distributed fault displacement. For estimation, PFDHA uses distance-displacement functions, which are constructed from field measurement data. We constructed a slip-distance relation for principal fault displacement based on Japanese strike-slip and reverse-slip earthquakes in order to apply it to the Japan area, a subduction setting. However, observed displacement data are sparse, especially for reverse faults. Takao et al. (2013) estimated the relation using all fault types together (reverse and strike-slip). Since then, several inland earthquakes have occurred in Japan, so we now estimate distance-displacement functions separately for the strike-slip and reverse fault types, adding the new fault displacement data. Several criteria have been proposed by different researchers for normalizing slip function data. We normalized the principal fault displacement data by several methods and compared the resulting slip-distance functions. When normalized by total fault length, the Japanese reverse fault data showed no particular trend in the slip-distance relation. For segmented data, the slip-distance relationship indicated a trend similar to that of strike-slip faults. We also discuss the relation between principal fault displacement distributions and source fault character. According to the slip distribution function of Petersen et al. (2011), for the strike-slip fault type the ratio of normalized displacement decreases toward the edge of the fault. However, the Japanese strike-slip fault data do not decrease as strongly at the end of the fault. This result indicates that fault displacement in Japan is less prone to appear at the fault edges. This research was part of the 2014-2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (NRA), Japan.
Evaluation of statistical distributions to analyze the pollution of Cd and Pb in urban runoff.
Toranjian, Amin; Marofi, Safar
2017-05-01
Heavy metal pollution in urban runoff causes severe environmental damage. Identification of these pollutants and their statistical analysis is necessary to provide management guidelines. In this study, 45 continuous probability distribution functions were selected to fit the Cd and Pb data in the runoff events of an urban area during October 2014-May 2015. The sampling was conducted at the outlet of the city basin during seven precipitation events. For evaluation and ranking of the functions, we used the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests. The results of the Cd analysis showed that Hyperbolic Secant, Wakeby and Log-Pearson 3 are suitable for frequency analysis of the event mean concentration (EMC), the instantaneous concentration series (ICS) and the instantaneous concentration of each event (ICEE), respectively. In addition, the LP3, Wakeby and Generalized Extreme Value functions were chosen for the EMC, ICS and ICEE related to Pb contamination.
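A condensed sketch of this ranking procedure using scipy, with a handful of candidate families instead of 45 and a synthetic concentration sample; note that p-values are optimistic when the parameters are estimated from the same data, as here.

```python
import numpy as np
from scipy import stats

# Illustrative concentration sample (mg/L); the study ranked 45 families.
rng = np.random.default_rng(2)
data = rng.lognormal(mean=-1.0, sigma=0.8, size=60)

candidates = ["lognorm", "gamma", "weibull_min", "pearson3", "genextreme"]
results = []
for name in candidates:
    dist = getattr(stats, name)
    params = dist.fit(data)                       # maximum-likelihood fit
    ks = stats.kstest(data, name, args=params)    # K-S test with fitted params
    results.append((ks.statistic, ks.pvalue, name))

for stat, p, name in sorted(results):             # smaller D = better fit
    print(f"{name:12s} D = {stat:.3f}  p = {p:.3f}")
```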
End-to-end distance and contour length distribution functions of DNA helices
NASA Astrophysics Data System (ADS)
Zoli, Marco
2018-06-01
I present a computational method to evaluate the end-to-end and the contour length distribution functions of short DNA molecules described by a mesoscopic Hamiltonian. The method generates a large statistical ensemble of possible configurations for each dimer in the sequence, selects the global equilibrium twist conformation for the molecule, and determines the average base pair distances along the molecule backbone. Integrating over the base pair radial and angular fluctuations, I derive the room temperature distribution functions as a function of the sequence length. The obtained values for the most probable end-to-end distance and contour length, providing a measure of the global molecule size, are used to examine DNA flexibility at short length scales. It is found that, also in molecules with fewer than ~60 base pairs, coiled configurations maintain a large statistical weight and, consistently, the persistence lengths may be much smaller than in kilo-base DNA.
Del Dotto, Alessio; Pace, Emanuele; Salme, Giovanni; ...
2017-01-10
Poincare covariant definitions for the spin-dependent spectral function and for the momentum distributions within the light-front Hamiltonian dynamics are proposed for a three-fermion bound system, starting from the light-front wave function of the system. The adopted approach is based on the Bakamjian–Thomas construction of the Poincaré generators, which allows one to easily import the familiar and wide knowledge on the nuclear interaction into a light-front framework. The proposed formalism can find useful applications in refined nuclear calculations, such as those needed for evaluating the European Muon Collaboration effect or the semi-inclusive deep inelastic cross sections with polarized nuclear targets, since remarkably the light-front unpolarized momentum distribution by definition fulfills both normalization and momentum sum rules. Also shown is a straightforward generalization of the definition of the light-front spectral function to an A-nucleon system.
Pectus excavatum in children: pulmonary scintigraphy before and after corrective surgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blickman, J.G.; Rosen, P.R.; Welch, K.J.
1985-09-01
Regional distribution of pulmonary function was evaluated preoperatively and postoperatively with xenon-133 perfusion and ventilation scintigraphy in 17 patients with pectus excavatum. Ventilatory preoperative studies were abnormal in 12 of 17 patients, resolving in seven of 12 postoperatively. Perfusion scans were abnormal in ten of 17 patients preoperatively; six of ten showed improvement postoperatively. Ventilation-perfusion ratios were abnormal in ten of 17 patients, normalizing postoperatively in six of ten. Symmetry of ventilation-perfusion ratio images improved in six out of nine in the latter group. The distribution of regional lung function in pectus excavatum can be evaluated preoperatively to support indications for surgery. Postoperative improvement can be documented by physiological changes produced by the surgical correction.
NASA Technical Reports Server (NTRS)
Medan, R. T.; Ray, K. S.
1974-01-01
A description of and users manual are presented for a U.S.A. FORTRAN 4 computer program which evaluates spanwise and chordwise loading distributions, lift coefficient, pitching moment coefficient, and other stability derivatives for thin wings in linearized, steady, subsonic flow. The program is based on a kernel function method lifting surface theory and is applicable to a large class of planforms including asymmetrical ones and ones with mixed straight and curved edges.
Green's function of radial inhomogeneous spheres excited by internal sources.
Zouros, Grigorios P; Kokkorakis, Gerassimos C
2011-01-01
The Green's function in the interior of penetrable bodies with inhomogeneous compressibility, excited by sources placed inside them, is evaluated through a Schwinger-Lippmann volume integral equation. In the case of a radially inhomogeneous sphere, the radial part of the unknown Green's function can be expanded in a double Dini's series, which allows analytical evaluation of the cumbersome integrals involved. The simple case treated here can be extended to more difficult situations involving inhomogeneous density, as well as to the corresponding electromagnetic or elastic problem. Finally, numerical results are given for various inhomogeneous compressibility distributions.
Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.
2009-01-01
We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
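Composite scaled sensitivities can be computed cheaply by finite differences, which is consistent with the modest run counts reported; the sketch below uses a hypothetical two-parameter surrogate model, not TOPKAPI, and the names and weights are assumptions of the example.

```python
import numpy as np

def composite_scaled_sensitivities(model, theta, sd, eps=1e-3):
    """Composite scaled sensitivity of each parameter: the root-mean-square
    of the dimensionless scaled sensitivities (dy_i/dtheta_j) * theta_j / sd_i,
    computed here by central finite differences."""
    css = np.empty(len(theta))
    for j, tj in enumerate(theta):
        dt = eps * max(abs(tj), 1.0)
        tp, tm = theta.copy(), theta.copy()
        tp[j] += dt
        tm[j] -= dt
        dydt = (model(tp) - model(tm)) / (2 * dt)   # sensitivity of each output
        css[j] = np.sqrt(np.mean((dydt * tj / sd) ** 2))
    return css

# Hypothetical two-parameter recession-curve surrogate for illustration.
t = np.linspace(1, 10, 50)
model = lambda th: th[0] * np.exp(-t / th[1])
css = composite_scaled_sensitivities(model, np.array([5.0, 3.0]), sd=0.1 * np.ones(50))
print(css)   # larger values = parameter better constrained by the data
```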
He, Yujun; Zhang, Jin; Li, Dongqi; Wang, Jiangtao; Wu, Qiong; Wei, Yang; Zhang, Lina; Wang, Jiaping; Liu, Peng; Li, Qunqing; Fan, Shoushan; Jiang, Kaili
2013-01-01
We show that the Schottky barrier at the metal-single walled carbon nanotube (SWCNT) contact can be clearly observed in scanning electron microscopy (SEM) images as a bright contrast segment with length up to micrometers due to the space charge distribution in the depletion region. The lengths of the charge depletion increase with the diameters of semiconducting SWCNTs (s-SWCNTs) when connected to one metal electrode, which enables direct and efficient evaluation of the bandgap distributions of s-SWCNTs. Moreover, this approach can also be applied for a wide variety of semiconducting nanomaterials, adding a new function to conventional SEM.
Milne, a routine for the numerical solution of Milne's problem
NASA Astrophysics Data System (ADS)
Rawat, Ajay; Mohankumar, N.
2010-11-01
The routine Milne provides accurate numerical values for the classical Milne's problem of neutron transport for the planar one speed and isotropic scattering case. The solution is based on the Case eigenfunction formalism. The relevant X functions are evaluated accurately by the Double Exponential quadrature. The calculated quantities are the extrapolation distance and the scalar and the angular fluxes. Also, the H function needed in astrophysical calculations is evaluated as a byproduct.
Program summary
Program title: Milne
Catalogue identifier: AEGS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGS_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 701
No. of bytes in distributed program, including test data, etc.: 6845
Distribution format: tar.gz
Programming language: Fortran 77
Computer: PC under Linux or Windows
Operating system: Ubuntu 8.04 (Kernel version 2.6.24-16-generic), Windows-XP
Classification: 4.11, 21.1, 21.2
Nature of problem: The X functions are integral expressions. The convergence of these regular and Cauchy Principal Value integrals is impaired by the singularities of the integrand in the complex plane. The DE quadrature scheme tackles these singularities in a robust manner compared to the standard Gauss quadrature.
Running time: The test included in the distribution takes a few seconds to run.
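The Double Exponential (tanh-sinh) quadrature at the heart of the routine is easy to sketch; the version below, with an illustrative step size and node count, handles integrable endpoint singularities that defeat standard Gauss rules.

```python
import numpy as np

def tanh_sinh(f, h=0.1, n=60):
    """Double Exponential (tanh-sinh) quadrature of f over (-1, 1): the
    substitution x = tanh((pi/2) sinh t) makes the weights decay doubly
    exponentially, taming integrable endpoint singularities."""
    t = np.arange(-n, n + 1) * h
    u = 0.5 * np.pi * np.sinh(t)
    x = np.tanh(u)
    w = h * 0.5 * np.pi * np.cosh(t) / np.cosh(u) ** 2   # weight = dx/dt
    keep = np.abs(x) < 1.0        # drop nodes rounded onto the endpoints
    return np.sum(w[keep] * f(x[keep]))

# 1/sqrt(1-x^2) is singular at both endpoints yet integrates to pi.
print(tanh_sinh(lambda x: 1.0 / np.sqrt(1.0 - x * x)))   # ~3.14159
```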
NASA Astrophysics Data System (ADS)
Hasan, E.; Dimitrova, M.; Havlicek, J.; Mitošinková, K.; Stöckel, J.; Varju, J.; Popov, Tsv K.; Komm, M.; Dejarnac, R.; Hacek, P.; Panek, R.; the COMPASS Team
2018-02-01
This paper presents the results from swept probe measurements in the divertor region of the COMPASS tokamak in D-shaped, L-mode discharges, with toroidal magnetic field BT = 1.15 T, plasma current Ip = 180 kA and line-average electron densities varying from 2 to 8×10¹⁹ m⁻³. Using neutral beam injection heating, the electron energy distribution function is studied before and during the application of the beam. The current-voltage characteristics data are processed using the first-derivative probe technique. This technique allows one to evaluate the plasma potential and the real electron energy distribution function (and from it the electron temperatures and densities). At the low average electron density of 2×10¹⁹ m⁻³, the electron energy distribution function is bi-Maxwellian, with a low-energy electron population with temperatures of 4-6 eV and a high-energy electron group at 12-25 eV. As the line-average electron density is increased, the electron temperatures decrease. At line-average electron densities above 7×10¹⁹ m⁻³, the electron energy distribution function is found to be Maxwellian with a temperature of 6-8.5 eV.
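A bi-Maxwellian EEDF of the kind reported can be written down directly; in the sketch below the densities and temperatures are illustrative values in the reported ranges, not fitted probe data.

```python
import numpy as np

def maxwellian_eedf(E, n, Te):
    """Maxwellian electron energy distribution (per eV, per m^3):
    integrates to density n over energy E (eV)."""
    return n * 2.0 / np.sqrt(np.pi) * Te ** -1.5 * np.sqrt(E) * np.exp(-E / Te)

E = np.linspace(0.01, 100, 2000)
# Illustrative low-density case: cold bulk (5 eV) plus hot tail (18 eV).
f = maxwellian_eedf(E, 1.8e19, 5.0) + maxwellian_eedf(E, 0.2e19, 18.0)

n_total = np.trapz(f, E)
T_eff = (2.0 / 3.0) * np.trapz(E * f, E) / n_total   # mean energy <E> = (3/2) T_eff
print(f"n = {n_total:.2e} m^-3, T_eff = {T_eff:.1f} eV")
```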
NASA Astrophysics Data System (ADS)
Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.
2016-12-01
Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. For that, it is important to identify the model parameters that can change spatial patterns prior to satellite-based hydrologic model calibration. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET); second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected as it allows a change in the spatial distribution of key soil parameters through the calibration of pedo-transfer function (PTF) parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow alone does not reduce the spatial errors in AET; it improves only the streamflow simulations. We will further examine the results of model calibration using only multiple spatial objective functions measuring the association between observed and simulated AET maps, and another case including spatial and streamflow metrics together.
Evaluated community fire safety interventions in the United States: a review of current literature.
Ta, Van M; Frattaroli, Shannon; Bergen, Gwendolyn; Gielen, Andrea Carlson
2006-06-01
The purpose of the study was to assess the state of fire prevention research, provide an updated synthesis of evaluated fire prevention programs, and discuss the role of fire fighters and data systems in prevention efforts. The review included all evaluations of U.S. based fire prevention interventions published between January 1998 and September 2004 and any earlier articles about U.S. fire prevention interventions not included in two prior review articles. We retrieved information from each identified study including evaluation findings, involvement of fire service personnel and use of existing data systems. We identified twelve articles: seven reported on smoke alarm interventions, three on multi-faceted programs, and two other programs. Five programs involved fire service personnel in the design, implementation, and/or evaluation, and three used existing data systems. Studies reviewed suggest that canvassing and smoke alarm installations are the most effective means of distributing alarms and increasing the functional status of distributed alarms. The functionality of smoke alarms, an issue noted in earlier reviews, remains a problem. Programs involving partnerships with fire departments have indicated success in preventing fires and deaths, improving smoke alarm ownership and functional status, and improving children's fire safety knowledge. Using existing data systems to target and to evaluate interventions was effective. In the years since prior reviews, some improvements in the rigor of evaluation designs have been made, but there is still a need for high quality evaluations that will inform fire injury prevention efforts.
Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.
Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael
2018-04-01
The distributed delay model has been introduced to replace the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with shape parameter ν. If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate the performance of the distributed delay model with particular focus on deterministic model identifiability in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (−2LL) and running times were used as major markers of performance. Local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic response WBC. The ν estimate was 1.46 with a CV of 16.1%, compared to ν = 3 for the classic model. The difference of 6.78 in −2LL between the classic model and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall performance gain was modest. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to the computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than that of the classic model, with a significantly longer running time.
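The convolution at issue is easy to sketch: the delayed response is the input convolved with a gamma density whose shape is ν and whose mean is the delay time. The input signal and time grid below are illustrative, not the fitted model.

```python
import numpy as np
from scipy.stats import gamma

def delayed_signal(u, t, nu, T):
    """Delayed response y(t) = integral_0^t k(tau) u(t - tau) dtau, where
    the delay kernel k is a gamma density with shape nu and mean delay T.
    An integer nu reproduces a chain of nu transit compartments."""
    dt = t[1] - t[0]
    k = gamma.pdf(t, a=nu, scale=T / nu)        # mean of gamma(nu, T/nu) is T
    return np.convolve(k, u)[: len(t)] * dt

t = np.linspace(0, 96, 961)                     # hours
u = np.exp(-0.2 * t)                            # illustrative input signal

y_int = delayed_signal(u, t, nu=3, T=24.0)      # integer shape: 3 compartments
y_frac = delayed_signal(u, t, nu=1.46, T=24.0)  # fractional shape from the fit
print(y_int.max(), y_frac.max())
```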
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Yung-Sung; Kenoyer, Judson L.; Guilmette, Raymond A.
2009-03-01
The Capstone Depleted Uranium (DU) Aerosol Study, which generated and characterized aerosols containing depleted uranium from the perforation of armored vehicles with large-caliber DU penetrators, incorporated a sampling protocol to evaluate particle size distributions. Aerosol particle size distribution is an important parameter that influences aerosol transport and deposition processes as well as the dosimetry of the inhaled particles. These aerosols were collected on cascade impactor substrates using a pre-established time sequence following the firing event to analyze the uranium concentration and particle size of the aerosols as a function of time. The impactor substrates were analyzed using beta spectrometry, and the derived uranium content of each served as input to the evaluation of particle size distributions. Activity median aerodynamic diameters (AMADs) of the particle size distributions were evaluated using unimodal and bimodal models. The particle size data from the impactor measurements were quite variable. Most size distributions measured in the test based on activity were bimodal, with a small particle size mode in the range of 0.2 to 1.2 μm and a large size mode between 2 and 15 μm. In general, the evolution of particle size over time showed an overall decrease of average particle size from AMADs of 5 to 10 μm shortly after perforation to around 1 μm at the end of the 2-hr sampling period. The AMADs generally decreased over time because of settling. Additionally, the median diameter of the larger size mode decreased with time. These results were used to estimate the dosimetry of inhaled DU particles.
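For the unimodal case, an AMAD and geometric standard deviation can be recovered by fitting a lognormal CDF to cumulative stage activity; the stage cut-points and fractions below are hypothetical, and a bimodal analysis would use a weighted mixture of two such terms.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical impactor data: stage cut-point aerodynamic diameters (um)
# and cumulative activity fraction below each cut.
d_cut = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
cum_frac = np.array([0.05, 0.16, 0.38, 0.64, 0.86, 0.97])

def lognormal_cdf(d, amad, gsd):
    # Fraction of activity on particles smaller than d for a lognormal
    # distribution with activity median diameter amad and geometric std gsd.
    return norm.cdf(np.log(d / amad) / np.log(gsd))

(amad, gsd), _ = curve_fit(lognormal_cdf, d_cut, cum_frac, p0=(3.0, 2.0))
print(f"AMAD = {amad:.2f} um, GSD = {gsd:.2f}")
```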
Genome-Wide Association Study of the Genetic Determinants of Emphysema Distribution.
Boueiz, Adel; Lutz, Sharon M; Cho, Michael H; Hersh, Craig P; Bowler, Russell P; Washko, George R; Halper-Stromberg, Eitan; Bakke, Per; Gulsvik, Amund; Laird, Nan M; Beaty, Terri H; Coxson, Harvey O; Crapo, James D; Silverman, Edwin K; Castaldi, Peter J; DeMeo, Dawn L
2017-03-15
Emphysema has considerable variability in the severity and distribution of parenchymal destruction throughout the lungs. Upper lobe-predominant emphysema has emerged as an important predictor of response to lung volume reduction surgery. Yet, aside from alpha-1 antitrypsin deficiency, the genetic determinants of emphysema distribution remain largely unknown. To identify the genetic influences of emphysema distribution in non-alpha-1 antitrypsin-deficient smokers. A total of 11,532 subjects with complete genotype and computed tomography densitometry data in the COPDGene (Genetic Epidemiology of Chronic Obstructive Pulmonary Disease [COPD]; non-Hispanic white and African American), ECLIPSE (Evaluation of COPD Longitudinally to Identify Predictive Surrogate Endpoints), and GenKOLS (Genetics of Chronic Obstructive Lung Disease) studies were analyzed. Two computed tomography scan emphysema distribution measures (difference between upper-third and lower-third emphysema; ratio of upper-third to lower-third emphysema) were tested for genetic associations in all study subjects. Separate analyses in each study population were followed by a fixed effect metaanalysis. Single-nucleotide polymorphism-, gene-, and pathway-based approaches were used. In silico functional evaluation was also performed. We identified five loci associated with emphysema distribution at genome-wide significance. These loci included two previously reported associations with COPD susceptibility (4q31 near HHIP and 15q25 near CHRNA5) and three new associations near SOWAHB, TRAPPC9, and KIAA1462. Gene set analysis and in silico functional evaluation revealed pathways and cell types that may potentially contribute to the pathogenesis of emphysema distribution. This multicohort genome-wide association study identified new genomic loci associated with differential emphysematous destruction throughout the lungs. These findings may point to new biologic pathways on which to expand diagnostic and therapeutic approaches in chronic obstructive pulmonary disease. Clinical trial registered with www.clinicaltrials.gov (NCT 00608764).
A Functional Model for Management of Large Scale Assessments.
ERIC Educational Resources Information Center
Banta, Trudy W.; And Others
This functional model for managing large-scale program evaluations was developed and validated in connection with the assessment of Tennessee's Nutrition Education and Training Program. Management of such a large-scale assessment requires the development of a structure for the organization; distribution and recovery of large quantities of…
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
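The test reduces to a classical likelihood ratio comparison of the nested (Gumbel) model against the semi-nonparametric generalization; a generic sketch, with hypothetical log-likelihood values and degrees of freedom:

```python
from scipy.stats import chi2

def likelihood_ratio_test(ll_restricted, ll_general, df):
    """Classical LR test: the restricted model (e.g., standard Gumbel
    errors) is nested in the general (semi-nonparametric) model; under
    the null, 2*(ll_general - ll_restricted) is chi-square with df equal
    to the number of extra parameters."""
    lr = 2.0 * (ll_general - ll_restricted)
    return lr, chi2.sf(lr, df)

# Hypothetical fitted log-likelihoods, for illustration only.
lr, p = likelihood_ratio_test(ll_restricted=-1052.3, ll_general=-1041.8, df=4)
print(f"LR = {lr:.1f}, p = {p:.4f}")   # small p rejects the Gumbel assumption
```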
NASA Astrophysics Data System (ADS)
Pedretti, Daniele
2017-04-01
Power-law (PL) distributions are widely adopted to define the late-time scaling of solute breakthrough curves (BTCs) during transport experiments in highly heterogeneous media. However, from a statistical perspective, distinguishing between a PL distribution and another tailed distribution is difficult, particularly when a qualitative assessment based on visual analysis of double-logarithmic plotting is used. This presentation discusses the results from a recent analysis in which a suite of statistical tools was applied to rigorously evaluate the scaling of BTCs from experiments that generate tailed distributions typically described as PL at late time. To this end, a set of BTCs from numerical simulations in highly heterogeneous media was generated using a transition probability approach (T-PROGS) coupled to a finite difference numerical solver of the flow equation (MODFLOW) and a random walk particle tracking approach for Lagrangian transport (RW3D). The T-PROGS fields assumed randomly distributed hydraulic heterogeneities with long correlation scales creating solute channeling and anomalous transport. For simplicity, transport was simulated as purely advective. This combination of tools generates strongly non-symmetric BTCs visually resembling PL distributions at late time when plotted in double-log scales. Unlike other combinations of modeling parameters and boundary conditions (e.g., matrix diffusion in fractures), at late time no direct link exists between the mathematical functions describing the scaling of these curves and the physical parameters controlling transport. The results suggest that the statistical tests fail to describe the majority of curves as PL distributed. Moreover, they suggest that PL and lognormal distributions have the same likelihood to represent parametrically the shape of the tails. It is noticeable that forcing a model to reproduce the tail as a PL function results in a distribution of PL slopes between 1.2 and 4, which are the typical values observed during field experiments. We conclude that care must be taken when defining a BTC late-time distribution as a power-law function. Even though the estimated scaling factors are found to fall in traditional ranges, the actual distribution controlling the scaling of concentration may differ from a power-law function, with direct consequences, for instance, for the selection of effective parameters in upscaled modeling solutions.
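A minimal version of such a comparison fits both candidates to the tail sample by maximum likelihood; the Hill estimator below is the standard continuous power-law MLE, and the lognormal fit is unconditioned on the cutoff, so this is a rough screen rather than the full Clauset-style test. The tail sample is synthetic.

```python
import numpy as np
from scipy import stats

# Hypothetical late-time BTC tail sample; both models are fit above
# a fixed lower cutoff xmin.
rng = np.random.default_rng(3)
tail = rng.lognormal(mean=2.0, sigma=1.0, size=500)
xmin = np.quantile(tail, 0.5)
x = tail[tail >= xmin]

# Power law: Hill (maximum-likelihood) estimator of the Pareto exponent,
# with density f(x) = ((alpha - 1) / xmin) * (x / xmin)**(-alpha).
alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))
ll_pl = np.sum(np.log((alpha - 1) / xmin) - alpha * np.log(x / xmin))

# Lognormal fit on the same sample (not truncated at xmin: rough comparison).
shape, loc, scale = stats.lognorm.fit(x, floc=0)
ll_ln = np.sum(stats.lognorm.logpdf(x, shape, loc, scale))

print(f"power-law slope = {alpha:.2f}, logL = {ll_pl:.1f}")
print(f"lognormal logL = {ll_ln:.1f}")
```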
ERIC Educational Resources Information Center
Ritz, John M.
The intent of this field tested instructional package is to familiarize the student with the marketing and distribution element of industry and its function in the production of goods and services. Defining behavioral objectives, the course description offers a media guide, suggested classroom activities, and sample student evaluation forms as…
NASA Technical Reports Server (NTRS)
Smith, O. E.; Adelfang, S. I.; Tubbs, J. D.
1982-01-01
A five-parameter bivariate gamma distribution (BGD) having two shape parameters, two location parameters, and a correlation parameter is investigated. This general BGD is expressed as a double series and as a single series of the modified Bessel function. It reduces to the known special case for equal shape parameters. Practical functions for computer evaluation of the general BGD and of special cases are presented. Applications to wind gust modeling for the ascent flight of the space shuttle are illustrated.
USDA-ARS?s Scientific Manuscript database
Bidirectional Reflectance Distribution Function (BRDF) model parameters, Albedo quantities, and Nadir BRDF Adjusted Reflectance (NBAR) products derived from the Visible Infrared Imaging Radiometer Suite (VIIRS), on the Suomi-NPP (National Polar-orbiting Partnership) satellite are evaluated through c...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loebl, N.; Maruhn, J. A.; Reinhard, P.-G.
2011-09-15
By calculating the Wigner distribution function in the reaction plane, we are able to probe the phase-space behavior in the time-dependent Hartree-Fock scheme during a heavy-ion collision in a consistent framework. Various expectation values of operators are calculated by evaluating the corresponding integrals over the Wigner function. In this approach, it is straightforward to define and analyze quantities even locally. We compare the Wigner distribution function with the smoothed Husimi distribution function. Different reaction scenarios are presented by analyzing central and noncentral ¹⁶O + ¹⁶O and ⁹⁶Zr + ¹³²Sn collisions. Although we observe strong dissipation in the time evolution of global observables, there is no evidence for complete equilibration in the local analysis of the Wigner function. Because the initial phase-space volumes of the fragments barely merge and mean values of the observables are conserved in fusion reactions over thousands of fm/c, we conclude that the time-dependent Hartree-Fock method provides a good description of the early stage of a heavy-ion collision but does not provide a mechanism to change the phase-space structure in a dramatic way necessary to obtain complete equilibration.
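The Wigner function of a one-dimensional wave function can be evaluated by direct quadrature of its defining integral; the sketch below (hbar = 1, with grid and names chosen for this example) reproduces the known Gaussian result W = exp(-x^2 - p^2)/pi for the oscillator ground state.

```python
import numpy as np

def wigner(psi, x, p):
    """Wigner distribution W(x, p) of a 1-D wave function on a uniform
    grid (hbar = 1), by direct quadrature of
    W = (1/pi) * integral dy psi*(x+y) psi(x-y) exp(2ipy)."""
    dx = x[1] - x[0]
    interp = lambda s: np.interp(s, x, psi.real, left=0, right=0) \
        + 1j * np.interp(s, x, psi.imag, left=0, right=0)
    W = np.empty((len(x), len(p)))
    y = x.copy()                      # reuse the grid for the shift variable
    for i, xi in enumerate(x):
        corr = np.conj(interp(xi + y)) * interp(xi - y)
        for j, pj in enumerate(p):
            W[i, j] = (dx / np.pi) * np.real(np.sum(corr * np.exp(2j * pj * y)))
    return W

# Gaussian ground state: the exact result is W = exp(-x^2 - p^2)/pi.
x = np.linspace(-5, 5, 201)
psi = np.pi ** -0.25 * np.exp(-x ** 2 / 2) + 0j
W = wigner(psi, x, x)
print(W[100, 100], 1 / np.pi)        # both ~0.318 at the origin
```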
Performance prediction evaluation of ceramic materials in point-focusing solar receivers
NASA Technical Reports Server (NTRS)
Ewing, J.; Zwissler, J.
1979-01-01
A performance prediction approach was adapted to evaluate the use of ceramic materials in solar receivers for point-focusing distributed applications. System requirements were determined, including the receiver operating environment and system operating parameters for various engine types. Preliminary receiver designs were evolved from these system requirements. Specific receiver designs were then evaluated to determine material functional requirements.
A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines.
Cieślik, Marcin; Mura, Cameron
2011-02-25
Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.
[Rank distributions in community ecology from the statistical viewpoint].
Maksimov, V N
2004-01-01
Traditional statistical methods for defining empirical distribution functions of species abundance (population, biomass, production, etc.) in a community are applicable to processing the multivariate data contained in these quantitative community indices. In particular, evaluation of the moments of the distribution suffices for convolution of the data contained in a list of species and their abundances. At the same time, the species should be ranked in the list in ascending rather than descending population, and the distribution models should be analyzed on the basis of data on abundant species only.
DOE Office of Scientific and Technical Information (OSTI.GOV)
ALAM,TODD M.
Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest neighbor connectivities between phosphate polyhedra for random, alternating, and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments on phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.
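The flavor of such a discrete bond model can be conveyed with a schematic Metropolis simulation on a ring of two tetrahedron types, where the sign of the relative bond energy dE selects clustering, random, or alternating bond statistics; the model below is a toy stand-in for illustration, not the paper's model.

```python
import numpy as np

def simulate(n_sites=500, dE=1.0, kT=1.0, n_steps=100_000, seed=4):
    """Metropolis sampling of a two-type ring; like-like bonds cost dE,
    so dE > 0 favors alternating pairs and dE < 0 favors clustering."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, n_sites)
    pair = lambda a, b: dE if a == b else 0.0
    for _ in range(n_steps):
        i = int(rng.integers(n_sites))
        l, r = s[(i - 1) % n_sites], s[(i + 1) % n_sites]
        old = pair(s[i], l) + pair(s[i], r)
        new = pair(1 - s[i], l) + pair(1 - s[i], r)
        if new <= old or rng.random() < np.exp(-(new - old) / kT):
            s[i] = 1 - s[i]                      # accept the type flip
    return np.mean(s == np.roll(s, 1))           # fraction of like-like bonds

for dE in (-1.0, 0.0, 1.0):   # clustering, random, alternating
    print(f"dE = {dE:+.1f}: like-like bond fraction = {simulate(dE=dE):.2f}")
```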
Mapping local and global variability in plant trait distributions
Butler, Ethan E.; Datta, Abhirup; Flores-Moreno, Habacuc; ...
2017-12-01
Accurate trait-environment relationships and global maps of plant trait distributions represent a needed stepping stone in global biogeography and are critical constraints of key parameters for land models. Here, we use a global data set of plant traits to map trait distributions closely coupled to photosynthesis and foliar respiration: specific leaf area (SLA), and dry mass-based concentrations of leaf nitrogen (Nm) and phosphorus (Pm). We propose two models to extrapolate geographically sparse point data to continuous spatial surfaces. The first is a categorical model using species mean trait values, categorized into plant functional types (PFTs) and extrapolated to PFT occurrence ranges identified by remote sensing. The second is a Bayesian spatial model that incorporates information about PFT, location, and environmental covariates to estimate trait distributions. Both models are further stratified by varying the number of PFTs. The performance of the models was evaluated based on their explanatory and predictive ability. The Bayesian spatial model leveraging the largest number of PFTs produced the best maps. The interpolation of full trait distributions enables a wider diversity of vegetation to be represented across the land surface. These maps may be used as input to Earth System Models and to evaluate other estimates of functional diversity.
Uncertainty Evaluation and Appropriate Distribution for the RDHM in the Rockies
NASA Astrophysics Data System (ADS)
Kim, J.; Bastidas, L. A.; Clark, E. P.
2010-12-01
The problems that hydrologic models have in properly reproducing the processes involved in mountainous areas, and in particular the Rocky Mountains, are widely acknowledged. Herein, we present an application of the National Weather Service RDHM distributed model over the Durango River basin in Colorado. We focus primarily on the assessment of the model prediction uncertainty associated with the parameter estimation, and on the comparison of model performance using parameters obtained with a priori estimation following the procedure of Koren et al. against those obtained via inverse modeling using a variety of Markov chain Monte Carlo based optimization algorithms. The model evaluation is based on traditional procedures as well as non-traditional ones that use shape-matching functions, which are more appropriate for the evaluation of distributed information (e.g., the Hausdorff distance and the earth mover's distance). The variables used for the model performance evaluation are discharge (with internal nodes), snow cover and snow water equivalent. An attempt to establish the proper degree of distribution for the Durango basin with the RDHM model is also presented.
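For the non-traditional shape-matching measures mentioned above, off-the-shelf SciPy routines exist; a small illustration with synthetic data (all values hypothetical):

```python
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)

# Hypothetical observed vs. simulated snow-water-equivalent samples (mm).
observed = rng.gamma(shape=2.0, scale=50.0, size=500)
simulated = rng.gamma(shape=2.2, scale=45.0, size=500)
emd = wasserstein_distance(observed, simulated)

# Symmetric Hausdorff distance between two point sets, e.g. snow-cover
# boundaries extracted from gridded maps (coordinates synthetic).
set_a, set_b = rng.random((100, 2)), rng.random((120, 2))
hausdorff = max(directed_hausdorff(set_a, set_b)[0],
                directed_hausdorff(set_b, set_a)[0])

print(f"earth mover's distance: {emd:.2f} mm")
print(f"Hausdorff distance:     {hausdorff:.3f}")
```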
Efficient Credit Assignment through Evaluation Function Decomposition
NASA Technical Reports Server (NTRS)
Agogino, Adrian; Turner, Kagan; Mikkulainen, Risto
2005-01-01
Evolutionary methods are powerful tools in discovering solutions for difficult continuous tasks. When such a solution is encoded over multiple genes, a genetic algorithm faces the difficult credit assignment problem of evaluating how a single gene in a chromosome contributes to the full solution. Typically a single evaluation function is used for the entire chromosome, implicitly giving each gene in the chromosome the same evaluation. This method is inefficient because a gene will get credit for the contribution of all the other genes as well. Accurately measuring the fitness of individual genes in such a large search space requires many trials. This paper instead proposes turning this single complex search problem into a multi-agent search problem, where each agent has the simpler task of discovering a suitable gene. Gene-specific evaluation functions can then be created that have better theoretical properties than a single evaluation function over all genes. This method is tested in the difficult double-pole balancing problem, showing that agents using gene-specific evaluation functions can create a successful control policy in 20 percent fewer trials than the best existing genetic algorithms. The method is extended to more distributed problems, achieving 95 percent performance gains over traditional methods in the multi-rover domain.
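One hedged reading of the gene-specific evaluation idea is a counterfactual "difference" evaluation, where each gene is scored by how much the global fitness changes when that gene is replaced by a fixed default; the toy objective below is purely illustrative, not the paper's double-pole benchmark:

```python
def global_eval(genes):
    """Hypothetical global fitness over the whole chromosome."""
    return -sum((g - 0.5) ** 2 for g in genes)

def difference_eval(genes, i, default=0.0):
    """Gene-specific evaluation: global fitness minus the fitness with
    gene i replaced by a fixed counterfactual value.  Terms that do not
    depend on gene i cancel, so each gene sees a less noisy signal."""
    counterfactual = list(genes)
    counterfactual[i] = default
    return global_eval(genes) - global_eval(counterfactual)

genes = [0.2, 0.9, 0.5]
print([round(difference_eval(genes, i), 3) for i in range(len(genes))])
```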
Optimal design of solidification processes
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Tortorelli, Daniel A.
1991-01-01
An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
NASA Astrophysics Data System (ADS)
Basu, A.; Das, B.; Middya, T. R.; Bhattacharya, D. P.
2017-01-01
The phonon growth characteristic in a degenerate semiconductor has been calculated under low-temperature conditions. If the lattice temperature is high, the energy of the intravalley acoustic phonon is negligibly small compared to the average thermal energy of the electrons; hence one can traditionally assume the electron-phonon collisions to be elastic and approximate the Bose-Einstein (B.E.) distribution for the phonons by the simple equipartition law. However, in the present analysis at low lattice temperatures, the interaction of the nonequilibrium electrons with the acoustic phonons becomes inelastic and the simple equipartition law for the phonon distribution is not valid. Hence the analysis is made taking into account the inelastic collisions and the complete form of the B.E. distribution. The high-field distribution function of the carriers, given by the Fermi-Dirac (F.D.) function at the field-dependent carrier temperature, has been approximated by a well-tested model that overcomes the intrinsic problem of correctly evaluating the integrals involving products and powers of the Fermi function. The results thus obtained are more reliable than the rough estimates one may obtain by using the exact F.D. function while taking recourse to oversimplified approximations.
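The distinction drawn above can be made concrete: the equipartition law is the high-temperature limit of the full Bose-Einstein occupation and fails badly once kT is comparable to or below the phonon energy. A small numerical comparison (unit scale assumed):

```python
import numpy as np

phonon_energy = 1.0  # hbar*omega in units where k_B = 1 (assumed scale)

def bose_einstein(t):
    """Full Bose-Einstein occupation N = 1 / (exp(hw/kT) - 1)."""
    return 1.0 / np.expm1(phonon_energy / t)

def equipartition(t):
    """High-temperature limit N ~ kT / hw used by the equipartition law."""
    return t / phonon_energy

for t in (0.1, 0.5, 1.0, 5.0, 50.0):
    print(f"T = {t:5.1f}:  BE = {bose_einstein(t):10.4f}   "
          f"equipartition = {equipartition(t):10.4f}")
```

At T = 50 the two agree to about one percent; at T = 0.1 the equipartition value is several orders of magnitude too large, which is why the full B.E. form is required at low lattice temperature.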
Study of the zinc-silver oxide battery system
NASA Technical Reports Server (NTRS)
Nanis, L.
1973-01-01
Theoretical and experimental models for the evaluation of current distribution in flooded, porous electrodes are discussed. An approximation for the local current distribution function was derived for conditions of a linear overpotential, a uniform concentration, and a very conductive matrix. By considering the porous electrode to be an analog of chemical catalyst structures, a dimensionless performance parameter was derived from the approximated current distribution function. In this manner the electrode behavior was characterized in terms of an electrochemical Thiele parameter and an effectiveness factor. It was shown that the electrochemical engineering approach makes possible the organization of theoretical descriptions and of practical experience in the form of dimensionless parameters, such as the electrochemical Thiele parameter, and hence provides useful information for the design of new electrochemical systems.
Hawaiian Electric Advanced Inverter Grid Support Function Laboratory Validation and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Austin; Nagarajan, Adarsh; Prabakar, Kumar
The objective for this test plan was to better understand how to utilize the performance capabilities of advanced inverter functions to allow the interconnection of distributed energy resource (DER) systems to support the new Customer Self-Supply, Customer Grid-Supply, and other future DER programs. The purpose of this project was: 1) to characterize how the tested grid supportive inverters performed the functions of interest, 2) to evaluate the grid supportive inverters in an environment that emulates the dynamics of O'ahu's electrical distribution system, and 3) to gain insight into the benefits of the grid support functions on selected O'ahu island distribution feeders. These goals were achieved through laboratory testing of photovoltaic inverters, including power hardware-in-the-loop testing.
Dempsey, Steven J; Gese, Eric M; Kluever, Bryan M; Lonsinger, Robert C; Waits, Lisette P
2015-01-01
Development and evaluation of noninvasive methods for monitoring species distribution and abundance is a growing area of ecological research. While noninvasive methods have the advantage of a reduced risk of the negative factors associated with capture, comparisons to methods using more traditional invasive sampling are lacking. Historically, kit foxes (Vulpes macrotis) occupied the desert and semi-arid regions of southwestern North America. Once the most abundant carnivore in the Great Basin Desert of Utah, the species is now considered rare. In recent decades, attempts have been made to model the environmental variables influencing kit fox distribution. Using noninvasive scat deposition surveys for determination of kit fox presence, we modeled resource selection functions to predict kit fox distribution using three popular techniques (Maxent, fixed-effects, and mixed-effects generalized linear models) and compared these with similar models developed from invasive sampling (telemetry locations from radio-collared foxes). Resource selection functions were developed using a combination of landscape variables including elevation, slope, aspect, vegetation height, and soil type. All models were tested against subsequent scat collections as a method of model validation. We demonstrate the importance of comparing multiple model types for development of resource selection functions used to predict a species distribution, and of evaluating the importance of environmental variables on species distribution. All models we examined showed a large effect of elevation on kit fox presence, followed by slope and vegetation height. However, the invasive sampling method (i.e., radio-telemetry) appeared to be better at determining resource selection, and therefore may be more robust in predicting kit fox distribution. In contrast, the distribution maps created from the noninvasive sampling (i.e., scat transects) were significantly different from those created from the invasive method, thus scat transects may be appropriate when used in an occupancy framework to predict species distribution. We concluded that while scat deposition transects may be useful for monitoring kit fox abundance and possibly occupancy, they do not appear to be appropriate for determining resource selection. On our study area, scat transects were biased to roadways, while data collected using radio-telemetry were dictated by movements of the kit foxes themselves. We recommend that future studies applying noninvasive scat sampling consider a more robust random sampling design across the landscape (e.g., random transects or more complete road coverage) that would then provide a more accurate and unbiased depiction of resource selection useful to predict kit fox distribution.
On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
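For reference, the core DE/rand/1/bin update is only a few lines; this generic sketch uses a cheap analytic objective in place of the expensive Navier-Stokes evaluations described above (in practice those evaluations would be distributed across processors):

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, f_weight=0.8,
                           crossover=0.9, generations=100, seed=0):
    """Minimal DE/rand/1/bin sketch: mutate with a scaled difference of
    two random population members, binomially cross over with the
    target vector, and keep the trial only if it is no worse."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([objective(x) for x in pop])

    for _ in range(generations):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            mutant = np.clip(a + f_weight * (b - c), lo, hi)
            mask = rng.random(dim) < crossover
            mask[rng.integers(dim)] = True        # keep at least one gene
            trial = np.where(mask, mutant, pop[i])
            trial_cost = objective(trial)
            if trial_cost <= cost[i]:             # greedy selection
                pop[i], cost[i] = trial, trial_cost
    best = np.argmin(cost)
    return pop[best], cost[best]

x, fx = differential_evolution(lambda v: float(np.sum(v ** 2)), [(-5, 5)] * 4)
print(x, fx)
```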
Probabilistic analysis of bladed turbine disks and the effect of mistuning
NASA Technical Reports Server (NTRS)
Shah, A. R.; Nagpal, V. K.; Chamis, Christos C.
1990-01-01
Probabilistic assessment of the maximum blade response on a mistuned rotor disk is performed using the computer code NESSUS. The uncertainties in natural frequency, excitation frequency, amplitude of excitation and damping are included to obtain the cumulative distribution function (CDF) of blade responses. Advanced mean value first-order analysis is used to compute the CDF. The sensitivities of the different random variables are identified. The effect of the number of blades on a rotor on mistuning is evaluated. It is shown that the uncertainties associated with the forcing function parameters have a significant effect on the response distribution of the bladed rotor.
NASA Astrophysics Data System (ADS)
Nezhadhaghighi, Mohsen Ghasemi
2017-08-01
Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
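A rough sketch of diffusion entropy analysis applied to a Lévy-stable random walk, using SciPy's levy_stable sampler; for an α-stable flight the entropy grows as S(t) ≈ A + (1/α) ln t, so the fitted slope should land near 1/α (bin counts and window choices below are arbitrary):

```python
import numpy as np
from scipy.stats import levy_stable

alpha = 1.5
noise = levy_stable.rvs(alpha, beta=0.0, size=100_000, random_state=42)
walk = np.cumsum(noise)

def diffusion_entropy(series, windows):
    """Shannon entropy S(t) of the displacement PDF at time scale t,
    estimated from a histogram of the increments."""
    out = []
    for t in windows:
        disp = series[t:] - series[:-t]
        dens, edges = np.histogram(disp, bins=200, density=True)
        widths = np.diff(edges)
        mask = dens > 0
        out.append(-np.sum(dens[mask] * widths[mask] * np.log(dens[mask])))
    return np.array(out)

windows = np.unique(np.logspace(0.5, 3.0, 15).astype(int))
s = diffusion_entropy(walk, windows)
delta = np.polyfit(np.log(windows), s, 1)[0]
print(f"fitted delta ~ {delta:.3f}; 1/alpha = {1/alpha:.3f}")
```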
Tools for Supporting Distributed Agile Project Planning
NASA Astrophysics Data System (ADS)
Wang, Xin; Maurer, Frank; Morgan, Robert; Oliveira, Josyleuda
Agile project planning plays an important part in agile software development. In distributed settings, project planning is severely impacted by the lack of face-to-face communication and the inability to share paper index cards amongst all meeting participants. To address these issues, several distributed agile planning tools were developed. The tools vary in features, functions and running platforms. In this chapter, we first summarize the requirements for distributed agile planning. Then we give an overview of existing agile planning tools. We also evaluate existing tools based on the tool requirements. Finally, we present some practical advice for both designers and users of distributed agile planning tools.
Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas
NASA Astrophysics Data System (ADS)
Izacard, Olivier
2016-08-01
In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, need to be taken into account, especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes that avoid the analytic complexity of velocity phase-space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency of modeling numerical and experimental non-Maxwellians, (iii) help understand unsolved problems such as diagnostic discrepancies through the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms while removing the numerical error of evaluating velocity phase-space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from a better understanding of some unsolved problems; rather, we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal distribution function, and a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. The latter demystifies Maxwell's demon by statistically describing non-isolated systems.
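For concreteness, a one-dimensional Kappa distribution under one common normalization convention (conventions differ between papers, so treat the prefactor as an assumption); it reduces to a Maxwellian as κ → ∞:

```python
import numpy as np
from scipy.special import gamma as Gamma

def kappa_1d(v, kappa, theta):
    """1-D kappa (generalized Lorentzian) distribution.  As
    kappa -> infinity this recovers the Maxwellian
    exp(-(v/theta)^2) / (sqrt(pi) * theta)."""
    norm = Gamma(kappa + 1.0) / (np.sqrt(np.pi * kappa) * theta
                                 * Gamma(kappa + 0.5))
    return norm * (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

v = np.linspace(-6.0, 6.0, 7)
print(kappa_1d(v, kappa=3.0, theta=1.0))     # enhanced super-thermal tails
print(kappa_1d(v, kappa=200.0, theta=1.0))   # near-Maxwellian
```

The enhanced tails at small κ are exactly the super-thermal population that the abstract invokes to explain the secondary-emission and Langmuir-probe results.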
NASA Astrophysics Data System (ADS)
Bhattacharjee, Biplab
2003-04-01
The paper presents a general formalism for the nth-nearest-neighbor distribution (NND) of identical interacting particles in a fluid confined in a ν-dimensional space. The nth-NND functions, W(n,r¯) (for n=1,2,3,…) in a fluid are obtained hierarchically in terms of the pair correlation function and W(n-1,r¯) alone. The radial distribution function (RDF) profiles obtained from the molecular dynamics (MD) simulation of Lennard-Jones (LJ) fluid is used to illustrate the results. It is demonstrated that the collective structural information contained in the maxima and minima of the RDF profiles being resolved in terms of individual NND functions may provide more insights about the microscopic neighborhood structure around a reference particle in a fluid. Representative comparison between the results obtained from the formalism and the MD simulation data shows good agreement. Apart from the quantities such as nth-NND functions and nth-nearest-neighbor distances, the average neighbor population number is defined. These quantities are evaluated for the LJ model system and interesting density dependence of the microscopic neighborhood shell structures are discussed in terms of them. The relevance of the NND functions in various phenomena is also pointed out.
Measurement of the distribution coefficient of neodymium in cubic ZrO2
NASA Astrophysics Data System (ADS)
Römer, H.; Luther, K.-D.; Assmus, W.
1993-05-01
The incorporation of solute elements into single crystals has been examined for many years. In this paper we investigate the distribution coefficient of Nd2O3 in cubic stabilized zirconium dioxide crystals. The distribution coefficient is measured as a function of the growth velocity. The validity of the Burton-Prim-Slichter theory [J.A. Burton, R.C. Prim and W.P. Slichter, J. Chem. Phys. 21 (1953) 1987] for the system zirconium dioxide/yttrium oxide is confirmed by the experimental results. The value of the equilibrium distribution coefficient is evaluated as k0 = 0.426.
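The Burton-Prim-Slichter relation referenced here has the closed form k_eff = k0 / (k0 + (1 - k0) exp(-v δ / D)); a small helper with illustrative (not measured) growth parameters:

```python
import math

def bps_effective_k(k0, growth_velocity, boundary_layer, diffusivity):
    """Burton-Prim-Slichter effective distribution coefficient:
    k_eff = k0 / (k0 + (1 - k0) * exp(-v * delta / D)).
    k_eff -> k0 as v -> 0, and -> 1 at large growth rates."""
    attenuation = math.exp(-growth_velocity * boundary_layer / diffusivity)
    return k0 / (k0 + (1.0 - k0) * attenuation)

# k0 from the paper's reported value; v (m/s), boundary-layer thickness
# delta (m) and solute diffusivity D (m^2/s) are hypothetical.
k0 = 0.426
for v in (1e-7, 1e-6, 1e-5):
    print(v, bps_effective_k(k0, v, boundary_layer=1e-4, diffusivity=1e-9))
```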
Cowell, Robert G
2018-05-04
Current models for single source and mixture samples, and probabilistic genotyping software based on them used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
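The PGF-inversion trick described above can be sketched generically: evaluate G(s) at the N-th roots of unity and invert with a discrete Fourier transform. The Poisson check below verifies the machinery; the per-molecule amplification PGF is a hypothetical stand-in, not the paper's model:

```python
import numpy as np
from scipy.stats import poisson

def pmf_from_pgf(pgf, n_points=256):
    """Recover p_k from a probability generating function G(s):
    evaluate G at the N-th roots of unity and invert with a DFT,
    p_k = (1/N) * sum_j G(w^j) * w^(-j*k),  w = exp(2*pi*i/N)."""
    s = np.exp(2j * np.pi * np.arange(n_points) / n_points)
    pmf = np.fft.fft(pgf(s)).real / n_points
    return np.clip(pmf, 0.0, None)

# Sanity check against the Poisson(4) PGF, G(s) = exp(lam * (s - 1)).
lam = 4.0
pmf = pmf_from_pgf(lambda s: np.exp(lam * (s - 1.0)))
print(np.max(np.abs(pmf[:20] - poisson.pmf(np.arange(20), lam))))  # ~1e-16

# A compound (branching-style) PGF just composes functions before the
# inversion; this per-molecule "amplification" PGF is hypothetical.
g_child = lambda s: 0.3 * s + 0.7 * s**2   # one copy w.p. 0.3, two w.p. 0.7
pmf2 = pmf_from_pgf(lambda s: np.exp(lam * (g_child(s) - 1.0)))
print(pmf2[:6])
```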
NASA Astrophysics Data System (ADS)
Xia, Xintao; Wang, Zhongyu
2008-10-01
For some methods of stability analysis of a system using statistics, it is difficult to resolve the problems of an unknown probability distribution and a small sample. Therefore, a novel method is proposed in this paper to resolve these problems. This method is independent of the probability distribution and is useful for small-sample systems. After rearrangement of the original data series, the order difference and two polynomial membership functions are introduced to estimate the true value, the lower bound and the upper bound of the system using fuzzy-set theory. Then the empirical distribution function is investigated to ensure a confidence level above 95%, and the degree of similarity is presented to evaluate the stability of the system. Computer simulation cases investigate stable systems with various probability distributions, unstable systems with linear systematic errors and periodic systematic errors, and some mixed systems. The proposed method of stability analysis is thus validated.
CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum, giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was developed in 1988.
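The same overflow/underflow guard can be expressed today as a log-sum-exp; a Python sketch of the idea (not the original C program):

```python
import math

def cumulative_poisson(n, lam):
    """P(X <= n) for X ~ Poisson(lam).  Summing in log space plays the
    same role as CUMPOIS's extra exponential factor: each term
    log p_i = -lam + i*log(lam) - lgamma(i + 1) is exponentiated only
    after subtracting the running maximum."""
    log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                 for i in range(n + 1)]
    m = max(log_terms)
    return math.exp(m + math.log(sum(math.exp(t - m) for t in log_terms)))

print(cumulative_poisson(10, 5.0))      # ~0.9863
print(cumulative_poisson(900, 1000.0))  # large-lambda case, no overflow
```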
NASA Astrophysics Data System (ADS)
Okabe, Shigemitsu; Tsuboi, Toshihiro; Takami, Jun
Power-frequency withstand voltage tests on electric power equipment are regulated in JEC standards by evaluating lifetime reliability with a Weibull distribution function. The evaluation method is still controversial in terms of how a plural number of faults is considered, and several alternative methods have been proposed on this subject. The present paper first discusses the physical meanings of the various evaluation methods and then examines their effects on the power-frequency withstand voltage tests. Further, an appropriate method is investigated for an oil-filled transformer and a gas-insulated switchgear, taking note of the dielectric breakdown or partial discharge mechanisms under various insulating material and structure conditions; the tentative conclusion is that the conventional method remains the most pertinent under present conditions.
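A hedged sketch of the two evaluation ingredients at issue: a two-parameter Weibull breakdown probability and a weakest-link treatment of a plural number of fault sites (parameters purely illustrative, not from the JEC standard):

```python
import math

def weibull_failure_prob(v, scale_eta, shape_beta):
    """Two-parameter Weibull breakdown probability at test voltage v."""
    return 1.0 - math.exp(-((v / scale_eta) ** shape_beta))

def failure_prob_n_sites(v, scale_eta, shape_beta, n_sites):
    """Weakest-link view of n potential fault sites in series: the
    equipment fails if any one site breaks down."""
    survive_one = math.exp(-((v / scale_eta) ** shape_beta))
    return 1.0 - survive_one ** n_sites

print(weibull_failure_prob(500.0, scale_eta=800.0, shape_beta=12.0))
print(failure_prob_n_sites(500.0, 800.0, 12.0, n_sites=5))
```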
Liu, Jian; Miller, William H
2011-03-14
We show the exact expression of the quantum mechanical time correlation function in the phase space formulation of quantum mechanics. The trajectory-based dynamics that conserves the quantum canonical distribution, the equilibrium Liouville dynamics (ELD) proposed in Paper I, is then used to approximately evaluate the exact expression. It gives exact thermal correlation functions (even of nonlinear operators, i.e., nonlinear functions of position or momentum operators) in the classical, high-temperature, and harmonic limits. Various methods have been presented for the implementation of ELD. Numerical tests of the ELD approach in the Wigner or Husimi phase space have been made for a harmonic oscillator and two strongly anharmonic model problems; for each potential, autocorrelation functions of both linear and nonlinear operators have been calculated. This suggests that ELD can be a potentially useful approach for describing quantum effects in complex systems in the condensed phase.
Molecular weight distribution of organic matter by ozonation and biofiltration.
Lin, Yen-Hui
2012-01-01
Molecular weight (MW) distribution of organic matter by ozonation and biofiltration was evaluated using gel chromatography. The MW distribution of organic matter by Sephadex G-25 was observed from groups 2 (MW = 1,029-7,031 g/mol) and 3 (MW = 303-1,029 g/mol) shifted to groups 2, 3 and 4 (MW < 303 g/mol) under ozone doses of 0.1 and 0.4 mg O₃/mg total organic carbon (TOC). The shift in MW increases as ozone dosage increases. Biofiltration effectively degraded the organic molecule of group 2; however, the biofiltration only slightly degraded the organic molecule of group 4. Increased ozone dose destroyed functional groups C═C in phenolic and C-O in alcoholic compounds and increased UV-insensitive biodegradable organic carbon for subsequent biofiltration. Biofiltration effectively degraded organic compounds of alcohols and alkenes at an ozone dose of 0.1 mg O₃/mg TOC. Experimental approaches in this study can be applied to evaluate and diagnose the function of a full-scale process combining ozonation and biofiltration in drinking water treatment plants.
Thornton, B S; Hung, W T; Irving, J
1991-01-01
The response decay data of living cells subject to electric polarization is associated with their relaxation distribution function (RDF) and can be determined using the inverse Laplace transform method. A new polynomial, involving a series of associated Laguerre polynomials, has been used as the approximating function for evaluating the RDF, with the advantage of avoiding the usual arbitrary trial values of a particular parameter in the numerical computations. Some numerical examples are given, followed by an application to cervical tissue. It is found that the average relaxation time and the peak amplitude of the RDF exhibit higher values for tumorous cells than normal cells and might be used as parameters to differentiate them and their associated tissues.
Hinman, Sarah E; Blackburn, Jason K; Curtis, Andrew
2006-01-01
Background To better understand the distribution of typhoid outbreaks in Washington, D.C., the U.S. Public Health Service (PHS) conducted four investigations of typhoid fever. These studies included maps of cases reported between 1 May and 31 October of 1906-1909. These data were entered into a GIS database and analyzed using Ripley's K-function followed by the Gi* statistic in yearly intervals to evaluate spatial clustering, the scale of clustering, and the temporal stability of these clusters. Results The Ripley's K-function indicated no global spatial autocorrelation. The Gi* statistic indicated clustering of typhoid at multiple scales across the four-year time period, refuting the conclusions drawn in all four PHS reports concerning the distribution of cases. While the PHS reports suggested an even distribution of the disease, this study quantified both areas of localized disease clustering and mobile larger regions of clustering, indicating both highly localized and periodic generalized sources of infection within the city. Conclusion The methodology applied in this study was useful for evaluating the spatial distribution and annual-level temporal patterns of typhoid outbreaks in Washington, D.C. from 1906 to 1909. While advanced spatial analyses of historical data sets must be interpreted with caution, this study does suggest that there is utility in these types of analyses and that they provide new insights into the urban patterns of typhoid outbreaks during the early part of the twentieth century. PMID:16566830
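A naive, edge-uncorrected Ripley's K estimator makes the clustering test concrete; real analyses, presumably including the one above, use edge-corrected estimators and Monte Carlo envelopes (the point pattern below is synthetic):

```python
import numpy as np

def ripleys_k(points, radii, area):
    """Naive Ripley's K: K(r) = (A / n^2) * #{ordered pairs with
    d_ij <= r}.  No edge correction, so values dip slightly below
    pi*r^2 near the window boundary even under randomness."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pair_d = d[np.triu_indices(n, k=1)]
    return np.array([area * 2.0 * np.sum(pair_d <= r) / n**2 for r in radii])

rng = np.random.default_rng(3)
pts = rng.random((300, 2))            # hypothetical case locations
radii = np.linspace(0.02, 0.25, 8)
k_obs = ripleys_k(pts, radii, area=1.0)
print(k_obs / (np.pi * radii**2))     # ~1 under complete spatial randomness
```

Values well above 1 at some radius would indicate clustering at that scale, which is the multi-scale question the Gi* follow-up then localizes.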
Njeh, Ines; Sallemi, Lamia; Ayed, Ismail Ben; Chtourou, Khalil; Lehericy, Stephane; Galanaud, Damien; Hamida, Ahmed Ben
2015-03-01
This study investigates a fast distribution-matching, data-driven algorithm for 3D multimodal MRI brain glioma tumor and edema segmentation in different modalities. We learn non-parametric model distributions which characterize the normal regions in the current data. Then, we state our segmentation problems as the optimization of several cost functions of the same form, each containing two terms: (i) a distribution-matching prior, which evaluates a global similarity between distributions, and (ii) a smoothness prior to avoid the occurrence of small, isolated regions in the solution. Obtained following recent bound-relaxation results, the optima of the cost functions yield the complement of the tumor region or edema region in nearly real time. Based on global rather than pixel-wise information, the proposed algorithm does not require external learning from a large, manually segmented training set, as is the case with existing methods. Therefore, the ensuing results are independent of the choice of a training set. Quantitative evaluations over the publicly available training and testing data sets from the MICCAI multimodal brain tumor segmentation challenge (BraTS 2012) demonstrated that our algorithm yields a highly competitive performance for complete edema and tumor segmentation among nine existing competing methods, with an attractive execution time (less than 0.5 s per image). Copyright © 2014 Elsevier Ltd. All rights reserved.
Genome-Wide Association Study of the Genetic Determinants of Emphysema Distribution
Boueiz, Adel; Lutz, Sharon M.; Cho, Michael H.; Hersh, Craig P.; Bowler, Russell P.; Washko, George R.; Halper-Stromberg, Eitan; Bakke, Per; Gulsvik, Amund; Laird, Nan M.; Beaty, Terri H.; Coxson, Harvey O.; Crapo, James D.; Silverman, Edwin K.; Castaldi, Peter J.
2017-01-01
Rationale: Emphysema has considerable variability in the severity and distribution of parenchymal destruction throughout the lungs. Upper lobe–predominant emphysema has emerged as an important predictor of response to lung volume reduction surgery. Yet, aside from alpha-1 antitrypsin deficiency, the genetic determinants of emphysema distribution remain largely unknown. Objectives: To identify the genetic influences of emphysema distribution in non–alpha-1 antitrypsin–deficient smokers. Methods: A total of 11,532 subjects with complete genotype and computed tomography densitometry data in the COPDGene (Genetic Epidemiology of Chronic Obstructive Pulmonary Disease [COPD]; non-Hispanic white and African American), ECLIPSE (Evaluation of COPD Longitudinally to Identify Predictive Surrogate Endpoints), and GenKOLS (Genetics of Chronic Obstructive Lung Disease) studies were analyzed. Two computed tomography scan emphysema distribution measures (difference between upper-third and lower-third emphysema; ratio of upper-third to lower-third emphysema) were tested for genetic associations in all study subjects. Separate analyses in each study population were followed by a fixed effect metaanalysis. Single-nucleotide polymorphism–, gene-, and pathway-based approaches were used. In silico functional evaluation was also performed. Measurements and Main Results: We identified five loci associated with emphysema distribution at genome-wide significance. These loci included two previously reported associations with COPD susceptibility (4q31 near HHIP and 15q25 near CHRNA5) and three new associations near SOWAHB, TRAPPC9, and KIAA1462. Gene set analysis and in silico functional evaluation revealed pathways and cell types that may potentially contribute to the pathogenesis of emphysema distribution. Conclusions: This multicohort genome-wide association study identified new genomic loci associated with differential emphysematous destruction throughout the lungs. These findings may point to new biologic pathways on which to expand diagnostic and therapeutic approaches in chronic obstructive pulmonary disease. Clinical trial registered with www.clinicaltrials.gov (NCT 00608764). PMID:27669027
1986-07-01
[Table-of-contents fragment from a 1986 report on computer-aided operation management systems: functions and applications of an off-line computer-aided operation management system; system comparisons; distribution list; figure list including hardware components, basic functions of a computer-aided operation management system, plant visits, systems reviewed for analysis of basic functions, and progress of software system installation.]
Bayesian functional integral method for inferring continuous data from discrete measurements.
Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul
2012-02-08
Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
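A generic illustration of the posterior-sampling step, assuming a Gaussian misfit, a smoothness prior on second differences, and a positivity constraint; this is a plain random-walk Metropolis sketch, not the paper's exact-likelihood prior construction, and all data and names are hypothetical:

```python
import numpy as np

def log_posterior(isr, data, times, sigma, smooth):
    """Log-posterior for a nonnegative secretion time-course: Gaussian
    data misfit plus a smoothness prior penalizing second differences.
    The 'forward model' here is simply sampling the curve at the
    observation indices."""
    if np.any(isr < 0):
        return -np.inf                         # positivity constraint
    misfit = -0.5 * np.sum((isr[times] - data) ** 2) / sigma**2
    rough = -0.5 * smooth * np.sum(np.diff(isr, n=2) ** 2)
    return misfit + rough

def metropolis(data, times, n_grid, n_iter=20_000, step=0.05,
               sigma=0.1, smooth=50.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(n_grid, data.mean())
    lp = log_posterior(x, data, times, sigma, smooth)
    samples = []
    for it in range(n_iter):
        prop = x + step * rng.standard_normal(n_grid)
        lp_prop = log_posterior(prop, data, times, sigma, smooth)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        if it % 20 == 0:
            samples.append(x.copy())
    return np.array(samples)

times = np.array([0, 10, 20, 30, 39])        # observation indices
data = np.array([0.2, 0.8, 1.1, 0.7, 0.3])   # hypothetical C-peptide-like data
posterior = metropolis(data, times, n_grid=40)
print(posterior.mean(axis=0)[:5])
```

The smoothing weight plays the role of the smoothing timescale discussed above; in the paper's framework that quantity is itself selected by comparing model evidence rather than fixed by hand.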
NASA Astrophysics Data System (ADS)
Sharma, Prabhat Kumar
2016-11-01
A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors in the receiver side. The analysis presented here assumes a unified expression for the PDF of channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using Q-function approximation. Further, the presented results are supported by the Monte Carlo simulations.
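As a point of reference, the standard approximate SER of square M-QAM over AWGN uses the Q-function directly, and the average SER is that expression integrated against the channel-gain PDF; the sketch below uses a unit-mean exponential power gain purely as a placeholder for the M-distribution/misalignment PDF of the abstract:

```python
import math
import random

def q_function(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ser_mqam_awgn(snr_linear, m):
    """Standard first-term approximation for square M-QAM over AWGN:
    P_s ~ 4 * (1 - 1/sqrt(M)) * Q(sqrt(3 * snr / (M - 1)))."""
    q = q_function(math.sqrt(3.0 * snr_linear / (m - 1)))
    return 4.0 * (1.0 - 1.0 / math.sqrt(m)) * q

def average_ser(snr_db_mean, m, n_samples=100_000, seed=1):
    """Monte Carlo average of the conditional SER over a fading PDF;
    the exponential gain here is a placeholder, not the M-distribution."""
    rng = random.Random(seed)
    snr_mean = 10.0 ** (snr_db_mean / 10.0)
    total = 0.0
    for _ in range(n_samples):
        gain = rng.expovariate(1.0)       # unit-mean power gain (placeholder)
        total += ser_mqam_awgn(snr_mean * gain, m)
    return total / n_samples

print(ser_mqam_awgn(10 ** (18 / 10), 16))  # conditional SER at 18 dB
print(average_ser(18.0, 16))               # channel-averaged SER
```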
Flood impacts on a water distribution network
NASA Astrophysics Data System (ADS)
Arrighi, Chiara; Tarani, Fabio; Vicario, Enrico; Castelli, Fabio
2017-12-01
Floods cause damage to people, buildings and infrastructures. Water distribution systems are particularly exposed, since water treatment plants are often located next to the rivers. Failure of the system leads to both direct losses, for instance damage to equipment and pipework contamination, and indirect impact, since it may lead to service disruption and thus affect populations far from the event through the functional dependencies of the network. In this work, we present an analysis of direct and indirect damages on a drinking water supply system, considering the hazard of riverine flooding as well as the exposure and vulnerability of active system components. The method is based on interweaving, through a semi-automated GIS procedure, a flood model and an EPANET-based pipe network model with a pressure-driven demand approach, which is needed when modelling water distribution networks in highly off-design conditions. Impact measures are defined and estimated so as to quantify service outage and potential pipe contamination. The method is applied to the water supply system of the city of Florence, Italy, serving approximately 380 000 inhabitants. The evaluation of flood impact on the water distribution network is carried out for different events with assigned recurrence intervals. Vulnerable elements exposed to the flood are identified and analysed in order to estimate their residual functionality and to simulate failure scenarios. Results show that in the worst failure scenario (no residual functionality of the lifting station and a 500-year flood), 420 km of pipework would require disinfection with an estimated cost of EUR 21 million, which is about 0.5 % of the direct flood losses evaluated for buildings and contents. Moreover, if flood impacts on the water distribution network are considered, the population affected by the flood is up to 3 times the population directly flooded.
On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
Multi-Function Displays: A Guide for Human Factors Evaluation
2013-11-01
[Report-documentation fragment: a guide designed to assist in the human factors evaluation of aircraft multi-function displays. Cited references include a study of mental workload in rotary-wing aircraft (Ergonomics, 36, 1121-40) and Smith, S., & Mosier, J. (1984), design guidelines for the user interface. Key words: Multi-Function Displays, Display Design, Avionics, Human Factors Criteria, Aircraft.]
Whitney, James E.; Whittier, Joanna B.; Paukert, Craig P.
2017-01-01
Environmental filtering and competitive exclusion are hypotheses frequently invoked in explaining species' environmental niches (i.e., geographic distributions). A key assumption in both hypotheses is that the functional niche (i.e., species traits) governs the environmental niche, but few studies have rigorously evaluated this assumption. Furthermore, phylogeny could be associated with these hypotheses if it is predictive of functional niche similarity via phylogenetic signal or convergent evolution, or of environmental niche similarity through phylogenetic attraction or repulsion. The objectives of this study were to investigate relationships between environmental niches, functional niches, and phylogenies of fishes of the Upper (UCRB) and Lower (LCRB) Colorado River Basins of southwestern North America. We predicted that functionally similar species would have similar environmental niches (i.e., environmental filtering) and that closely related species would be functionally similar (i.e., phylogenetic signal) and possess similar environmental niches (i.e., phylogenetic attraction). Environmental niches were quantified using environmental niche modeling, and functional similarity was determined using functional trait data. Nonnatives in the UCRB provided the only support for environmental filtering, which resulted from several warmwater nonnatives having dam number as a common predictor of their distributions, whereas several cool- and coldwater nonnatives shared mean annual air temperature as an important distributional predictor. Phylogenetic signal was supported for both natives and nonnatives in both basins. Lastly, phylogenetic attraction was only supported for native fishes in the LCRB and for nonnative fishes in the UCRB. Our results indicated that functional similarity was heavily influenced by evolutionary history, but that phylogenetic relationships and functional traits may not always predict the environmental distribution of species. However, the similarity of environmental niches among warmwater centrarchids, ictalurids, fundulids, and poeciliids in the UCRB indicated that dam removals could influence the distribution of these nonnatives simultaneously, thus providing greater conservation benefits. However, this same management strategy would have more limited effects on nonnative salmonids, catostomids, and percids with colder temperature preferences, thus necessitating other management strategies to control these species.
Zhao, Chao; Jiang, Jingchi; Guan, Yi; Guo, Xitong; He, Bin
2018-05-01
Electronic medical records (EMRs) contain medical knowledge that can be used for clinical decision support (CDS). Our objective is to develop a general system that can extract and represent knowledge contained in EMRs to support three CDS tasks (test recommendation, initial diagnosis, and treatment plan recommendation) given the condition of a patient. We extracted four kinds of medical entities from records and constructed an EMR-based medical knowledge network (EMKN), in which nodes are entities and edges reflect their co-occurrence in a record. Three bipartite subgraphs (bigraphs) were extracted from the EMKN, one to support each task. One part of the bigraph was the given condition (e.g., symptoms), and the other was the condition to be inferred (e.g., diseases). Each bigraph was regarded as a Markov random field (MRF) to support the inference. We proposed three graph-based energy functions and three likelihood-based energy functions. Two of these functions are based on knowledge representation learning and can provide distributed representations of medical entities. Two EMR datasets and three metrics were utilized to evaluate the performance. As a whole, the evaluation results indicate that the proposed system outperformed the baseline methods. The distributed representation of medical entities does reflect similarity relationships with respect to knowledge level. Combining EMKN and MRF is an effective approach for general medical knowledge representation and inference. Different tasks, however, require individually designed energy functions. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Omar, Artur; Andreo, Pedro; Poludniowski, Gavin
2018-07-01
Different theories of the intrinsic bremsstrahlung angular distribution (i.e., the shape function) have been evaluated using Monte Carlo calculations for various target materials and incident electron energies between 20 keV and 300 keV. The shape functions considered were the plane-wave first Born approximation cross sections (i) 2BS [high-energy result, screened nucleus], (ii) 2BN [general result, bare nucleus], (iii) KM [2BS modified to emulate 2BN], and (iv) SIM [leading term of 2BN]; (v) expression based on partial-waves expansion, KQP; and (vi) a uniform spherical distribution, UNI [a common approximation in certain analytical models]. The shape function was found to have an important impact on the bremsstrahlung emerging from thin foil targets in which the incident electrons undergo few elastic scatterings before exiting the target material. For thick transmission and reflection targets the type of shape function had less importance, as the intrinsic bremsstrahlung angular distribution was masked by the diffuse directional distribution of multiple scattered electrons. Predictions made using the 2BN and KQP theories were generally in good agreement, suggesting that the effect of screening and the constraints of the Born approximation on the intrinsic angular distribution may be acceptable. The KM and SIM shape functions deviated notably from KQP for low electron energies (< 50 keV), while 2BS and UNI performed poorly over most of the energy range considered; the 2BS shape function was found to be too forward-focused in emission, while UNI was not forward-focused enough. The results obtained emphasize the importance of the intrinsic bremsstrahlung angular distribution for theoretical predictions of x-ray emission, which is relevant in various applied disciplines, including x-ray crystallography, electron-probe microanalysis, security and industrial inspection, medical imaging, as well as low- and medium (orthovoltage) energy radiotherapy.
A Statistical Treatment of Bioassay Pour Fractions
NASA Technical Reports Server (NTRS)
Barengoltz, Jack; Hughes, David W.
2014-01-01
The binomial probability distribution is used to treat the statistics of a microbiological sample that is split into two parts, with only one part evaluated for spore count. One wishes to estimate the total number of spores in the sample based on the counts obtained from the part that is evaluated (pour fraction). Formally, the binomial distribution is recharacterized as a function of the observed counts (successes), with the total number (trials) an unknown. The pour fraction is the probability of success per spore (trial). This distribution must be renormalized in terms of the total number. Finally, the new renormalized distribution is integrated and mathematically inverted to yield the maximum estimate of the total number as a function of a desired level of confidence ( P(
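One hedged reading of the inversion described above, using a binomial upper bound on the total count (the exact NASA formulation may differ in detail):

```python
from scipy.stats import binom

def max_total_spores(observed, pour_fraction, confidence=0.95):
    """Upper estimate of the total spore count N given 'observed' counts
    in the evaluated pour fraction f: the largest N whose binomial model
    still gives P(X <= observed | N, f) >= 1 - confidence.  This mirrors
    the renormalize-and-invert construction sketched in the abstract."""
    n = max(observed, 1)
    while binom.cdf(observed, n + 1, pour_fraction) >= 1.0 - confidence:
        n += 1
    return n

# e.g. 3 spores counted in a 25% pour fraction
print(max_total_spores(observed=3, pour_fraction=0.25, confidence=0.95))
```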
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Sanderson, A. C.
1994-01-01
Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, that can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a case of remote teleoperation which assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.
Responses of the Jovian Atmosphere to Cometary Particles and Photon Impacts
NASA Technical Reports Server (NTRS)
Dalgarno, A.
1998-01-01
Spectra of soft x-ray and EUV emissions of oxygen ions, precipitating into the Jovian atmosphere, are calculated, taking into account the dynamical character of the energy and charge distributions of the ions as they propagate. Monte-Carlo simulations are performed using experimental and theoretical cross sections of ion collisions with the atmospheric gases. The numbers of x-ray and EUV photons produced per precipitating oxygen ion are calculated as functions of the initial ion energy and charge. The energy and charge distribution functions are used to evaluate the intensities of characteristic x-ray and EUV spectral emission lines of oxygen ions in the Jovian aurora.
Dolan, Paul; Tsuchiya, Aki
2009-01-01
The literature on income distribution has attempted to evaluate different degrees of inequality using a social welfare function (SWF) approach. However, it has largely ignored the source of such inequalities, and has thus failed to consider different degrees of inequity. The literature on egalitarianism has addressed issues of equity, largely in relation to individual responsibility. This paper builds upon these two literatures, and introduces individual responsibility into the SWF. Results from a small-scale study of people's preferences in relation to the distribution of health benefits are presented to illustrate how the parameter values of a SWF might be determined.
Examples of measurement uncertainty evaluations in accordance with the revised GUM
NASA Astrophysics Data System (ADS)
Runje, B.; Horvatic, A.; Alar, V.; Medic, S.; Bosnjakovic, A.
2016-11-01
The paper presents examples of the evaluation of uncertainty components in accordance with the current and revised Guide to the expression of uncertainty in measurement (GUM). In accordance with the proposed revision of the GUM, a Bayesian approach was conducted for both type A and type B evaluations. The law of propagation of uncertainty (LPU) and the law of propagation of distributions, applied through the Monte Carlo method (MCM), were used to evaluate associated standard uncertainties, expanded uncertainties and coverage intervals. Furthermore, the influence of a non-Gaussian dominant input quantity and an asymmetric distribution of the output quantity y on the evaluation of measurement uncertainty was analyzed. In the case when the coverage interval is not probabilistically symmetric, the coverage interval for the probability P is estimated from the experimental probability density function using the Monte Carlo method. Key highlights of the proposed revision of the GUM were analyzed through a set of examples.
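A brief sketch of the propagation-of-distributions step, with an illustrative two-input measurement model (the model, input PDFs, and sample size are placeholders, not the paper's examples). The shortest empirical coverage interval is computed, which is the appropriate choice when the output distribution is asymmetric:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcm_coverage(model, samplers, n=200000, p=0.95):
    """GUM-S1-style Monte Carlo: draw from the input PDFs, push the draws
    through the measurement model, and read the coverage interval off the
    empirical output distribution."""
    xs = [s(n) for s in samplers]
    y = np.sort(model(*xs))
    m = int(p * n)
    widths = y[m:] - y[:n - m]          # all candidate 95% intervals
    i = int(np.argmin(widths))          # shortest one, for asymmetric outputs
    return y.mean(), y.std(ddof=1), (y[i], y[i + m])

# Example: Y = X1 / X2 with a rectangular (type B) and a Gaussian (type A) input.
mean, u, (lo, hi) = mcm_coverage(
    lambda x1, x2: x1 / x2,
    [lambda n: rng.uniform(9.9, 10.1, n),   # rectangular input PDF
     lambda n: rng.normal(2.0, 0.05, n)])   # Gaussian input PDF
print(mean, u, lo, hi)
```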
Radiation dose in temporomandibular joint zonography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coucke, M.E.; Bourgoignie, R.R.; Dermaut, L.R.
1991-06-01
Temporomandibular joint morphology and function can be evaluated by panoramic zonography. Thermoluminescent dosimetry was applied to evaluate the radiation dose at predetermined sites on a phantom (eye, thyroid, pituitary, and parotid) and the dose distribution on the skin of the head and neck when the TMJ program of the Zonarc panoramic x-ray unit was used. Findings are discussed with reference to similar radiographic techniques.
Potential Role of Lung Ventilation Scintigraphy in the Assessment of COPD
Cukic, Vesna; Begic, Amela
2014-01-01
Objective: To highlight the importance of lung ventilation scintigraphy (LVS) for studying the regional distribution of lung ventilation, to describe the most frequent abnormal patterns of lung ventilation distribution obtained by this technique in COPD, and to compare the information obtained by LVS with that obtained by traditional lung function tests. Material and methods: The research was done in 20 patients with previously diagnosed COPD who were treated in the Intensive care unit of the Clinic for pulmonary diseases and TB “Podhrastovi”, Clinical Center, University of Sarajevo, in exacerbation of COPD during the first three months of 2014. Each patient underwent testing of pulmonary function by body plethysmography and ventilation/perfusion lung scintigraphy with the radiopharmaceuticals Technegas and 111 MBq Tc-99m-MAA. We compared the results obtained by these two methods. Results: All patients with COPD had impaired lung function tests as examined by body plethysmography, implying airflow obstruction, but LVS indicates not only airflow obstruction and reduced ventilation; it also indicates disorders in the distribution of lung ventilation. Conclusion: LVS may add further information to the functional evaluation of COPD beyond that provided by traditional lung function tests and may contribute to characterizing the different phenotypes of COPD. PMID:25132709
Path probability of stochastic motion: A functional approach
NASA Astrophysics Data System (ADS)
Hattori, Masayuki; Abe, Sumiyoshi
2016-06-01
The path probability of a particle undergoing stochastic motion is studied by the use of a functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.
Testing the anisotropy in the angular distribution of Fermi/GBM gamma-ray bursts
NASA Astrophysics Data System (ADS)
Tarnopolski, M.
2017-12-01
Gamma-ray bursts (GRBs) were confirmed to be of extragalactic origin due to their isotropic angular distribution, combined with the fact that they exhibited an intensity distribution that deviated strongly from the -3/2 power law. This finding was later confirmed with the first redshift, equal to at least z = 0.835, measured for GRB970508. Despite this result, the data from CGRO/BATSE and Swift/BAT indicate that long GRBs are indeed distributed isotropically, but the distribution of short GRBs is anisotropic. Fermi/GBM has detected 1669 GRBs to date, and their sky distribution is examined in this paper. A number of statistical tests are applied: nearest-neighbour analysis, fractal dimension, dipole and quadrupole moments of the distribution function decomposed into spherical harmonics, a binomial test, and the two-point angular correlation function. Monte Carlo benchmark testing of each test is performed in order to evaluate its reliability. It is found that short GRBs are distributed anisotropically in the sky, and long ones have an isotropic distribution. The probability that these results are not a chance occurrence is equal to at least 99.98 per cent and 30.68 per cent for short and long GRBs, respectively. The cosmological context of this finding and its relation to large-scale structures is discussed.
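As a concrete illustration of the Monte Carlo benchmarking idea, a hedged sketch of one of the listed statistics, the dipole moment, calibrated against simulated isotropic skies; the paper's exact implementation and the other tests (fractal dimension, correlation function, etc.) are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def dipole_moment(unit_vectors):
    """Magnitude of the mean direction vector; ~0 for an isotropic sample."""
    return np.linalg.norm(unit_vectors.mean(axis=0))

def isotropic_sample(n):
    """n points drawn uniformly on the unit sphere (Gaussian trick)."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def anisotropy_p_value(unit_vectors, n_mc=10000):
    """Fraction of isotropic Monte Carlo skies whose dipole is at least as
    large as the observed one -- the benchmark-testing idea in the abstract."""
    observed = dipole_moment(unit_vectors)
    n = len(unit_vectors)
    mc = np.array([dipole_moment(isotropic_sample(n)) for _ in range(n_mc)])
    return (mc >= observed).mean()

# e.g. a mock catalogue of 300 burst directions:
print(anisotropy_p_value(isotropic_sample(300)))
```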
The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds
NASA Astrophysics Data System (ADS)
Li, Zhi; Brissette, Fancois; Chen, Jie
2013-04-01
Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, due to its simplicity and good performance. However, various probability distributions have been reported to simulate precipitation amount, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amount. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices or statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating the precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated to the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is not nearly as obvious, the mixed exponential distribution appears nonetheless to be the best candidate for hydrological modeling. The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
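For illustration, a minimal EM fit of the winning three-parameter model (the two-component mixed exponential) to wet-day amounts; the initialization and iteration count are arbitrary choices, not the paper's fitting procedure:

```python
import numpy as np

def fit_mixed_exponential(x, n_iter=500):
    """EM fit of the mixed exponential density
    p(x) = w/m1 * exp(-x/m1) + (1-w)/m2 * exp(-x/m2) to wet-day amounts x."""
    x = np.asarray(x, dtype=float)
    w, m1, m2 = 0.5, x.mean() * 0.5, x.mean() * 1.5   # crude initialization
    for _ in range(n_iter):
        d1 = w / m1 * np.exp(-x / m1)
        d2 = (1 - w) / m2 * np.exp(-x / m2)
        r = d1 / (d1 + d2 + 1e-300)      # E-step: component responsibilities
        w = r.mean()                      # M-step: update weight and means
        m1 = (r * x).sum() / r.sum()
        m2 = ((1 - r) * x).sum() / (1 - r).sum()
    return w, m1, m2

# e.g. synthetic amounts: light showers mixed with heavier events
rng = np.random.default_rng(2)
amounts = np.concatenate([rng.exponential(2.0, 700), rng.exponential(12.0, 300)])
print(fit_mixed_exponential(amounts))   # roughly recovers (0.7, 2, 12)
```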
Efficient evaluation of the material response of tissues reinforced by statistically oriented fibres
NASA Astrophysics Data System (ADS)
Hashlamoun, Kotaybah; Grillo, Alfio; Federico, Salvatore
2016-10-01
For several classes of soft biological tissues, modelling complexity is in part due to the arrangement of the collagen fibres. In general, the arrangement of the fibres can be described by defining, at each point in the tissue, the structure tensor (i.e. the tensor product of the unit vector of the local fibre arrangement by itself) and a probability distribution of orientation. In this approach, assuming that the fibres do not interact with each other, the overall contribution of the collagen fibres to a given mechanical property of the tissue can be estimated by means of an averaging integral of the constitutive function describing the mechanical property under study over the set of all possible directions in space. Except for the particular case of fibre constitutive functions that are polynomial in the transversely isotropic invariants of the deformation, the averaging integral cannot be evaluated directly in a single calculation because, in general, the integrand depends both on deformation and on fibre orientation in a non-separable way. The problem is thus, in a sense, analogous to that of solving the integral of a function of two variables that cannot be split up into the product of two functions, each depending only on one of the variables. Although numerical schemes can be used to evaluate the integral at each deformation increment, this is computationally expensive. With the purpose of containing computational costs, this work proposes approximation methods that are based on the direct integrability of polynomial functions and that do not require the step-by-step evaluation of the averaging integrals. Three different methods are proposed: (a) a Taylor expansion of the fibre constitutive function in the transversely isotropic invariants of the deformation; (b) a Taylor expansion of the fibre constitutive function in the structure tensor; (c) for the case of a fibre constitutive function having a polynomial argument, an approximation in which the directional average of the constitutive function is replaced by the constitutive function evaluated at the directional average of the argument. Each of the proposed methods approximates the averaged constitutive function in such a way that it is multiplicatively decomposed into the product of a function of the deformation only and a function of the structure tensors only. In order to assess the accuracy of these methods, we evaluate the constitutive functions of the elastic potential and the Cauchy stress, for a biaxial test, under different conditions, i.e. different fibre distributions and different ratios of the nominal strains in the two directions. The results are then compared against those obtained for an averaging method available in the literature, as well as against the integration made at each increment of deformation.
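For context, a sketch of the step-by-step directional averaging that the proposed approximations are designed to avoid: a midpoint quadrature of the integral of a per-fibre response weighted by an orientation density over the unit sphere. Both `psi` and `rho` are illustrative stand-ins, not the paper's constitutive functions:

```python
import numpy as np

def directional_average(psi, rho, n_theta=64, n_phi=128):
    """Averaging integral <psi> = ∫_S² rho(m) psi(m) dS over all fibre
    directions m, by midpoint quadrature in spherical angles (theta, phi)."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    m = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], -1)
    dS = np.sin(T) * (np.pi / n_theta) * (2 * np.pi / n_phi)  # area element
    return (psi(m) * rho(m) * dS).sum()

# Sanity check: uniform orientation density, psi = (m · e_z)^2 averages to 1/3.
uniform = lambda m: np.full(m.shape[:-1], 1.0 / (4.0 * np.pi))
print(directional_average(lambda m: m[..., 2] ** 2, uniform))  # ≈ 0.3333
```

In the paper's setting this quadrature must be repeated at every deformation increment, which is precisely the cost the polynomial-expansion methods (a)-(c) remove.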
NASA Technical Reports Server (NTRS)
Beach, R. F.; Kimnach, G. L.; Jett, T. A.; Trash, L. M.
1989-01-01
The Lewis Research Center's Power Management and Distribution (PMAD) System testbed and its use in the evaluation of control concepts applicable to the NASA Space Station Freedom electric power system (EPS) are described. The facility was constructed to allow testing of control hardware and software in an environment functionally similar to the space station electric power system. Control hardware and software have been developed to allow operation of the testbed power system in a manner similar to a supervisory control and data acquisition (SCADA) system employed by utility power systems for control. The system hardware and software are described.
NASA Astrophysics Data System (ADS)
Cho, Jeonghyun; Han, Cheolheui; Cho, Leesang; Cho, Jinsoo
2003-08-01
This paper treats the kernel function of an integral equation that relates a known or prescribed upwash distribution to an unknown lift distribution for a finite wing. The pressure kernel functions of the singular integral equation are summarized for all speed ranges in the Laplace transform domain. The sonic kernel function has been reduced to a form which can be conveniently evaluated as a finite limit from both the subsonic and supersonic sides when the Mach number tends to one. Several examples are solved, including rectangular wings, swept wings, a supersonic transport wing, and a harmonically oscillating wing. Present results are given together with other numerical data, showing continuous results through the unit Mach number. Computed results are in good agreement with other numerical results.
Peculiar velocity effect on galaxy correlation functions in nonlinear clustering regime
NASA Astrophysics Data System (ADS)
Matsubara, Takahiko
1994-03-01
We studied the distortion of the apparent distribution of galaxies in redshift space contaminated by the peculiar velocity effect. Specifically, we obtained the expressions for N-point correlation functions in redshift space with a given functional form for the velocity distribution f(v), and evaluated two- and three-point correlation functions quantitatively. The effect of velocity correlations is also discussed. When the two-point correlation function in real space has a power-law form, ξ_r(r) ∝ r^(-γ), the redshift-space counterpart on small scales also has a power-law form but with an increased power-law index: ξ_s(s) ∝ s^(1-γ). When the three-point correlation function has the hierarchical form and the two-point correlation function has the power-law form in real space, the hierarchical form of the three-point correlation function is almost preserved in redshift space. The above analytic results are compared with the direct analysis based on N-body simulation data for cold dark matter models. Implications for the hierarchical clustering ansatz are discussed in detail.
Myocardium tracking via matching distributions.
Ben Ayed, Ismail; Li, Shuo; Ross, Ian; Islam, Ali
2009-01-01
The goal of this study is to investigate automatic myocardium tracking in cardiac Magnetic Resonance (MR) sequences using global distribution matching via level-set curve evolution. Rather than relying on pixelwise information as in existing approaches, distribution matching compares intensity distributions and, consequently, is well suited to the myocardium tracking problem. Starting from a manual segmentation of the first frame, two curves are evolved in order to recover the endocardium (inner myocardium boundary) and the epicardium (outer myocardium boundary) in all the frames. For each curve, the evolution equation is sought following the maximization of a functional containing two terms: (1) a distribution matching term measuring the similarity between the non-parametric intensity distributions sampled from inside and outside the curve and the model distributions of the corresponding regions estimated from the previous frame; (2) a gradient term for smoothing the curve and biasing it toward high intensity gradients. The Bhattacharyya coefficient is used as a similarity measure between distributions. The functional maximization is obtained by the Euler-Lagrange ascent equation of curve evolution, and efficiently implemented via level-sets. The performance of the proposed distribution matching was quantitatively evaluated by comparisons with independent manual segmentations approved by an experienced cardiologist. The method was applied to ten 2D mid-cavity MR sequences corresponding to ten different subjects. Although neither shape prior knowledge nor curve coupling were used, quantitative evaluation demonstrated that the results were consistent with manual segmentations. The proposed method compares well with existing methods. The algorithm also yields a satisfying reproducibility. Distribution matching leads to a myocardium tracking which is more flexible and applicable than existing methods because the algorithm uses only the current data, i.e., it does not require training, and consequently the solution is not bounded to shape/intensity prior information learned from a finite training set.
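The similarity measure at the heart of the matching term is simple to state; a minimal sketch, assuming the intensity histograms have already been sampled from inside and outside the curve:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two intensity histograms:
    1 for identical distributions, 0 for non-overlapping ones."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p /= p.sum()                      # normalize to probability distributions
    q /= q.sum()
    return np.sqrt(p * q).sum()

# e.g. comparing a region histogram to the model from the previous frame:
region = np.histogram(np.random.default_rng(3).normal(100, 15, 5000),
                      bins=64, range=(0, 255))[0]
model = np.histogram(np.random.default_rng(4).normal(105, 15, 5000),
                     bins=64, range=(0, 255))[0]
print(bhattacharyya(region, model))   # close to 1 for similar regions
```

In the tracking functional this coefficient is what the curve evolution drives upward, frame by frame.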
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) with Bayesian statistical inference, by comparing theory to experiment. The formal rule related to this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × a likelihood function. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters (referred to as the parameter vector x) knowing a prior information on these parameters and a likelihood which gives the probability density function of observing a data set knowing x. To solve this problem, two major paths could be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (existing in the traditional adjustment procedure based on chi-square minimization) and propose alternatives in the choice of probability density distributions for priors and likelihoods. This paper will propose the use of what we are calling Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) in the whole energy range from the thermal, resonance, and continuum ranges for all nuclear reaction models at these energies. Algorithms will be presented based on Monte-Carlo sampling and Markov chains. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance, and continuum evaluation, as well as multigroup cross section data assimilation, will be presented.
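A minimal sketch of the sampling idea, using a random-walk Metropolis rule (one Markov-chain variant; the paper's algorithms may differ). The prior, likelihood, starting point, and step size are all user-supplied assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def bayesian_mc(log_prior, log_like, x0, step, n=50000):
    """Random-walk Metropolis sampling of pdf(posterior) ∝ pdf(prior) × likelihood,
    the Monte Carlo alternative to GLS chi-square fitting."""
    x = np.asarray(x0, dtype=float)
    lp = log_prior(x) + log_like(x)
    chain = []
    for _ in range(n):
        y = x + step * rng.normal(size=x.shape)     # propose a move
        lq = log_prior(y) + log_like(y)
        if np.log(rng.uniform()) < lq - lp:          # Metropolis acceptance rule
            x, lp = y, lq
        chain.append(x.copy())
    return np.array(chain)                           # posterior parameter samples

# e.g. a toy 2-parameter problem with Gaussian prior and likelihood:
chain = bayesian_mc(lambda x: -0.5 * (x ** 2).sum(),
                    lambda x: -0.5 * (((x - 1.0) / 0.3) ** 2).sum(),
                    x0=[0.0, 0.0], step=0.2)
print(chain.mean(axis=0))   # posterior mean estimate
```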
1976-08-01
[Fragmentary OCR text: figure-list entries ("bare soil and grass areas, Vicksburg, Mississippi"; "Schematic of typical thermal IR scanner system"; "Sensor spatial...") mixed with body text. The recoverable content states that the terrain characteristics of interest fall into the following categories: a. soils, b. vegetation, c. topography, d. bedrock; that knowledge of these characteristics and their distribution is required; and that it is necessary to know the changes in soil, vegetation, topography, and bedrock characteristics as a function of time as well as their spatial distribution.]
Beyond-Standard-Model Tensor Interaction and Hadron Phenomenology.
Courtoy, Aurore; Baeßler, Stefan; González-Alonso, Martín; Liuti, Simonetta
2015-10-16
We evaluate the impact of recent developments in hadron phenomenology on extracting possible fundamental tensor interactions beyond the standard model. We show that a novel class of observables, including the chiral-odd generalized parton distributions and the transversity parton distribution function, can contribute to the constraints on this quantity. Experimental extractions of the tensor hadronic matrix elements, if sufficiently precise, will provide a so-far absent testing ground for lattice QCD calculations.
A database system to support image algorithm evaluation
NASA Technical Reports Server (NTRS)
Lien, Y. E.
1977-01-01
The design is given of an interactive image database system IMDB, which allows the user to create, retrieve, store, display, and manipulate images through the facility of a high-level, interactive image query (IQ) language. The query language IQ permits the user to define false color functions, pixel value transformations, overlay functions, zoom functions, and windows. The user manipulates the images through generic functions. The user can direct images to display devices for visual and qualitative analysis. Image histograms and pixel value distributions can also be computed to obtain a quantitative analysis of images.
Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas
Izacard, Olivier
2016-08-02
In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from the better understanding of some unsolved problems; rather, we focus here on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal, and a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. In conclusion, the latter demystifies Maxwell's demon by statistically describing non-isolated systems.
Predicting functional communication ability in children with cerebral palsy at school entry.
Coleman, Andrea; Weir, Kelly; Ware, Robert S; Boyd, Roslyn
2015-03-01
To explore the value of demographic, environmental, and early clinical characteristics in predicting functional communication in children with cerebral palsy (CP) at school entry. Data are from an Australian prospective longitudinal study of children with CP. Children assessed at 18 to 24 and 48 to 60 months corrected age were included in the study. Functional communication was classified at 48 to 60 months using the Communication Function Classification System (CFCS). Predictive variables included communication skills at 18 to 24 months, evaluated using the Communication and Symbolic Behavioural Scales Developmental Profile (CSBS-DP) Infant-Toddler Checklist. Early Gross Motor Function Classification System (GMFCS), Manual Ability Classification System, and motor type and distribution were evaluated by two physiotherapists. Demographic and comorbid variables were obtained through parent interview with a paediatrician or rehabilitation specialist. A total of 114 children (76 males, 38 females) were included in the study. At 18 to 24 months the mean CSBS-DP was 84.9 (SD 19.0). The CFCS distribution at 48 to 60 months was I = 36 (32%), II = 25 (22%), III = 20 (18%), IV = 19 (17%), and V = 14 (12%). In multivariable regression analysis, only CSBS-DP (p<0.01) and GMFCS (p<0.01) at 18 to 24 months were predictors of functional communication at school entry. Body structure and function, and not environmental factors, impact functional communication at school entry in children with CP. This provides valuable guidance for early screening, parent education, and future planning of intervention programs to improve functional communication. © 2014 Mac Keith Press.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertelli, N.; Valeo, E.J.; Green, D.L.
At the power levels required for significant heating and current drive in magnetically-confined toroidal plasma, modification of the particle distribution function from a Maxwellian shape is likely [T. H. Stix, Nucl. Fusion 15, 737 (1975)], with consequent changes in wave propagation and in the location and amount of absorption. In order to study these effects computationally, both the finite-Larmor-radius and the high-harmonic fast wave (HHFW) versions of the full-wave, hot-plasma toroidal simulation code TORIC [M. Brambilla, Plasma Phys. Control. Fusion 41, 1 (1999) and M. Brambilla, Plasma Phys. Control. Fusion 44, 2423 (2002)] have been extended to allow the prescription of arbitrary velocity distributions of the form f(v∥, v⊥, ψ, θ). For hydrogen (H) minority heating of a deuterium (D) plasma with anisotropic Maxwellian H distributions, the fractional H absorption varies significantly with changes in parallel temperature but is essentially independent of perpendicular temperature. On the other hand, for the HHFW regime with an anisotropic Maxwellian fast ion distribution, the fractional beam ion absorption varies mainly with changes in the perpendicular temperature. The evaluation of the wave field and power absorption, through the full wave solver, with the ion distribution function provided by either Monte-Carlo particle or Fokker-Planck codes is also examined for Alcator C-Mod and NSTX plasmas. Non-Maxwellian effects generally tend to increase the absorption with respect to the equivalent Maxwellian distribution.
NASA Astrophysics Data System (ADS)
Bertelli, N.; Valeo, E. J.; Green, D. L.; Gorelenkova, M.; Phillips, C. K.; Podestà, M.; Lee, J. P.; Wright, J. C.; Jaeger, E. F.
2017-05-01
At the power levels required for significant heating and current drive in magnetically-confined toroidal plasma, modification of the particle distribution function from a Maxwellian shape is likely (Stix 1975 Nucl. Fusion 15 737), with consequent changes in wave propagation and in the location and amount of absorption. In order to study these effects computationally, both the finite-Larmor-radius and the high-harmonic fast wave (HHFW) versions of the full-wave, hot-plasma toroidal simulation code TORIC (Brambilla 1999 Plasma Phys. Control. Fusion 41 1 and Brambilla 2002 Plasma Phys. Control. Fusion 44 2423) have been extended to allow the prescription of arbitrary velocity distributions of the form f(v∥, v⊥, ψ, θ). For hydrogen (H) minority heating of a deuterium (D) plasma with anisotropic Maxwellian H distributions, the fractional H absorption varies significantly with changes in parallel temperature but is essentially independent of perpendicular temperature. On the other hand, for the HHFW regime with an anisotropic Maxwellian fast ion distribution, the fractional beam ion absorption varies mainly with changes in the perpendicular temperature. The evaluation of the wave field and power absorption, through the full wave solver, with the ion distribution function provided by either Monte-Carlo particle or Fokker-Planck codes is also examined for Alcator C-Mod and NSTX plasmas. Non-Maxwellian effects generally tend to increase the absorption with respect to the equivalent Maxwellian distribution.
Computer simulation of ledge formation and ledge interaction for the silicon (111) free surface
NASA Technical Reports Server (NTRS)
Balamane, H.; Halicioglu, T.; Tiller, W. A.
1987-01-01
Both strip and triangular clusters, composed of [2-1-1] line ledges, have been simulated on the Si (111) surface. The long-range ledge-ledge interaction and the surface stress tensor distribution have been evaluated for these two pill-box geometries using a semiempirical potential-energy function that incorporates both two-body and three-body contributions. The consequences of the ledge-ledge interaction for two-dimensional nucleation on Si (111) have been evaluated as a function of Si adatom supersaturation and shown to differ significantly from conventional theory, where such interaction is neglected.
A Method for Evaluating Tuning Functions of Single Neurons based on Mutual Information Maximization
NASA Astrophysics Data System (ADS)
Brostek, Lukas; Eggert, Thomas; Ono, Seiji; Mustari, Michael J.; Büttner, Ulrich; Glasauer, Stefan
2011-03-01
We introduce a novel approach for the evaluation of neuronal tuning functions, which can be expressed by the conditional probability of observing a spike given any combination of independent variables. This probability can be estimated from experimentally available data. By maximizing the mutual information between the probability distribution of the spike occurrence and that of the variables, the dependence of the spike on the input variables is maximized as well. We used this method to analyze the dependence of neuronal activity in cortical area MSTd on signals related to movement of the eye and retinal image movement.
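A sketch of the core quantity, using a plug-in (histogram) estimator over binned data; the binning and estimator are illustrative assumptions, and the paper's estimator may differ:

```python
import numpy as np

def mutual_information(spikes, variable, bins=32):
    """Plug-in estimate of I(spike; variable) in bits from binned data --
    the quantity maximized when selecting which input variables the
    tuning function should depend on."""
    spikes = np.asarray(spikes, dtype=bool)
    variable = np.asarray(variable, dtype=float)
    edges = np.histogram_bin_edges(variable, bins)
    joint = np.array([np.histogram(variable[~spikes], edges)[0],
                      np.histogram(variable[spikes], edges)[0]], dtype=float)
    joint /= joint.sum()
    px = joint.sum(axis=0, keepdims=True)   # marginal over the variable
    ps = joint.sum(axis=1, keepdims=True)   # marginal over spike / no spike
    nz = joint > 0
    return (joint[nz] * np.log2(joint[nz] / (ps @ px)[nz])).sum()

# e.g. spikes more likely at high values of a stimulus variable:
rng = np.random.default_rng(6)
v = rng.normal(size=20000)
s = rng.uniform(size=20000) < 1 / (1 + np.exp(-3 * v))
print(mutual_information(s, v))
```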
Hesse, Bettina; Fröber, Rosemarie; Fischer, Martin S; Schilling, Nadja
2013-12-01
Human back muscles have been classified as local stabilizers, global stabilizers and global mobilizers. This concept is supported by the distribution of slow and fast muscle fibres in quadrupedal mammals, but has not been evaluated for humans because detailed information on the fibre type composition of their perivertebral musculature is rare. Moreover, such information is derived from spot samples, which are assumed to be representative of the respective muscle. In accordance with the proposed classification, numerous studies in animals indicate great differences in the fibre distribution within and among the muscles due to fibre type regionalization. The aims of this study were to (1) qualitatively explore the applicability of the proposed functional classification to human back muscles by studying their fibre type composition and (2) evaluate the representativeness of spot sampling techniques. For this, the fibre type distribution of the whole lumbar perivertebral musculature of two male cadavers was investigated three-dimensionally using immunohistochemistry. Despite great local variations (e.g., among fascicles), all muscles were composed of about 50% slow and 50% fast fibres. Thus, in contradiction to the proposed classification of lumbar muscle function, no functional differentiation of the muscles was observed in our study of muscle contractile properties. The great similarity in fibre composition among the muscles equips each muscle equally well for a broad range of tasks and therefore has the potential to allow for great functional versatility of the human back musculature. Spot samples do not prove to be representative of the whole muscle. The great intraspecific variability observed previously in single-spot samples is potentially misleading. Copyright © 2013 Elsevier GmbH. All rights reserved.
Dai, Qi; Yang, Yanchun; Wang, Tianming
2008-10-15
Many proposed statistical measures can efficiently compare biological sequences to further infer their structures, functions and evolutionary information. They are related in spirit because all the ideas for sequence comparison try to use the information on the k-word distributions, the Markov model, or both. Motivated by adding k-word distributions to the Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences, and phylogenetic analysis. This offers a systematic and quantitative experimental assessment of our measures. Moreover, we compared our achievements with those based on alignment or alignment-free methods. We grouped our experiments into two sets. The first one, performed via ROC (receiver operating characteristic) analysis, aims at assessing the intrinsic ability of our statistical measures to search for similar sequences from a database and discriminate functionally related regulatory sequences from unrelated sequences. The second one aims at assessing how well our statistical measure is used for phylogenetic analysis. The experimental assessment demonstrates that our similarity measures intending to incorporate k-word distributions into the Markov model are more efficient.
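A generic sketch of the k-word-distribution idea, using cosine similarity as a stand-in; the paper's wre.k.r and S2.k.r measures additionally fold in the Markov model and are defined precisely only in the text:

```python
from collections import Counter
from math import sqrt

def kword_freqs(seq, k):
    """Relative frequencies of overlapping k-words in a sequence."""
    c = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

def kword_similarity(seq_a, seq_b, k=4):
    """Cosine similarity of the two k-word frequency vectors -- a generic
    alignment-free comparison, standing in for wre.k.r / S2.k.r."""
    pa, pb = kword_freqs(seq_a, k), kword_freqs(seq_b, k)
    dot = sum(pa[w] * pb.get(w, 0.0) for w in pa)
    na = sqrt(sum(v * v for v in pa.values()))
    nb = sqrt(sum(v * v for v in pb.values()))
    return dot / (na * nb)

print(kword_similarity("ACGTACGTAGCTAGCTAGGA", "ACGTACGTAGCTAGCTAGGT"))
```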
The MONET code for the evaluation of the dose in hadrontherapy
NASA Astrophysics Data System (ADS)
Embriaco, A.
2018-01-01
MONET is a code for the computation of the 3D dose distribution for protons in water. For the lateral profile, MONET is based on the Molière theory of multiple Coulomb scattering. To take into account also the nuclear interactions, we add to this theory a Cauchy-Lorentz function, whose two parameters are obtained by a fit to a FLUKA simulation. We have implemented the Papoulis algorithm for the passage from the projected to a 2D lateral distribution. For the longitudinal profile, we have implemented a new calculation of the energy loss that is in good agreement with simulations. The inclusion of straggling is based on the convolution of the energy loss with a Gaussian function. In order to complete the longitudinal profile, the nuclear contributions are also included using a linear parametrization. The total dose profile is calculated in a 3D mesh by evaluating at each depth the 2D lateral distributions and by scaling them to the value of the energy deposition. We have compared MONET with FLUKA in two cases: a single Gaussian beam and a lateral scan. In both cases, we have obtained a good agreement for different energies of protons in water.
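A sketch of the straggling step only: convolving a mean energy-loss depth profile with a Gaussian. The depth grid, profile, and width below are placeholders, not MONET's actual implementation or parameters:

```python
import numpy as np

def straggled_depth_dose(z, dose, sigma):
    """Convolve a mean energy-loss depth profile `dose` (on uniform grid `z`)
    with a Gaussian of width `sigma` to model range straggling."""
    dz = z[1] - z[0]
    kz = np.arange(-4 * sigma, 4 * sigma + dz, dz)   # truncated Gaussian kernel
    kernel = np.exp(-0.5 * (kz / sigma) ** 2)
    kernel /= kernel.sum()                           # preserve integral dose
    return np.convolve(dose, kernel, mode="same")

# e.g. a toy sharply peaked profile smeared by 2 mm of straggling:
z = np.linspace(0.0, 200.0, 2001)                    # depth in mm (illustrative)
raw = np.exp(-0.5 * ((z - 150.0) / 1.0) ** 2)        # stand-in for dE/dz
smeared = straggled_depth_dose(z, raw, sigma=2.0)
```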
Marketing and population problems.
Farley, J U; Leavitt, H J
1971-07-01
There are many elements in population programs that are more familiar to marketing men than to some population experts. Advertising is essential to reach the target population, and advertising evaluation techniques (e.g., surrogate indexes or audience measures) might be useful for evaluating both population information activities and the impact of the entire program. Fundamental research on basic demand for fertility control is needed, and a marketer's experience with planning and evaluating test markets can be useful in assessing potential selling targets and evaluating alternative promotional and distributional strategies. Special family planning clinics have certain disadvantages: expensive and scarce personnel are needed; red tape may be present; the network is based on the assumption that the client is willing to travel relatively great distances repeatedly; and clinics lack anonymity, which may scare potential acceptors away. Most developing cultures have an intensively functioning distribution structure which delivers basic commodities to the most remote areas, providing relatively anonymous outlets that are physically close to the customers. Materials requiring a prescription might be distributed in exchange for scrip issued at and ultimately redeemed by clinics, thus requiring only an occasional visit to a clinic. Mail-order service can be used to supplement a clinic's distribution of some contraceptives. It should be remembered that population administrators often have an antipathetic view toward business and marketing and "suspect" the profit motive.
On the Performance Evaluation of Query-Based Wireless Sensor Networks
2012-01-01
is Δ ≡ P(T > X) = π₀ ∫₀^∞ [1 − B(x)] dH(x). (2) Proposition 1 can be proved using a simple conditioning argument. The expression for the proportion of ... node by α ≡ α₁. Assuming the event lifetime distribution function G has an increasing failure rate (IFR), then 0 < α ≤ α₂ ≤ α₃ ≤ ···. Proposition 3 ... Suppose G is an IFR distribution function, so that 0 < α ≤ α₂ ≤ α₃ ≤ ···. Then for a fixed time-to-live counter ℓ, λ_e ≤ λ[1 − (1 − α)^ℓ]/α ≤ λℓ.
NASA Astrophysics Data System (ADS)
Krautschneider, W.; Wagemann, H. G.
1983-10-01
Kuhn's quasi-static C(V) method has been extended to MOS transistors by considering the capacitances of the source and drain p-n junctions in addition to the MOS varactor circuit model. The width of the space charge layers w(φ_s) is calculated as a function of the surface potential φ_s and applied to the MOS capacitance as a function of the gate voltage. Capacitance behavior for different channel lengths is presented as a model and compared to measurement results and to evaluations of the energetic distributions of interface states D_it(φ_s) for a MOS transistor and a MOS varactor on the same chip.
Theory for solubility in static systems
NASA Astrophysics Data System (ADS)
Gusev, Andrei A.; Suter, Ulrich W.
1991-06-01
A theory for the solubility of small particles in static structures has been developed. The distribution function of the solute in a frozen solid has been derived in analytical form for the quantum and the quasiclassical cases. The solubility at infinitesimal gas pressure (Henry's constant) as well as the pressure dependence of the solute concentration at elevated pressures has been found from the statistical equilibrium between the solute in the static matrix and the ideal-gas phase. The distribution function of a solute containing different particles has been evaluated in closed form. An application of the theory to the sorption of methane in the computed structures of glassy polycarbonate has resulted in a satisfactory agreement with experimental data.
Quasi-parton distribution functions: A study in the diquark spectator model
Gamberg, Leonard; Kang, Zhong -Bo; Vitev, Ivan; ...
2015-02-12
A set of quasi-parton distribution functions (quasi-PDFs) has recently been proposed by Ji. Defined as the matrix elements of equal-time spatial correlations, they can be computed on the lattice and should reduce to the standard PDFs when the proton momentum P_z is very large. Since taking the P_z → ∞ limit is not feasible in lattice simulations, it is essential to provide guidance for which values of P_z the quasi-PDFs are good approximations of standard PDFs. Within the framework of the spectator diquark model, we evaluate both the up and down quarks' quasi-PDFs and standard PDFs for all leading-twist distributions (unpolarized distribution f₁, helicity distribution g₁, and transversity distribution h₁). We find that, for intermediate parton momentum fractions x, quasi-PDFs are good approximations to standard PDFs (within 20-30%) when P_z ≳ 1.5-2 GeV. On the other hand, for large x ~ 1, a much larger P_z > 4 GeV is necessary to obtain a satisfactory agreement between the two sets. We further test the Soffer positivity bound, and find that it does not hold in general for quasi-PDFs.
Probabilistic Evaluation of Blade Impact Damage
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Abumeri, G. H.
2003-01-01
The response to high velocity impact of a composite blade is probabilistically evaluated. The evaluation is focused on quantifying probabilistically the effects of uncertainties (scatter) in the variables that describe the impact, the blade make-up (geometry and material), the blade response (displacements, strains, stresses, frequencies), the blade residual strength after impact, and the blade damage tolerance. The results of the probabilistic evaluations are given in terms of cumulative probability distribution functions and probabilistic sensitivities. Results show that the blade has relatively low damage tolerance at 0.999 probability of structural failure and substantial damage tolerance at 0.01 probability.
Evaluation of a black-footed ferret resource utilization function model
Eads, D.A.; Millspaugh, J.J.; Biggins, D.E.; Jachowski, D.S.; Livieri, T.M.
2011-01-01
Resource utilization function (RUF) models permit evaluation of potential habitat for endangered species; ideally such models should be evaluated before use in management decision-making. We evaluated the predictive capabilities of a previously developed black-footed ferret (Mustela nigripes) RUF. Using the population-level RUF, generated from ferret observations at an adjacent yet distinct colony, we predicted the distribution of ferrets within a black-tailed prairie dog (Cynomys ludovicianus) colony in the Conata Basin, South Dakota, USA. We evaluated model performance, using data collected during post-breeding spotlight surveys (2007-2008), by assessing model agreement via weighted compositional analysis and count-metrics. Compositional analysis of home range use and colony-level availability, and core area use and home range availability, demonstrated ferret selection of the predicted Very high and High occurrence categories in 2007 and 2008. Simple count-metrics corroborated these findings and suggested selection of the Very high category in 2007 and the Very high and High categories in 2008. Collectively, these results suggested that the RUF was useful in predicting occurrence and intensity of space use of ferrets at our study site, the 2 objectives of the RUF. Application of this validated RUF would increase the resolution of habitat evaluations, permitting prediction of the distribution of ferrets within distinct colonies. Additional model evaluation at other sites, on other black-tailed prairie dog colonies of varying resource configuration and size, would increase understanding of influences upon model performance and the general utility of the RUF. © 2011 The Wildlife Society.
Function-based payment model for inpatient medical rehabilitation: an evaluation.
Sutton, J P; DeJong, G; Wilkerson, D
1996-07-01
To describe the components of a function-based prospective payment model for inpatient medical rehabilitation that parallels diagnosis-related groups (DRGs), to evaluate this model in relation to stakeholder objectives, and to detail the components of a quality of care incentive program that, when combined with this payment model, creates an incentive for providers to maximize functional outcomes. This article describes a conceptual model, involving no data collection or data synthesis. The basic payment model described parallels DRGs. Information on the potential impact of this model on medical rehabilitation is gleaned from the literature evaluating the impact of DRGs. The conceptual model described is evaluated against the results of a Delphi survey of rehabilitation providers, consumers, policymakers, and researchers previously conducted by members of the research team. The major shortcoming of a function-based prospective payment model for inpatient medical rehabilitation is that it contains no inherent incentive to maximize functional outcomes. Linkage of reimbursement to outcomes, however, by withholding a fixed proportion of the standard FRG payment amount, placing that amount in a "quality of care" pool, and distributing that pool annually among providers whose predesignated, facility-level, case-mix-adjusted outcomes are attained, may be one strategy for maximizing outcome goals.
Stochastic derivative-free optimization using a trust region framework
Larson, Jeffrey; Billups, Stephen C.
2016-02-17
This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. As a result, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
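A sketch of the dynamic step-length idea described above, using conventional trust-region thresholds rather than the paper's exact constants or its handling of noisy evaluations:

```python
def trust_region_update(rho, delta, step_norm,
                        eta1=0.25, eta2=0.75, shrink=0.5, grow=2.0):
    """Classic radius update driven by the agreement ratio
    rho = (actual reduction) / (model-predicted reduction):
    shrink the radius when the model disagrees with the (noisy) function,
    grow it when the two agree. Thresholds are conventional defaults."""
    if rho < eta1:                      # poor agreement: distrust the model
        return shrink * step_norm
    if rho > eta2:                      # good agreement: be more ambitious
        return max(delta, grow * step_norm)
    return delta                        # moderate agreement: keep the radius

# e.g. a step of norm 0.8 inside radius 1.0 whose predicted reduction
# matched the observed reduction well:
print(trust_region_update(rho=0.9, delta=1.0, step_norm=0.8))  # -> 1.6
```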
Design methodology and results evaluation of a heating functionality in modular lab-on-chip systems
NASA Astrophysics Data System (ADS)
Streit, Petra; Nestler, Joerg; Shaporin, Alexey; Graunitz, Jenny; Otto, Thomas
2018-06-01
Lab-on-a-chip (LoC) systems offer the opportunity of fast and customized biological analyses executed at the ‘point-of-need’ without expensive lab equipment. Some biological processes need a temperature treatment. Therefore, it is important to ensure a defined and stable temperature distribution in the biosensor area. An integrated heating functionality is realized with discrete resistive heating elements including temperature measurement. The focus of this contribution is a design methodology and evaluation technique for the temperature distribution in the biosensor area with regard to the thermal-electrical behaviour of the heat sources. Furthermore, a sophisticated control of the biosensor temperature is proposed. A finite element (FE) model with one and more integrated heat sources in a polymer-based LoC system is used to investigate the impact of the number and arrangement of heating elements on the temperature distribution around the heating elements and in the biosensor area. Based on this model, various LoC systems are designed and fabricated. Electrical characterization of the heat sources and independent temperature measurements with infrared technique are performed to verify the model parameters and prove the simulation approach. The FE model and the proposed methodology are the foundation for optimization and evaluation of new designs with regard to temperature requirements of the biosensor. Furthermore, a linear dependency of the heater temperature on the electric current is demonstrated in the targeted temperature range of 20 °C to 70 °C, enabling the usage of the heating functionality for biological reactions requiring a steady-state temperature up to 70 °C. The correlation between the heater and biosensor area temperature is derived for direct control through the heating current.
NASA Astrophysics Data System (ADS)
Tully, Katherine C.; Whitacre, Jay F.; Litster, Shawn
2014-02-01
This paper presents in-situ spatiotemporal measurements of the electrolyte phase potential within an electric double layer capacitor (EDLC) negative electrode as envisaged for use in an aqueous hybrid battery for grid-scale energy storage. The ultra-thick electrodes used in these batteries to reduce non-functional material costs require sufficiently fast through-plane mass and charge transport to attain suitable charging and discharging rates. To better evaluate the through-plane transport, we have developed an electrode scaffold (ES) for making in situ electrolyte potential distribution measurements at discrete known distances across the thickness of an uninterrupted EDLC negative electrode. Using finite difference methods, we calculate local current, volumetric charging current and charge storage distributions from the spatiotemporal electrolyte potential measurements. These potential distributions provide insight into complex phenomena that cannot be directly observed using other existing methods. Herein, we use the distributions to identify areas of the electrode that are underutilized, assess the effects of various parameters on the cumulative charge storage distribution, and evaluate an effectiveness factor for charge storage in EDLC electrodes.
NASA Astrophysics Data System (ADS)
Podladchikova, O.; Lefebvre, B.; Krasnoselskikh, V.; Podladchikov, V.
An important task for the problem of coronal heating is to produce reliable evaluations of the statistical properties of energy release and eruptive events such as micro- and nanoflares in the solar corona. Different types of distributions have appeared in the literature for the peak flux, peak count rate measurements, pixel intensities, total energy flux, emission measure increases, or waiting times. This raises the question of a precise evaluation and classification of such distributions. For this purpose, we use the method proposed by K. Pearson at the beginning of the last century, based on the relationship between the first four moments of the distribution. Pearson's technique encompasses and classifies a broad range of distributions, including some of those which have appeared in the literature about coronal heating. This technique is successfully applied to simulated data from the model of Krasnoselskikh et al. (2002). It provides successful fits to the empirical distributions of the dissipated energy, and classifies them as a function of model parameters such as dissipation mechanisms and thresholds.
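A sketch of the first step of Pearson's technique: locating an empirical sample in the (β₁, β₂) plane from its first four moments, where β₁ is the squared skewness and β₂ the kurtosis; the subsequent assignment to a Pearson type follows from these two quantities (the classification rules themselves are not reproduced here):

```python
import numpy as np

def pearson_plane(data):
    """Return (beta1, beta2) for an empirical sample: beta1 = m3^2 / m2^3
    (squared skewness), beta2 = m4 / m2^2 (kurtosis), from central moments."""
    x = np.asarray(data, dtype=float)
    mu = x.mean()
    m2 = ((x - mu) ** 2).mean()
    m3 = ((x - mu) ** 3).mean()
    m4 = ((x - mu) ** 4).mean()
    return m3 ** 2 / m2 ** 3, m4 / m2 ** 2

# e.g. an exponential sample should sit near (beta1, beta2) = (4, 9):
rng = np.random.default_rng(7)
print(pearson_plane(rng.exponential(1.0, 200000)))
```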
Mesoscale mapping of available solar energy at the earth's surface by use of satellites
NASA Technical Reports Server (NTRS)
Hiser, H. W.; Senn, H. V.
1980-01-01
A method is presented for use of cloud images in the visual spectrum from the SMS/GOES geostationary satellites to determine the hourly distribution of sunshine on the mesoscale. Cloud coverage and density as a function of time of day and season are evaluated through the use of digital data processing techniques. Seasonal geographic distributions of cloud cover/sunshine are converted to joules of solar radiation received at the earth's surface through relationships developed from long-term measurements of these two parameters at six widely distributed stations. The technique can be used to generate maps showing the geographic distribution of total solar radiation on the mesoscale which is received at the earth's surface.
NASA Technical Reports Server (NTRS)
Leibecki, H. F.; King, R. B.; Fordyce, J. S.
1974-01-01
The City of Cleveland Division of Air Pollution Control and NASA jointly investigated the chemical and physical characteristics of the suspended particulate matter in Cleveland, and as part of the program, measurements of the particle size distribution of ambient air samples at five urban locations during August and September 1972 were made using high-volume cascade impactors. The distributions were evaluated for lognormality, and the mass median diameters were compared between locations and as a function of resultant wind direction. Junge-type distributions were consistent with dirty continental aerosols. About two-thirds of the suspended particulate matter observed in Cleveland is less than 7 microns in diameter.
The influence of sub-grid scale motions on particle collision in homogeneous isotropic turbulence
NASA Astrophysics Data System (ADS)
Xiong, Yan; Li, Jing; Liu, Zhaohui; Zheng, Chuguang
2018-02-01
The absence of sub-grid scale (SGS) motions leads to severe errors in particle pair dynamics, which represents a great challenge to the large eddy simulation of particle-laden turbulent flow. In order to address this issue, data from direct numerical simulation (DNS) of homogeneous isotropic turbulence coupled with Lagrangian particle tracking are used as a benchmark to evaluate the corresponding results of filtered DNS (FDNS). It is found that the filtering process in FDNS leads to a non-monotonic variation of the particle collision statistics, including the radial distribution function, the radial relative velocity, and the collision kernel. The peak of the radial distribution function shifts to the large-inertia region due to the lack of SGS motions, and the analysis of the local flow-structure characteristic variable at the particle positions indicates that the most effective interaction scale between particles and fluid eddies is increased in FDNS. Moreover, this scale shifting has an obvious effect on the odd-order moments of the probability density function of the radial relative velocity, i.e. the skewness, which exhibits a strong correlation to the variance of the radial distribution function in FDNS. As a whole, the radial distribution function, together with the radial relative velocity, can compensate the SGS effects for the collision kernel in FDNS when the Stokes number based on the Kolmogorov time scale is greater than 3.0. However, considerable errors remain for St_k < 3.0.
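For concreteness, a minimal O(N²) sketch of the radial distribution function g(r) for particles in a periodic box, one of the collision statistics compared between DNS and FDNS above; the geometry and the large-N normalization are simplified:

```python
import numpy as np

def radial_distribution(positions, box, r_max, n_bins=50):
    """g(r) of N point particles in a periodic cube of side `box`:
    pair counts per shell, normalized by the expectation for a uniform gas."""
    n = len(positions)
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)                       # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, 1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0, r_max))
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = n / box ** 3
    return edges, hist / (shell * density * n / 2)     # ~1 for uncorrelated particles

# e.g. uniformly scattered particles should give g(r) ≈ 1 at all r:
rng = np.random.default_rng(8)
edges, g = radial_distribution(rng.uniform(0, 1, (2000, 3)), box=1.0, r_max=0.3)
```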
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.
1996-01-01
This paper outlines an approach for the determination of economically viable robust design solutions using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of a probability-based aircraft design over the traditional point design approach. It also proposes a new methodology called Robust Design Simulation (RDS), which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach for achieving such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.
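A sketch of the evaluation criterion described above, estimating the probability of meeting a target objective by Monte Carlo over the noise variables; the objective function and noise distributions below are hypothetical placeholders, not the HSCT models:

```python
import numpy as np

rng = np.random.default_rng(9)

def customer_satisfaction(objective, noise_samplers, target, n=100000):
    """Monte Carlo estimate of P(objective < target), where the objective is
    a function of uncontrollable noise variables with given distributions."""
    noise = [s(n) for s in noise_samplers]
    y = objective(*noise)
    return (y < target).mean()

# e.g. a cost-like metric driven by fuel price and demand uncertainty:
p = customer_satisfaction(
    lambda fuel, demand: 50 + 3 * fuel - 0.1 * demand,   # hypothetical objective
    [lambda n: rng.normal(10, 2, n),                      # fuel price distribution
     lambda n: rng.normal(200, 30, n)],                   # demand distribution
    target=75.0)
print(p)   # probability of achieving values below the target
```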
NASA Astrophysics Data System (ADS)
Piégay, H.; Bertrand, M.; Liébault, F.; Pont, D.; Sauquet, E.
2011-12-01
The present contribution aims to apply the conceptual framework defined in Pont et al. (2009) to the Drôme River Basin (France) in order to test the capacity of the functional reach concept to be used to assess risks from environmental changes. The methodology is illustrated by examples focusing on the potential changes in functional reach diversity as a proxy of habitat diversity, and on the potential impact on trout distribution at a network scale due to actions of sediment reintroduction. We used remote sensing and GIS methods to provide original data and to analyze them. A cluster analysis performed on the components of a PCA was used to establish a functional reach typology based on planform parameters, used as a proxy of habitat typology following a review of the literature. We calculated for the entire channel network an index of the present and 1948 states of functional reach type diversity to highlight past evolution. Various options of changes in functional reach type diversity were compared in relation to various increases in bedload delivery following planned deforestation. A similar risk assessment procedure is proposed in relation to changes in canopy cover and associated changes in summer temperature to evaluate impacts on brown trout distribution. Two practical examples are used as pilots for evaluating the risk assessment approach based on functional reach typology and its potential applicability for testing management actions for improving aquatic ecology. Limitations and improvements are then discussed.
NASA Astrophysics Data System (ADS)
Le, Anh H.; Deshpande, Ruchi; Liu, Brent J.
2010-03-01
The electronic patient record (ePR) has been developed for prostate cancer patients treated with proton therapy (PT). The ePR has functionality to accept digital input from patient data, perform outcome analysis and patient and physician profiling, provide clinical decision support and suggest courses of treatment, and distribute information across different platforms and health information systems. In previous years, we have presented the infrastructure of a medical imaging informatics-based ePR for PT with functionality to accept digital patient information and distribute it across geographical locations using Internet protocols. In this paper, we present the ePR decision support tools, which utilize the image processing tools and data collected in the ePR. Two decision support tools, a treatment plan navigator and a radiation toxicity tool, are presented to evaluate prostate cancer treatment, improve proton therapy operation, and improve treatment outcome analysis.
1988-09-01
Unfortunately, although current construction practices can produce functional HVAC systems that provide adequate heating and cooling, they do not guarantee... developed by interviewing heating, ventilating, and air-conditioning (HVAC) professionals, reviewing technical literature, and consolidating these... for recording this information. A glossary of possibly unfamiliar HVAC terms is included. An informal evaluation of the procedure showed that
CommWalker: correctly evaluating modules in molecular networks in light of annotation bias.
Luecken, M D; Page, M J T; Crosby, A J; Mason, S; Reinert, G; Deane, C M
2018-03-15
Detecting novel functional modules in molecular networks is an important step in biological research. In the absence of gold-standard functional modules, functional annotations are often used to verify whether detected modules/communities have biological meaning. However, as we show, the uneven distribution of functional annotations means that such evaluation methods favor communities of well-studied proteins. We propose a novel framework for the evaluation of communities as functional modules. Our proposed framework, CommWalker, takes communities as inputs and evaluates them in their local network environment by performing short random walks. We test CommWalker's ability to overcome annotation bias using input communities from four community detection methods on two protein interaction networks. We find that modules accepted by CommWalker are as strongly co-expressed as those accepted by current methods. Crucially, CommWalker performs well not only in well-annotated regions, but also in regions otherwise obscured by poor annotation. CommWalker community prioritization both faithfully captures well-validated communities and identifies functional modules that may correspond to more novel biology. The CommWalker algorithm is freely available at opig.stats.ox.ac.uk/resources or as a docker image on the Docker Hub at hub.docker.com/r/lueckenmd/commwalker/. Contact: deane@stats.ox.ac.uk. Supplementary data are available at Bioinformatics online.
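As a rough illustration of the idea of evaluating a community by short random walks into its local network environment, the toy score below averages annotation overlap between walk start and end nodes. This is explicitly not the published CommWalker measure, just a sketch of the walk-based evaluation concept:

```python
import random
import networkx as nx

def walk_score(G, community, annotations, walk_len=3, n_walks=100):
    """Toy community score: average Jaccard similarity between the
    annotation sets of walk start nodes and the nodes their short
    random walks reach. Illustrative only, not CommWalker's measure."""
    sims = []
    for start in community:
        for _ in range(n_walks):
            node = start
            for _ in range(walk_len):
                nbrs = list(G.neighbors(node))
                if not nbrs:
                    break
                node = random.choice(nbrs)
            a = annotations.get(start, set())
            b = annotations.get(node, set())
            if a or b:
                sims.append(len(a & b) / len(a | b))
    return sum(sims) / len(sims) if sims else 0.0
```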
Papadatou, Eleni; Del Águila-Carrasco, Antonio J; Esteve-Taboada, José J; Madrid-Costa, David; Cerviño-Expósito, Alejandro
2017-01-01
To analytically assess the effect of pupil size upon the refractive power distributions of different designs of multifocal contact lenses. Two multifocal contact lenses of center-near design and one of center-distance design were used in this study. Their power profiles were measured using the NIMO TR1504 device (LAMBDA-X, Belgium). Based on these power profiles, the power distribution was assessed as a function of pupil size. For the high-addition lenses, the resulting refractive power as a function of viewing distance (far, intermediate, and near) and pupil size was also analyzed. The power distributions of the lenses were affected by pupil size differently. One of the lenses showed a significant spread in refractive power distribution, from about -3 D to 0 D. Generally, the power distribution of the lenses expanded as the pupil diameter became greater. The surface of the lens dedicated to each distance varied substantially with the design of the lens. On an experimental basis, our results show how the lenses' power distributions are affected by pupil size, and underline the necessity of carefully evaluating the patient's visual needs and the optical properties of a multifocal contact lens to achieve the optimal visual outcome.
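One way to turn a measured power profile into a pupil-dependent power distribution is to weight each radial sample by the annular area it represents and histogram the powers inside the pupil. The sketch below is a plausible reconstruction under that assumption, not the NIMO analysis pipeline:

```python
import numpy as np

def power_histogram(r, power, pupil_diameter, bins=25):
    """Area-weighted histogram of refractive power inside a given pupil.
    r: radial coordinates (mm) of the measured power profile;
    power: refractive power (D) at each radius. Each sample is weighted
    by its annulus area ~ 2*pi*r*dr."""
    mask = r <= pupil_diameter / 2.0
    weights = 2 * np.pi * r[mask] * np.gradient(r)[mask]  # annulus areas
    hist, edges = np.histogram(power[mask], bins=bins, weights=weights)
    return hist / hist.sum(), edges   # fraction of pupil area per power bin
```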
Improved Tandem Measurement Techniques for Aerosol Particle Analysis
NASA Astrophysics Data System (ADS)
Rawat, Vivek Kumar
Non-spherical, chemically inhomogeneous (complex) nanoparticles are encountered in a number of natural and engineered environments, including combustion systems (which produce highly non-spherical aggregates), reactors used in gas-phase materials synthesis of doped or multicomponent materials, and ambient air. These nanoparticles are often highly diverse in size, composition, and shape, and hence require determination of property distribution functions for accurate characterization. This thesis focuses on the development of tandem mobility-mass measurement techniques, coupled with appropriate data inversion routines, to facilitate measurement of two-dimensional size-mass distribution functions while correcting for the non-idealities of the instruments. Chapter 1 provides the detailed background and motivation for the studies performed in this thesis. In Chapter 2, the development of an inversion routine is described, which is employed to determine two-dimensional size-mass distribution functions from Differential Mobility Analyzer-Aerosol Particle Mass analyzer tandem measurements. Chapter 3 demonstrates the application of the two-dimensional distribution function to compute the cumulative mass distribution function, and also evaluates the validity of this technique by comparing the calculated total mass concentrations to measured values for a variety of aerosols. In Chapter 4, this tandem measurement technique with the inversion routine is employed to analyze colloidal suspensions. Chapter 5 focuses on the application of a transverse modulation ion mobility spectrometer coupled with a mass spectrometer to study the effect of vapor dopants on the mobility shifts of sub-2 nm peptide ion clusters. These mobility shifts are then compared to models based on vapor uptake theories. Finally, Chapter 6 provides a conclusion to all the studies performed in this thesis and discusses future avenues of research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertelli, N.; Valeo, E. J.; Green, D. L.
2017-04-03
At the power levels required for significant heating and current drive in magnetically-confined toroidal plasma, modification of the particle distribution function from a Maxwellian shape is likely (Stix 1975 Nucl. Fusion 15 737), with consequent changes in wave propagation and in the location and amount of absorption. In order to study these effects computationally, both the finite-Larmor-radius and the high-harmonic fast wave (HHFW) versions of the full-wave, hot-plasma toroidal simulation code TORIC (Brambilla 1999 Plasma Phys. Control. Fusion 41 1 and Brambilla 2002 Plasma Phys. Control. Fusion 44 2423) have been extended to allow the prescription of arbitrary velocity distributions of the form f(v∥, v⊥, ψ, θ). For hydrogen (H) minority heating of a deuterium (D) plasma with anisotropic Maxwellian H distributions, the fractional H absorption varies significantly with changes in parallel temperature but is essentially independent of perpendicular temperature. On the other hand, for the HHFW regime with an anisotropic Maxwellian fast-ion distribution, the fractional beam-ion absorption varies mainly with changes in the perpendicular temperature. The evaluation of the wave field and power absorption, through the full-wave solver, with the ion distribution function provided by either Monte-Carlo particle or Fokker-Planck codes is also examined for Alcator C-Mod and NSTX plasmas. Non-Maxwellian effects generally tend to increase the absorption with respect to the equivalent Maxwellian distribution.
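For reference, a bi-(anisotropic) Maxwellian of the kind prescribed here separates parallel and perpendicular temperatures. A minimal sketch, with the normalization chosen so the gyrotropic velocity-space integral recovers the density n (function and argument names are illustrative):

```python
import numpy as np

def bi_maxwellian(v_par, v_perp, n, T_par, T_perp, m, kB=1.380649e-23):
    """Anisotropic (bi-)Maxwellian f(v_par, v_perp), normalized so that
    integrating f * 2*pi*v_perp dv_perp dv_par over all velocities gives n."""
    a_par = m / (2 * kB * T_par)
    a_perp = m / (2 * kB * T_perp)
    norm = n * np.sqrt(a_par / np.pi) * (a_perp / np.pi)
    return norm * np.exp(-a_par * v_par**2 - a_perp * v_perp**2)
```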
Subchondral bone density distribution of the talus in clinically normal Labrador Retrievers.
Dingemanse, W; Müller-Gerbl, M; Jonkers, I; Vander Sloten, J; van Bree, H; Gielen, I
2016-03-15
Bones continually adapt their morphology to their load-bearing function. At the level of the subchondral bone, the density distribution is highly correlated with the loading distribution of the joint. Therefore, subchondral bone density distribution can be used to study joint biomechanics non-invasively. In addition, physiological and pathological joint loading is an important aspect of orthopaedic disease, and research focusing on joint biomechanics will benefit veterinary orthopaedics. This study was conducted to evaluate the density distribution in the subchondral bone of the canine talus, as a parameter reflecting long-term joint loading in the tarsocrural joint. Two main density maxima were found, one proximally on the medial trochlear ridge and one distally on the lateral trochlear ridge. All joints showed very similar density distribution patterns, and no significant differences were found in the localisation of the density maxima between left and right limbs or between dogs. Based on the density distribution, the lateral trochlear ridge is most likely subjected to the highest loads within the tarsocrural joint. The joint loading distribution is very similar between dogs of the same breed. In addition, the joint loading distribution supports previous suggestions of the important role of biomechanics in the development of OC lesions in the tarsus. Important benefits of computed tomographic osteoabsorptiometry (CTOAM), i.e., the possibility of in vivo imaging and temporal evaluation, make this technique a valuable addition to the field of veterinary orthopaedic research.
Characterization of technical surfaces by structure function analysis
NASA Astrophysics Data System (ADS)
Kalms, Michael; Kreis, Thomas; Bergmann, Ralf B.
2018-03-01
The structure function is a tool for characterizing technical surfaces that exhibits a number of advantages over Fourier-based analysis methods, making it optimally suited for analyzing the height distributions of surfaces measured by full-field non-contacting methods. The structure function is thus a useful method to extract global or local criteria, e.g., periodicities, waviness, lay, or roughness, to analyze and evaluate technical surfaces. After defining the line- and area-structure functions and offering effective procedures for their calculation, this paper presents examples using simulated and measured data of technical surfaces, including aircraft parts.
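For a uniformly sampled height profile, the line structure function is S(τ) = ⟨[z(x+τ) − z(x)]²⟩. A minimal implementation under that textbook definition (the paper's own calculation procedures may differ):

```python
import numpy as np

def line_structure_function(z, dx, max_lag=None):
    """Line structure function S(tau) = <[z(x+tau) - z(x)]^2> of a
    uniformly sampled height profile z with sample spacing dx."""
    n = len(z)
    max_lag = max_lag or n // 2
    lags = np.arange(1, max_lag)
    S = np.array([np.mean((z[k:] - z[:-k])**2) for k in lags])
    return lags * dx, S   # physical lag tau and S(tau)
```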
Pore water colloid properties in argillaceous sedimentary rocks.
Degueldre, Claude; Cloet, Veerle
2016-11-01
The focus of this work is to evaluate the colloid nature, concentration, and size distribution in the pore water of Opalinus Clay and other sedimentary host rocks identified for a potential radioactive waste repository in Switzerland. Because colloids could not be measured in representative undisturbed pore water of these host rocks, predictive modelling based on data from field and laboratory studies is applied. This approach allowed estimating the nature, concentration, and size distributions of the colloids in the pore water of these host rocks. As a result of field campaigns, groundwater colloid concentrations were investigated on the basis of their size distribution, quantified experimentally using single-particle counting techniques. The colloid properties are estimated considering data gained from analogue hydrogeochemical systems, ranging from mylonite features in crystalline fissures to sedimentary formations. The colloid concentrations were analysed as a function of the alkaline and alkaline-earth element concentrations. Laboratory batch results on clay colloid generation from compacted pellets in quasi-stagnant water are also reported. Experiments with colloids in batch containers indicate that the size distribution of a colloidal suspension evolves toward a common particle size distribution independently of initial conditions. The final suspension size distribution was found to be a function of the attachment factor of the colloids. Finally, calculations were performed using a novel colloid distribution model based on colloid generation, aggregation, and sedimentation rates to predict, under in-situ conditions, what makes colloid concentrations and size distributions batch- or fracture-size dependent. The data presented are compared with the field and laboratory data. The colloid occurrence, stability, and mobility have been evaluated for the water of the considered potential host rocks. In the pore water of the considered sedimentary host rocks, the clay colloid concentration is expected to be very low (<1 ppb for 10-100 nm), which restricts their relevance for radionuclide transport.
DATMAN: A reliability data analysis program using Bayesian updating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, M.; Feltus, M.A.
1996-12-31
Preventive maintenance (PM) techniques focus on the prevention of failures, in particular of system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on PM techniques by introducing a set of guidelines by which to evaluate system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operating time are acquired. System reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing the tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits the distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
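The flavor of Bayesian updating that DATMAN automates can be shown with a conjugate pair: a Gamma prior on a constant failure rate updated by Poisson-distributed failure counts has a closed-form posterior. The numbers below are illustrative, and this is not DATMAN's actual distribution menu:

```python
from scipy import stats

# Prior belief about a component failure rate lambda [failures/hour]:
# Gamma(alpha0, beta0); with Poisson failure counts this prior is
# conjugate, so the update is closed-form (illustrative numbers).
alpha0, beta0 = 2.0, 1000.0        # prior: mean 2/1000 failures per hour
failures, hours = 3, 5000.0        # newly acquired plant data

alpha_post = alpha0 + failures     # posterior shape
beta_post = beta0 + hours          # posterior rate
post = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print(f"posterior mean rate: {post.mean():.2e} /h")
print(f"90% credible interval: {post.interval(0.90)}")
```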
NASA Astrophysics Data System (ADS)
Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng
2017-12-01
Research on offshore wind farm site selection in China is insufficient, and the current methods for site selection have several defects. First, information is lost in two ways: through the implicit assumption that the probability distribution over an interval number is uniform, and by ignoring the value of decision makers' (DMs') common opinion in evaluating criteria information. Second, differences in DMs' utility functions have failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect uncertainty and reduce information loss. Second, a new stochastic dominance degree is proposed to quantify interval numbers with probability distributions. Third, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.
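The notion of a stochastic dominance degree between interval numbers can be approximated as P(A > B) under assumed sampling distributions on each interval. The Monte Carlo sketch below defaults to the uniform assumption the authors criticize, but accepts other samplers; it is an illustration, not the paper's closed-form degree:

```python
import numpy as np

rng = np.random.default_rng(1)

def dominance_degree(a, b, dist_a=None, dist_b=None, n=200_000):
    """Monte Carlo degree to which interval number A dominates B,
    estimated as P(A > B). By default samples uniformly on each
    interval; pass other samplers to drop the uniformity assumption."""
    xa = dist_a(n) if dist_a else rng.uniform(a[0], a[1], n)
    xb = dist_b(n) if dist_b else rng.uniform(b[0], b[1], n)
    return np.mean(xa > xb)

print(dominance_degree((2.0, 5.0), (3.0, 4.0)))  # overlapping intervals
```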
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izacard, Olivier
In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, need to be taken into account, especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase-space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency of modeling numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostic discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms, removing the numerical error of evaluating velocity phase-space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from a better understanding of some unsolved problems; instead, we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal, and a newly interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is supported by new understanding of the experimental discrepancy between the electron temperatures measured by two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF, without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. In conclusion, the latter demystifies Maxwell's demon by statistically describing non-isolated systems.
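Of the three representations named, the Kappa distribution has a standard isotropic form with super-thermal power-law tails that recovers a Maxwellian as κ → ∞. A minimal evaluation sketch (variable names are illustrative):

```python
import numpy as np
from scipy.special import gamma

def kappa_vdf(v, n, theta, kappa):
    """Isotropic kappa velocity distribution f(v) with density n and
    thermal speed theta; reduces to a Maxwellian as kappa -> infinity."""
    norm = n / (np.pi * kappa * theta**2) ** 1.5
    norm *= gamma(kappa + 1.0) / gamma(kappa - 0.5)
    return norm * (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))
```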
The Boer-Mulders Transverse Momentum Distribution in the Pion and its Evolution in Lattice QCD
NASA Astrophysics Data System (ADS)
Engelhardt, M.; Musch, B.; Hägler, P.; Schäfer, A.; Negele, J.
2015-02-01
Starting from a definition of transverse momentum-dependent parton distributions (TMDs) in terms of hadronic matrix elements of a quark bilocal operator containing a staple-shaped gauge link, selected TMD observables can be evaluated within Lattice QCD. A TMD ratio describing the Boer-Mulders effect in the pion is investigated, with a particular emphasis on its evolution as a function of a Collins-Soper-type parameter which quantifies the proximity of the staple-shaped gauge links to the light cone.
NASA Astrophysics Data System (ADS)
Li, Wenzhuo; Zhao, Yingying; Huang, Shuaiyu; Zhang, Song; Zhang, Lin
2017-01-01
The goal of this work was to develop a coarse-grained (CG) model of a β-O-4 type lignin polymer, because of the time-consuming process required to achieve equilibrium for its atomistic model. The automatic adjustment method was used to develop the lignin CG model, which enables easy discrimination between chemically varied polymers. In the process of building the lignin CG model, a sum of n Gaussian functions was obtained by approximating the corresponding atomistic potentials derived from a simple Boltzmann inversion of the distributions of the structural parameters. This allowed the establishment of the potential functions of CG bond stretching and angular bending. To obtain the potential function of the CG dihedral angle, an algorithm similar to a Fourier progression form was employed together with a nonlinear curve-fitting method. The numerical potentials of the nonbonded portion of the lignin CG model were obtained using a potential-inversion iterative method derived from the corresponding atomistic nonbonded distributions. The study results showed that the proposed CG model of lignin agreed well with its atomistic model in terms of the distributions of bond lengths, bending angles, dihedral angles, and nonbonded distances between the CG beads. The lignin CG model also reproduced the static and dynamic properties of the atomistic model. The results of the comparative evaluation of the two models suggested that the designed lignin CG model is efficient and reliable.
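Simple Boltzmann inversion, the first step described above, converts a sampled distribution of a structural parameter into a potential via U(q) = −k_B T ln P(q). A minimal sketch; the bin count and units are arbitrary choices, and the subsequent Gaussian-sum fitting step is omitted:

```python
import numpy as np

def boltzmann_invert(samples, bins=60, kBT=2.494):   # kJ/mol at 300 K
    """Potential for a CG degree of freedom by simple Boltzmann inversion:
    U(q) = -kBT * ln P(q), from atomistic samples of q (a bond length,
    angle, or dihedral). Returns bin centers and U shifted to min 0."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    valid = hist > 0                     # avoid log(0) in empty bins
    U = -kBT * np.log(hist[valid])
    return centers[valid], U - U.min()
```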
Interacting Social and Environmental Predictors for the Spatial Distribution of Conservation Lands
Baldwin, Robert F.; Leonard, Paul B.
2015-01-01
Conservation decisions should be evaluated for how they meet conservation goals at multiple spatial extents. Conservation easements are land use decisions resulting from a combination of social and environmental conditions. An emerging area of research is the evaluation of the spatial distribution of easements and their spatial correlates. We tested the relative influence of interacting social and environmental variables on the spatial distribution of conservation easements by ownership category and conservation status. For the Appalachian region of the United States, an area with a long history of human occupation and complex land uses including public-private conservation, we found that settlement, economic, topographic, and environmental data were associated with the spatial distribution of easements (N = 4813). Compared to random locations, easements were more likely to be found at lower elevations, in areas of greater agricultural productivity, farther from public protected areas, and nearer other human features. Analysis of ownership and conservation status revealed sources of variation, with important differences between local and state government ownerships relative to non-governmental organizations (NGOs), and among U.S. Geological Survey (USGS) GAP program status levels. NGOs were more likely to have easements nearer protected areas and with higher conservation status, while local governments held easements closer to settlement and on lands of greater agricultural potential. Logistic interactions revealed environmental variables having effects modified by social correlates, and the strongest predictors overall were social (distance to urban area, median household income, housing density, distance to land trust office). The spatial distribution of conservation lands may be affected by the geographic area of influence of conservation groups, suggesting that multi-scale conservation planning strategies may be necessary to satisfy local and regional needs for reserve networks. Our results support previous findings and provide an ecoregion-scale view that conservation easements may provide, at local scales, conservation functions on productive, more developable lands. Conservation easements may complement functions of public protected areas, but more research should examine the relative landscape-level ecological functions of both forms of protection. PMID:26465155
A risk-based multi-objective model for optimal placement of sensors in water distribution system
NASA Astrophysics Data System (ADS)
Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein
2018-02-01
In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in a water distribution system (WDS). The model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. CVaR treats the uncertainties of contamination injection in the form of a probability distribution function and captures low-probability extreme events; in this approach, extreme losses occur in the tail of the loss distribution function. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of affected population and detection time) and also to minimize the two other main criteria of optimal sensor placement: the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among the objective functions. Sensitivity analysis is also done to investigate the importance of each criterion on the PROMETHEE results under three relative weighting scenarios. The effectiveness of the proposed methodology is examined by applying it to the Lamerd WDS in the southwestern part of Iran. PROMETHEE suggests 6 sensors with a suitable distribution that approximately covers all regions of the WDS. For the best optimal solution, the CVaR of affected population and of detection time, as well as the probability of undetected events, are 17,055 persons, 31 mins, and 0.045%, respectively. The results obtained for the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme losses in a WDS.
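CVaR itself reduces to a tail average: at level α it is the mean loss among the worst (1 − α) fraction of outcomes. A sketch with synthetic losses standing in for the affected-population results (the lognormal inputs are placeholders, not the Lamerd WDS data):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional Value at Risk: mean loss in the worst (1-alpha) tail."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)      # Value at Risk at level alpha
    return losses[losses >= var].mean()

# e.g., affected-population losses from simulated contamination events
rng = np.random.default_rng(2)
losses = rng.lognormal(mean=8.0, sigma=1.0, size=50_000)
print(f"VaR95 = {np.quantile(losses, 0.95):,.0f}, CVaR95 = {cvar(losses):,.0f}")
```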
CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties
2017-03-01
inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], thereby making it suitable for kernel function implementation. By varying bias... cost function/constraint variables are generated based on the inverse transform of the CDF. In Fig. 5, F⁻¹(u) for a uniformly distributed random number u ∈ [0, 1... extracts random samples of x varying with the CDF F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate inverse
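The snippet above describes inverse transform sampling: a uniform random number u is mapped through the inverse CDF, x = F⁻¹(u), to produce samples with distribution F. A software analogue of the circuit's operation, using the exponential distribution as a worked example:

```python
import numpy as np

rng = np.random.default_rng(3)

def inverse_transform_sample(inv_cdf, n):
    """Draw x ~ F by mapping uniform u in [0, 1) through x = F^-1(u)."""
    return inv_cdf(rng.random(n))

# Example: exponential distribution, F(x) = 1 - exp(-lam*x),
# so F^-1(u) = -ln(1 - u)/lam.
lam = 2.0
x = inverse_transform_sample(lambda u: -np.log1p(-u) / lam, 100_000)
print(x.mean())   # ~ 1/lam = 0.5
```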
NASA Astrophysics Data System (ADS)
Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.
2016-07-01
Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac-distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, does not take into account the interaction between turbulence and chemical kinetics, nor the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames and H2-air supersonic combustion.
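The standard presumed β-PDF closure determines the Beta shape parameters from the mean and variance of Z and averages any flamelet quantity φ(Z) against that PDF. A sketch under that standard construction, with a toy temperature profile (not one of the paper's three models verbatim):

```python
import numpy as np
from scipy import stats, integrate

def beta_pdf_average(phi, z_mean, z_var):
    """Presumed beta-PDF average of a flamelet quantity phi(Z) given the
    mean and variance of mixture fraction Z (standard beta-PDF closure).
    Requires z_var < z_mean * (1 - z_mean)."""
    gamma_ = z_mean * (1.0 - z_mean) / z_var - 1.0
    a, b = z_mean * gamma_, (1.0 - z_mean) * gamma_
    pdf = stats.beta(a, b).pdf
    val, _ = integrate.quad(lambda z: phi(z) * pdf(z), 0.0, 1.0)
    return val

# e.g., averaged temperature from a toy flamelet profile T(Z)
T = lambda z: 300 + 1700 * 4 * z * (1 - z)
print(beta_pdf_average(T, z_mean=0.3, z_var=0.02))
```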
Rollet, S; Autischer, M; Beck, P; Latocha, M
2007-01-01
The response of a tissue equivalent proportional counter (TEPC) has been studied in a mixed radiation field with a neutron energy distribution similar to the radiation field at commercial flight altitudes. The measurements were done at the CERN-EU High-Energy Reference Field (CERF) facility, where a well-characterised radiation field is available for intercomparison. The TEPC instrument used by ARC Seibersdorf Research is filled with pure propane gas at low pressure and can be used to determine the lineal energy distribution of the energy deposition in a mass of gas equivalent to a 2 μm diameter volume of unit-density tissue, of similar size to the nuclei of biological cells. The linearity of the detector response was checked in terms of both dose and dose rate, and the effect of dead time has been corrected. The influence of the detector exposure location and orientation in the radiation field on the dose distribution was also studied as a function of the total dose. The microdosimetric distribution of the absorbed dose as a function of lineal energy has been obtained and compared with the same distribution simulated with the FLUKA Monte Carlo transport code. The dose equivalent was calculated by folding this distribution with the quality factor as a function of linear energy transfer. The comparison between the measured and simulated distributions shows that they are in good agreement. As a result of this study, the detector is well characterised; thanks also to the numerical simulations, the instrument response is well understood, and it is currently being used onboard aircraft to evaluate the dose to aircraft crew caused by cosmic radiation.
Note: Precise radial distribution of charged particles in a magnetic guiding field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backe, H., E-mail: backe@kph.uni-mainz.de
2015-07-15
Current high-precision beta-decay experiments with polarized neutrons, employing magnetic guiding fields in combination with position-sensitive and energy-dispersive detectors, have resulted in a detailed study of the mono-energetic point spread function (PSF) for a homogeneous magnetic field. A PSF describes the radial probability distribution of mono-energetic electrons at the detector plane emitted from a point-like source. With regard to accuracy considerations, unwanted singularities occur as a function of the radial detector coordinate; these have recently been investigated by subdividing the radial coordinate into small bins or employing analytical approximations. In this note, a series expansion of the PSF is presented which can be evaluated numerically with arbitrary precision.
Condition assessment of nonlinear processes
Hively, Lee M.; Gailey, Paul C.; Protopopescu, Vladimir A.
2002-01-01
There is presented a reliable technique for measuring condition change in nonlinear data such as brain waves. The nonlinear data is filtered and discretized into windowed data sets. The system dynamics within each data set is represented by a sequence of connected phase-space points, and for each data set a distribution function is derived. New metrics are introduced that evaluate the distance between distribution functions. The metrics are properly renormalized to provide robust and sensitive relative measures of condition change. As an example, these measures can be used on EEG data, to provide timely discrimination between normal, preseizure, seizure, and post-seizure states in epileptic patients. Apparatus utilizing hardware or software to perform the method and provide an indicative output is also disclosed.
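The patent does not fix a specific metric, so as an illustration the sketch below computes two common distances between discretized distribution functions, L1 and a χ²-like statistic; the renormalization step described above is omitted:

```python
import numpy as np

def dist_metrics(p, q):
    """L1 and chi-squared-like distances between two discretized
    phase-space distribution functions p and q (same bins, both
    normalized to sum to 1)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    l1 = np.abs(p - q).sum()
    s = p + q
    mask = s > 0                       # skip bins empty in both
    chi2 = ((p[mask] - q[mask])**2 / s[mask]).sum()
    return l1, chi2
```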
Wei, Ting-Ting; Tang, Qing-Qin; Qin, Bao-Dong; Ma, Ning; Wang, Li-Li; Zhou, Lin; Zhong, Ren-Qian
2016-11-25
Red blood cell distribution width (RDW), a routinely tested parameter of the complete blood count (CBC), has been reported to be increased in various cancers and correlated with the patients' clinical characteristics. However, the significance of RDW in primary hepatocellular carcinoma (pHCC) is largely unknown. The aim of this study was to evaluate the associations between RDW and the clinical characteristics of pHCC patients. Medical records of 110 treatment-naive pHCC patients were retrospectively reviewed. Their clinical characteristics on admission, including RDW, liver function tests and tumor stage, were extracted, and their relationships were analyzed using Spearman correlation and Kruskal-Wallis test. Sixty-eight healthy individuals were set as controls. RDW was significantly increased in pHCC patients and correlated with the liver function tests. However, no correlation between RDW and tumor stage was found. RDW may be used to assess the liver function, but not the tumor stage in pHCC patients.
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform-distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
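The algorithm the abstract describes is the standard conditional construction: from two independent standard normals Z1, Z2, set X = μ1 + σ1·Z1 and Y = μ2 + σ2(ρ·Z1 + √(1−ρ²)·Z2). A compact version with a correlation check:

```python
import numpy as np

rng = np.random.default_rng(4)

def bivariate_normal_pairs(mu1, mu2, s1, s2, rho, n):
    """Generate (X, Y) pairs with given means, standard deviations, and
    correlation rho from two independent standard normal streams."""
    z1 = rng.standard_normal(n)
    z2 = rng.standard_normal(n)
    x = mu1 + s1 * z1
    y = mu2 + s2 * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
    return x, y

x, y = bivariate_normal_pairs(0.0, 1.0, 2.0, 0.5, rho=0.8, n=100_000)
print(np.corrcoef(x, y)[0, 1])   # ~0.8
```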
Numerically exact full counting statistics of the nonequilibrium Anderson impurity model
NASA Astrophysics Data System (ADS)
Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; Cohen, Guy
2018-03-01
The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.
Incoherent vector mesons production in PbPb ultraperipheral collisions at the LHC
NASA Astrophysics Data System (ADS)
Xie, Ya-Ping; Chen, Xurong
2017-03-01
The incoherent rapidity distributions of vector mesons are computed in the dipole model for PbPb ultraperipheral collisions at the CERN Large Hadron Collider (LHC). The IIM model, fitted to newer data, is employed for the dipole amplitude. The Boosted Gaussian and Gaus-LC wave functions for the vector mesons are implemented in the calculations as well. Predictions for the J/ψ, ψ(2s), ρ, and ϕ incoherent rapidity distributions are evaluated and compared with experimental data and other theoretical predictions in this paper. Our predictions for the incoherent J/ψ rapidity distribution are closer to the experimental data than previous calculations in the IIM model.
Minimization for conditional simulation: Relationship to optimal transport
NASA Astrophysics Data System (ADS)
Oliver, Dean S.
2014-05-01
In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g., Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although many deterministic mappings might be equally useful, we focus our discussion on a class of algorithms that obtain implicit mappings by minimizing a cost function that includes measures of data mismatch and model-variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, the perturbed-observation ensemble Kalman filter, and the ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from these methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
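Randomized maximum likelihood, one of the minimization-based samplers listed, perturbs both the prior model and the observations, then minimizes a combined mismatch. A schematic sketch under a diagonal data-error assumption; the operator G, the covariance square root, and the optimizer choice are placeholders:

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(5)

def rml_sample(G, d_obs, m_prior, C_m_sqrt, sigma_d):
    """One randomized-maximum-likelihood sample: perturb the prior model
    and the data, then minimize the combined misfit (exact for linear G
    and Gaussian priors, approximate for nonlinear G)."""
    m_pert = m_prior + C_m_sqrt @ rng.standard_normal(len(m_prior))
    d_pert = d_obs + sigma_d * rng.standard_normal(len(d_obs))

    def cost(m):
        dm = np.linalg.solve(C_m_sqrt, m - m_pert)    # whitened model mismatch
        return np.sum((G(m) - d_pert) ** 2) / sigma_d**2 + np.sum(dm**2)

    return optimize.minimize(cost, m_pert, method="L-BFGS-B").x
```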
Climate Controls AM Fungal Distributions from Global to Local Scales
NASA Astrophysics Data System (ADS)
Kivlin, S. N.; Hawkes, C.; Muscarella, R.; Treseder, K. K.; Kazenel, M.; Lynn, J.; Rudgers, J.
2016-12-01
Arbuscular mycorrhizal (AM) fungi have key functions in terrestrial biogeochemical processes; thus, determining the relative importance of climate, edaphic factors, and plant community composition on their geographic distributions can improve predictions of their sensitivity to global change. Local adaptation by AM fungi to plant hosts, soil nutrients, and climate suggests that all of these factors may control fungal geographic distributions, but their relative importance is unknown. We created species distribution models for 142 AM fungal taxa at the global scale with data from GenBank. We compared climate variables (BioClim and soil moisture), edaphic variables (phosphorus, carbon, pH, and clay content), and plant variables using model selection on models with (1) all variables, (2) climatic variables only (including soil moisture) and (3) resource-related variables only (all other soil parameters and NPP) using the MaxEnt algorithm evaluated with ENMEval. We also evaluated whether drivers of AM fungal distributions were phylogenetically conserved. To test whether global correlates of AM fungal distributions were reflected at local scales, we then surveyed AM fungi in nine plant hosts along three elevation gradients in the Upper Gunnison Basin, Colorado, USA. At the global scale, the distributions of 55% of AM fungal taxa were affected by both climate and soil resources, whereas 16% were only affected by climate and 29% were only affected by soil resources. Even for AM fungi that were affected by both climate and resources, the effects of climatic variables nearly always outweighed those of resources. Soil moisture and isothermality were the main climatic and NPP and soil carbon the main resource related factors influencing AM fungal distributions. Distributions of closely related AM fungal taxa were similarly affected by climate, but not by resources. Local scale surveys of AM fungi across elevations confirmed that climate was a key driver of AM fungal composition and root colonization, with weaker influences of plant identity and soil nutrients. These two studies across scales suggest prevailing effects of climate on AM fungal distributions. Thus, incorporating climate when forecasting future ranges of AM fungi will enhance predictions of AM fungal abundance and associated ecosystem functions.
Evaluation of measurement uncertainty of glucose in clinical chemistry.
Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y
2007-04-01
The International Vocabulary of Basic and General Terms in Metrology (VIM) defines uncertainty of measurement as a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition to every reported parameter, a measurement uncertainty value should be given by all accredited institutions; this value shows the reliability of the measurement. The GUM, published by NIST, contains guidance on uncertainty. The Eurachem/CITAC Guide CG4 was also published by the Eurachem/CITAC Working Group in the year 2000. Both offer a mathematical model by which uncertainty can be calculated. There are two types of uncertainty evaluation in measurement: type A is the evaluation of uncertainty through statistical analysis, and type B is the evaluation of uncertainty through other means, for example, a certified reference material. The Eurachem Guide uses four types of distribution functions: (1) a rectangular distribution, which gives limits without specifying a level of confidence (u(x) = a/√3) to a certificate; (2) a triangular distribution, for values concentrated near the same point (u(x) = a/√6); (3) a normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/√n, or a coefficient of variance CV% without specifying the distribution (where a is the half-width of the stated limits and u the standard uncertainty); and (4) a stated confidence interval.
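The two limit-based rules reduce to dividing the half-width a of the stated limits by √3 or √6. A trivial helper makes the arithmetic concrete (the values are examples, not from the paper):

```python
import math

def standard_uncertainty(a, shape):
    """Type B standard uncertainty from the half-width a of stated
    limits, per the distribution assumed for the certificate value."""
    if shape == "rectangular":   # limits only, no confidence level
        return a / math.sqrt(3)
    if shape == "triangular":    # values concentrated near the midpoint
        return a / math.sqrt(6)
    raise ValueError("use 'rectangular' or 'triangular'")

print(standard_uncertainty(0.5, "rectangular"))  # ~0.289
print(standard_uncertainty(0.5, "triangular"))   # ~0.204
```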
An evaluation of child passenger safety : the effectiveness and benefits of safety seats : summary
DOT National Transportation Integrated Search
1986-02-01
The purpose of child safety seats is to reduce the number of child passengers killed or injured in motor vehicle crashes. The seats function by absorbing and safely distributing crash impact loads over the child's body while holding the child in plac...
Priming vs. Rhyming: Orthographic and Phonological Representations in the Left and Right Hemispheres
ERIC Educational Resources Information Center
Lindell, Annukka K.; Lum, Jarrad A. G.
2008-01-01
The right cerebral hemisphere has long been argued to lack phonological processing capacity. Recently, however, a sex difference in the cortical representation of phonology has been proposed, suggesting discrete left hemisphere lateralization in males and more distributed, bilateral representation of function in females. To evaluate this…
Unexpected Direction of Differential Item Functioning
ERIC Educational Resources Information Center
Park, Sangwook
2011-01-01
Many studies have been conducted to evaluate the performance of DIF detection methods, when two groups have different ability distributions. Such studies typically have demonstrated factors that are associated with inflation of Type I error rates in DIF detection, such as mean ability differences. However, no study has examined how the direction…
POPESCU, M.R.; TRANĂ, F.; MANOLEA, H.; RAUTEN, ANE-MARIE; ȘURLIN, PETRA; DRAGOMIR, L.P.
2014-01-01
The partially intercalated edentation offers the practitioner the possibility of functional rehabilitation of the dental arcades through conjunct gnato-prosthetic devices. The functions of the dento-maxillary apparatus, disturbed by the presence of edentation, require a treatment approach with careful pre-planning and estimation; without them, the result most often leads to failure in terms of functionality. Clinical evaluation associated with pre- and pro-prosthetic treatment can also require, in some situations, the evaluation of the dental units involved in the prosthetic rehabilitation. The association and implementation of the prosthetic construction in the occlusal-articular ensemble, as well as the counterbalancing of the mastication forces per dental unit and across the whole interarch system, linked to the distribution of the forces at the level of the pillar teeth and the prosthetic construction, represent the goal of this theoretical study. PMID:25729593
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Z; Terry, N; Hubbard, S S
2013-02-12
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground-penetrating radar (GPR) travel-time data through Bayesian inversion, integrated with entropy memory function and pilot-point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and its variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in inverting GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot-point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSim) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot-point design take advantage of the spatio-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better-resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
NASA Astrophysics Data System (ADS)
Lederman, Dror; Leader, Joseph K.; Zheng, Bin; Sciurba, Frank C.; Tan, Jun; Gur, David
2011-03-01
Quantitative computed tomography (CT) has been widely used to detect and evaluate the presence (or absence) of emphysema by applying density masks at specific thresholds, e.g., -910 or -950 Hounsfield units (HU). However, it has also been observed that subjects with similar density-mask based emphysema scores can have varying lung function, possibly indicating differences in disease severity. To assess this possible discrepancy, we investigated whether the density distribution of "viable" lung parenchyma regions with pixel values > -910 HU correlates with lung function. A dataset of 38 subjects, who underwent both pulmonary function testing and CT examinations in a COPD SCCOR study, was assembled. After the lung regions depicted on CT images were automatically segmented by a computerized scheme, we systematically divided the lung parenchyma into different density groups (bins) and computed a number of statistical features (i.e., mean, standard deviation (STD), and skewness of the pixel value distributions) in these density bins. We then analyzed the correlations between each feature and lung function. The correlation between the diffusing capacity of the lung (DLCO) and the STD of pixel values in the bin -910 HU <= PV < -750 HU was -0.43, compared with a correlation of -0.49 between the post-bronchodilator ratio of forced expiratory volume in 1 second to forced vital capacity (FEV1/FVC) and the STD of pixel values in the bin -1024 HU <= PV < -910 HU. The results showed an association between the distribution of pixel values in "viable" lung parenchyma and lung function, which indicates that, similar to the conventional density mask method, pixel value distribution features in "viable" lung parenchyma areas may also provide clinically useful information to improve assessments of lung disease severity as measured by lung function tests.
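The bin-wise features described, mean, STD, and skewness of pixel values within HU ranges, can be computed directly from the segmented-parenchyma pixel array. A sketch under the assumption that `pixels` is a flat NumPy array of HU values (the bin edges follow the abstract):

```python
import numpy as np
from scipy import stats

def bin_features(pixels, bins=((-1024, -910), (-910, -750))):
    """Mean, standard deviation, and skewness of lung-CT pixel values
    (HU) inside each density bin, as candidate lung-function correlates."""
    out = {}
    for lo, hi in bins:
        sel = pixels[(pixels >= lo) & (pixels < hi)]
        out[(lo, hi)] = (sel.mean(), sel.std(), stats.skew(sel))
    return out
```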
NASA Astrophysics Data System (ADS)
Yuizono, Takaya; Hara, Kousuke; Nakayama, Shigeru
A web-based distributed cooperative development environment for a sign-language animation system has been developed. The system extends a previous animation system that was constructed as a three-tiered system consisting of a sign-language animation interface layer, a sign-language data processing layer, and a sign-language animation database. Two components, a web client using a VRML plug-in and a web servlet, have been added to the previous system. The system supports a humanoid-model avatar for interoperability and can use the stored sign-language animation data shared in the database. The evaluation of the system notes that the inverse kinematics function of the web client improves the making of sign-language animations.
System Analysis for the Huntsville Operation Support Center, Distributed Computer System
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Massey, D.
1985-01-01
HOSC, as a distributed computing system, is responsible for data acquisition and analysis during Space Shuttle operations. HOSC also provides computing services for Marshall Space Flight Center's non-mission activities. As mission and non-mission activities change, so do the support functions of HOSC, demonstrating the need for some method of simulating activity at HOSC in various configurations. The simulation developed in this work primarily models the HYPERchannel network. The model simulates the activity of a steady-state network, reporting statistics such as transmitted bits, collision statistics, frame sequences transmitted, and average message delay. These statistics are used to evaluate such performance indicators as throughput, utilization, and delay. Thus the overall performance of the network is evaluated, as well as predicting possible overload conditions.
Miranda de Sá, Antonio Mauricio F L; Infantosi, Antonio Fernando C; Lazarev, Vladimir V
2007-01-01
In the present work, a commonly used index for evaluating Event-Related Synchronization and Desynchronization (ERS/ERD) in the EEG was expressed as a function of the Spectral F-Test (SFT), a statistical test for assessing whether two sample spectra are from populations with identical theoretical spectra. The sampling distribution of the SFT has been derived, hence allowing ERS/ERD to be evaluated on a statistical basis. The technique was also illustrated on EEG signals from 10 normal subjects during intermittent photic stimulation.
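A minimal sketch of the spectral F-test idea: the ratio of two averaged sample spectra is referred to an F distribution under the null hypothesis of identical theoretical spectra. The data, segment counts, and thresholds below are illustrative assumptions, not the paper's implementation.

```python
# Spectral F-test sketch: compare averaged spectra of two EEG conditions.
import numpy as np
from scipy import signal, stats

fs, nseg = 256, 60                       # 60 one-second segments per condition
rng = np.random.default_rng(1)
rest = rng.standard_normal(fs * nseg)    # EEG at rest (stand-in data)
stim = rng.standard_normal(fs * nseg)    # EEG during stimulation (stand-in)

f, p_rest = signal.welch(rest, fs, nperseg=fs, noverlap=0)
_, p_stim = signal.welch(stim, fs, nperseg=fs, noverlap=0)

sft = p_stim / p_rest                    # spectral F statistic per frequency
# each averaged spectrum has ~2*nseg degrees of freedom per frequency bin
hi = stats.f.ppf(0.975, 2 * nseg, 2 * nseg)
lo = stats.f.ppf(0.025, 2 * nseg, 2 * nseg)
ers = f[sft > hi]                        # frequencies with significant ERS
erd = f[sft < lo]                        # frequencies with significant ERD
```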
Mowlavi, Ali Asghar; Fornasier, Maria Rossa; Mirzaei, Mohammd; Bregant, Paola; de Denaro, Mario
2014-10-01
The beta and gamma absorbed fractions in organs and tissues are key factors in radionuclide internal dosimetry based on the Medical Internal Radiation Dose (MIRD) approach. The aim of this study is to find suitable analytical functions for the beta and gamma absorbed fractions in spherical and ellipsoidal volumes with a uniform distribution of the iodine-131 radionuclide. The MCNPX code has been used to calculate the energy absorbed from the beta and gamma rays of iodine-131 uniformly distributed inside different ellipsoids and spheres, and the absorbed fractions have then been evaluated. We have found the fit parameters of a suitable analytical function for the beta absorbed fraction, depending on a generalized radius for the ellipsoid based on the radius of the sphere, and a linear fit function for the gamma absorbed fraction. The analytical functions obtained by fitting the Monte Carlo data can be used to obtain the absorbed fractions of iodine-131 beta and gamma rays for any volume of the thyroid lobe. Moreover, our results for the spheres are in good agreement with the results of MIRD and other published studies.
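The fitting step can be illustrated as follows; the saturating functional form and the data points are assumptions for the sketch, since the abstract does not give the authors' exact function.

```python
# Sketch: fitting an analytical function to Monte Carlo absorbed fractions.
# The saturating form below is an assumed stand-in, not the authors' model;
# r_gen is a generalized ellipsoid radius, af_mc stand-in MCNPX results.
import numpy as np
from scipy.optimize import curve_fit

def beta_af(r, a, b):
    """Assumed beta absorbed-fraction model: 0 at r = 0, saturating toward 1."""
    return 1.0 - np.exp(-a * r) / (1.0 + b * r)

r_gen = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 8.0])        # cm, illustrative
af_mc = np.array([0.55, 0.75, 0.88, 0.92, 0.96, 0.98])  # illustrative values

popt, pcov = curve_fit(beta_af, r_gen, af_mc, p0=(1.0, 1.0))
print("fit parameters:", popt, "1-sigma:", np.sqrt(np.diag(pcov)))
```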
Stealth Biocompatible Si-Based Nanoparticles for Biomedical Applications
Chaix, Arnaud; Gary-Bobo, Magali; Angeletti, Bernard; Masion, Armand; Da Silva, Afitz; Daurat, Morgane; Lichon, Laure; Garcia, Marcel; Morère, Alain; El Cheikh, Khaled; Durand, Jean-Olivier; Cunin, Frédérique; Auffan, Mélanie
2017-01-01
A challenge regarding the design of nanocarriers for drug delivery is to prevent their recognition by the immune system. To improve the blood residence time and prevent their capture by organs, nanoparticles can be designed with stealth properties using polymeric coating. In this study, we focused on the influence of surface modification with polyethylene glycol and/or mannose on the stealth behavior of porous silicon nanoparticles (pSiNP, ~200 nm). In vivo biodistribution of pSiNPs formulations were evaluated in mice 5 h after intravenous injection. Results indicated that the distribution in the organs was surface functionalization-dependent. Pristine pSiNPs and PEGylated pSiNPs were distributed mainly in the liver and spleen, while mannose-functionalized pSiNPs escaped capture by the spleen, and had higher blood retention. The most efficient stealth behavior was observed with PEGylated pSiNPs anchored with mannose that were the most excreted in urine at 5 h. The biodegradation kinetics evaluated in vitro were in agreement with these in vivo observations. The biocompatibility of the pristine and functionalized pSiNPs was confirmed in vitro on human cell lines and in vivo by cytotoxic and systemic inflammation investigations, respectively. With their biocompatibility, biodegradability, and stealth properties, the pSiNPs functionalized with mannose and PEG show promising potential for biomedical applications. PMID:28946628
Cai, Jing; Read, Paul W; Altes, Talissa A; Molloy, Janelle A; Brookeman, James R; Sheng, Ke
2007-01-21
Treatment planning based on the probability distribution function (PDF) of patient geometries has been shown to be a potential off-line strategy for incorporating organ motion, but the application of such an approach depends strongly upon the reproducibility of the PDF. In this paper, we investigated the dependence of PDF reproducibility on the image acquisition parameters, specifically the scan time and the frame rate. Three healthy subjects underwent a continuous 5 min magnetic resonance (MR) scan in the sagittal plane with a frame rate of approximately 10 frames/s, and the experiments were repeated at an interval of 2 to 3 weeks. A total of nine pulmonary vessels from different lung regions (upper, middle, and lower) were tracked, and the reproducibility of their displacement PDFs was evaluated as a function of scan time and frame rate. As a result, the PDF reproducibility error decreased with prolonged scans and appeared to approach an equilibrium state in subjects 2 and 3 within the 5 min scan. The PDF accuracy increased as a power function of frame rate; however, the PDF reproducibility showed less sensitivity to frame rate, presumably because the randomness of breathing dominates the effects. As the key component of PDF-based treatment planning, the reproducibility of the PDF affects the dosimetric accuracy substantially. This study provides a reference for acquiring MR-based PDFs of structures in the lung.
Kandala, Sridhar; Nolan, Dan; Laumann, Timothy O.; Power, Jonathan D.; Adeyemo, Babatunde; Harms, Michael P.; Petersen, Steven E.; Barch, Deanna M.
2016-01-01
Like all resting-state functional connectivity data, the data from the Human Connectome Project (HCP) are adversely affected by structured noise artifacts arising from head motion and physiological processes. Functional connectivity estimates (Pearson's correlation coefficients) were inflated for high-motion time points and for high-motion participants. This inflation occurred across the brain, suggesting the presence of globally distributed artifacts. The degree of inflation was further increased for connections between nearby regions compared with distant regions, suggesting the presence of distance-dependent spatially specific artifacts. We evaluated several denoising methods: censoring high-motion time points, motion regression, the FMRIB independent component analysis-based X-noiseifier (FIX), and mean grayordinate time series regression (MGTR; as a proxy for global signal regression). The results suggest that FIX denoising reduced both types of artifacts, but left substantial global artifacts behind. MGTR significantly reduced global artifacts, but left substantial spatially specific artifacts behind. Censoring high-motion time points resulted in a small reduction of distance-dependent and global artifacts, eliminating neither type. All denoising strategies left differences between high- and low-motion participants, but only MGTR substantially reduced those differences. Ultimately, functional connectivity estimates from HCP data showed spatially specific and globally distributed artifacts, and the most effective approach to address both types of motion-correlated artifacts was a combination of FIX and MGTR. PMID:27571276
do Amaral, Leonardo L.; Pavoni, Juliana F.; Sampaio, Francisco; Netto, Thomaz Ghilardi
2015-01-01
Despite individual quality assurance (QA) being recommended for complex techniques in radiotherapy (RT) treatment, the possibility of errors in dose delivery during therapeutic application has been verified. Therefore, it is fundamentally important to conduct in vivo QA during treatment. This work presents an in vivo transmission quality control methodology using radiochromic film (RCF) coupled to the linear accelerator (linac) accessory holder. This QA methodology compares the dose distribution measured by the film in the linac accessory holder with the dose distribution expected by the treatment planning software. The calculated dose distribution is obtained in the coronal and central plane of a phantom with the same dimensions as the acrylic support used for positioning the film, but at a source-to-detector distance (SDD) of 100 cm, as a result of transferring the IMRT plan in question with all fields positioned with the gantry vertical, that is, perpendicular to the phantom. To validate this procedure, a Monte Carlo simulation using the PENELOPE code was first performed to evaluate the differences between the dose distributions measured by the film at SDDs of 56.8 cm and 100 cm. After that, several simple dose distribution tests were evaluated using the proposed methodology, and finally a study using IMRT treatments was done. In the Monte Carlo simulation, the mean percentage of points passing the gamma-function comparison of the dose distributions acquired at the two SDDs was 99.92% ± 0.14%. In the simple dose distribution tests, the mean percentage of points passing the gamma function was 99.85% ± 0.26%, and the mean percentage difference in the normalization point doses was -1.41%. The transmission methodology was approved in 24 of 25 IMRT test irradiations. Based on these results, it can be concluded that the proposed methodology using RCFs can be applied for in vivo QA in RT treatments. PACS numbers: 87.55.Qr, 87.55.km, 87.55.N- PMID:26699306
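The gamma analysis used to score agreement between measured and calculated dose distributions can be sketched in one dimension as below, with the common 3%/3 mm criteria assumed; this is illustrative, not the clinical implementation.

```python
# 1-D gamma-index sketch (3%/3 mm assumed, global dose normalization).
import numpy as np

def gamma_pass_rate(x, d_meas, d_plan, dd=0.03, dta=3.0):
    """Fraction of measured points with gamma <= 1; dd is the dose-difference
    criterion (fraction of max dose), dta the distance-to-agreement in mm."""
    d_ref = d_plan.max()
    gammas = []
    for xi, di in zip(x, d_meas):
        dose_term = ((d_plan - di) / (dd * d_ref)) ** 2
        dist_term = ((x - xi) / dta) ** 2
        gammas.append(np.sqrt((dose_term + dist_term).min()))
    return np.mean(np.asarray(gammas) <= 1.0)

x = np.linspace(-50, 50, 201)                      # mm
d_plan = np.exp(-(x / 30.0) ** 4)                  # stand-in planned profile
d_meas = d_plan * (1 + 0.01 * np.random.default_rng(2).standard_normal(x.size))
print(f"gamma pass rate: {100 * gamma_pass_rate(x, d_meas, d_plan):.1f}%")
```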
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Q; Zhang, M; Chen, T
Purpose: Variation in the function of different lung regions has so far been ignored in conventional lung cancer treatment planning, which may lead to a higher risk of radiation-induced lung disease. 4DCT-based lung ventilation imaging provides a novel yet convenient approach to lung functional imaging, as 4DCT is routine for lung cancer treatment. Our work aims to evaluate the impact of accounting for spatial heterogeneity in lung function using 4DCT-based lung ventilation imaging for proton and IMRT plans. Methods: Six patients with advanced-stage lung cancer and various tumor locations were retrospectively evaluated for the study. Proton and IMRT plans were designed following identical planning objectives and constraints for each patient. Ventilation images were calculated from the patients' 4DCT using deformable image registration implemented in the Velocity AI software, based on Jacobian metrics. The lung was delineated into two function-level regions based on ventilation (low- and high-functional areas). The high-functional region was defined as lung ventilation greater than 30%. Dose distributions and statistics in the different lung-function areas were calculated for the patients. Results: Variation in the dosimetric statistics of the different lung-function regions was observed between proton and IMRT plans. In all proton plans, high-function lung regions received a lower maximum dose (100.2%-108.9%) than in the IMRT plans (106.4%-119.7%). Interestingly, three out of six proton plans gave a mean dose to the high-function lung region up to 2.2% higher than IMRT. Lower mean doses (by up to 14.1%) and maximum doses (by up to 9%) were observed in low-function lung for the proton plans. Conclusion: A systematic approach was developed to generate functional lung ventilation imaging and use it to evaluate plans. This method holds great promise for functional analysis of the lung during planning. We are currently studying more subjects to evaluate this tool.
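A sketch of the Jacobian-based ventilation metric mentioned above: the determinant of the deformation gradient, det(I + grad u), gives the local volume change between exhale and inhale CT. The Velocity AI pipeline is proprietary, so the function below is only a minimal stand-in; the 30% threshold follows the abstract.

```python
# Jacobian-based ventilation sketch from a displacement vector field (DVF).
import numpy as np

def jacobian_ventilation(dvf):
    """dvf: (3, nz, ny, nx) exhale-to-inhale displacement field, voxel units.
    Returns J - 1, the fractional volume change per voxel."""
    grads = [np.gradient(dvf[i]) for i in range(3)]    # du_i / d(z, y, x)
    nz, ny, nx = dvf.shape[1:]
    jac = np.empty((nz, ny, nx))
    for idx in np.ndindex(nz, ny, nx):                 # J = det(I + grad u)
        g = np.array([[grads[i][j][idx] for j in range(3)] for i in range(3)])
        jac[idx] = np.linalg.det(np.eye(3) + g)
    return jac - 1.0

vent = jacobian_ventilation(np.zeros((3, 8, 8, 8)))    # zero field -> no change
high_function = vent > 0.30                            # 30% ventilation threshold
```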
Evaluation of Kurtosis into the product of two normally distributed variables
NASA Astrophysics Data System (ADS)
Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio
2016-06-01
Kurtosis (κ) is a measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis of the product of two normally distributed variables. The product of two normal variables is a common problem in several areas of study, such as physics, economics, and psychology. Normal variables have a constant kurtosis (κ = 3), independent of the values of the two parameters, mean and variance. In fact, the excess kurtosis is defined as κ - 3, so the excess kurtosis of the normal distribution is zero. The product of two normally distributed variables is a function of the parameters of the two variables and the correlation between them; the excess kurtosis lies in [0, 6] for independent variables and in [0, 12] when correlation between them is allowed.
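The upper end of the independent-variable range can be checked numerically in a few lines: for two independent standard normals, the product has excess kurtosis 6 (κ = 9).

```python
# Numerical check: excess kurtosis of the product of two independent normals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x, y = rng.standard_normal((2, 2_000_000))
z = x * y                                            # product variable
print("excess kurtosis:", stats.kurtosis(z))         # Fisher definition -> ~6
print("kurtosis:", stats.kurtosis(z, fisher=False))  # Pearson definition -> ~9
```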
Applicability of AgMERRA Forcing Dataset to Fill Gaps in Historical in-situ Meteorological Data
NASA Astrophysics Data System (ADS)
Bannayan, M.; Lashkari, A.; Zare, H.; Asadi, S.; Salehnia, N.
2015-12-01
Integrated assessment studies of food production systems use crop models to simulate the effects of climate and socio-economic changes on food security. Climate forcing data is one of the key inputs of crop models. This study evaluated the performance of the AgMERRA climate forcing dataset for filling gaps in historical in-situ meteorological data for different climatic regions of Iran. The AgMERRA dataset was intercompared with in-situ observations of daily maximum and minimum temperature and precipitation during the 1980-2010 period via the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Bias Error (MBE) for 17 stations in four climatic regions: humid and moderate, cold, dry and arid, and hot and humid. Moreover, probability distribution functions and cumulative distribution functions were compared between model and observed data. The agreement measures between the AgMERRA data and the observations demonstrated small errors in the model data for all stations. Except for stations located in cold regions, the model under-predicted daily maximum temperature and precipitation at the other stations; however, the under-prediction was not significant. In addition, the probability distribution functions and cumulative distribution functions showed the same trend for all stations between model and observed data. Therefore, the AgMERRA dataset is reliable for filling gaps in historical observations in different climatic regions of Iran, and it could also be applied as a basis for future climate scenarios.
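For reference, the three agreement measures used above, written out explicitly; obs and mod are paired daily series, and the values below are stand-ins.

```python
# The three agreement measures: RMSE, MAE, and MBE (sign shows bias direction).
import numpy as np

def rmse(obs, mod): return np.sqrt(np.mean((mod - obs) ** 2))
def mae(obs, mod):  return np.mean(np.abs(mod - obs))
def mbe(obs, mod):  return np.mean(mod - obs)       # negative: under-prediction

obs = np.array([31.2, 29.8, 33.1, 30.5])            # observed Tmax, degrees C
mod = np.array([30.6, 29.1, 32.4, 30.0])            # AgMERRA Tmax, same station
print(rmse(obs, mod), mae(obs, mod), mbe(obs, mod))
```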
Global Statistics of Bolides in the Terrestrial Atmosphere
NASA Astrophysics Data System (ADS)
Chernogor, L. F.; Shevelyov, M. B.
2017-06-01
Purpose: Evaluation and analysis of the distribution of the number of meteoroid (mini-asteroid) falls as a function of glow energy, velocity, maximum-glow altitude, and geographic coordinates. Design/methodology/approach: The satellite database on the glow of 693 mini-asteroids decelerated in the terrestrial atmosphere has been used to evaluate basic meteoroid statistics. Findings: A rapid decrease in the number of asteroids with increasing glow energy is confirmed. The average speed of the celestial bodies is about 17.9 km/s. The altitude of maximum glow is most often 30-40 km. The distribution of the number of meteoroids entering the terrestrial atmosphere in longitude and latitude (after excluding the geometric component of the latitudinal dependence) is approximately uniform. Conclusions: Using a sufficiently large database of measurements, the meteoroid (mini-asteroid) statistics have been evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, Sudipta; Nelson, Austin; Hoke, Anderson
2016-12-12
Traditional testing methods fall short in evaluating interactions between multiple smart inverters providing advanced grid-support functions, because such interactions depend largely on the inverters' placement on the electric distribution system and the impedances between them. Even though significant concerns have been raised by utilities about the effects of such interactions, little effort has been made to evaluate them. In this paper, power hardware-in-the-loop (PHIL) testing was utilized to evaluate autonomous volt-var operations of multiple smart photovoltaic (PV) inverters connected to a simple distribution feeder model. The results provided in this paper show that, depending on the volt-var control (VVC) parameters and the grid parameters, interaction between inverters, and between an inverter and the grid, is possible in some extreme cases with very high VVC slopes, fast response times, and large VVC response delays.
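For readers unfamiliar with the autonomous volt-var function, a minimal piecewise-linear characteristic is sketched below; the breakpoints, deadband, and reactive-power limit are illustrative, not the settings used in the PHIL tests. A steeper segment between the breakpoints corresponds to the "very high VVC slopes" for which interactions were observed.

```python
# Minimal piecewise-linear volt-var characteristic (illustrative settings).
import numpy as np

def volt_var(v_pu, v1=0.95, v2=0.99, v3=1.01, v4=1.05, q_max=0.44):
    """Reactive power command (per unit of rated VA) vs. voltage (per unit).
    Injects vars below v2, absorbs above v3, with a deadband in between."""
    return np.interp(v_pu, [v1, v2, v3, v4], [q_max, 0.0, 0.0, -q_max])

print(volt_var(0.97))   # low voltage -> inject reactive power
print(volt_var(1.06))   # high voltage -> absorb at the q_max limit
```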
Lustre Distributed Name Space (DNE) Evaluation at the Oak Ridge Leadership Computing Facility (OLCF)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmons, James S.; Leverman, Dustin B.; Hanley, Jesse A.
This document describes the Lustre Distributed Name Space (DNE) evaluation carried out at the Oak Ridge Leadership Computing Facility (OLCF) between 2014 and 2015. DNE is a development project funded by OpenSFS to improve Lustre metadata performance and scalability. The development effort was split into two parts: the first phase (DNE P1) provided support for remote directories over remote Lustre Metadata Server (MDS) nodes and Metadata Target (MDT) devices, while the second phase (DNE P2) addressed split directories over multiple remote MDS nodes and MDT devices. The OLCF has been actively evaluating the performance, reliability, and functionality of both DNE phases. For these tests, internal OLCF testbeds were used. Results are promising, and the OLCF is planning a full DNE deployment on production systems in the mid-2016 timeframe.
From Maximum Entropy Models to Non-Stationarity and Irreversibility
NASA Astrophysics Data System (ADS)
Cofre, Rodrigo; Cessac, Bruno; Maldonado, Cesar
The maximum entropy distribution can be obtained from a variational principle. This is important as a matter of principle and for the purpose of finding approximate solutions. One can exploit this fact to obtain relevant information about the underlying stochastic process. We report here on recent progress in three aspects of this approach. 1) Biological systems are expected to show some degree of irreversibility in time. Based on the transfer matrix technique for finding the spatio-temporal maximum entropy distribution, we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. 2) The maximum entropy solution is characterized by a functional called the Gibbs free energy (the solution of the variational principle). The Legendre transformation of this functional is the rate function, which controls the speed of convergence of empirical averages to their ergodic mean. We show how the correct description of this functional is decisive for a more rigorous characterization of first- and higher-order phase transitions. 3) We assess the impact of a weak time-dependent external stimulus on the collective statistics of spiking neuronal networks. We show how to evaluate this impact on any higher-order spatio-temporal correlation. RC supported by ERC Advanced Grant "Bridges"; BC: KEOPS ANR-CONICYT, Renvision; CM: CONICYT-FONDECYT No. 3140572.
The heterogeneity of segmental dynamics of filled EPDM by (1)H transverse relaxation NMR.
Moldovan, D; Fechete, R; Demco, D E; Culea, E; Blümich, B; Herrmann, V; Heinz, M
2011-01-01
Distributions of the residual second moment of dipolar interactions M2 and of the correlation times of segmental dynamics were measured by Hahn-echo decays in combination with an inverse Laplace transform for a series of unfilled and filled EPDM samples as functions of carbon-black N683 filler content. The filler-polymer chain interactions, which dramatically restrict the mobility of bound rubber, modify the dynamics of the mobile chains. These changes depend on the filler content and can be evaluated from the distributions of M2. A dipolar filter was applied to eliminate the contribution of bound rubber. In the first approach, the Hahn-echo decays were fitted with a theoretical relationship to obtain the average values of the 1H residual second moment
The heterogeneity of segmental dynamics of filled EPDM by 1H transverse relaxation NMR
NASA Astrophysics Data System (ADS)
Moldovan, D.; Fechete, R.; Demco, D. E.; Culea, E.; Blümich, B.; Herrmann, V.; Heinz, M.
2011-01-01
Distributions of the residual second moment of dipolar interactions M2 and of the correlation times of segmental dynamics were measured by Hahn-echo decays in combination with an inverse Laplace transform for a series of unfilled and filled EPDM samples as functions of carbon-black N683 filler content. The filler-polymer chain interactions, which dramatically restrict the mobility of bound rubber, modify the dynamics of the mobile chains. These changes depend on the filler content and can be evaluated from the distributions of M2. A dipolar filter was applied to eliminate the contribution of bound rubber. In the first approach, the Hahn-echo decays were fitted with a theoretical relationship to obtain the average values of the 1H residual second moment
[Comprehensive evaluation of eco-tourism resources in Yichun forest region of Northeast China].
Huang, Maozhu; Hu, Haiqing; Zhang, Jie; Chen, Lijun
2006-11-01
By using the analytical hierarchy process (AHP) and the Delphi method, a total of 30 representative evaluation factors covering tourism resource quantity, environmental quality, tourism conditions, and tourism functions were chosen to build a comprehensive quantitative evaluation model for the eco-tourism resources of the Yichun forest region in Northeast China. The results showed that in the Yichun forest region, the natural eco-tourism resources were superior to the cultural resources. In the regional distribution of favorable-level eco-tourism resource quantity, four sites were prominent: north (Jiayin), center (Yichun), east (Jinshantun), and south (Tieli). As for the distribution of eco-tourism resource types, the sequence was basically north (Jiayin, Tangwang River, Wuying), center (Yichun, Shangganling), east (Jinshantun, Meixi), south (Tieli, Dailing). Based on the above analyses, the Yichun forest region could be divided into four tourism areas: the south, the east, the central, and the north. In view of the special features of each area, initial development directions were proposed.
Attanasi, E.D.; Schuenemeyer, J.H.
2002-01-01
Exploration ventures in frontier areas have high risks. Before committing to them, firms prepare regional resource assessments to evaluate the potential payoffs. With no historical basis for directly estimating the size distribution of undiscovered accumulations, reservoir attribute probability distributions can be assessed subjectively and used to project undiscovered accumulation sizes. Three questions are considered here: (1) what distributions should be used to characterize the subjective assessments of reservoir attributes, (2) how parsimonious can the analyst be when eliciting subjective information from the assessment geologist, and (3) what are the consequences of ignoring dependencies among reservoir attributes? The standard used for comparing outcomes is the computed cost function describing the costs of finding, developing, and producing undiscovered oil accumulations. These questions are examined in the context of the US Geological Survey's recently published regional assessment of the 1002 Area of the Arctic National Wildlife Refuge, Alaska. We study the effects of using various common distributions to characterize the geologist's subjective distributions of reservoir attributes. Specific findings show that triangular distributions result in substantial bias in economic forecasts when used to characterize skewed distributions. Moreover, some forms of the lognormal distribution also result in biased economic inferences. Alternatively, we generally determined four fractiles (100, 50, 5, 0) to be sufficient to capture the essential economic characteristics of the underlying attribute distributions. Ignoring actual dependencies among reservoir attributes biases the economic evaluation.
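The bias mechanism can be illustrated numerically: approximating a skewed lognormal attribute by a triangular distribution matched at the minimum, mode, and an upper fractile changes the mean substantially. The parameters below are illustrative, not the ANWR assessment values.

```python
# Triangular surrogate for a skewed lognormal attribute: a bias illustration.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 2.0, 0.8
lognorm = rng.lognormal(mu, sigma, 1_000_000)       # skewed "true" attribute

mode = np.exp(mu - sigma**2)                        # mode of the lognormal
upper = np.quantile(lognorm, 0.999)                 # stand-in for the maximum
tri = rng.triangular(0.0, mode, upper, 1_000_000)   # triangular surrogate

print("lognormal mean:", lognorm.mean())            # ~10 with these parameters
print("triangular mean:", tri.mean())               # ~30: a severalfold bias
```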
Testing methods of pressure distribution of bra cups on breasts soft tissue
NASA Astrophysics Data System (ADS)
Musilova, B.; Nemcokova, R.; Svoboda, M.
2017-10-01
The objective of this study is to evaluate testing methods for the pressure distribution of bra cups on breast soft tissue, using systems which do not affect the space between the wearer's body surface and the bra cups, and thus do not influence the geometry of the measured body surface, in order to investigate the functional performance of brassieres. Two measuring systems were used for evaluating pressure comfort: 1) the pressure distribution of a bra worn for 20 minutes on the breasts was directly measured using a pressure sensor whose dielectric is the elastic polyurethane foam of the bra cups; twelve points in the bra cups were measured; 2) simultaneously, the change of temperature at the same points was tested with a noncontact thermal imager. The results indicate that both systems can identify different pressure distributions at different points. Bras of the same size, with cups made from the same material and defined by the same standardised body dimensions (bust and underbust), can produce different compression values on differently shaped breast soft tissue.
NASA Astrophysics Data System (ADS)
Makabe, Toshiaki
2018-03-01
A time-varying low-temperature plasma sustained by electrical power at various frequencies has played a key role in the historical development of new technologies, such as gas lasers, ozonizers, micro display panels, dry processing of materials, and medical care, since World War II. Electrons in a time-modulated low-temperature plasma have a characteristic velocity spectrum, i.e., a velocity distribution dependent on the microscopic quantum characteristics of the feed gas molecule and on the external field strength and frequency. To solve for and evaluate the time-varying velocity distribution, there are mainly two types of theoretical methods based on the classical, linear Boltzmann equation: the expansion method using orthogonal functions and the procedure of non-expansional temporal evolution. Both methods have been developed discontinuously and progressively in synchronization with those technological developments. In this review, we explore the historical development of the theoretical procedures for evaluating the electron velocity distribution in a time-varying low-temperature plasma over the past 70 years.
Motion of charged particles in planetary magnetospheres with nonelectromagnetic forces
NASA Technical Reports Server (NTRS)
Huang, T. S.; Hill, T. W.; Wolf, R. A.
1988-01-01
Expressions are derived for the mirror point, the bounce period, the second adiabatic invariant, and the bounce-averaged azimuthal drift velocity as functions of equatorial pitch angle for a charged particle in a dipole magnetic field in the presence of centrifugal, gravitational, and Coriolis forces. These expressions are evaluated numerically, and the results are displayed graphically. The average azimuthal drift speed for a flux tube containing a thermal equilibrium plasma distribution is also evaluated.
Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.
NASA Astrophysics Data System (ADS)
Maheras, Steven James
Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository, to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine-129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was lognormally distributed with a two-order-of-magnitude range. However, the plutonium-239 results were not lognormally distributed and exhibited less than a one-order-of-magnitude range. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and exhibited less than a one-order-of-magnitude range. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel. Sensitivity coefficients and partial correlation coefficients were used as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations. Model output showed little sensitivity to parameters related to cask inspections.
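A complementary cumulative distribution function of dose is straightforward to construct from Monte Carlo output; the sketch below assumes the lognormal shape reported for the offsite analyses, with illustrative parameters.

```python
# CCDF of dose from Monte Carlo samples (lognormal shape, illustrative values).
import numpy as np

rng = np.random.default_rng(5)
dose = rng.lognormal(mean=-2.0, sigma=1.5, size=100_000)  # stand-in offsite doses

dose_sorted = np.sort(dose)
ccdf = 1.0 - np.arange(1, dose.size + 1) / dose.size      # P(Dose > d) vs. d

limit = 1.0                                               # hypothetical dose limit
print("P(dose > limit) =", (dose > limit).mean())
```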
Linear dispersion properties of ring velocity distribution functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vandas, Marek, E-mail: marek.vandas@asu.cas.cz; Hellinger, Petr; Institute of Atmospheric Physics, AS CR, Bocni II/1401, CZ-14100 Prague
2015-06-15
Linear properties of ring velocity distribution functions are investigated. The dispersion tensor is presented in a form similar to that for a Maxwellian distribution function, but for a general distribution function separable in velocities. Analytical forms of the dispersion tensor are derived for two cases of ring velocity distribution functions: one obtained from physical arguments and one for the usual, ad hoc ring distribution. The analytical expressions involve generalized hypergeometric, Kampé de Fériet functions of two arguments. For a set of plasma parameters, the two ring distribution functions are compared. At parallel propagation with respect to the ambient magnetic field, the two ring distributions give the same results, identical to the corresponding bi-Maxwellian distribution. At oblique propagation, the two ring distributions give similar results only for strong instabilities, whereas for weak growth rates their predictions are significantly different; the two ring distributions have different marginal stability conditions.
NASA Astrophysics Data System (ADS)
Dimitrova, M.; Popov, Tsv K.; Adamek, J.; Kovačič, J.; Ivanova, P.; Hasan, E.; López-Bruna, D.; Seidl, J.; Vondráček, P.; Dejarnac, R.; Stöckel, J.; Imríšek, M.; Panek, R.; the COMPASS Team
2017-12-01
The radial distributions of the main plasma parameters in the scrape-off layer of the COMPASS tokamak are measured during L-mode and H-mode regimes by using both Langmuir and ball-pen probes mounted on a horizontal reciprocating manipulator. The radial profile of the plasma potential derived previously from Langmuir probe data by using the first-derivative probe technique is compared with data derived using ball-pen probes. A good agreement can be seen between the data acquired by the two techniques during the L-mode discharge and during the H-mode regime within the inter-ELM periods. In contrast to the first-derivative probe technique, the ball-pen probe technique does not require a swept voltage and, therefore, the temporal resolution is limited only by the data acquisition system. In the electron temperature evaluation, in the far scrape-off layer and in the limiter shadow, where the electron energy distribution is Maxwellian, the results from both techniques match well. In the vicinity of the last closed flux surface, where the electron energy distribution function is bi-Maxwellian, the ball-pen probe results agree with the high-temperature components of the electron distribution only. We also discuss the application of relatively large Langmuir probes, placed parallel and perpendicular to the magnetic field lines, to studying the main plasma parameters. The results obtained by the two types of large probes agree well. They are compared with Thomson scattering data for electron temperatures and densities. The results for the electron densities are also compared with ASTRA code calculations of the electron source due to ionization of neutrals by fast electrons, and the origin of the bi-Maxwellian electron energy distribution function is briefly discussed.
NASA Astrophysics Data System (ADS)
Janković, Bojan
2009-10-01
The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion (α) range 0.20 ≤ α ≤ 0.80, the apparent activation energy (Ea) was approximately constant (Ea,int = 95.2 kJ mol-1 and Ea,diff = 96.6 kJ mol-1, respectively). The values of Ea calculated by both isoconversional methods are in good agreement with the value of Ea evaluated from the Arrhenius equation (94.3 kJ mol-1), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used to estimate the kinetic model of the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition, with the conversion function f(α) = α^0.18 (1-α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddf(Ea)'s) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The isothermal decomposition results were compared with corresponding results for the nonisothermal decomposition of NaHCO3.
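The isothermal Weibull description can be sketched directly: α(t) = 1 - exp[-(t/η)^β], with the scale parameter η carrying the Arrhenius temperature dependence. β and Ea follow the abstract; the pre-exponential factor A is an assumed placeholder.

```python
# Isothermal Weibull conversion with an Arrhenius-dependent scale parameter.
import numpy as np

R = 8.314  # J mol^-1 K^-1

def eta(T, A=5.0e9, Ea=94.3e3):
    """Weibull scale (characteristic time, s): eta = exp(Ea / (R*T)) / A.
    A is an assumed prefactor; Ea follows the abstract."""
    return np.exp(Ea / (R * T)) / A

def alpha(t, T, beta=1.07):
    """Isothermal conversion at time t (s) and temperature T (K)."""
    return 1.0 - np.exp(-(t / eta(T)) ** beta)

t = np.linspace(0.0, 2000.0, 5)
for T in (380.0, 400.0, 420.0, 440.0):
    print(T, np.round(alpha(t, T), 3))   # conversion accelerates with temperature
```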
Towards an improved ensemble precipitation forecast: A probabilistic post-processing approach
NASA Astrophysics Data System (ADS)
Khajehei, Sepideh; Moradkhani, Hamid
2017-03-01
Recently, ensemble post-processing (EPP) has become a commonly used approach for reducing the uncertainty in forcing data and hence in hydrologic simulation. The procedure was introduced to build ensemble precipitation forecasts based on the statistical relationship between observations and forecasts. More specifically, the approach relies on a transfer function developed from a bivariate joint distribution between the observations and the simulations in the historical period. The transfer function is used to post-process the forecast. In this study, we propose a Bayesian EPP approach based on copula functions (COP-EPP) to improve the reliability of ensemble precipitation forecasts. Evaluation of the copula-based method is carried out by comparing the performance of the generated ensemble precipitation with the outputs from an existing procedure, the mixed-type meta-Gaussian distribution. Monthly precipitation from the Climate Forecast System Reanalysis (CFS) and gridded observations from the Parameter-Elevation Relationships on Independent Slopes Model (PRISM) have been employed to generate the post-processed ensemble precipitation. Deterministic and probabilistic verification frameworks are utilized to evaluate the outputs from the proposed technique. The distribution of seasonal precipitation for the ensemble generated by the copula-based technique is compared to the observations and raw forecasts for three sub-basins located in the western United States. Results show that both techniques are successful in producing reliable and unbiased ensemble forecasts; however, COP-EPP demonstrates considerable improvement in both deterministic and probabilistic verification, in particular in characterizing extreme events in wet seasons.
Hadron mass corrections in semi-inclusive deep-inelastic scattering
Guerrero Teran, Juan Vicente; Ethier, James J.; Accardi, Alberto; ...
2015-09-24
The spin-dependent cross sections for semi-inclusive lepton-nucleon scattering are derived in the framework of collinear factorization, including the effects of the masses of the target and the produced hadron at finite Q^2. At leading order the cross sections factorize into products of parton distribution and fragmentation functions evaluated in terms of new, mass-dependent scaling variables. Furthermore, the size of the hadron mass corrections is estimated at kinematics relevant for current and future experiments, and the implications for the extraction of parton distributions from semi-inclusive measurements are discussed.
Anharmonic effects in the quantum cluster equilibrium method
NASA Astrophysics Data System (ADS)
von Domaros, Michael; Perlt, Eva
2017-03-01
The well-established quantum cluster equilibrium (QCE) model provides a statistical thermodynamic framework for applying high-level ab initio calculations of finite cluster structures to macroscopic liquid phases via the partition function. So far, the harmonic approximation has been applied throughout the calculations. In this article, we apply an important correction to the evaluation of the one-particle partition function to account for anharmonicity. To this end, we implemented an analytical approximation to the Morse partition function and the derivatives of its logarithm with respect to temperature, which are required for the evaluation of thermodynamic quantities. This anharmonic QCE approach has been applied to liquid hydrogen chloride, and the cluster distributions, molar volume, volumetric thermal expansion coefficient, and isobaric heat capacity have been calculated. An improved description of all properties is observed when anharmonic effects are considered.
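For orientation, the anharmonic one-particle vibrational partition function can be evaluated by a direct sum over Morse bound states, the quantity the paper's analytical approximation replaces; HCl-like spectroscopic constants are assumed below.

```python
# Morse vibrational partition function by direct summation over bound states.
import numpy as np

kB = 1.380649e-23       # J/K
h = 6.62607015e-34      # J s
c = 2.99792458e10       # cm/s

def morse_partition(T, we=2990.9, wexe=52.8):
    """q_vib for a Morse oscillator with spectroscopic constants in cm^-1
    (HCl-like values), with energies measured from the n = 0 level."""
    n_max = int(we / (2 * wexe) - 0.5)          # highest bound vibrational level
    n = np.arange(n_max + 1)
    e_cm = we * (n + 0.5) - wexe * (n + 0.5) ** 2
    e_j = h * c * (e_cm - e_cm[0])              # zero at the ground state
    return np.sum(np.exp(-e_j / (kB * T)))

print(morse_partition(300.0))   # ~1.0: nearly all molecules in n = 0 at 300 K
```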
Simulation of probabilistic wind loads and building analysis
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Chamis, Christos C.
1991-01-01
Probabilistic wind loads likely to occur on a structure during its design life are predicted. Described here is a suitable multifactor interactive equation (MFIE) model and its use in the Composite Load Spectra (CLS) computer program to simulate the wind-pressure cumulative distribution functions on the four sides of a building. The simulated probabilistic wind-pressure load was applied to a building frame, and cumulative distribution functions of sway displacements and reliability against overturning were obtained using NESSUS (Numerical Evaluation of Stochastic Structure Under Stress), a stochastic finite element computer code. The geometry of the building and the properties of the building members were also treated as random in the NESSUS analysis. The uncertainties of wind pressure, building geometry, and member section properties were quantified in terms of their respective sensitivities on the structural response.
Research on illumination uniformity of high-power LED array light source
NASA Astrophysics Data System (ADS)
Yu, Xiaolong; Wei, Xueye; Zhang, Ou; Zhang, Xinwei
2018-06-01
Uniform illumination is one of the most important problems that must be solved in the application of high-power LED arrays. A numerical optimization algorithm is applied to obtain the best LED array layout so that the light intensity on the target surface is evenly distributed. An evaluation function is set up from the standard deviation of the illuminance distribution, and the particle swarm optimization algorithm is then utilized to optimize different arrays. Furthermore, the light intensity distribution is obtained by the optical ray-tracing method. Finally, a hybrid array is designed, and the optical ray-tracing method is applied to simulate the array. The simulation results, which are consistent with the traditional theoretical calculations, show that the algorithm introduced in this paper is reasonable and effective.
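The evaluation function can be sketched as follows: illuminance on the target plane is computed from a generalized Lambertian model E = I0 cos^m(θ)/r², and the array is scored by the relative standard deviation, which the particle swarm optimizer would minimize. The geometry and the exponent m are illustrative assumptions.

```python
# Uniformity evaluation function for an LED array (generalized Lambertian model).
import numpy as np

def illuminance(led_xy, x, y, z=100.0, m=1.0):
    """Total illuminance on the target plane a distance z below the array;
    m = 1 is an ideal Lambertian LED, larger m a narrower beam."""
    e = np.zeros_like(x)
    for lx, ly in led_xy:
        r2 = (x - lx) ** 2 + (y - ly) ** 2 + z ** 2
        cos_t = z / np.sqrt(r2)
        e += cos_t ** m / r2            # I0 = 1 without loss of generality
    return e

def evaluation(led_xy, half=60.0, n=41):
    """Relative standard deviation of illuminance over the target region."""
    g = np.linspace(-half, half, n)
    x, y = np.meshgrid(g, g)
    e = illuminance(led_xy, x, y)
    return e.std() / e.mean()           # smaller is more uniform

square = [(dx, dy) for dx in (-25.0, 25.0) for dy in (-25.0, 25.0)]
print("uniformity score:", evaluation(square))
```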
An Analytical Evaluation of Two Common-Odds Ratios as Population Indicators of DIF.
ERIC Educational Resources Information Center
Pommerich, Mary; And Others
The Mantel-Haenszel (MH) statistic for identifying differential item functioning (DIF) commonly conditions on the observed test score as a surrogate for conditioning on latent ability. When the comparison group distributions are not completely overlapping (i.e., are incongruent), the observed score represents different levels of latent ability…
Samuel A. Cushman; Erin L. Landguth
2012-01-01
Population connectivity is a function of the dispersal ability of the species, influences of different landscape elements on its movement behavior, density and distribution of the population, and structure of the landscape. Often, researchers have not carefully considered each of these factors when evaluating connectivity and making conservation recommendations. We...
Accelerated life testing and reliability of high K multilayer ceramic capacitors
NASA Technical Reports Server (NTRS)
Minford, W. J.
1981-01-01
The reliability of one lot of high K multilayer ceramic capacitors was evaluated using accelerated life testing. The degradation in insulation resistance was characterized as a function of voltage and temperature. The times to failure at a given voltage-temperature stress conformed to a lognormal distribution with a standard deviation of approximately 0.5.
ERIC Educational Resources Information Center
Moses, Tim; Holland, Paul W.
2010-01-01
In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…
Evaluation of cluster expansions and correlated one-body properties of nuclei
NASA Astrophysics Data System (ADS)
Moustakidis, Ch. C.; Massen, S. E.; Panos, C. P.; Grypeos, M. E.; Antonov, A. N.
2001-07-01
Three different cluster expansions for the evaluation of correlated one-body properties of s-p and s-d shell nuclei are compared. Harmonic oscillator wave functions and Jastrow-type correlations are used, while analytical expressions are obtained for the charge form factor, density distribution, and momentum distribution by truncating the expansions and using a standard Jastrow correlation function f. The harmonic oscillator parameter b and the correlation parameter β have been determined by a least-squares fit to the experimental charge form factors in each case. The information entropy of nuclei in position space (Sr) and momentum space (Sk) according to the three methods is also calculated. It is found that the larger the entropy sum S = Sr + Sk (the net information content of the system), the smaller the value of χ2. This indicates that maximal S is a criterion of the quality of a given nuclear model, according to the maximum entropy principle. Only two exceptions to this rule, out of the many cases examined, were found. Finally, an analytic expression for the so-called "healing" or "wound" integrals is derived with the function f considered, for any state of the relative two-nucleon motion, and their values in certain cases are computed and compared.
Olejarczyk, Elzbieta; Bogucki, Piotr; Sobieszek, Aleksander
2017-01-01
Electroencephalographic (EEG) patterns were analyzed in a group of ambulatory patients of varying age and sex using spectral analysis as well as the Directed Transfer Function, a method used to evaluate functional brain connectivity. We tested the impact of window size and choice of reference electrode on the identification of two or more peaks with close frequencies in the spectral power distribution, the so-called "split alpha." Together with the connectivity analysis, examination of spatiotemporal maps showing the distribution of amplitudes of EEG patterns allowed a better explanation of the mechanisms underlying the generation of split alpha peaks. It was demonstrated that the split alpha spectrum can be generated by two or more independent and interconnected alpha wave generators located in different regions of the cerebral cortex, but not necessarily in the occipital cortex. We also demonstrated the importance of an appropriate reference electrode choice during signal recording. In addition, results obtained using the original data were compared with results obtained using re-referenced data, using the average reference electrode and reference electrode standardization techniques.
NASA Astrophysics Data System (ADS)
Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin
2017-06-01
Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs the generation of pseudo-random numbers for the input variables, the resistances at the defining fixed points, assuming a multivariate Gaussian distribution for the input quantities. This allows the correlations among the resistances at the defining fixed points to be taken into account. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty of the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
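The propagation-of-distributions step can be sketched as below: correlated fixed-point resistances are drawn from a multivariate Gaussian, and each draw is pushed through the temperature evaluation. The covariance values are placeholders, and a crude linear surrogate stands in for the real ITS-90 reference and deviation functions.

```python
# Monte Carlo propagation of correlated fixed-point resistance uncertainties.
import numpy as np

rng = np.random.default_rng(6)

r_fp = np.array([25.000, 27.600, 35.100])        # ohm, stand-in fixed-point Rs
cov = np.array([[4e-8, 1e-8, 1e-8],
                [1e-8, 4e-8, 1e-8],
                [1e-8, 1e-8, 4e-8]])             # placeholder covariance matrix

draws = rng.multivariate_normal(r_fp, cov, size=100_000)

# Placeholder mapping: in the real procedure each draw feeds the ITS-90
# reference and deviation functions; a linear surrogate is used here.
w = 30.000 / draws[:, 0]                          # resistance ratio W = R(T)/R(TPW)
t90 = (w - 1.0) / 0.004                           # degrees C, illustrative only
print(t90.mean(), t90.std())                      # MC estimate and std. uncertainty
```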
Li, Xin; Li, Ye
2015-01-01
Regular respiratory signals (RRSs) acquired with physiological sensing systems (e.g., the life-detection radar system) can be used to locate survivors trapped in debris during disaster rescue, or to predict breathing motion to allow beam delivery under free-breathing conditions in external-beam radiotherapy. Among the existing analytical models for RRSs, the harmonic-based random model (HRM) is shown to be the most accurate; however, it is subject to considerable error if the RRS has a slowly descending end-of-exhale (EOE) phase. This defect of the HRM motivates us to construct a more accurate analytical model for the RRS. In this paper, we derive a new analytical RRS model from the probability density function of the Rayleigh distribution. We evaluate the derived RRS model by using it to fit a real-life RRS in the least-squares sense, and the evaluation shows that the presented model exhibits lower error and fits the slowly descending EOE phases of the real-life RRS better than the HRM.
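A minimal least-squares fit of a Rayleigh-PDF-shaped template to one breathing cycle, in the spirit of the model described above; the amplitude, width, and onset parameters are assumed names, and the data are synthetic.

```python
# Least-squares fit of a Rayleigh-PDF-shaped template to one breathing cycle.
import numpy as np
from scipy.optimize import curve_fit

def rayleigh_cycle(t, a, sigma, t0):
    """Rayleigh-PDF shape: fast rise, slowly descending tail (EOE-like)."""
    tt = np.clip(t - t0, 0.0, None)
    return a * (tt / sigma**2) * np.exp(-tt**2 / (2 * sigma**2))

t = np.linspace(0, 4, 200)                        # one ~4 s cycle
true = rayleigh_cycle(t, 1.0, 0.9, 0.2)
noisy = true + 0.02 * np.random.default_rng(7).standard_normal(t.size)

popt, _ = curve_fit(rayleigh_cycle, t, noisy, p0=(1.0, 1.0, 0.0))
print("fitted (a, sigma, t0):", popt)
```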
NASA Astrophysics Data System (ADS)
Park, J.; Lim, Y. J.; Sung, J. H.; Kang, H. S.
2017-12-01
The widely used meteorological drought index, the Standardized Precipitation Index (SPI), assumes stationarity, but recent changes in climate have led to a need to review this hypothesis. In this study, a new non-stationary SPI that considers not only time-varying probability distribution parameters but also the return period under a non-stationary process is proposed. The results are evaluated for two severe drought cases during the last 10 years in South Korea. SPIs computed under the non-stationary hypothesis indicated lower drought severity than the stationary SPI, even though these two droughts were recognized as significantly severe. This may be because the variances of summer and autumn precipitation grow over time, which can make the probability distribution function wider than before. This understanding implies that drought characterization by a statistical index such as the SPI can be distorted by the stationarity assumption, and a cautious approach is needed when assessing drought levels under climate change.
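For context, the standard (stationary) SPI computation that the non-stationary variant generalizes: fit a gamma distribution to the precipitation accumulation series, then map its CDF through the standard normal quantile function. In the non-stationary version the gamma parameters would themselves become functions of time.

```python
# Stationary SPI: gamma fit followed by a normal quantile transform.
import numpy as np
from scipy import stats

def spi(precip):
    """Stationary SPI for a 1-D accumulation series (zeros handled crudely)."""
    shape, loc, scale = stats.gamma.fit(precip, floc=0)
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

precip = np.random.default_rng(8).gamma(2.0, 40.0, 360)  # stand-in monthly series
print(spi(precip)[:6])   # ~N(0,1); values below about -1.5 flag severe drought
```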
Adaptive Detector Arrays for Optical Communications Receivers
NASA Technical Reports Server (NTRS)
Vilnrotter, V.; Srinivasan, M.
2000-01-01
The structure of an optimal adaptive array receiver for ground-based optical communications is described and its performance investigated. Kolmogorov phase-screen simulations are used to model the sample functions of the focal-plane signal distribution due to turbulence and to generate realistic spatial distributions of the received optical field. This novel array detector concept reduces interference from background radiation by effectively assigning higher confidence levels at each instant of time to those detector elements that contain significant signal energy and suppressing those that do not. A simpler suboptimum structure that replaces the continuous weighting function of the optimal receiver with a hard decision on the selection of signal detector elements is also described and evaluated. Approximations and bounds to the error probability are derived and compared with the exact calculations and receiver simulation results. It is shown that, for photon-counting receivers observing Poisson-distributed signals, performance improvements of approximately 5 dB can be obtained over conventional single-detector photon-counting receivers when operating in high-background environments.
Reframing the Dissemination Challenge: A Marketing and Distribution Perspective
Bernhardt, Jay M.
2009-01-01
A fundamental obstacle to successful dissemination and implementation of evidence-based public health programs is the near-total absence of systems and infrastructure for marketing and distribution. We describe the functions of a marketing and distribution system, and we explain how it would help move effective public health programs from research to practice. Then we critically evaluate the 4 dominant strategies now used to promote dissemination and implementation, and we explain how each would be enhanced by marketing and distribution systems. Finally, we make 6 recommendations for building the needed system infrastructure and discuss the responsibility within the public health community for implementation of these recommendations. Without serious investment in such infrastructure, application of proven solutions in public health practice will continue to occur slowly and rarely. PMID:19833993
Reframing the dissemination challenge: a marketing and distribution perspective.
Kreuter, Matthew W; Bernhardt, Jay M
2009-12-01
A fundamental obstacle to successful dissemination and implementation of evidence-based public health programs is the near-total absence of systems and infrastructure for marketing and distribution. We describe the functions of a marketing and distribution system, and we explain how it would help move effective public health programs from research to practice. Then we critically evaluate the 4 dominant strategies now used to promote dissemination and implementation, and we explain how each would be enhanced by marketing and distribution systems. Finally, we make 6 recommendations for building the needed system infrastructure and discuss the responsibility within the public health community for implementation of these recommendations. Without serious investment in such infrastructure, application of proven solutions in public health practice will continue to occur slowly and rarely.
Application-oriented architecture for multimedia teleservices
NASA Astrophysics Data System (ADS)
Vanrijssen, Erwin; Widya, Ing; Michiels, Eddie
This paper looks into the communications capabilities required by distributed multimedia applications to achieve relation-preserving information exchange. These capabilities are derived by analyzing the notion of 'information exchange' and are embodied in communications functionalities. To emphasize the importance of the users' view, a top-down approach is applied. The revised Open Systems Interconnection (OSI) Application Layer Structure (OSI-ALS) is used to model the communications functionalities and to develop an architecture for the composition of multimedia teleservices with these functionalities. This work may therefore be considered an exercise to evaluate the suitability of OSI-ALS for the composition of multimedia teleservices.
NASA Astrophysics Data System (ADS)
Wysocki, J. K.
1984-02-01
The idea of Young and Clark of independent evaluation of the work function φ and electric field strength F in FEM [R.D. Young and H.E. Clark, Phys. Rev. Letters 17 (1966) 351] has been extended to the energy region above the Fermi level. The estimation of slowly varying elliptic functions, necessary to compute φ and F, using only experimental data is presented. Calculations for the W(111) plane using the field electron energy distribution and the integral field-emission current dependence on retarding voltage have been performed.
The Global Emergency Observation and Warning System
NASA Technical Reports Server (NTRS)
Bukley, Angelia P.; Mulqueen, John A.
1994-01-01
Based on an extensive characterization of natural hazards, and an evaluation of their impacts on humanity, a set of functional technical requirements for a global warning and relief system was developed. Since no technological breakthroughs are required to implement a global system capable of performing the functions required to provide sufficient information for prevention, preparedness, warning, and relief from natural disaster effects, a system is proposed which would combine the elements of remote sensing, data processing, information distribution, and communications support on a global scale for disaster mitigation.
Study of residual stresses in CT test specimens welded by electron beam
NASA Astrophysics Data System (ADS)
Papushkin, I. V.; Kaisheva, D.; Bokuchava, G. D.; Angelov, V.; Petrov, P.
2018-03-01
The paper reports results of residual stress distribution studies in CT specimens reconstituted by electron beam welding (EBW). The main aim of the study is to evaluate the applicability of the welding technique for CT specimen reconstitution. To this end, the temperature distribution during electron beam welding of a CT specimen was calculated using Green's functions, and the residual stress distribution was determined experimentally using neutron diffraction. Time-of-flight neutron diffraction experiments were performed on a Fourier stress diffractometer at the IBR-2 fast pulsed reactor at FLNP JINR (Dubna, Russia). Estimates from the neutron diffraction data yielded a maximal stress level of ±180 MPa in the welded joint.
Katzman, G L; Morris, D; Lauman, J; Cochella, C; Goede, P; Harnsberger, H R
2001-06-01
To foster a community-supported evaluation process for open-source digital teaching file (DTF) development and maintenance. The mechanisms used to support this process include standard web browsers, web servers, forum software, and custom additions to the forum software to potentially enable a mediated voting protocol. The web server will also serve as a focal point for beta and release software distribution, which is the desired end goal of this process. We foresee that www.mdtf.org will provide for widespread distribution of open-source DTF software that incorporates function and interface design decisions from community participation on the website forums.
Delgado-Baquerizo, Manuel; Fry, Ellen L; Eldridge, David J; de Vries, Franciska T; Manning, Peter; Hamonts, Kelly; Kattge, Jens; Boenisch, Gerhard; Singh, Brajesh K; Bardgett, Richard D
2018-04-19
We lack strong empirical evidence for links between plant attributes (plant community attributes and functional traits) and the distribution of soil microbial communities at large spatial scales. Using datasets from two contrasting regions and ecosystem types in Australia and England, we report that aboveground plant community attributes, such as diversity (species richness) and cover, and functional traits can predict a unique portion of the variation in the diversity (number of phylotypes) and community composition of soil bacteria and fungi that cannot be explained by soil abiotic properties and climate. We further identify the relative importance and evaluate the potential direct and indirect effects of climate, soil properties and plant attributes in regulating the diversity and community composition of soil microbial communities. Finally, we deliver a list of examples of common taxa from Australia and England that are strongly related to specific plant traits, such as specific leaf area index, leaf nitrogen and nitrogen fixation. Together, our work provides new evidence that plant attributes, especially plant functional traits, can predict the distribution of soil microbial communities at the regional scale and across two hemispheres.
Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, Michal; Standara, Michal; Starčuk, Zenon; Taxt, Torfinn
2016-03-01
One of the main challenges in quantitative dynamic contrast-enhanced (DCE) MRI is estimation of the arterial input function (AIF). Usually, the signal from a single artery (ignoring contrast dispersion, partial volume effects and flow artifacts) or a population average of such signals (also ignoring variability between patients) is used. Multi-channel blind deconvolution is an alternative approach avoiding most of these problems. The AIF is estimated directly from the measured tracer concentration curves in several tissues. This contribution extends the published methods of multi-channel blind deconvolution by applying a more realistic model of the impulse residue function, the distributed capillary adiabatic tissue homogeneity model (DCATH). In addition, an alternative AIF model is used and several AIF-scaling methods are tested. The proposed method is evaluated on synthetic data with respect to the number of tissue regions and to the signal-to-noise ratio. Evaluation on clinical data (renal cell carcinoma patients before and after the beginning of the treatment) gave consistent results. An initial evaluation on clinical data indicates more reliable and less noise sensitive perfusion parameter estimates. Blind multi-channel deconvolution using the DCATH model might be a method of choice for AIF estimation in a clinical setup. © 2015 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohammed, Nazmi A.; Ali, Taha A., E-mail: Taha25@gmail.com; Aly, Moustafa H.
2013-12-15
In this work, different FBG temperature sensors are designed and evaluated with various apodization profiles. Evaluation is done under a wide range of controlling design parameters like sensor length and refractive index modulation amplitude, targeting a remarkable temperature sensing performance. New judgment techniques are introduced, such as apodization window roll-off rate, asymptotic sidelobe (SL) decay level, number of SLs, and average SL level (SLav). Evaluation techniques like reflectivity, full width at half maximum (FWHM), and sidelobe suppression ratio (SLSR) are also used. A “new” apodization function is proposed, which achieves better performance: asymptotic decay of 18.4 dB/nm, high SLSR of 60 dB, high channel isolation of 57.9 dB, and narrow FWHM of less than 0.15 nm. For a single accurate temperature sensor measurement in a highly noisy environment, optimum results are obtained by the Nuttall apodization profile and the new apodization function, which have remarkable SLSR. For a quasi-distributed FBG temperature sensor, the Barthann and the new apodization profiles yield optimum results. Barthann achieves a high asymptotic decay of 40 dB/nm, a narrow FWHM (less than 25 GHz), a very low SLav of −45.3 dB, high isolation of 44.6 dB, and a high SLSR of 35 dB. The new apodization function achieves a narrow FWHM of 0.177 nm, a very low SL of −60.1 dB, a very low SLav of −63.6 dB, and a very high SLSR of 57.7 dB. A study is performed on including an unapodized sensor among apodized sensors in a quasi-distributed sensing system. Finally, an isolation examination is performed on all the discussed apodizations, and a linear relation between temperature and the Bragg wavelength shift is observed experimentally and matched with the simulated results.
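As a rough way to compare such apodization profiles numerically: in the weak-grating (Fourier) limit the reflection spectrum shape follows the Fourier transform of the apodization window, so sidelobe behaviour can be estimated from window transforms alone. The sketch below uses SciPy's Nuttall and Barthann windows; it is an illustrative proxy, not the coupled-mode simulation the paper performs:

```python
import numpy as np
from scipy.signal import windows

def peak_sidelobe_db(win, oversample=64):
    """Peak sidelobe level [dB] of a window's Fourier transform
    (a weak-grating proxy for FBG reflection sidelobes)."""
    spec = np.abs(np.fft.rfft(win, oversample * len(win)))
    spec /= spec.max()
    i = 1
    while i < len(spec) - 1 and spec[i + 1] < spec[i]:
        i += 1                      # walk down the main lobe to its first null
    return 20.0 * np.log10(spec[i:].max())

n = 512
for name, win in [("uniform", np.ones(n)),
                  ("nuttall", windows.nuttall(n)),
                  ("barthann", windows.barthann(n))]:
    print(f"{name:8s} peak sidelobe ~ {peak_sidelobe_db(win):7.1f} dB")
```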
Using Bayesian networks to support decision-focused information retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehner, P.; Elsaesser, C.; Seligman, L.
This paper describes an approach to controlling the process of pulling data/information from distributed data bases in a way that is specific to a person's decision-making context. Our prototype implementation of this approach uses a knowledge-based planner to generate a plan; an automatically constructed Bayesian network to evaluate the plan; specialized processing of the network to derive key information items that would substantially impact the evaluation of the plan (e.g., determine that replanning is needed); and automated construction of Standing Requests for Information (SRIs), which are automated functions that monitor changes and trends in distributed data bases that are relevant to the key information items. The emphasis of this paper is on how Bayesian networks are used.
Evaluation of substitution monopole models for tire noise sound synthesis
NASA Astrophysics Data System (ADS)
Berckmans, D.; Kindt, P.; Sas, P.; Desmet, W.
2010-01-01
Due to the considerable efforts in engine noise reduction, tire noise has become one of the major sources of passenger car noise nowadays and the demand for accurate prediction models is high. A rolling tire is therefore experimentally characterized by means of the substitution monopole technique, suiting a general sound synthesis approach with a focus on perceived sound quality. The running tire is substituted by a monopole distribution covering the static tire. All monopoles have mutual phase relationships and a well-defined volume velocity distribution which is derived by means of the airborne source quantification technique; i.e. by combining static transfer function measurements with operating indicator pressure measurements close to the rolling tire. Models with varying numbers/locations of monopoles are discussed and the application of different regularization techniques is evaluated.
Bahreyni Toossi, Mohammad Taghi; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Meigooni, Ali Soleimani
2012-01-01
Background Dosimetric characteristics of a high dose rate (HDR) GZP6 Co-60 brachytherapy source have been evaluated following American Association of Physicists in Medicine Task Group 43U1 (AAPM TG-43U1) recommendations for their clinical applications. Materials and methods MCNP-4C and MCNPX Monte Carlo codes were utilized to calculate the dose rate constant, two-dimensional (2D) dose distribution, radial dose function and 2D anisotropy function of the source. These parameters are compared with the available data for the Ralstron 60Co and microSelectron 192Ir sources. Besides, a superimposition method was developed to extend the obtained results for the GZP6 source No. 3 to other GZP6 sources. Results The simulated value of the dose rate constant for the GZP6 source was 1.104±0.03 cGy·h⁻¹·U⁻¹. The graphical and tabulated radial dose function and 2D anisotropy function of this source are presented here. The results of these investigations show that the dosimetric parameters of the GZP6 source are comparable to those for the Ralstron source. While the dose rate constants for the two 60Co sources are similar to that for the microSelectron 192Ir source, there are differences between the radial dose functions and anisotropy functions. The radial dose function of the 192Ir source is less steep than those of both 60Co source models. In addition, the 60Co sources show a more isotropic dose distribution than the 192Ir source. Conclusions The superimposition method is applicable to produce dose distributions for other source arrangements from the dose distribution of a single source. The calculated dosimetric quantities of this new source can be introduced as input data to the GZP6 treatment planning system (TPS) and to validate the performance of the TPS. PMID:23077455
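For reference, the TG-43U1 quantities named here enter through the standard two-dimensional dose-rate equation of the AAPM TG-43U1 report:

```latex
\dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\qquad (r_0,\theta_0) = (1\,\mathrm{cm},\ 90^\circ)
```

where S_K is the air-kerma strength, Λ the dose rate constant, G_L the line-source geometry function, g_L(r) the radial dose function, and F(r,θ) the 2D anisotropy function.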
Distributions of observed death tolls govern sensitivity to human fatalities
Olivola, Christopher Y.; Sagara, Namika
2009-01-01
How we react to humanitarian crises, epidemics, and other tragic events involving the loss of human lives depends largely on the extent to which we are moved by the size of their associated death tolls. Many studies have demonstrated that people generally exhibit a diminishing sensitivity to the number of human fatalities and, equivalently, a preference for risky (vs. sure) alternatives in decisions under risk involving human losses. However, the reason for this tendency remains unknown. Here we show that the distributions of event-related death tolls that people observe govern their evaluations of, and risk preferences concerning, human fatalities. In particular, we show that our diminishing sensitivity to human fatalities follows from the fact that these death tolls are approximately power-law distributed. We further show that, by manipulating the distribution of mortality-related events that people observe, we can alter their risk preferences in decisions involving fatalities. Finally, we show that the tendency to be risk-seeking in mortality-related decisions is lower in countries in which high-mortality events are more frequently observed. Our results support a model of magnitude evaluation based on memory sampling and relative judgment. This model departs from the utility-based approaches typically encountered in psychology and economics in that it does not rely on stable, underlying value representations to explain valuation and choice, or on choice behavior to derive value functions. Instead, preferences concerning human fatalities emerge spontaneously from the distributions of sampled events and the relative nature of the evaluation process. PMID:20018778
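A minimal sketch of the mechanism described, with illustrative names and parameters: draw "observed" death tolls from a power law and judge a new event by its rank among remembered samples (relative judgment). Equal absolute increases then matter less at larger magnitudes:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_power_law(n, alpha=2.0, x_min=1.0):
    """Inverse-CDF sampling from p(x) ~ x**(-alpha) for x >= x_min."""
    u = rng.random(n)
    return x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

memory = sample_power_law(1000)   # death tolls a person has "observed"

def subjective_magnitude(x, memory):
    """Relative judgment: perceived size = fraction of remembered events below x."""
    return np.mean(memory < x)

# Diminishing sensitivity: 10 -> 100 deaths shifts the judgment far more than
# 1000 -> 1090 deaths, despite the same absolute change of 90 lives.
print(subjective_magnitude(100.0, memory) - subjective_magnitude(10.0, memory))
print(subjective_magnitude(1090.0, memory) - subjective_magnitude(1000.0, memory))
```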
Yiu, Sean; Tom, Brian Dm
2017-01-01
Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicate model fitting. Thus, only non-standard, computationally intensive procedures based on simulating the marginal likelihood have so far been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied to a psoriatic arthritis data set concerning functional disability.
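A sketch of the computational core described: once the marginal likelihood is rewritten so that the high dimensional integral becomes a multivariate normal CDF, standard routines can evaluate it directly. The dimension, thresholds, and exchangeable correlation below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative: an orthant probability under an exchangeable (compound symmetry)
# correlation, the kind of high dimensional integral left after transforming
# the two-part model's marginal likelihood.
d, rho = 10, 0.5
cov = rho * np.ones((d, d)) + (1.0 - rho) * np.eye(d)
upper = np.zeros(d)                      # integration limits (illustrative)

prob = multivariate_normal(mean=np.zeros(d), cov=cov).cdf(upper)
print(prob)   # evaluated internally by a quasi-Monte Carlo MVN routine
```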
Semi-classical approach to compute RABBITT traces in multi-dimensional complex field distributions.
Lucchini, M; Ludwig, A; Kasmi, L; Gallmann, L; Keller, U
2015-04-06
We present a semi-classical model to calculate RABBITT (Reconstruction of Attosecond Beating By Interference of Two-photon Transitions) traces in the presence of a reference infrared field with a complex two-dimensional (2D) spatial distribution. The evolution of the electron spectra as a function of the pump-probe delay is evaluated starting from the solution of the classical equation of motion and incorporating the quantum phase acquired by the electron during the interaction with the infrared field. The total response to an attosecond pulse train is then evaluated by a coherent sum of the contributions generated by each individual attosecond pulse in the train. The flexibility of this model makes it possible to calculate spectrograms from non-trivial 2D field distributions. After confirming the validity of the model in a simple 1D case, we extend the discussion to describe the probe-induced phase in photo-emission experiments on an ideal metallic surface.
Bayes Factor Covariance Testing in Item Response Models.
Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip
2017-12-01
Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
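A small numerical sketch of the key step: an orthogonal Helmert matrix diagonalises a compound symmetry covariance matrix, so the transformed latent responses are independent and the covariance components can be tested directly. Values are illustrative:

```python
import numpy as np
from scipy.linalg import helmert

n, sigma2, tau2 = 5, 1.0, 0.4
# Compound symmetry: sigma2 on the diagonal plus tau2 everywhere
cs = sigma2 * np.eye(n) + tau2 * np.ones((n, n))

H = helmert(n, full=True)   # orthogonal; first row is constant 1/sqrt(n)
print(np.round(H @ cs @ H.T, 10))
# -> diag(sigma2 + n*tau2, sigma2, ..., sigma2): the common covariance
#    component is isolated in a single transformed coordinate.
```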
Using the Quantile Mapping to improve a weather generator
NASA Astrophysics Data System (ADS)
Chen, Y.; Themessl, M.; Gobiet, A.
2012-04-01
We developed a weather generator (WG) using statistical and stochastic methods, among them quantile mapping (QM), Monte Carlo sampling, auto-regression, and empirical orthogonal functions (EOFs). One of the important steps in the WG is the use of QM, through which all the variables, whatever their original distributions, are transformed into normally distributed variables. The WG can therefore work on normally distributed variables, which greatly facilitates the treatment of random numbers in the WG. Monte Carlo sampling and auto-regression are used to generate the realization; EOFs are employed to preserve spatial relationships and the relationships between different meteorological variables. We have established a complete model named WGQM (weather generator and quantile mapping), which can be applied flexibly to generate daily or hourly time series. For example, with 30-year daily (hourly) data and 100-year monthly (daily) data as input, 100-year daily (hourly) data can be produced reasonably well. Some evaluation experiments with WGQM have been carried out in the area of Austria and the evaluation results will be presented.
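A minimal sketch of the QM step described: an empirical rank-based transform maps an arbitrarily distributed variable to a standard normal one (and back through reference quantiles), so the generator's random-number machinery can operate in Gaussian space. This is a generic illustration, not the WGQM code:

```python
import numpy as np
from scipy.stats import norm, rankdata

def to_normal(x):
    """Empirical quantile mapping of a sample onto a standard normal variable."""
    p = (rankdata(x) - 0.5) / len(x)     # plotting-position probabilities
    return norm.ppf(p)

def from_normal(z, x_ref):
    """Map normal values back through the empirical quantiles of x_ref."""
    return np.quantile(x_ref, norm.cdf(z))

rain = np.random.default_rng(1).gamma(0.7, 4.0, size=3000)  # skewed variable
z = to_normal(rain)              # ~ N(0, 1); safe for AR/Monte-Carlo steps
rain_back = from_normal(z, rain) # back-transformed realization
print(round(z.mean(), 3), round(z.std(), 3))
```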
Ventilation-perfusion distribution in normal subjects.
Beck, Kenneth C; Johnson, Bruce D; Olson, Thomas P; Wilson, Theodore A
2012-09-01
Functional values of LogSD of the ventilation distribution (σ(V)) have been reported previously, but functional values of LogSD of the perfusion distribution (σ(q)) and the coefficient of correlation between ventilation and perfusion (ρ) have not been measured in humans. Here, we report values for σ(V), σ(q), and ρ obtained from wash-in data for three gases, helium and two soluble gases, acetylene and dimethyl ether. Normal subjects inspired gas containing the test gases, and the concentrations of the gases at end-expiration during the first 10 breaths were measured with the subjects at rest and at increasing levels of exercise. The regional distribution of ventilation and perfusion was described by a bivariate log-normal distribution with parameters σ(V), σ(q), and ρ, and these parameters were evaluated by matching the values of expired gas concentrations calculated for this distribution to the measured values. Values of cardiac output and LogSD ventilation/perfusion (Va/Q) were obtained. At rest, σ(q) is high (1.08 ± 0.12). With the onset of ventilation, σ(q) decreases to 0.85 ± 0.09 but remains higher than σ(V) (0.43 ± 0.09) at all exercise levels. Rho increases to 0.87 ± 0.07, and the value of LogSD Va/Q for light and moderate exercise is primarily the result of the difference between the magnitudes of σ(q) and σ(V). With known values for the parameters, the bivariate distribution describes the comprehensive distribution of ventilation and perfusion that underlies the distribution of the Va/Q ratio.
NASA Astrophysics Data System (ADS)
Fang, H.; van der Zwaag, S.; van Dijk, N. H.
2018-07-01
The magnetic configuration of a ferromagnetic system with mono-disperse and poly-disperse distributions of magnetic particles with inter-particle interactions has been computed. The analysis is general in nature and applies to all systems containing magnetically interacting particles in a non-magnetic matrix, but has been applied here to steel microstructures, consisting of a paramagnetic austenite phase and a ferromagnetic ferrite phase, as formed during the austenite-to-ferrite phase transformation in low-alloyed steels. The characteristics of the computational microstructures are linked to the correlation function and the determinant of the depolarisation matrix, which can be obtained experimentally in three-dimensional neutron depolarisation (3DND). By tuning the parameters in the model used to generate the microstructure, we studied the effect of the (magnetic) particle size distribution on the 3DND parameters. It is found that the magnetic particle size derived from 3DND data matches the microstructural grain size over a wide range of volume fractions and grain size distributions. A relationship between the correlation function and the relative width of the particle size distribution is proposed to accurately account for the width of the size distribution. This evaluation shows that 3DND experiments can provide unique in situ information on the austenite-to-ferrite phase transformation in steels.
NASA Astrophysics Data System (ADS)
Székely, Balázs; Kania, Adam; Varga, Katalin; Heilmeier, Hermann
2017-04-01
Lacunarity, a measure of the spatial distribution of empty space, is found to be a useful descriptive quantity of forest structure. Its calculation, based on laser-scanned point clouds, results in a four-dimensional data set. The evaluation of results needs sophisticated tools and visualization techniques. To simplify the evaluation, it is straightforward to use approximation functions fitted to the results. The lacunarity function L(r), being a measure of scale-independent structural properties, has a power-law character. Previous studies showed that the log(log(L(r))) transformation is suitable for the analysis of spatial patterns. Accordingly, transformed lacunarity functions can be approximated by appropriate functions either in the original or in the transformed domain. As input data we have used a number of laser-scanned point clouds of various forests. The lacunarity distribution has been calculated along a regular horizontal grid at various (relative) elevations. The lacunarity data cube then has been logarithm-transformed, and the resulting values became the input of parameter estimation at each point (point of interest, POI). This way, at each POI a parameter set is generated that is suitable for spatial analysis. The expectation is that the horizontal variation and vertical layering of the vegetation can be characterized by this procedure. The results show that the transformed L(r) functions can typically be approximated by exponentials individually, and the residual values remain low in most cases. However, (1) in most cases the residuals may vary considerably, and (2) neighbouring POIs often give rather differing estimates both in horizontal and in vertical directions, of which the vertical variation seems more characteristic. In the vertical sense, the distribution of estimates shows abrupt changes at places, presumably related to the vertical structure of the forest. In low-relief areas horizontal similarity is more typical; in higher-relief areas horizontal similarity fades out over short distances. Some of the input data were acquired in the framework of the ChangeHabitats2 project financed by the European Union. BS contributed as an Alexander von Humboldt Research Fellow.
Liu, Zhao; Zheng, Chaorong; Wu, Yue
2018-02-01
Recently, the government installed a boundary layer profiler (BLP), operated in the Doppler beam swinging mode, in a coastal area of China to acquire useful wind field information in the atmospheric boundary layer for several purposes. The performance of the BLP under strong wind conditions is evaluated here. It is found that, even though the quality-controlled BLP data show good agreement with balloon observations, a systematic bias is always present in the BLP data. At low wind velocities, the BLP data tend to overestimate the atmospheric wind, whereas with increasing wind velocity they show a tendency toward underestimation. In order to remove the effect of poor-quality data on the bias correction, the probability distribution function of the differences between the two instruments is discussed, and the t location-scale distribution is found to be the most suitable probability model compared to other candidates. After outliers with a large discrepancy, lying outside the 95% confidence interval of the t location-scale distribution, are discarded, the systematic bias can be successfully corrected using a first-order polynomial correction function. The bias correction methodology used in this study can serve as a reference for the correction of other wind profiling radars and lays a solid basis for further analysis of the wind profiles.
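A sketch of the correction pipeline as described: fit a t location-scale distribution to the BLP-minus-balloon differences, discard points outside the fitted 95% interval, and fit a first-order polynomial correction. The data here are synthetic placeholders:

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(2)
balloon = rng.uniform(2.0, 30.0, 500)   # reference ("true") wind speeds [m/s]
blp = 1.5 + 0.9 * balloon + t.rvs(3, scale=0.8, size=500, random_state=rng)

# 1) fit a t location-scale model to the instrument differences
diff = blp - balloon
df, loc, scale = t.fit(diff)

# 2) discard outliers outside the 95% interval of the fitted distribution
lo, hi = t.ppf([0.025, 0.975], df, loc=loc, scale=scale)
keep = (diff > lo) & (diff < hi)

# 3) first-order polynomial bias correction: balloon ~ a * BLP + b
a, b = np.polyfit(blp[keep], balloon[keep], deg=1)
print(f"corrected wind = {a:.3f} * BLP + {b:.3f}")
```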
Force-field functor theory: classical force-fields which reproduce equilibrium quantum distributions
Babbush, Ryan; Parkhill, John; Aspuru-Guzik, Alán
2013-01-01
Feynman and Hibbs were the first to variationally determine an effective potential whose associated classical canonical ensemble approximates the exact quantum partition function. We examine the existence of a map between the local potential and an effective classical potential which matches the exact quantum equilibrium density and partition function. The usefulness of such a mapping rests in its ability to readily improve Born-Oppenheimer potentials for use with classical sampling. We show that such a map is unique and must exist. To explore the feasibility of using this result to improve classical molecular mechanics, we numerically produce a map from a library of randomly generated one-dimensional potential/effective potential pairs, and then evaluate its performance on independent test problems. We also apply the map to simulate liquid para-hydrogen, finding that the resulting radial pair distribution functions agree well with path integral Monte Carlo simulations. The surprising accessibility and transferability of the technique suggest a quantitative route to adapting Born-Oppenheimer potentials, with a motivation similar in spirit to the powerful ideas and approximations of density functional theory. PMID:24790954
Lagerlöf, Jakob H; Bernhardt, Peter
2016-01-01
To develop a general model that uses a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour, which are then used to evaluate two different methods for computing oxygen distribution. A vessel tree structure, and an associated tumour of 127 cm3, were generated using a stochastic method and Bresenham's line algorithm to develop trees on two different scales and fuse them together. The vessel dimensions were adjusted through convolution and thresholding, and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared to evaluate the methods. The oxygen distributions of the same tissue samples, computed using the different methods, were considerably less similar (root mean square deviation, RMSD≈0.02) than the distributions of different samples using the CTM (0.001<RMSD<0.01). The deviations of the ITM from the CTM increase at lower oxygen values, resulting in the ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it leads to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why evaluation should be performed at high resolution with the CTM applied to the entire tumour.
Wang, You-qi; Bai, Yi-ru; Wang, Jian-yu
2016-02-15
Surface soil samples (0-20 cm) from eight different functional areas in Yinchuan city were collected, with 10 samples from each functional area. The pollution characteristics and sources of urban soil heavy metals (Zn, Cd, Pb, Mn, Cu and Cr) in the eight functional areas were evaluated by mathematical statistics and geostatistical analysis methods. Meanwhile, the spatial distributions of heavy metals were plotted based on a geographic information system (GIS). The average values of total Zn, Cd, Pb, Mn, Cu and Cr were 74.87, 0.15, 29.02, 553.55, 40.37 and 80.79 mg·kg(-1), respectively. The results showed that the average values of soil heavy metals were higher than the soil background values of Ningxia, which indicated accumulation of the heavy metals in urban soil. The single factor pollution index of soil heavy metals followed the sequence Cu > Pb > Zn > Cr > Cd > Mn. The average values of total Zn, Cd, Pb and Cr were higher in the northeast, southwest and central city, while the average values of Mn and Cu were higher in the northeast and central city. There was moderate pollution in the road and industrial areas of Yinchuan, while the other functional areas showed slight pollution according to the Nemerow synthesis index. The pollution degree of the different functional areas was as follows: road > industrial area > business district > medical treatment area > residential area > public park > development zone > science and education area. The results indicated that the soil heavy metal pollution in Yinchuan City has been affected by human activities accompanying economic development.
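For reference, the single factor index and the Nemerow synthesis index used in such evaluations are commonly computed as below; the background values in the example are placeholders, not the Ningxia values used in the study:

```python
import numpy as np

def single_factor_index(c, s):
    """P_i = C_i / S_i: measured concentration over soil background value."""
    return c / s

def nemerow_index(p):
    """Nemerow synthesis index: combines the mean and the worst single index."""
    p = np.asarray(p, dtype=float)
    return np.sqrt((p.mean() ** 2 + p.max() ** 2) / 2.0)

measured   = {"Zn": 74.87, "Cd": 0.15, "Pb": 29.02,
              "Mn": 553.55, "Cu": 40.37, "Cr": 80.79}   # means from the study
background = {"Zn": 58.8, "Cd": 0.11, "Pb": 20.6,
              "Mn": 700.0, "Cu": 22.1, "Cr": 60.0}      # placeholder values

p = [single_factor_index(measured[k], background[k]) for k in measured]
print(round(nemerow_index(p), 2))
```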
Nilsson, Henrik; Blomqvist, Lennart; Douglas, Lena; Nordell, Anders; Jacobsson, Hans; Hagen, Karin; Bergquist, Annika; Jonas, Eduard
2014-04-01
To evaluate dynamic hepatocyte-specific contrast-enhanced MRI (DHCE-MRI) for the assessment of global and segmental liver volume and function in patients with primary sclerosing cholangitis (PSC), and to explore the heterogeneous distribution of liver function in this patient group. Twelve patients with PSC and 20 healthy volunteers were examined using DHCE-MRI with Gd-EOB-DTPA. Segmental and total liver volume were calculated, and functional parameters (hepatic extraction fraction [HEF], input relative blood-flow [irBF], and mean transit time [MTT]) were calculated in each liver voxel using deconvolutional analysis. In each study subject, an incongruence score (IS) was constructed to describe the mismatch between segmental function and volume. Among patients, the liver function parameters were correlated with bile duct obstruction and with established scoring models for liver disease. Liver function was significantly more heterogeneously distributed in the patient group (IS 1.0 versus 0.4). There were significant correlations between biliary obstruction and segmental functional parameters (HEF rho -0.24; irBF rho -0.45), and the Mayo risk score correlated significantly with the total liver extraction capacity of Gd-EOB-DTPA (rho -0.85). The study demonstrates a new method to quantify total and segmental liver function using DHCE-MRI in patients with PSC. Copyright © 2013 Wiley Periodicals, Inc.
Arenas-Castro, Salvador; Gonçalves, João; Alves, Paulo; Alcaraz-Segura, Domingo; Honrado, João P
2018-01-01
Global environmental changes are rapidly affecting species' distributions and habitat suitability worldwide, requiring a continuous update of biodiversity status to support effective decisions on conservation policy and management. In this regard, satellite-derived Ecosystem Functional Attributes (EFAs) offer a more integrative and quicker evaluation of ecosystem responses to environmental drivers and changes than climate and structural or compositional landscape attributes. Thus, EFAs may hold advantages as predictors in Species Distribution Models (SDMs) and for implementing multi-scale species monitoring programs. Here we describe a modelling framework to assess the predictive ability of EFAs as Essential Biodiversity Variables (EBVs) against traditional datasets (climate, land-cover) at several scales. We test the framework with a multi-scale assessment of habitat suitability for two plant species of conservation concern, both protected under the EU Habitats Directive, differing in terms of life history, range and distribution pattern (Iris boissieri and Taxus baccata). We fitted four sets of SDMs for the two test species, calibrated with: interpolated climate variables; landscape variables; EFAs; and a combination of climate and landscape variables. EFA-based models performed very well at the several scales (AUCmedian from 0.881±0.072 to 0.983±0.125), and similarly to traditional climate-based models, individually or in combination with land-cover predictors (AUCmedian from 0.882±0.059 to 0.995±0.083). Moreover, EFA-based models identified additional suitable areas and provided valuable information on functional features of habitat suitability for both test species (narrowly vs. widely distributed), for both coarse and fine scales. Our results suggest a relatively small scale-dependence of the predictive ability of satellite-derived EFAs, supporting their use as meaningful EBVs in SDMs from regional and broader scales to more local and finer scales. Since the evaluation of species' conservation status and habitat quality should as far as possible be performed based on scalable indicators linking to meaningful processes, our framework may guide conservation managers in decision-making related to biodiversity monitoring and reporting schemes.
NASA Astrophysics Data System (ADS)
Yeo, I. Y.; Lang, M.; Lee, S.; Huang, C.; Jin, H.; McCarty, G.; Sadeghi, A.
2017-12-01
The wetland ecosystem plays crucial roles in improving hydrological function and ecological integrity for downstream waters and the surrounding landscape. However, the changing behaviour and functioning of wetland ecosystems are poorly understood and extremely difficult to characterize. Improved understanding of the hydrological behaviour of wetlands, considering their interaction with surrounding landscapes and impacts on downstream waters, is an essential first step toward closing this knowledge gap. We present an integrated wetland-catchment modelling study that capitalizes on recently developed inundation maps and other geospatial data. The aim of the data-model integration is to improve spatial prediction of wetland inundation and evaluate cumulative hydrological benefits at the catchment scale. In this paper, we highlight problems arising from data preparation, parameterization, and process representation in simulating wetlands within a distributed catchment model, and report recent progress on mapping of wetland dynamics (i.e., inundation) using multiple remotely sensed data sets. We demonstrate the value of spatially explicit inundation information for developing site-specific wetland parameters and for evaluating model predictions at multiple spatial and temporal scales. This spatially integrated data-model framework is tested using the Soil and Water Assessment Tool (SWAT) with an improved wetland extension, and applied to an agricultural watershed in the Mid-Atlantic Coastal Plain, USA. This study illustrates the necessity of spatially distributed information and a data-integrated modelling approach to predict wetland inundation and hydrologic function at the local landscape scale, where monitoring and conservation decision making take place.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
Utility of TICS-M for the assessment of cognitive function in older adults.
de Jager, Celeste A; Budge, Marc M; Clarke, Robert
2003-04-01
Routine screening of high-risk elderly people for early cognitive impairment is constrained by the limitations of currently available cognitive function tests. The Telephone Interview of Cognitive Status is a novel instrument for assessment of cognitive function that can be administered in person or by telephone. To evaluate the determinants and utility of the TICS-M (13-item modified version) for assessment of cognitive function in healthy elderly people, the utility of the TICS-M was compared with the more widely used MMSE and CAMCOG in a cross-sectional survey of 120 older (62 to 89 years) UK adults. The TICS-M cognitive test scores (27.97, SD 4.15) were normally distributed, in contrast with those for the MMSE and CAMCOG, which had negatively skewed distributions. TICS-M scores were inversely correlated with age (r = -0.21) and with the NART full-scale IQ (r = -0.35), but were independent of years of education in this cohort. The TICS-M was highly correlated with MMSE (r = 0.57) and with CAMCOG (r = 0.62) scores. The time required to complete the test is comparable to the MMSE and substantially less than the CAMCOG. The normal distribution of TICS-M test scores suggests that this test is less constrained by the ceiling effect that limits the utility of MMSE and CAMCOG test scores in detecting early cognitive impairment. The TICS-M is an appropriate instrument to assess cognitive function both in research and in clinical practice. Copyright 2003 John Wiley & Sons, Ltd.
Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network
Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.
2015-01-01
Wireless sensor networks monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing methods for wireless sensor networks (WSNs) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To address the routing issue and reduce the energy drain rate, a Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes node energy distribution initially groups the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions by applying Bayes' rule. The detection of similar events is based on the Bayes probabilities and is sent to the sink node, minimizing energy consumption. Next, a polynomial regression function is applied to combine the target objects of similar events observed by different sensors, based on the minimum and maximum values of the object events, and the result is transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. Energy-efficient routing paths for the sensor nodes are created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead. PMID:26426701
Boundary-Layer Receptivity and Integrated Transition Prediction
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Choudhari, Meelan
2005-01-01
The adjoint parabolized stability equations (PSE) formulation is used to calculate boundary layer receptivity to localized surface roughness and suction for compressible boundary layers. Receptivity efficiency functions predicted by the adjoint PSE approach agree well with results based on other nonparallel methods, including linearized Navier-Stokes equations, for both Tollmien-Schlichting waves and crossflow instability in swept wing boundary layers. The receptivity efficiency function can be regarded as the Green's function for the disturbance amplitude evolution in a nonparallel (growing) boundary layer. Given the Fourier-transformed geometry factor distribution along the chordwise direction, the linear disturbance amplitude evolution for a finite-size, distributed nonuniformity can be computed by evaluating the integral effects of both disturbance generation and linear amplification. The synergistic approach via the linear adjoint PSE for receptivity and nonlinear PSE for disturbance evolution downstream of the leading edge forms the basis for an integrated transition prediction tool. Eventually, such physics-based, high fidelity prediction methods could simulate the transition process from disturbance generation through nonlinear breakdown in a holistic manner.
Evaluation of the image quality of telescopes using the star test
NASA Astrophysics Data System (ADS)
Vazquez y Monteil, Sergio; Salazar Romero, Marcos A.; Gale, David M.
2004-10-01
The point spread function (PSF) or star test is one of the main criteria for assessing the quality of the image formed by a telescope. In a real system, the distribution of irradiance in the image of a point source is given by the PSF, a function which is highly sensitive to aberrations. The PSF of a telescope may be determined by measuring the intensity distribution in the image of a star. Alternatively, if we already know the aberrations present in the optical system, then we may use diffraction theory to calculate the function. In this paper we propose a method for determining the wavefront aberrations from the PSF, using genetic algorithms to perform an optimization process starting from the PSF instead of the more traditional method of adjusting an aberration polynomial. We show that this method of phase recovery is immune to noise-induced errors arising during image acquisition and registration. Some practical results are shown.
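The forward half of the method, computing a PSF from an assumed wavefront aberration by diffraction theory, can be sketched with an FFT of the pupil function; the paper's genetic algorithm then searches for the phase whose computed PSF matches the measured star image. The defocus aberration below is illustrative:

```python
import numpy as np

def psf_from_aberration(phase, n=256, aperture_radius=0.3):
    """|FFT of the pupil function|^2: diffraction PSF of a circular aperture."""
    y, x = np.mgrid[-0.5:0.5:n * 1j, -0.5:0.5:n * 1j]
    r2 = x**2 + y**2
    pupil = (r2 <= aperture_radius**2) * np.exp(1j * phase(x, y, r2))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    return psf / psf.sum()

# Illustrative aberration: pure defocus, W(rho) = A * rho^2 [radians]
defocus = lambda x, y, r2: 8.0 * r2
psf = psf_from_aberration(defocus)
print(psf.shape, psf.max())
# A genetic algorithm would tune the aberration coefficients until this
# computed PSF matches the measured star image.
```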
Evaluation of several methods of applying sewage effluent to forested soils in the winter.
Alfred Ray Harris
1978-01-01
Surface application methods result in heat loss, deep soil frost, and surface ice accumulations; subsurface methods decrease heat loss and produce shallower frost. Distribution of effluent within the frozen soil is a function of surface application methods, piping due to macropores and biopores, and water movement due to temperature gradients. Nitrate is not...
From diagnostics to metagenomics: Applications of DNA-based tools in forest pathology
Amy L. Ross-Davis; Mee-Sook Kim; Jane E. Stewart; John W. Hanna; John D. Shaw; Ned B. Klopfenstein
2013-01-01
Advances in molecular technology provide an accessible set of tools to 1) help forest pathologists detect, identify, and monitor forest pathogens, 2) examine the evolutionary relationships and global distributions of forest pathogens and their hosts, 3) assess the diversity and structure of host and pathogen populations, and 4) evaluate the structure and function of...
Measurement of the mass and composition of particulate matter (PM) as a function of size is important for research studies for chemical mass balance, factor analysis, air quality model evaluation, epidemiology, and risk assessment. Such measurements are also important in underst...
Environmental Statement for Proposed Continental Operations Range.
1974-12-17
…out territories, recognize young, detect and locate prey, and evade predators. These functions could be critically affected, even if the animals appear… [The rest of this record is unrecoverable OCR residue: fragments of a bird species list (e.g., Lark Bunting, Black-chinned Sparrow) and a distribution/availability stamp from Kirtland Air Force Base, New Mexico.]
IRT-LR-DIF with Estimation of the Focal-Group Density as an Empirical Histogram
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
Item response theory-likelihood ratio-differential item functioning (IRT-LR-DIF) is used to evaluate the degree to which items on a test or questionnaire have different measurement properties for one group of people versus another, irrespective of group-mean differences on the construct. Usually, the latent distribution is presumed normal for both…
Forest tree growth response to hydroclimate variability in the southern Appalachians
Katherine J. Elliott; Chelcy Ford Miniat; Neil Pederson; Stephanie H. Laseter
2015-01-01
Climate change will affect tree species growth and distribution; however, under the same climatic conditions species may differ in their response according to site conditions. We evaluated the climate-driven patterns of growth for six dominant deciduous tree species in the southern Appalachians. We categorized species into two functional groups based on their stomatal...
PresenceAbsence: An R package for presence absence analysis
Elizabeth A. Freeman; Gretchen Moisen
2008-01-01
The PresenceAbsence package for R provides a set of functions useful when evaluating the results of presence-absence analysis, for example, models of species distribution or the analysis of diagnostic tests. The package provides a toolkit for selecting the optimal threshold for translating a probability surface into presence-absence maps specifically tailored to their...
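The package itself is written in R; a Python sketch of one core idea, selecting the probability threshold that maximises sensitivity plus specificity (Youden's J, one of several criteria such toolkits offer), could look like this with synthetic data:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
presence = rng.integers(0, 2, 400)                     # observed 0/1 records
prob = np.clip(0.35 * presence + rng.normal(0.35, 0.2, 400), 0.0, 1.0)

fpr, tpr, thresholds = roc_curve(presence, prob)
j = tpr - fpr                                          # Youden's J statistic
best = thresholds[np.argmax(j)]
print(f"optimal threshold ~ {best:.2f}")
pa_map = (prob >= best).astype(int)                    # presence-absence map
```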
Mas, Abraham; Amenós, Montse; Lois, L Maria
2016-01-01
Different studies point to an enrichment of SUMO conjugation in the cell nucleus, although non-nuclear SUMO targets also exist. In general, the study of the subcellular localization of proteins is essential for understanding their function within a cell. Fluorescence microscopy is a powerful tool for studying subcellular protein partitioning in living cells, since fluorescent proteins can be fused to proteins of interest to determine their localization. The subcellular distribution of proteins can be influenced by binding to other biomolecules and by posttranslational modifications. Sometimes these changes affect only a portion of the protein pool or have a partial effect, and a quantitative evaluation of fluorescence images is required to identify protein redistribution among subcellular compartments. In order to obtain accurate data about the relative subcellular distribution of SUMO conjugation machinery members, and to identify the molecular determinants involved in their localization, we have applied quantitative confocal microscopy imaging. In this chapter, we describe the fluorescent protein fusions used in these experiments, and how to measure, evaluate, and compare average fluorescence intensities in cellular compartments by image-based analysis. We show the distribution of some components of the Arabidopsis SUMOylation machinery in onion epidermal cells and how their distribution changes in the presence of interacting partners or when their activity is affected.
Saba, T M; Antikatzides, T G
1979-04-01
The influence of systemic heparin administration on the vascular clearance and tissue distribution of blood-borne microparticles was evaluated in normal rats and rats after operation (laparotomy plus intestinal manipulation) utilizing an (131)I-colloid which is phagocytized by the reticuloendothelial system (RES). Intravenous heparin administration (100 USP units/100 g body weight) into normal animals three minutes prior to colloid injection (50 mg/100 g) induced a significant increase in pulmonary localization of the microparticles as compared to nonheparinized control rats, while hepatic and splenic uptake were decreased. Surgical trauma decreased hepatic RE uptake and increased pulmonary localization of the microparticles when injected systemically at 60 minutes postsurgery. Heparin administration 60 minutes after surgery and three minutes prior to colloid injection magnified the increased pulmonary localization response, with an associated further depression of the RES. The ability of heparin to alter both RE clearance function and lung localization of microparticles was dose dependent and a function of the interval between heparin administration and systemic particulate infusion. Thus, low-dose heparin administration was capable of stimulating RE activity, while heparin in doses in excess of 50 USP units/100 g body weight decreased RE function. These findings suggest that the functional state of the hepatic RE system can be greatly affected in a dose-dependent manner by systemic heparin administration, which may influence the distribution of blood-borne microparticles.
NASA Astrophysics Data System (ADS)
Hong, D. H.; Park, J. K.
2018-04-01
The purpose of the present work was to verify the grain size distribution (GSD) method, which was recently proposed by one of the present authors as a method for evaluating the fraction of dynamic recrystallisation (DRX) in a microalloyed medium carbon steel. To verify the GSD-method, we have selected a 304 stainless steel as a model system and have measured the evolution of the overall grain size distribution (including both the recrystallised and unrecrystallised grains) during hot compression at 1,000 °C in a Gleeble machine; the DRX fraction estimated using the GSD method is compared with the experimentally measured value via EBSD. The results show that the previous GSD method tends to overestimate the DRX fraction due to the utilisation of a plain lognormal distribution function (LDF). To overcome this shortcoming, we propose a modified GSD-method wherein an area-weighted LDF, in place of a plain LDF, is employed to model the evolution of GSD during hot deformation. Direct measurement of the DRX fraction using EBSD confirms that the modified GSD-method provides a reliable method for evaluating the DRX fraction from the experimentally measured GSDs. Reasonable agreement between the DRX fraction and softening fraction suggests that the Kocks-Mecking method utilising the Voce equation can be satisfactorily used to model the work hardening and dynamic recovery behaviour of steels during hot deformation.
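A numerical sketch of the modification: area weighting multiplies a lognormal grain size density by d² (suitably normalised), which shifts weight toward larger grains; for ln d ~ N(μ, σ²) the area-weighted density is again lognormal with log-mean μ + 2σ². The parameters are illustrative:

```python
import numpy as np

mu, sigma = np.log(10.0), 0.5          # illustrative log-mean and log-sd
d = np.linspace(0.5, 80.0, 4000)       # grain diameter grid [um]

def lognormal_pdf(d, mu, sigma):
    return (np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma ** 2))
            / (d * sigma * np.sqrt(2 * np.pi)))

plain = lognormal_pdf(d, mu, sigma)
area_weighted = d ** 2 * plain
area_weighted /= np.trapz(area_weighted, d)  # normalise d^2-weighted density

print(d[np.argmax(plain)], d[np.argmax(area_weighted)])  # mode shifts upward
print(np.exp(mu + 2 * sigma ** 2))  # median of the area-weighted distribution
```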
Work distributions for random sudden quantum quenches
NASA Astrophysics Data System (ADS)
Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter
2017-05-01
The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite dimensional Hilbert spaces we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically, Gaussian distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
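A Monte Carlo sketch of the two-level setup: draw final Hamiltonians from the GUE, start from a thermal state of a fixed initial Hamiltonian, and histogram the two-point-measurement work values w = E_f(j) − E_i(i). Dimension, temperature, and the GUE scale are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
beta, n_samples = 1.0, 20000
H_i = np.diag([-1.0, 1.0])                    # sharply given initial Hamiltonian
E_i, V_i = np.linalg.eigh(H_i)
p_i = np.exp(-beta * E_i); p_i /= p_i.sum()   # thermal occupations

def gue(n):
    """Random Hermitian matrix from the Gaussian unitary ensemble (unit scale)."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2.0

work = np.empty(n_samples)
for s in range(n_samples):
    E_f, V_f = np.linalg.eigh(gue(2))
    overlaps = np.abs(V_f.conj().T @ V_i) ** 2   # |<f_j|i_k>|^2 transitions
    i = rng.choice(2, p=p_i)                     # first energy measurement
    pj = overlaps[:, i] / overlaps[:, i].sum()
    j = rng.choice(2, p=pj)                      # second energy measurement
    work[s] = E_f[j] - E_i[i]

print(work.mean(), work.std())    # moments of the sampled work pdf
```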
Plagianakos, V P; Magoulas, G D; Vrahatis, M N
2006-03-01
Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
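The original system used a parallel virtual machine (PVM); the same pattern, partition the training set, evaluate partial error and gradient on each worker, then sum, can be sketched with Python's multiprocessing as a stand-in. A linear least-squares model replaces the neural network for brevity:

```python
import numpy as np
from multiprocessing import Pool

def partial_loss_grad(args):
    """Sum-of-squares error and gradient on one shard of the training set."""
    w, X, y = args
    r = X @ w - y
    return r @ r, 2.0 * X.T @ r

def distributed_loss_grad(w, shards, pool):
    parts = pool.map(partial_loss_grad, [(w, X, y) for X, y in shards])
    loss = sum(l for l, _ in parts)
    grad = sum(g for _, g in parts)     # identical to single-machine values
    return loss, grad

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X, y = rng.normal(size=(4000, 10)), rng.normal(size=4000)
    shards = [(X[i::4], y[i::4]) for i in range(4)]  # partition across workers
    w = np.zeros(10)
    with Pool(4) as pool:
        for _ in range(50):                          # plain gradient descent
            loss, grad = distributed_loss_grad(w, shards, pool)
            w -= 1e-4 * grad
    print(loss)
```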
Design type air engine Di Pietro
NASA Astrophysics Data System (ADS)
Zwierzchowski, Jaroslaw
The article presents a pneumatic engine constructed by Angelo Di Pietro, together with 3D solid models of the engine components. A directional valve is the key element of the control system; it functions like a camshaft, distributing air to particular engine chambers. The construction designed by Angelo Di Pietro is modern and innovative: the pneumatic engine requires only low pressure to start rotary movement. Using CFD software, the fields of velocity vector distributions were determined, as well as the distribution of pressure values in the engine inlet and outlet channels. CFD model studies of engine operation were conducted for chosen stages of the operating cycle. On the basis of the simulations, the flow rates for the engine were determined, and the distribution of pressure values made it possible to evaluate the torque on the rotating shaft.
Rehabilitation outcomes in children with cerebral palsy during a 2 year period
İçağasıoğlu, Afitap; Mesci, Erkan; Yumusakhuylu, Yasemin; Turgut, Selin Turan; Murat, Sadiye
2015-01-01
[Purpose] To observe motor and functional progress of children with cerebral palsy during 2 years. [Subjects and Methods] Pediatric cerebral palsy patients aged 3–15 years (n = 35/69) with 24-month follow-up at our outpatient cerebral palsy clinic were evaluated retrospectively. The distribution of cerebral palsy types was as follows: diplegia (n = 19), hemiplegia (n = 4), and quadriplegia (n = 12). Participants were divided into 3 groups according to their Gross Motor Functional Classification System scores (i.e., mild, moderate, and severe). All participants were evaluated initially and at the final assessment 2 years later. During this time, patients were treated 3 times/week. Changes in motor and functional abilities were assessed based on Gross Motor Function Measure-88 and Wee Functional Independence Measure. [Results] Significant improvements were observed in Gross Motor Function Measure-88 and Wee Functional Independence Measure results in all 35 patients at the end of 2 years. The Gross Motor Function Measure-88 scores correlated with Wee Functional Independence Measure Scores. Marked increases in motor and functional capabilities in mild and moderate cerebral palsy patients were observed in the subgroup assessments, but not in those with severe cerebral palsy. [Conclusion] Rehabilitation may greatly help mild and moderate cerebral palsy patients achieve their full potential. PMID:26644677
Forecasting overhaul or replacement intervals based on estimated system failure intensity
NASA Astrophysics Data System (ADS)
Gannon, James M.
1994-12-01
System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLEs) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLEs.
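A sketch of the computation described, assuming the common power-law ("Crow-AMSAA") form of the Weibull intensity u(t) = (beta/theta)(t/theta)^(beta-1), for which the expected number of failures in [t1, t2] is (t2/theta)^beta - (t1/theta)^beta; the failure times and cost per failure below are made up.

```python
# Minimal sketch under the assumed power-law ROCOF:
# u(t) = (beta/theta) * (t/theta)**(beta - 1)
import numpy as np

def rocof_mle(failure_times, T):
    """MLEs for a time-truncated NHPP with power-law intensity."""
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(T / t))
    theta = T / n ** (1.0 / beta)
    return beta, theta

def expected_failures(t1, t2, beta, theta):
    # Integral of the ROCOF over [t1, t2]
    return (t2 / theta) ** beta - (t1 / theta) ** beta

# Hypothetical cumulative failure times (hours), 5000 h observation window
times = [350., 900., 1400., 2200., 2600., 3200., 3600., 4100., 4500., 4900.]
beta, theta = rocof_mle(times, T=5000.0)
n_next_year = expected_failures(5000.0, 5000.0 + 8760.0, beta, theta)
cost = 12_000.0 * n_next_year   # attach an assumed cost per failure
```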
Teaching and Learning Activity Sequencing System using Distributed Genetic Algorithms
NASA Astrophysics Data System (ADS)
Matsui, Tatsunori; Ishikawa, Tomotake; Okamoto, Toshio
The purpose of this study is the development of a support system for teachers' lesson planning, particularly for lessons in the new subject "Information Study". We developed a system that generates teaching and learning activity sequences by interlinking lesson activities corresponding to various conditions according to the user's input. Because the user's input comprises multiple pieces of information, contradictions may arise that the system must resolve. This multiobjective optimization problem is solved by Distributed Genetic Algorithms, in which the fitness functions are defined with reference models of lesson, thinking, and teaching style. Results of various experiments verified the effectiveness and validity of the proposed methods and reference models; on the other hand, some future work on the reference models and evaluation functions was also identified.
Hirano, Toshiyuki; Sato, Fumitoshi
2014-07-28
We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
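A sketch of the low-rank pivoted Cholesky step, which is one ingredient of the method above; the grid-free CDAM variant and the distribution of Cholesky vectors across compute nodes are not reproduced here.

```python
# Low-rank pivoted Cholesky decomposition of a symmetric positive
# semidefinite matrix M ~= L @ L.T, stopping when the residual diagonal
# falls below a threshold.
import numpy as np

def pivoted_cholesky(M, tol=1e-8):
    d = np.diag(M).astype(float).copy()   # residual diagonal
    L = []
    while d.max() > tol:
        p = int(np.argmax(d))             # pivot: largest residual diagonal
        col = M[:, p] - sum(v * v[p] for v in L)
        v = col / np.sqrt(d[p])
        L.append(v)
        d -= v * v                        # update residual diagonal
    return np.column_stack(L)             # n x k, with k << n if M is low rank

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 12))
M = A @ A.T                               # rank-12 PSD test matrix
L = pivoted_cholesky(M)
assert L.shape[1] <= 13
assert np.allclose(L @ L.T, M, atol=1e-6)
```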
NASA Astrophysics Data System (ADS)
Edwards, Brian J.
2002-05-01
Given the premise that a set of dynamical equations must possess a definite, underlying mathematical structure to ensure local and global thermodynamic stability, as has been well documented, several different models for describing liquid crystalline dynamics are examined with respect to said structure. These models, each derived during the past several years using a specific closure approximation for the fourth moment of the distribution function in Doi's rigid rod theory, are all shown to be inconsistent with this basic mathematical structure. The source of this inconsistency lies in Doi's expressions for the extra stress tensor and temporal evolution of the order parameter, which are rederived herein using a transformation that allows for internal compatibility with the underlying mathematical structure that is present on the distribution function level of description.
Statistical procedures for evaluating daily and monthly hydrologic model predictions
Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.
2004-01-01
The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
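For reference, a minimal sketch of two of the tested statistics, assuming equal-length observed and predicted series; both take the ideal value of one.

```python
import numpy as np

def nash_sutcliffe(obs, pred):
    obs, pred = np.asarray(obs), np.asarray(pred)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, pred):
    r = np.corrcoef(obs, pred)[0, 1]
    return r * r

obs  = np.array([1.2, 3.4, 2.2, 5.6, 4.1, 2.9])   # hypothetical monthly runoff
pred = np.array([1.0, 3.0, 2.8, 5.1, 4.4, 2.5])
print(nash_sutcliffe(obs, pred), r_squared(obs, pred))  # both ideal at 1.0
```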
Bao, Ande; Zhao, Xia; Phillips, William T; Woolley, F Ross; Otto, Randal A; Goins, Beth; Hevezi, James M
2005-01-01
Radioimmunotherapy of hematopoietic cancers and micrometastases has been shown to have significant therapeutic benefit. The treatment of solid tumors with radionuclide therapy has been less successful. Previous investigations of intratumoral activity distribution and studies on intratumoral drug delivery suggest that a probable reason for the disappointing results in solid tumor treatment is nonuniform intratumoral distribution coupled with restricted intratumoral drug penetrance, which inhibits antineoplastic agents from reaching the tumor's center. This paper describes a nonuniform intratumoral activity distribution identified by limited radiolabeled tracer diffusion from tumor surface to tumor center. This activity was simulated using techniques that allowed the absorbed dose distributions to be estimated for different intratumoral diffusion capabilities and calculated for tumors of varying diameters. The influences of these absorbed dose distributions on solid tumor radionuclide therapy are also discussed. The absorbed dose distribution was calculated using the dose point kernel method, which provided for the application of a three-dimensional (3D) convolution between a dose rate kernel function and an activity distribution function. These functions were incorporated into 3D matrices with voxels measuring 0.10 x 0.10 x 0.10 mm3. Fast Fourier transform (FFT) and multiplication in the frequency domain followed by inverse FFT (iFFT) were then used to effect this phase of the dose calculation process. The absorbed dose distributions for tumors of 1, 3, 5, 10, and 15 mm in diameter were studied. Using the therapeutic radionuclides 131I, 186Re, 188Re, and 90Y, the total average dose, center dose, and surface dose for each of the different tumor diameters were reported. The absorbed dose in the nearby normal tissue was also evaluated. When the tumor diameter exceeds 15 mm, a much lower tumor center dose is delivered compared with tumors between 3 and 5 mm in diameter. Based on these findings, the use of higher beta-energy radionuclides such as 188Re and 90Y is more effective in delivering a higher absorbed dose to the tumor center at tumor diameters around 10 mm.
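A sketch of the convolution step described above: a 3D activity matrix is convolved with a dose-rate kernel by multiplication in the frequency domain. The exponential kernel and uptake profile are toy stand-ins for published dose point kernels, and zero-padding against circular wrap-around is omitted for brevity.

```python
import numpy as np

n, vox = 64, 0.10                        # grid size, voxel edge (mm)
ax = (np.arange(n) - n // 2) * vox
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)

kernel = np.exp(-r / 0.5)                # toy dose-rate kernel (arbitrary units)
activity = np.zeros((n, n, n))
activity[r <= 1.5] = np.exp(-r[r <= 1.5] / 0.8)   # diffusion-limited uptake

# FFT convolution; ifftshift centres the kernel so its peak maps to lag zero.
dose = np.real(np.fft.ifftn(np.fft.fftn(activity) *
                            np.fft.fftn(np.fft.ifftshift(kernel))))
center_dose  = dose[n // 2, n // 2, n // 2]
surface_dose = dose[(r >= 1.4) & (r <= 1.5)].max()
```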
Kitamura, Yutaka; Isobe, Kazushige; Kawabata, Hideo; Tsujino, Tetsuhiro; Watanabe, Taisuke; Nakamura, Masayuki; Toyoda, Toshihisa; Okudera, Hajime; Okuda, Kazuhiro; Nakata, Koh; Kawase, Tomoyuki
2018-06-18
Platelet activation and aggregation have been conventionally evaluated using an aggregometer. However, this method is suitable for short-term but not long-term quantitative evaluation of platelet aggregation, morphological changes, and/or adhesion to specific materials. The recently developed digital holographic microscopy (DHM) has enabled the quantitative evaluation of cell size and morphology without labeling or destruction. Here, we aim to validate its applicability in quantitatively evaluating changes in cell morphology, especially the aggregation and spreading of activated platelets, modifying typical image analysis procedures to suit aggregated platelets. Freshly prepared platelet-rich plasma was washed with phosphate-buffered saline and treated with 0.1% CaCl2. Platelets were then fixed and subjected to DHM, scanning electron microscopy (SEM), atomic force microscopy, optical microscopy, and flow cytometry (FCM). Tightly aggregated platelets were identified as single cells. Data obtained from time-course experiments were plotted two-dimensionally according to the average optical thickness versus attachment area and divided into four regions. The majority of the control platelets, which supposedly contained small and round platelets, were distributed in the lower left region. As activation time increased, however, this population dispersed toward the upper right region. The distribution shift demonstrated by DHM was essentially consistent with data obtained from SEM and FCM. Therefore, DHM was validated as a promising device for testing platelet function, given that it allows for the quantitative evaluation of activation-dependent morphological changes in platelets. DHM technology will be applicable to the quality assurance of platelet concentrates, as well as diagnosis and drug discovery related to platelet functions. Copyright © 2018 Elsevier Ltd. All rights reserved.
The European functional tree of bird life in the face of global change
Thuiller, Wilfried; Pironon, Samuel; Psomas, Achilleas; Barbet-Massin, Morgane; Jiguet, Frédéric; Lavergne, Sébastien; Pearman, Peter B.; Renaud, Julien; Zupan, Laure; Zimmermann, Niklaus E.
2014-01-01
Despite the recognized joint impact of climate and land cover change on facets of biodiversity and their associated functions, risk assessments have primarily evaluated impacts on species ranges and richness. Here we quantify the sensitivity of the functional structure of European avian assemblages to changes in both regional climate and land cover. We combine species range forecasts with functional trait information. We show that species sensitivity to environmental change is randomly distributed across the functional tree of the European avifauna and that functionally unique species are not disproportionately threatened by 2080. However, projected species range changes will modify the mean species richness and functional diversity of bird diets and feeding behaviours. This will unequally affect the spatial structure of functional diversity, leading to homogenization across Europe. Therefore, global changes may alter the functional structure of species assemblages in the future in ways that need to be accounted for in conservation planning. PMID:24452245
Soares, Daniel Crístian Ferreira; Ferreira, Tiago Hilário; Ferreira, Carolina de Aguiar; Cardoso, Valbert Nascimento; de Sousa, Edésia Martins Barros
2012-02-28
In the present study, boron nitride nanotubes (BNNTs) were synthesized via an innovative process and functionalized with a glycol chitosan polymer in CDTN (Centro de Desenvolvimento da Tecnologia Nuclear) laboratories. As a means of studying their in vivo biodistribution behavior, these nanotubes were radiolabeled with (99m)Tc and injected in mice. Their size, distribution, and homogeneity were determined by photon correlation spectroscopy (PCS), while their zeta potential was determined by laser Doppler anemometry. The morphology and structural organization were evaluated by scanning electron microscopy (SEM). The functionalization of the nanotubes was evaluated by thermogravimetric analysis (TGA) and Fourier transform infrared spectroscopy. The results showed that BNNTs were obtained and functionalized successfully, reaching a mean size and dispersity deemed adequate for in vivo studies. The BNNTs were also evaluated by ex vivo biodistribution studies and scintigraphic imaging in healthy mice. The results showed that, after 24 h, the nanostructures had accumulated in the liver, spleen, and gut, and were eliminated via renal excretion. The findings from this study reveal a potential application of functionalized BNNTs as new drug or radioisotope nanocarriers for therapeutic procedures. Copyright © 2011 Elsevier B.V. All rights reserved.
Neuroactive substances in the human vestibular end organs.
Usami, S; Matsubara, A; Shinkawa, H; Matsunaga, T; Kanzaki, J
1995-01-01
In order to evaluate the involvement of neuroactive substances in the human vestibular periphery, the immunocytochemical distribution of substance P (SP), calcitonin gene-related peptide (CGRP), and choline acetyltransferase (ChAT) was examined. SP-like immunoreactivity (LI) was present around and beneath sensory hair cells, probably corresponding to their afferent nerve endings. SP-LI was found predominantly in subpopulations of the primary afferents distributed in the peripheral region of the end organs. ChAT-LI and CGRP-LI were found throughout as small puncta below the hair cell layer, probably corresponding to efferent endings. The present results indicate that these neuroactive substances, previously described in animals, are also distributed in the human vestibular periphery, and almost certainly contribute to human vestibular function.
Distributed environmental control
NASA Technical Reports Server (NTRS)
Cleveland, Gary A.
1992-01-01
We present an architecture of distributed, independent control agents designed to work with the Computer Aided System Engineering and Analysis (CASE/A) simulation tool. CASE/A simulates behavior of Environmental Control and Life Support Systems (ECLSS). We describe a lattice of agents capable of distributed sensing and overcoming certain sensor and effector failures. We address how the architecture can achieve the coordinating functions of a hierarchical command structure while maintaining the robustness and flexibility of independent agents. These agents work between the time steps of the CASE/A simulation tool to arrive at command decisions based on the state variables maintained by CASE/A. Control is evaluated according to both effectiveness (e.g., how well temperature was maintained) and resource utilization (the amount of power and materials used).
Planar Laser Imaging of Sprays for Liquid Rocket Studies
NASA Technical Reports Server (NTRS)
Lee, W.; Pal, S.; Ryan, H. M.; Strakey, P. A.; Santoro, Robert J.
1990-01-01
A planar laser imaging technique which incorporates an optical polarization ratio technique for droplet size measurement was studied. A series of pressure-atomized water sprays were studied with this technique and compared with measurements obtained using a Phase Doppler Particle Analyzer. In particular, the effects of assuming a logarithmic normal distribution function for the droplet size distribution within a spray were evaluated. Reasonable agreement between the instruments was obtained for the geometric mean diameter of the droplet distribution. However, comparisons based on the Sauter mean diameter show larger discrepancies, essentially because of uncertainties in the appropriate standard deviation to be applied for the polarization ratio technique. Comparisons were also made between single laser pulse (temporally resolved) measurements and multiple laser pulse visualizations of the spray.
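A short illustration of why the two mean diameters can disagree, assuming a log-normal droplet size distribution with geometric mean D_g and geometric standard deviation sigma_g (made-up values): the moments are E[D^n] = D_g^n exp(n^2 s^2 / 2) with s = ln(sigma_g), so the Sauter mean D32 = E[D^3]/E[D^2] exceeds D_g by exp(2.5 s^2), and uncertainty in the assumed width propagates strongly into D32.

```python
import numpy as np

def sauter_mean(d_geometric, sigma_g):
    s = np.log(sigma_g)
    return d_geometric * np.exp(2.5 * s**2)

# Made-up spray parameters: 50 um median droplet, sigma_g = 1.8
d32 = sauter_mean(50.0, 1.8)
# The discrepancy grows quickly with the assumed width:
for sg in (1.5, 1.8, 2.1):
    print(sg, sauter_mean(50.0, sg) / 50.0)
```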
NASA Technical Reports Server (NTRS)
Hiser, H. W.; Senn, H. V.; Bukkapatnam, S. T.; Akyuzlu, K.
1977-01-01
The use of cloud images in the visual spectrum from the SMS/GOES geostationary satellites to determine the hourly distribution of sunshine on a mesoscale in the continental United States excluding Alaska is presented. Cloud coverage and density as a function of time of day and season are evaluated through the use of digital data processing techniques. Low density cirrus clouds are less detrimental to solar energy collection than other types; and clouds in the morning and evening are less detrimental than those during midday hours of maximum insolation. Seasonal geographic distributions of cloud cover/sunshine are converted to langleys of solar radiation received at the earth's surface through relationships developed from long term measurements at six widely distributed stations.
Suspended sediment transport in an estuarine tidal channel within San Francisco Bay, California
Sternberg, R.W.; Cacchione, D.A.; Drake, D.E.; Kranck, K.
1986-01-01
Size distributions of the suspended sediment samples, estimates of particle settling velocity (ωs), friction velocity (U*), and reference concentration (Ca) at z = 20 cm were used in the suspended sediment distribution equations to evaluate their ability to predict the observed suspended sediment profiles. Three suspended sediment particle conditions were evaluated: (1) individual particle sizes in the 4-11 φ (62.5-0.5 μm) size range with the reference concentration Ca at z = 20 cm; (2) individual particle sizes in the 4-6 φ size range, flocs representing the 7-11 φ size range with the reference concentration Ca at z = 20 cm (Cf); and (3) individual particle sizes in the 4-6 φ size range, flocs representing the 7-11 φ size range with the reference concentration predicted as a function of the bed sediment size distribution and the square of the excess shear stress. An analysis was also carried out on the sensitivity of the suspended sediment distribution equation to deviations in the primary variables ωs, U*, and Ca. In addition, computations of mass flux were made in order to show vertical variations in mass flux for varying flow conditions. © 1986.
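A sketch of the kind of profile prediction being tested, using the classical Rouse form (the paper's exact formulation may differ), with the reference concentration Ca specified at z = a = 20 cm; all numbers are illustrative.

```python
# Rouse-type profile: C(z)/Ca = [((h - z)/z) * (a/(h - a))]**P,
# with Rouse number P = ws / (kappa * ustar).
import numpy as np

def rouse_profile(z, Ca, ws, ustar, h, a=0.20, kappa=0.41):
    P = ws / (kappa * ustar)
    return Ca * (((h - z) / z) * (a / (h - a))) ** P

z = np.linspace(0.05, 9.5, 50)        # heights above bed (m), 10 m water depth
base = rouse_profile(z, Ca=50.0, ws=1e-3, ustar=0.02, h=10.0)

# Sensitivity to the primary variables, as in the analysis above:
# perturb ws, ustar, and Ca by +/-20% and compare profiles.
for f in (0.8, 1.2):
    dws    = rouse_profile(z, Ca=50.0, ws=1e-3 * f, ustar=0.02, h=10.0)
    dustar = rouse_profile(z, Ca=50.0, ws=1e-3, ustar=0.02 * f, h=10.0)
    dCa    = rouse_profile(z, Ca=50.0 * f, ws=1e-3, ustar=0.02, h=10.0)
```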
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form, as is the case in FEM updating. In such cases sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers an important MCMC approach for dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm, a modified version of HMC designed to improve sampling for large system sizes and time steps by sampling from a modified (shadow) Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
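A minimal sketch of the standard HMC move that SHMC modifies: leapfrog MD steps followed by a Metropolis test on the Hamiltonian H(q, p) = -log posterior(q) + p·p/2. The shadow-Hamiltonian correction itself is not shown, and the toy posterior is a standard Gaussian.

```python
import numpy as np

def hmc_step(q, log_post, grad_log_post, eps=0.1, n_leap=20, rng=None):
    rng = rng or np.random.default_rng()
    p = rng.normal(size=q.shape)                      # momentum refresh
    H0 = -log_post(q) + 0.5 * p @ p
    q_new = q.copy()
    p_new = p + 0.5 * eps * grad_log_post(q_new)      # initial half kick
    for _ in range(n_leap):
        q_new = q_new + eps * p_new                   # drift
        p_new = p_new + eps * grad_log_post(q_new)    # full kick
    p_new = p_new - 0.5 * eps * grad_log_post(q_new)  # trim to half kick
    H1 = -log_post(q_new) + 0.5 * p_new @ p_new
    accept = np.log(rng.uniform()) < H0 - H1          # Metropolis on dH
    return (q_new, accept) if accept else (q, accept)

log_post = lambda q: -0.5 * q @ q                     # standard Gaussian
grad_log_post = lambda q: -q
q, rng = np.zeros(2), np.random.default_rng(42)
samples = []
for _ in range(1000):
    q, _ = hmc_step(q, log_post, grad_log_post, rng=rng)
    samples.append(q)
```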
Probabilistic objective functions for margin-less IMRT planning
NASA Astrophysics Data System (ADS)
Bohoslavsky, Román; Witte, Marnix G.; Janssen, Tomas M.; van Herk, Marcel
2013-06-01
We present a method to implement probabilistic treatment planning of intensity-modulated radiation therapy using custom software plugins in a commercial treatment planning system. Our method avoids the definition of safety-margins by directly including the effect of geometrical uncertainties during optimization when objective functions are evaluated. Because the shape of the resulting dose distribution implicitly defines the robustness of the plan, the optimizer has much more flexibility than with a margin-based approach. We expect that this added flexibility helps to automatically strike a better balance between target coverage and dose reduction for surrounding healthy tissue, especially for cases where the planning target volume overlaps organs at risk. Prostate cancer treatment planning was chosen to develop our method, including a novel technique to include rotational uncertainties. Based on population statistics, translations and rotations are simulated independently following a marker-based IGRT correction strategy. The effects of random and systematic errors are incorporated by first blurring and then shifting the dose distribution with respect to the clinical target volume. For simplicity and efficiency, dose-shift invariance and a rigid-body approximation are assumed. Three prostate cases were replanned using our probabilistic objective functions. To compare clinical and probabilistic plans, an evaluation tool was used that explicitly incorporates geometric uncertainties using Monte-Carlo methods. The new plans achieved similar or better dose distributions than the original clinical plans in terms of expected target coverage and rectum wall sparing. Plan optimization times were only about a factor of two higher than in the original clinical system. In conclusion, we have developed a practical planning tool that enables margin-less probability-based treatment planning with acceptable planning times, achieving the first system that is feasible for clinical implementation.
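A sketch of the blur-and-shift treatment of geometric uncertainty with made-up numbers: random errors blur the dose (Gaussian convolution) and systematic errors are sampled as rigid shifts of the blurred dose before the objective, here the minimum CTV dose, is evaluated; dose-shift invariance is assumed, as in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def expected_objective(dose, ctv_mask, sigma_rand_vox, sigma_sys_vox,
                       n_samples=200, rng=None):
    rng = rng or np.random.default_rng()
    blurred = gaussian_filter(dose, sigma=sigma_rand_vox)  # random-error blur
    values = []
    for _ in range(n_samples):
        dz = rng.normal(scale=sigma_sys_vox, size=3)       # systematic shift
        shifted = shift(blurred, dz, order=1, mode="nearest")
        values.append(shifted[ctv_mask].min())             # min CTV dose
    return np.mean(values)

dose = np.pad(np.ones((10, 10, 10)), 11)      # toy box-shaped dose, 32^3 grid
ctv = np.zeros_like(dose, dtype=bool)
ctv[13:19, 13:19, 13:19] = True               # CTV inside the high-dose box
score = expected_objective(dose, ctv, sigma_rand_vox=1.0, sigma_sys_vox=1.5)
```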
Bartés-Serrallonga, M; Adan, A; Solé-Casals, J; Caldú, X; Falcón, C; Pérez-Pàmies, M; Bargalló, N; Serra-Grabulosa, J M
2014-04-01
One of the most used paradigms in the study of attention is the Continuous Performance Test (CPT). The identical pairs version (CPT-IP) has been widely used to evaluate attention deficits in developmental, neurological and psychiatric disorders. However, the specific locations and the relative distribution of brain activation in networks identified with functional imaging vary significantly with differences in task design. The aim was to design a task to evaluate sustained attention using functional magnetic resonance imaging (fMRI), and thus to provide data for research concerned with the role of these functions. Forty right-handed, healthy students (50% women; age range: 18-25 years) were recruited. A CPT-IP implemented as a block design was used to assess sustained attention during the fMRI session. The behavioural results from the CPT-IP task showed good performance in all subjects, with more than 80% hits. fMRI results showed that the CPT-IP task activates a network of frontal, parietal and occipital areas, and that these are related to executive and attentional functions. In relation to the use of the CPT in the study of attention and working memory, this task provides normative data in healthy adults, and it could be useful for evaluating disorders that involve attentional and working memory deficits.
Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure
Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas
2015-01-01
Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014
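A simplified sketch in the spirit of the paper's estimator: the divergence is estimated from the Friedman-Rafsky statistic, the number of edges in a Euclidean minimum spanning tree over the pooled samples that join points from different samples. This is a plain reading of that construction, not the authors' code; for equal-sized samples the estimate is near zero when the distributions match and approaches one as they separate.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def mst_divergence(X, Y):
    Z = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    mst = minimum_spanning_tree(cdist(Z, Z)).tocoo()
    R = np.sum(labels[mst.row] != labels[mst.col])   # cross-sample edges
    m, n = len(X), len(Y)
    return 1.0 - R * (m + n) / (2.0 * m * n)

rng = np.random.default_rng(0)
same  = mst_divergence(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
apart = mst_divergence(rng.normal(size=(200, 2)),
                       rng.normal(loc=4.0, size=(200, 2)))
print(same, apart)    # expect ~0 and close to 1
```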
The pitch-heave dynamics of transportation vehicles
NASA Technical Reports Server (NTRS)
Sweet, L. M.; Richardson, H. H.
1975-01-01
The analysis and design of suspensions for vehicles of finite length using pitch-heave models is presented. Dynamic models for the finite length vehicle include the spatial distribution of the guideway input disturbance over the vehicle length, as well as both pitch and heave degrees-of-freedom. Analytical results relate the vehicle front and rear accelerations to the pitch and heave natural frequencies, which are functions of vehicle suspension geometry and mass distribution. The effects of vehicle asymmetry and suspension contact area are evaluated. Design guidelines are presented for the modification of vehicle and suspension parameters to meet alternative ride quality criteria.
FMM-Yukawa: An adaptive fast multipole method for screened Coulomb interactions
NASA Astrophysics Data System (ADS)
Huang, Jingfang; Jia, Jun; Zhang, Bo
2009-11-01
A Fortran program package is introduced for the rapid evaluation of the screened Coulomb interactions of N particles in three dimensions. The method utilizes an adaptive oct-tree structure, and is based on the new version of fast multipole method in which the exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related packages are also available at
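For orientation, the quantity the package accelerates, evaluated by brute force: the screened Coulomb (Yukawa) potential phi_i = sum over j != i of q_j exp(-lambda r_ij)/r_ij. A direct O(N^2) sum like this is the usual small-N check of an FMM result; all values below are illustrative.

```python
import numpy as np

def yukawa_direct(pos, q, lam):
    N = len(q)
    phi = np.zeros(N)
    for i in range(N):
        d = pos - pos[i]
        r = np.sqrt(np.sum(d * d, axis=1))
        r[i] = np.inf                      # exclude self-interaction
        phi[i] = np.sum(q * np.exp(-lam * r) / r)
    return phi

rng = np.random.default_rng(3)
pos = rng.uniform(size=(500, 3))           # 500 particles in the unit cube
q = rng.normal(size=500)
phi = yukawa_direct(pos, q, lam=2.0)
```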
Martino, Nicola A; Dell'Aquila, Maria E; Filioli Uranio, Manuel; Rutigliano, Lucia; Nicassio, Michele; Lacalandra, Giovanni M; Hinrichs, Katrin
2014-10-11
Evaluation of mitochondrial function offers an alternative to evaluating embryo development for assessment of oocyte viability, but little information is available on the relationship between mitochondrial and chromatin status in equine oocytes. We evaluated these parameters in immature equine oocytes either fixed immediately (IMM) or held overnight in an Earle's/Hanks' M199-based medium in the absence of meiotic inhibitors (EH treatment), and in mature oocytes. We hypothesized that EH holding may affect mitochondrial function and that holding temperature may affect the efficiency of meiotic suppression. Experiment 1 - Equine oocytes processed immediately or held in EH at uncontrolled temperature (22 to 27°C) were evaluated for initial chromatin configuration, in vitro maturation (IVM) rates, and mitochondrial energy/redox potential. Experiment 2 - We then investigated the effect of holding temperature (25°C, 30°C, 38°C) on the initial chromatin status of held oocytes, and subsequently repeated the mitochondrial energy/redox assessment of oocytes held at 25°C vs. immediately-evaluated controls. EH holding at uncontrolled temperature was associated with advancement of germinal vesicle (GV) chromatin condensation and with meiotic resumption, as well as a lower maturation rate after IVM. Holding did not have a significant effect on mitochondrial distribution within chromatin configurations. Independent of treatment, oocytes having condensed chromatin had a significantly higher proportion of perinuclear/pericortical mitochondrial distribution than did other GV configurations. Holding did not detrimentally affect oocyte energy/redox parameters in viable GV-stage oocytes. There were no significant differences in chromatin configuration between oocytes held at 25°C and controls, whereas holding at higher temperature was associated with meiosis resumption and loss of oocytes having the condensed-chromatin GV configuration. Holding at 25°C was not associated with progression of the mitochondrial distribution pattern, and there were no significant differences in oocyte energy/redox parameters between these oocytes and controls. Mitochondrial distribution in equine GV-stage oocytes is correlated with chromatin configuration within the GV. Progression of chromatin configuration and mitochondrial status during holding is dependent on temperature. EH holding at 25°C maintains meiotic arrest, viability and mitochondrial potential of equine oocytes. This is the first report on the effects of EH treatment on oocyte mitochondrial energy/redox potential.
NASA Astrophysics Data System (ADS)
Aklan, Bassim; Gierse, Pia; Hartmann, Josefin; Ott, Oliver J.; Fietkau, Rainer; Bert, Christoph
2017-06-01
Patient positioning plays an important role in regional deep hyperthermia to obtain a successful hyperthermia treatment. In this study, the influence of possible patient mispositioning on the specific absorption rate (SAR) and temperature distribution was systematically assessed. With a finite difference time domain approach, the SAR and temperature distributions were predicted for six patients at 312 positions. Patient displacements and rotations, as well as the combination of both, were considered inside the Sigma-Eye applicator. Position sensitivity is assessed for hyperthermia treatment planning-guided steering, which relies on model-based optimization of the SAR and temperature distribution. The evaluation of patient mispositioning was done with and without optimization: the evaluation without optimization was made by creating a treatment plan for the patient reference position in the center of the applicator and applying it to all other positions, while the evaluation with optimization was based on creating an individual plan for each position. The parameter T90, defined as the temperature that covers 90% of the gross tumor volume (GTV), was used for the temperature evaluation. Furthermore, the hotspot tumor quotient (HTQ) was used as a goal function to assess the quality of the SAR and temperature distribution. T90 was shown to depend considerably on the position within the applicator. Without optimization, T90 clearly decreased below 40 °C for patient shifts and for combinations of shifts and rotations. However, the application of optimization for each position led to an increase of T90 in the GTV. Position inaccuracies of less than 1 cm in the X- and Y-directions and 2 cm in the Z-direction resulted in an increase of HTQ of less than 5%, which does not significantly affect the SAR and temperature distribution. Current positioning precision is sufficient in the X (right-left) direction, but higher accuracy is required in the Y- and Z-directions.
Description of the SSF PMAD DC testbed control system data acquisition function
NASA Technical Reports Server (NTRS)
Baez, Anastacio N.; Mackin, Michael; Wright, Theodore
1992-01-01
The NASA LeRC in Cleveland, Ohio has completed the development and integration of a Power Management and Distribution (PMAD) DC Testbed. This testbed is a reduced scale representation of the end-to-end, sources-to-loads, Space Station Freedom Electrical Power System (SSF EPS). This unique facility is being used to demonstrate DC power generation and distribution, power management and control, and system operation techniques considered to be prime candidates for the Space Station Freedom. A key capability of the testbed is its ability to be configured to address system level issues in support of critical SSF program design milestones. Electrical power system control and operation issues like source control, source regulation, system fault protection, end-to-end system stability, health monitoring, resource allocation, and resource management are being evaluated in the testbed. The SSF EPS control functional allocation between on-board computers and ground-based systems is evolving. Initially, ground-based systems will perform the bulk of power system control and operation. The EPS control system is required to continuously monitor and determine the current state of the power system. The DC Testbed Control System consists of standard controllers arranged in a hierarchical and distributed architecture. These controllers provide all the monitoring and control functions for the DC Testbed Electrical Power System. Higher level controllers include the Power Management Controller, Load Management Controller, Operator Interface System, and a network of computer systems that perform some of the SSF ground-based Control Center operations. The lower level controllers include Main Bus Switch Controllers and Photovoltaic Controllers. Power system status information is periodically provided to the higher level controllers to perform system control and operation. The data acquisition function of the control system is distributed among the various levels of the hierarchy. Data requirements are dictated by the control system algorithms being implemented at each level. A functional description of the various levels of the testbed control system architecture, the data acquisition function, and the status of its implementation is presented.
Potential corridors and barriers for plague spread in Central Asia.
Wilschut, Liesbeth I; Addink, Elisabeth A; Heesterbeek, Hans; Heier, Lise; Laudisoit, Anne; Begon, Mike; Davis, Stephen; Dubyanskiy, Vladimir M; Burdelov, Leonid A; de Jong, Steven M
2013-10-31
Plague (Yersinia pestis infection) is a vector-borne disease which caused millions of human deaths in the Middle Ages. The hosts of plague are mostly rodents, and the disease is spread by the fleas that feed on them. Currently, the disease still circulates amongst sylvatic rodent populations all over the world, including great gerbil (Rhombomys opimus) populations in Central Asia. Great gerbils are social desert rodents that live in family groups in burrows, which are visible on satellite images. In great gerbil populations an abundance threshold exists, above which plague can spread causing epizootics. The spatial distribution of the host species is thought to influence the plague dynamics, such as the direction of plague spread, however no detailed analysis exists on the possible functional or structural corridors and barriers that are present in this population and landscape. This study aims to fill that gap. Three 20 by 20 km areas with known great gerbil burrow distributions were used to analyse the spatial distribution of the burrows. Object-based image analysis was used to map the landscape at several scales, and was linked to the burrow maps. A novel object-based method was developed - the mean neighbour absolute burrow density difference (MNABDD) - to identify the optimal scale and evaluate the efficacy of using landscape objects as opposed to square cells. Multiple regression using raster maps was used to identify the landscape-ecological variables that explain burrow density best. Functional corridors and barriers were mapped using burrow density thresholds. Cumulative resistance of the burrow distribution to potential disease spread was evaluated using cost distance analysis. A 46-year plague surveillance dataset was used to evaluate whether plague spread was radially symmetric. The burrow distribution was found to be non-random and negatively correlated with Greenness, especially in the floodplain areas. Corridors and barriers showed a mostly NW-SE alignment, suggesting easier spreading along this axis. This was confirmed by the analysis of the plague data. Plague spread had a predominantly NW-SE direction, which is likely due to the NW-SE alignment of corridors and barriers in the burrow distribution and the landscape. This finding may improve predictions of plague in the future and emphasizes the importance of including landscape analysis in wildlife disease studies.
Potential energy distribution function and its application to the problem of evaporation
NASA Astrophysics Data System (ADS)
Gerasimov, D. N.; Yurin, E. I.
2017-10-01
The distribution function over potential energy in a strongly correlated system can be calculated analytically. In an equilibrium system (for instance, in the bulk of a liquid) this distribution function depends only on the temperature and the mean potential energy, which can be found from the specific heat of vaporization. At the surface of the liquid the distribution function differs significantly, but its shape still satisfies an analytical relation. The distribution function over potential energy near the evaporation surface can be used instead of the work function of an atom of the liquid.
Unifying distribution functions: some lesser known distributions.
Moya-Cessa, J R; Moya-Cessa, H; Berriel-Valdos, L R; Aguilar-Loreto, O; Barberis-Blostein, P
2008-08-01
We show that there is a way to unify distribution functions that simultaneously describe a classical signal in space and (spatial) frequency, and position and momentum for a quantum system. Probably the best known of these is the Wigner distribution function. We show how to unify functions of the Cohen class, Rihaczek's complex energy function, and the Husimi and Glauber-Sudarshan distribution functions. We do this by showing how they may be obtained from ordered forms of creation and annihilation operators and by obtaining them in terms of expectation values in different eigenbases.
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the variation of the length of the rotational semi-axis is considered.
Xu, Hongmei; Zhou, Wangda; Zhou, Diansong; Li, Jianguo; Al-Huniti, Nidal
2017-03-01
Aztreonam is a monocyclic β-lactam antibiotic often used to treat infections caused by Enterobacteriaceae or Pseudomonas aeruginosa. Despite the long history of clinical use, population pharmacokinetic modeling of aztreonam in renally impaired patients is not yet available. The aims of this study were to assess the impact of renal impairment on aztreonam exposure and to evaluate dosing regimens for patients with renal impairment. A population model describing aztreonam pharmacokinetics following intravenous administration was developed using plasma concentrations from 42 healthy volunteers and renally impaired patients from 2 clinical studies. The final pharmacokinetic model was used to predict aztreonam plasma concentrations and evaluate the probability of pharmacodynamic target attainment (PTA) in patients with different levels of renal function. A 2-compartment model with first-order elimination adequately described aztreonam pharmacokinetics. The population mean estimates of aztreonam clearance, intercompartmental clearance, volume of distribution of the central compartment, and volume of distribution of the peripheral compartment were 4.93 L/h, 9.26 L/h, 7.43 L, and 6.44 L, respectively. Creatinine clearance and body weight were the most significant variables to explain patient variability in aztreonam clearance and volume of distribution, respectively. Simulations using the final pharmacokinetic model resulted in a clinical susceptibility break point of 4 and 8 mg/L, respectively, based on the clinical use of 1- and 2-g loading doses with the same or reduced maintenance dose every 8 hours for various renal deficiency patients. The population pharmacokinetic modeling and PTA estimation support adequate PTAs (>90% PTA) from the aztreonam label for dose adjustment of aztreonam in patients with moderate and severe renal impairment. © 2016, The American College of Clinical Pharmacology.
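A sketch of the reported structural model, a two-compartment IV model with first-order elimination, using the published typical values (CL 4.93 L/h, Q 9.26 L/h, V1 7.43 L, V2 6.44 L); the creatinine-clearance scaling of CL shown is illustrative in form only, not the fitted covariate model.

```python
import numpy as np
from scipy.integrate import solve_ivp

CL, Q, V1, V2 = 4.93, 9.26, 7.43, 6.44   # typical values (L/h, L/h, L, L)

def two_cpt(t, A, cl):
    A1, A2 = A                            # amounts in central/peripheral (g)
    dA1 = -(cl / V1) * A1 - (Q / V1) * A1 + (Q / V2) * A2
    dA2 = (Q / V1) * A1 - (Q / V2) * A2
    return [dA1, dA2]

def simulate(dose_g, cl, t_end=8.0):
    sol = solve_ivp(two_cpt, (0.0, t_end), [dose_g, 0.0], args=(cl,),
                    dense_output=True)
    t = np.linspace(0.0, t_end, 100)
    return t, sol.sol(t)[0] / V1 * 1000.0  # central concentration (mg/L)

# Hypothetical covariate form: CL scaled by creatinine clearance (mL/min)
cl_impaired = CL * (30.0 / 100.0) ** 0.75
t, conc = simulate(2.0, cl_impaired)       # 2-g bolus, impaired renal function
```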
Light-driven liquid microlenses
NASA Astrophysics Data System (ADS)
Angelini, A.; Pirani, F.; Frascella, F.; Ricciardi, S.; Descrovi, E.
2017-02-01
We propose a liquid polymeric compound based on photo-responsive azo-polymers to be used as a light-activated optical element with tunable and reversible functionalities. The interaction of a laser beam locally modifies the liquid density, producing a refractive index gradient. The laser-induced refractive index profiles are observed along the optical axis of the microscope to evaluate the total induced phase shift, and along the orthogonal direction to provide the axial distribution of the refractive index variation. The focusing and imaging properties of the liquid lenses as functions of the light intensity are illustrated.
Third law of thermodynamics in the presence of a heat flux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camacho, J.
1995-01-01
Following a maximum entropy formalism, we study a one-dimensional crystal under a heat flux. We obtain the phonon distribution function and evaluate the nonequilibrium temperature, the specific heat, and the entropy as functions of the internal energy and the heat flux, in both the quantum and the classical limits. Some analogies between the behavior of equilibrium systems at low absolute temperature and nonequilibrium steady states under high values of the heat flux are shown, which point to a possible generalization of the third law in nonequilibrium situations.
Evaluating statistical cloud schemes: What can we gain from ground-based remote sensing?
NASA Astrophysics Data System (ADS)
Grützun, V.; Quaas, J.; Morcrette, C. J.; Ament, F.
2013-09-01
Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based remote sensing such as lidar, microwave, and radar to evaluate prognostic distribution moments using the "perfect model approach." This means that we employ a high-resolution weather model as virtual reality and retrieve full three-dimensional atmospheric quantities and virtual ground-based observations. We then use statistics from the virtual observation to validate the modeled 3-D statistics. Since the data are entirely consistent, any discrepancy occurring is due to the method. Focusing on total water mixing ratio, we find that the mean ratio can be evaluated decently but that it strongly depends on the meteorological conditions as to whether the variance and skewness are reliable. Using some simple schematic description of different synoptic conditions, we show how statistics obtained from point or line measurements can be poor at representing the full three-dimensional distribution of water in the atmosphere. We argue that a careful analysis of measurement data and detailed knowledge of the meteorological situation is necessary to judge whether we can use the data for an evaluation of higher moments of the humidity distribution used by a statistical cloud scheme.
NASA Astrophysics Data System (ADS)
Wang, L.; Kerr, L. A.; Bridger, E.
2016-02-01
Changes in species distributions have been widely associated with climate change. Understanding how ocean temperatures influence species distributions is critical for elucidating the role of climate in ecosystem change as well as for forecasting how species may be distributed in the future. As such, species distribution modeling (SDM) is increasingly useful in marine ecosystems research, as it can enable estimation of the likelihood of encountering marine fish in space or time as a function of a set of environmental and ecosystem conditions. Many traditional SDM approaches are applied to species data collected through standardized methods that include both presence and absence records, but are incapable of using presence-only data, such as those collected from fisheries or through citizen science programs. Maximum entropy (MaxEnt) models provide promising tools as they can predict species distributions from incomplete information (presence-only data). We developed a MaxEnt framework to relate the occurrence records of several marine fish species (e.g. Atlantic herring, Atlantic mackerel, and butterfish) to environmental conditions. Environmental variables derived from remote sensing, such as monthly average sea surface temperature (SST), are matched with fish species data, and model results indicate the relative occurrence rate of the species as a function of the environmental variables. The results can be used to provide hindcasts of where species might have been in the past in relation to historical environmental conditions, nowcasts in relation to current conditions, and forecasts of future species distributions. In this presentation, we will assess the relative influence of several environmental factors on marine fish species distributions, and evaluate the effects of data coverage on these presence-only models. We will also discuss how the information from species distribution forecasts can support climate adaptation planning in marine fisheries.
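A presence-only sketch under the common observation that MaxEnt-style SDMs are closely related to logistic regression of presence records against randomly drawn background points; the SST values and occurrence records are synthetic stand-ins, and this is not the authors' MaxEnt software.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
sst_presence = rng.normal(12.0, 2.0, size=300)       # SST at occurrence records
sst_background = rng.uniform(0.0, 25.0, size=3000)   # SST at background points

X = np.r_[sst_presence, sst_background].reshape(-1, 1)
y = np.r_[np.ones(300), np.zeros(3000)]
features = np.hstack([X, X**2])                      # quadratic feature class

model = LogisticRegression(max_iter=1000).fit(features, y)

# Relative occurrence rate across an SST gradient (e.g. a warming scenario)
grid = np.linspace(0.0, 25.0, 50).reshape(-1, 1)
rel_rate = model.predict_proba(np.hstack([grid, grid**2]))[:, 1]
```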
Barnes, M P; Ebert, M A
2008-03-01
The concept of electron pencil-beam dose distributions is central to pencil-beam algorithms used in electron beam radiotherapy treatment planning. The Hogstrom algorithm, which is a common algorithm for electron treatment planning, models large electron field dose distributions by the superposition of a series of pencil beam dose distributions. This means that the accurate characterisation of an electron pencil beam is essential for the accuracy of the dose algorithm. The aim of this study was to evaluate a measurement based approach for obtaining electron pencil-beam dose distributions. The primary incentive for the study was the accurate calculation of dose distributions for narrow fields, as traditional electron algorithms are generally inaccurate for such geometries. Kodak X-Omat radiographic film was used in a solid water phantom to measure the dose distribution of circular 12 MeV beams from a Varian 21EX linear accelerator. Measurements were made for beams of diameter 1.5, 2, 4, 8, 16 and 32 mm. A blocked-field technique was used to subtract photon contamination in the beam. The "error function" derived from Fermi-Eyges Multiple Coulomb Scattering (MCS) theory for corresponding square fields was used to fit the resulting dose distributions so that extrapolation down to a pencil beam distribution could be made. The Monte Carlo codes BEAM and EGSnrc were used to simulate the experimental arrangement. The 8 mm beam dose distribution was also measured with TLD-100 microcubes. Agreement between film, TLD and Monte Carlo simulation results was found to be consistent with the spatial resolution used. The study has shown that it is possible to extrapolate narrow electron beam dose distributions down to a pencil beam dose distribution using the error function. However, due to experimental uncertainties and measurement difficulties, Monte Carlo is recommended as the method of choice for characterising electron pencil-beam dose distributions.
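A sketch of the "error function" form used for the fits, assuming a Gaussian pencil-beam kernel of lateral spread sigma at the depth of interest: the lateral profile of a square field of half-width a is proportional to erf((a - x)/(sqrt(2) sigma)) + erf((a + x)/(sqrt(2) sigma)); the sigma here is a made-up value, not a fitted 12 MeV result.

```python
import numpy as np
from scipy.special import erf

def square_field_profile(x, a, sigma):
    s = np.sqrt(2.0) * sigma
    return 0.5 * (erf((a - x) / s) + erf((a + x) / s))

x = np.linspace(-30.0, 30.0, 601)          # off-axis distance (mm)
profiles = {d: square_field_profile(x, a=d / 2.0, sigma=3.0)
            for d in (1.5, 2.0, 4.0, 8.0, 16.0, 32.0)}
# As a -> 0 the profile tends to the Gaussian pencil-beam kernel itself,
# which is what the extrapolation in the study recovers.
```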
NASA Astrophysics Data System (ADS)
Khanpour, Hamzeh; Mirjalili, Abolfazl; Tehrani, S. Atashbar
2017-03-01
An analytical solution based on the Laplace transformation technique for the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations is presented at next-to-leading order accuracy in perturbative QCD. This technique is also applied to extract the analytical solution for the proton structure function, F2^p(x, Q^2), in Laplace s-space. We present the results for the separate parton distributions of all parton species, including valence quark densities, the antiquark and strange sea parton distribution functions (PDFs), and the gluon distribution. We successfully compare the obtained parton distribution functions and the proton structure function with the results from the GJR08 [Gluck, Jimenez-Delgado, and Reya, Eur. Phys. J. C 53, 355 (2008), 10.1140/epjc/s10052-007-0462-9] and KKT12 [Khanpour, Khorramian, and Tehrani, J. Phys. G 40, 045002 (2013), 10.1088/0954-3899/40/4/045002] parametrization models as well as the x-space results using
Proteome analysis of the almond kernel (Prunus dulcis).
Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu
2016-08-01
Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in nutrition and in human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, and most were proteins that were experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are mainly involved in biological processes including metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); the main molecular functions of almond kernel proteins are catalytic activity (48.0%), binding (45.4%) and structural molecule activity (11.9%); and the proteins are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
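A numerical check of the closed-form continuous case, assuming the maximizer under an L_p-norm constraint is the generalized Gaussian f(x) proportional to exp(-|x/a|^p): its entropy is H = 1/p + ln(2 a Gamma(1/p)/p), and since the L_p norm is a (1/p)^(1/p), H is linear in the logarithm of the norm with slope one.

```python
import numpy as np
from scipy.stats import gennorm
from scipy.special import gammaln

def entropy_closed_form(p, a):
    # H = 1/p + ln(2 a Gamma(1/p) / p)
    return 1.0 / p + np.log(2.0 * a / p) + gammaln(1.0 / p)

for p in (1.0, 2.0, 4.0):
    for a in (0.5, 1.0, 3.0):
        H_scipy = gennorm(p, scale=a).entropy()
        assert np.isclose(H_scipy, entropy_closed_form(p, a), atol=1e-5)
        lp_norm = a * (1.0 / p) ** (1.0 / p)       # (E|X|^p)^(1/p)
        # Slope-one line: H - ln(lp_norm) depends on p only, not on a.
        offset = entropy_closed_form(p, a) - np.log(lp_norm)
```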
Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Knox, Lenora A.
The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges, which include defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how to best integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture in which the functionality to complete a mission is disseminated across multiple UAVs (distributed) as opposed to being contained in a single UAV (monolithic). The case-study-based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient, based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.
Distributed Data Service for Data Management in Internet of Things Middleware.
Cruz Huacarpuma, Ruben; de Sousa Junior, Rafael Timoteo; de Holanda, Maristela Terto; de Oliveira Albuquerque, Robson; García Villalba, Luis Javier; Kim, Tai-Hoon
2017-04-27
The development of the Internet of Things (IoT) is closely related to a considerable increase in the number and variety of devices connected to the Internet. Sensors have become a regular component of our environment, as well as smart phones and other devices that continuously collect data about our lives even without our intervention. With such connected devices, a broad range of applications has been developed and deployed, including those dealing with massive volumes of data. In this paper, we introduce a Distributed Data Service (DDS) to collect and process data for IoT environments. One central goal of this DDS is to enable multiple and distinct IoT middleware systems to share common data services from a loosely-coupled provider. In this context, we propose a new specification of functionalities for a DDS and the conception of the corresponding techniques for collecting, filtering and storing data conveniently and efficiently in this environment. Another contribution is a data aggregation component that is proposed to support efficient real-time data querying. To validate its data collecting and querying functionalities and performance, the proposed DDS is evaluated in two case studies regarding a simulated smart home system, the first case devoted to evaluating data collection and aggregation when the DDS is interacting with the UIoT middleware, and the second aimed at comparing the DDS data collection with this same functionality implemented within the Kaa middleware.
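The aggregation component is described only at the level of collecting, filtering, and pre-summarizing sensor data for real-time queries. A minimal sketch of that idea follows, using fixed time-window aggregation; it is a toy stand-in for the DDS component, and all names are illustrative, not from the paper:

```python
# Sketch: time-window aggregation to support fast queries over sensor data.
# A toy stand-in for the DDS aggregation idea; all names are illustrative.
from collections import defaultdict
from statistics import mean

def aggregate(readings, window_s=60):
    """readings: iterable of (timestamp_s, sensor_id, value).
    Returns {(sensor_id, window_start): (count, mean)} summaries."""
    buckets = defaultdict(list)
    for ts, sensor, value in readings:
        buckets[(sensor, int(ts // window_s) * window_s)].append(value)
    return {k: (len(v), mean(v)) for k, v in buckets.items()}

summaries = aggregate([(3, "t1", 21.0), (42, "t1", 22.0), (75, "t1", 23.5)])
print(summaries)  # {('t1', 0): (2, 21.5), ('t1', 60): (1, 23.5)}
```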
Shang, Jianyuan; Geva, Eitan
2007-04-26
The quenching rate of a fluorophore attached to a macromolecule can be rather sensitive to its conformational state. The decay of the corresponding fluorescence lifetime autocorrelation function can therefore provide unique information on the time scales of conformational dynamics. The conventional way of measuring the fluorescence lifetime autocorrelation function involves evaluating it from the distribution of delay times between photoexcitation and photon emission. However, the time resolution of this procedure is limited by the time window required for collecting enough photons in order to establish this distribution with sufficient signal-to-noise ratio. Yang and Xie have recently proposed an approach for improving the time resolution, which is based on the argument that the autocorrelation function of the delay time between photoexcitation and photon emission is proportional to the autocorrelation function of the square of the fluorescence lifetime [Yang, H.; Xie, X. S. J. Chem. Phys. 2002, 117, 10965]. In this paper, we show that the delay-time autocorrelation function is equal to the autocorrelation function of the square of the fluorescence lifetime divided by the autocorrelation function of the fluorescence lifetime. We examine the conditions under which the delay-time autocorrelation function is approximately proportional to the autocorrelation function of the square of the fluorescence lifetime. We also investigate the correlation between the decay of the delay-time autocorrelation function and the time scales of conformational dynamics. The results are demonstrated via applications to a two-state model and an off-lattice model of a polypeptide.
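In symbols, with τ(t) the instantaneous fluorescence lifetime and angle brackets denoting an equilibrium average, the abstract's central relation reads (rendering ours):

```latex
% Delay-time autocorrelation vs. lifetime autocorrelations (rendering ours):
C_{\mathrm{delay}}(t) \;=\;
\frac{\langle \tau^{2}(0)\,\tau^{2}(t)\rangle}
     {\langle \tau(0)\,\tau(t)\rangle},
\qquad\text{vs. the Yang--Xie approximation}\quad
C_{\mathrm{delay}}(t) \;\propto\; \langle \tau^{2}(0)\,\tau^{2}(t)\rangle .
```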
Jutkowitz, Eric; Kane, Robert L; Dowd, Bryan; Gaugler, Joseph E; MacLehose, Richard F; Kuntz, Karen M
2017-06-01
Clinical features of dementia (cognition, function, and behavioral/psychological symptoms [BPSD]) may differentially affect Medicare expenditures/health care utilization. We linked cross-sectional data from the Aging, Demographics, and Memory Study to Medicare data to evaluate the association between dementia clinical features among those with dementia and Medicare expenditures/health care utilization (n = 234). Cognition was evaluated using the Mini-Mental State Examination (MMSE). Function was evaluated as the number of functional limitations (0-10). BPSD was evaluated as the number of symptoms (0-12). Expenditures were estimated with a generalized linear model (log-link and gamma distribution). Number of hospitalizations, institutional outpatient visits, and physician visits were estimated with a negative binomial regression. Medicare covered skilled nursing days were estimated with a zero-inflated negative binomial model. Cognition and BPSD were not associated with expenditures. Among individuals with less than seven functional limitations, one additional limitation was associated with $123 (95% confidence interval: $19-$227) additional monthly Medicare spending. Better cognition and poorer function were associated with more hospitalizations among those with an MMSE less than three and less than six functional limitations, respectively. BPSD had no effect on hospitalizations. Poorer function and fewer BPSD were associated with more skilled nursing among individuals with one to seven functional limitations and more than four symptoms, respectively. Cognition had no effect on skilled nursing care. No clinical feature was associated with institutional outpatient care. Of individuals with an MMSE less than 15, poorer cognition was associated with fewer physician visits. Among those with more than six functional limitations, poorer function was associated with fewer physician visits. Poorer function, not cognition or BPSD, was associated with higher Medicare expenditures. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
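The regression pipeline described (log-link gamma GLM for expenditures, negative binomial models for counts, zero-inflated negative binomial for skilled nursing days) maps directly onto standard libraries. A minimal sketch with synthetic data, assuming statsmodels; the variables are illustrative, not the study's data:

```python
# Sketch of the model family used: gamma GLM (log link) for expenditures,
# negative binomial for visit counts. Synthetic data; names illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 234
X = sm.add_constant(np.column_stack([
    rng.integers(0, 31, n),   # MMSE (cognition)
    rng.integers(0, 11, n),   # functional limitations (0-10)
    rng.integers(0, 13, n),   # BPSD symptom count (0-12)
]))
spend = rng.gamma(shape=2.0, scale=600.0, size=n)   # monthly Medicare $
visits = rng.poisson(2.0, size=n)                   # physician visits

gamma_glm = sm.GLM(spend, X,
                   family=sm.families.Gamma(link=sm.families.links.Log())).fit()
nb_glm = sm.GLM(visits, X, family=sm.families.NegativeBinomial()).fit()
print(gamma_glm.params, nb_glm.params, sep="\n")
# A zero-inflated NB (for skilled nursing days) is available as
# statsmodels.discrete.count_model.ZeroInflatedNegativeBinomialP.
```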
Interaction between high harmonic fast waves and fast ions in NSTX/NSTX-U plasmas
NASA Astrophysics Data System (ADS)
Bertelli, N.; Valeo, E. J.; Gorelenkova, M.; Green, D. L.; RF SciDAC Team
2016-10-01
Fast wave (FW) heating in the ion cyclotron range of frequencies (ICRF) has been successfully used to sustain and control fusion plasma performance, and it will likely play an important role in the ITER experiment. As demonstrated in the NSTX and DIII-D experiments, the interactions between fast waves and fast ions can be strong enough to significantly modify the fast-ion population from neutral beam injection. In fact, it has recently been found in NSTX that FWs can modify and, under certain conditions, even suppress energetic-particle-driven instabilities such as toroidal Alfvén eigenmodes, global Alfvén eigenmodes, and fishbones. This paper examines such interactions in NSTX/NSTX-U plasmas by using the recent extension of the RF full-wave code TORIC to include non-Maxwellian ion distribution functions. Particular attention is given to the evolution of the fast-ion distribution function with and without RF. Tests of the RF kick operator implemented in the Monte Carlo particle code NUBEAM are also discussed, as a step towards a self-consistent evaluation of the RF wave field and the ion distribution functions in the TRANSP code. Work supported by US DOE Contract DE-AC02-09CH11466.
NASA Astrophysics Data System (ADS)
Ikeguchi, Mitsunori; Doi, Junta
1995-09-01
The Ornstein-Zernike integral equation (OZ equation) has been used to evaluate the distribution function of solvents around solutes, but its numerical solution is difficult for molecules with a complicated shape. This paper proposes a numerical method to solve the OZ equation directly by introducing a 3D lattice. The method employs none of the approximations used in the reference interaction site model (RISM) equation. The method enables one to obtain the spatial distribution of spherical solvents around solutes with an arbitrary shape. Numerical accuracy is sufficient when the grid spacing is less than 0.5 Å for solvent water. The spatial water distribution around a propane molecule is demonstrated as an example of a nonspherical hydrophobic molecule using iso-value surfaces. The water model proposed by Pratt and Chandler is used. The distribution agrees with the molecular dynamics simulation. The distribution increases offshore of molecular concavities. The spatial distribution of water around 5α-cholest-2-ene (C27H46) is visualized using computer graphics techniques and a similar trend is observed.
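For reference, the equation being solved on the 3D lattice is the standard OZ relation between the total correlation function h and the direct correlation function c at solvent density ρ (generic form, prior to the paper's discretization):

```latex
% Ornstein--Zernike relation (standard form) linking total and direct
% correlation functions h and c for a solvent of number density \rho:
h(\mathbf{r}_{12}) \;=\; c(\mathbf{r}_{12})
 \;+\; \rho \int c(\mathbf{r}_{13})\, h(\mathbf{r}_{32})\, d\mathbf{r}_{3}
```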
A comparative analysis of hazard models for predicting debris flows in Madison County, VA
Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.
2001-01-01
During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km²) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which considers only soil cohesion, internal friction angle, and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is the preferred model of the two for evaluating slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system was above the failure plane, and consequently root strength and tree surcharge had negligible effect on slope stability. The finding that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
Consequences of Ignoring Guessing when Estimating the Latent Density in Item Response Theory
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…
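The guessing effect at issue enters through the lower asymptote c of the three-parameter logistic IRF; ignoring guessing amounts to fitting with c = 0 (standard IRT notation, not quoted from the report):

```latex
% Three-parameter logistic item response function; c is the guessing
% (lower-asymptote) parameter, a the discrimination, b the difficulty.
P(Y=1 \mid \theta) \;=\; c \;+\; \frac{1-c}{1+\exp\!\left[-a\,(\theta-b)\right]}
```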
Transport of Gas and Solutes in Permeable Estuarine Sediments
2013-09-30
513. 2. Hermand, J. P., and IEEE. 2004. Photosynthesis of seagrasses observed in situ from acoustic measurements. Oceans Mts/IEEE Techno-Ocean...
− ...functionality is demonstrated by measuring the spatial and temporal distribution of small bubbles produced by photosynthesis in sublittoral sands.
− We...
Evaluation of ebullition caused by sedimentary photosynthesis and methanogenesis: For these experiments, photosynthetic gas bubbles released from the...
Li, Hua; Noel, Camille; Chen, Haijian; Harold Li, H.; Low, Daniel; Moore, Kevin; Klahr, Paul; Michalski, Jeff; Gay, Hiram A.; Thorstad, Wade; Mutic, Sasa
2012-01-01
Purpose: Severe artifacts in kilovoltage-CT simulation images caused by large metallic implants can significantly degrade the conspicuity and apparent CT Hounsfield number of targets and anatomic structures, jeopardize the confidence of anatomical segmentation, and introduce inaccuracies into the radiation therapy treatment planning process. This study evaluated the performance of the first commercial orthopedic metal artifact reduction function (O-MAR) for radiation therapy, and investigated its clinical applications in treatment planning. Methods: Both phantom and clinical data were used for the evaluation. The CIRS electron density phantom with known physical (and electron) density plugs and removable titanium implants was scanned on a Philips Brilliance Big Bore 16-slice CT simulator. The CT Hounsfield numbers of density plugs on both uncorrected and O-MAR corrected images were compared. Treatment planning accuracy was evaluated by comparing simulated dose distributions computed using the true density images, uncorrected images, and O-MAR corrected images. Ten CT image sets of patients with large hip implants were processed with the O-MAR function and evaluated by two radiation oncologists using a five-point score for overall image quality, anatomical conspicuity, and CT Hounsfield number accuracy. By utilizing the same structure contours delineated from the O-MAR corrected images, clinical IMRT treatment plans for five patients were computed on the uncorrected and O-MAR corrected images, respectively, and compared. Results: Results of the phantom study indicated that CT Hounsfield number accuracy and noise were improved on the O-MAR corrected images, especially for images with bilateral metal implants. The γ pass rates of the simulated dose distributions computed on the uncorrected and O-MAR corrected images referenced to those of the true densities were higher than 99.9% (even when using a 1% dose-difference and 3 mm distance-to-agreement criterion), suggesting that dose distributions were clinically identical. In all patient cases, radiation oncologists rated O-MAR corrected images as higher quality. Formerly obscured critical structures were able to be visualized. The overall image quality and the conspicuity in critical organs were significantly improved compared with the uncorrected images: overall quality score (1.35 vs 3.25, P = 0.0022); bladder (2.15 vs 3.7, P = 0.0023); prostate and seminal vesicles/vagina (1.3 vs 3.275, P = 0.0020); rectum (2.8 vs 3.9, P = 0.0021). The noise levels of the selected ROIs were reduced from 93.7 to 38.2 HU. In most cases (8/10), the average CT Hounsfield numbers of the prostate/vagina on the O-MAR corrected images were closer to the referenced value (41.2 HU, an average measured from patients without metal implants) than those on the uncorrected images. High γ pass rates of the five IMRT dose distribution pairs indicated that the dose distributions were not significantly affected by the CT image improvements. Conclusions: Overall, this study indicated that the O-MAR function can remarkably reduce metal artifacts and improve both CT Hounsfield number accuracy and target and critical structure visualization. 
Although there was no significant impact of the O-MAR algorithm on the calculated dose distributions, we suggest that O-MAR corrected images are more suitable for the entire treatment planning process by offering better anatomical structure visualization, improving radiation oncologists’ confidence in target delineation, and by avoiding subjective density overrides of artifact regions on uncorrected images. PMID:23231300
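The γ pass rates quoted above come from the standard gamma-index comparison of two dose distributions, combining a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1-D sketch of that evaluation, assuming the global dose-difference convention; this is the generic algorithm, not the study's software:

```python
# Sketch: 1-D gamma index (global dose difference + distance-to-agreement).
# Generic algorithm for illustration, not the study's implementation.
import numpy as np

def gamma_pass_rate(x, d_ref, d_eval, dd=0.01, dta=3.0):
    """dd: dose-difference criterion (fraction of max reference dose);
    dta: distance-to-agreement criterion (same units as x)."""
    d_norm = dd * d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        # gamma at each reference point: min over all evaluated points
        g2 = ((x - xi) / dta) ** 2 + ((d_eval - di) / d_norm) ** 2
        gammas[i] = np.sqrt(g2.min())
    return np.mean(gammas <= 1.0)

x = np.linspace(0, 100, 501)              # position (mm)
d_ref = np.exp(-((x - 50) / 15) ** 2)     # toy reference dose profile
d_eval = d_ref * 1.005                    # nearly identical plan
print(f"pass rate: {gamma_pass_rate(x, d_ref, d_eval):.3f}")
```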
Leaf position optimization for step-and-shoot IMRT.
De Gersem, W; Claus, F; De Wagter, C; Van Duyse, B; De Neve, W
2001-12-01
To describe the theoretical basis, the algorithm, and implementation of a tool that optimizes segment shapes and weights for step-and-shoot intensity-modulated radiation therapy delivered by multileaf collimators. The tool, called SOWAT (Segment Outline and Weight Adapting Tool) is applied to a set of segments, segment weights, and corresponding dose distribution, computed by an external dose computation engine. SOWAT evaluates the effects of changing the position of each collimating leaf of each segment on an objective function, as follows. Changing a leaf position causes a change in the segment-specific dose matrix, which is calculated by a fast dose computation algorithm. A weighted sum of all segment-specific dose matrices provides the dose distribution and allows computation of the value of the objective function. Only leaf position changes that comply with the multileaf collimator constraints are evaluated. Leaf position changes that tend to decrease the value of the objective function are retained. After several possible positions have been evaluated for all collimating leaves of all segments, an external dose engine recomputes the dose distribution, based on the adapted leaf positions and weights. The plan is evaluated. If the plan is accepted, a segment sequencer is used to make the prescription files for the treatment machine. Otherwise, the user can restart SOWAT using the new set of segments, segment weights, and corresponding dose distribution. The implementation was illustrated using two example cases. The first example is a T1N0M0 supraglottic cancer case that was distributed as a multicenter planning exercise by investigators from Rotterdam, The Netherlands. The exercise involved a two-phase plan. Phase 1 involved the delivery of 46 Gy to a concave-shaped planning target volume (PTV) consisting of the primary tumor volume and the elective lymph nodal regions II-IV on both sides of the neck. Phase 2 involved a boost of 24 Gy to the primary tumor region only. SOWAT was applied to the Phase 1 plan. Parotid sparing was a planning goal. The second implementation example is an ethmoid sinus cancer case, planned with the intent of bilateral visus sparing. The median PTV prescription dose was 70 Gy with a maximum dose constraint to the optic pathway structures of 60 Gy. The initial set of segments, segment weights, and corresponding dose distribution were obtained, respectively, by an anatomy-based segmentation tool, a segment weight optimization tool, and a differential scatter-air ratio dose computation algorithm as external dose engine. For the supraglottic case, this resulted in a plan that proved to be comparable to the plans obtained at the other institutes by forward or inverse planning techniques. After using SOWAT, the minimum PTV dose and PTV dose homogeneity increased; the maximum dose to the spinal cord decreased from 38 Gy to 32 Gy. The left parotid mean dose decreased from 22 Gy to 19 Gy and the right parotid mean dose from 20 to 18 Gy. For the ethmoid sinus case, the target homogeneity increased by leaf position optimization, together with a better sparing of the optical tracts. By using SOWAT, the plans improved with respect to all plan evaluation end points. Compliance with the multileaf collimator constraints is guaranteed. The treatment delivery time remains almost unchanged, because no additional segments are created.
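The SOWAT loop is, in essence, a greedy local search: perturb one leaf position, update the corresponding segment dose matrix, and keep the change only if the objective function decreases. A toy sketch of that control flow with a quadratic dose objective; the dose update and the omitted MLC constraint check are stand-ins, not the published algorithm:

```python
# Toy sketch of a SOWAT-style loop: greedy single-leaf perturbations
# accepted only when the objective decreases. Dose model is a stand-in.
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_seg = 200, 5
dose_mats = rng.random((n_seg, n_vox))    # segment dose per unit weight
weights = np.ones(n_seg)
prescription = 2.0 * np.ones(n_vox)

def objective(dose):                      # quadratic deviation objective
    return np.sum((dose - prescription) ** 2)

def leaf_shift(mat):                      # stand-in for the dose change
    return mat + 0.01 * rng.standard_normal(mat.shape)  # from one leaf move

best = objective(weights @ dose_mats)
for _ in range(500):
    s = rng.integers(n_seg)
    trial = dose_mats.copy()
    trial[s] = leaf_shift(dose_mats[s])   # MLC constraint check omitted
    val = objective(weights @ trial)
    if val < best:                        # retain improving moves only
        dose_mats, best = trial, val
print(f"final objective: {best:.2f}")
```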
NASA Astrophysics Data System (ADS)
Williamson, Nathan H.; Röding, Magnus; Galvosas, Petrik; Miklavcic, Stanley J.; Nydén, Magnus
2016-08-01
We present the pseudo 2-D relaxation model (P2DRM), a method to estimate multidimensional probability distributions of material parameters from independent 1-D measurements. We illustrate its use on 1-D T1 and T2 relaxation measurements of saturated rock and evaluate it on both simulated and experimental T1-T2 correlation measurement data sets. Results were in excellent agreement with the actual, known 2-D distribution in the case of the simulated data set. In both the simulated and experimental case, the functional relationships between T1 and T2 were in good agreement with the T1-T2 correlation maps from the 2-D inverse Laplace transform of the full 2-D data sets. When a 1-D CPMG experiment is combined with a rapid T1 measurement, the P2DRM provides a double-shot method for obtaining a T1-T2 relationship, with significantly decreased experimental time in comparison to the full T1-T2 correlation measurement.
Bounds on the conductivity of a suspension of random impenetrable spheres
NASA Astrophysics Data System (ADS)
Beasley, J. D.; Torquato, S.
1986-11-01
We compare the general Beran bounds on the effective electrical conductivity of a two-phase composite to the bounds derived by Torquato for the specific model of spheres distributed throughout a matrix phase. For the case of impenetrable spheres, these bounds are shown to be identical and to depend on the microstructure through the sphere volume fraction φ2 and a three-point parameter ζ2, which is an integral over a three-point correlation function. We evaluate ζ2 exactly through third order in φ2 for distributions of impenetrable spheres. This expansion is compared to the analogous results of Felderhof and of Torquato and Lado, all of whom employed the superposition approximation for the three-particle distribution function involved in ζ2. The results indicate that the exact ζ2 will be greater than the value calculated under the superposition approximation. For reasons of mathematical analogy, the results obtained here apply as well to the determination of the thermal conductivity, dielectric constant, and magnetic permeability of composite media and the diffusion coefficient of porous media.
Passalacqua, Thais Gaban; Dutra, Luiz Antonio; de Almeida, Letícia; Velásquez, Angela Maria Arenas; Torres, Fabio Aurelio Esteves; Yamasaki, Paulo Renato; dos Santos, Mariana Bastos; Regasini, Luis Octavio; Michels, Paul A M; Bolzani, Vanderlan da Silva; Graminha, Marcia A S
2015-08-15
Chalcones form a class of compounds that belong to the flavonoid family and are widely distributed in plants. Their simple structure and the ease of preparation make chalcones attractive scaffolds for the synthesis of a large number of derivatives enabling the evaluation of the effects of different functional groups on biological activities. In this Letter, we report the successful synthesis of a series of novel prenylated chalcones via Claisen-Schmidt condensation and the evaluation of their effect on the viability of the Trypanosomatidae parasites Leishmania amazonensis, Leishmania infantum and Trypanosoma cruzi. Copyright © 2015 Elsevier Ltd. All rights reserved.
Li, Q; He, Y L; Wang, Y; Tao, W Q
2007-11-01
A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.
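In the usual BGK form, the coupled double-distribution-function update reads schematically as follows (generic notation, not the paper's specific equilibria, which carry the extra specific-heat-ratio constant):

```latex
% Generic BGK evolution of a coupled double-distribution-function scheme:
% f_i carries mass/momentum, h_i carries energy.
f_i(\mathbf{x}+\mathbf{e}_i\delta t,\, t+\delta t) - f_i(\mathbf{x},t)
  = -\tfrac{1}{\tau_f}\!\left[f_i(\mathbf{x},t) - f_i^{\mathrm{eq}}(\mathbf{x},t)\right],
\qquad
h_i(\mathbf{x}+\mathbf{e}_i\delta t,\, t+\delta t) - h_i(\mathbf{x},t)
  = -\tfrac{1}{\tau_h}\!\left[h_i(\mathbf{x},t) - h_i^{\mathrm{eq}}(\mathbf{x},t)\right]
```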
NASA Astrophysics Data System (ADS)
Michel, N.; Stoitsov, M. V.
2008-04-01
The fast computation of the Gauss hypergeometric function 2F1 with all its parameters complex is a difficult task. Although the 2F1 function verifies numerous analytical properties involving power series expansions whose implementation is apparently immediate, their use is thwarted by instabilities induced by cancellations between very large terms. Furthermore, small areas of the complex plane, in the vicinity of z = exp(±iπ/3), are inaccessible using 2F1 power series linear transformations. In order to solve these problems, a generalization of R.C. Forrey's transformation theory has been developed. The latter has been successful in treating the 2F1 function with real parameters. As in real-case transformation theory, the large canceling terms occurring in 2F1 analytical formulas are rigorously dealt with, but by way of a new method, directly applicable to the complex plane. Taylor series expansions are employed to enter complex areas outside the domain of validity of power series analytical formulas. The proposed algorithm, however, becomes unstable in general when |a|, |b|, |c| are moderate or large. As a physical application, the calculation of the wave functions of the analytical Pöschl-Teller-Ginocchio potential involving 2F1 evaluations is considered.
Program summary
Program title: hyp_2F1, PTG_wf
Catalogue identifier: AEAE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 6839
No. of bytes in distributed program, including test data, etc.: 63 334
Distribution format: tar.gz
Programming language: C++, Fortran 90
Computer: Intel i686
Operating system: Linux, Windows
Word size: 64 bits
Classification: 4.7
Nature of problem: The Gauss hypergeometric function 2F1, with all its parameters complex, is uniquely calculated in the frame of transformation theory with power series summations, thus providing a very fast algorithm. The evaluation of the wave functions of the analytical Pöschl-Teller-Ginocchio potential is treated as a physical application.
Solution method: The Gauss hypergeometric function 2F1 verifies linear transformation formulas allowing consideration of arguments of a small modulus which can then be handled by a power series. They, however, give rise to indeterminate or numerically unstable cases when b−a and c−a−b are equal or close to integers. These are properly dealt with through analytical manipulations of the Lanczos expression providing the Gamma function. The remaining zones of the complex plane uncovered by transformation formulas are dealt with by Taylor expansions of the 2F1 function around complex points where linear transformations can be employed. The Pöschl-Teller-Ginocchio potential wave functions are calculated directly with 2F1 evaluations.
Restrictions: The algorithm provides full numerical precision in almost all cases for |a|, |b|, and |c| of the order of one or smaller, but starts to be less precise or unstable when they increase, especially through the imaginary parts of a, b, and c. While it is possible to run the code for moderate or large |a|, |b|, and |c| and obtain satisfactory results for some specified values, the code is very likely to be unstable in this regime.
Unusual features: Two different codes, one for the hypergeometric function and one for the Pöschl-Teller-Ginocchio potential wave functions, are provided in C++ and Fortran 90 versions. Running time: 20,000 2F1 function evaluations take an average of one second.
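For a quick independent cross-check of complex-parameter 2F1 values, arbitrary-precision evaluation is available off the shelf. A short sketch using mpmath (unrelated to the distributed C++/Fortran codes; the chosen parameter values are arbitrary):

```python
# Sketch: cross-checking complex-parameter 2F1 values with mpmath's
# arbitrary-precision implementation (independent of the hyp_2F1 code).
import mpmath as mp

mp.mp.dps = 30                            # 30 significant digits
a, b, c = 1.5 + 0.5j, 2.0 - 1.0j, 3.0 + 0.25j
z = mp.exp(1j * mp.pi / 3) * 0.999        # near the difficult region
print(mp.hyp2f1(a, b, c, z))
```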
Methods To Determine the Silicone Oil Layer Thickness in Sprayed-On Siliconized Syringes.
Loosli, Viviane; Germershaus, Oliver; Steinberg, Henrik; Dreher, Sascha; Grauschopf, Ulla; Funke, Stefanie
2018-01-01
The silicone lubricant layer in prefilled syringes has been investigated with regard to siliconization process performance, prefilled syringe functionality, and drug product attributes, such as subvisible particle levels, in several studies in the past. However, adequate methods to characterize the silicone oil layer thickness and distribution are limited, and systematic evaluation is missing. In this study, white light interferometry was evaluated to close this gap in method understanding. White light interferometry demonstrated a good accuracy of 93-99% for MgF2-coated, curved standards covering a thickness range of 115-473 nm. Thickness measurements for sprayed-on siliconized prefilled syringes with different representative silicone oil distribution patterns (homogeneous, pronounced siliconization at flange or needle side, respectively) showed high instrument (0.5%) and analyst precision (4.1%). Different white light interferometry instrument parameters (autofocus, protective shield, syringe barrel dimensions input, type of non-siliconized syringe used as base reference) had no significant impact on the measured average layer thickness. The obtained values from white light interferometry applying a fully developed method (12 radial lines, 50 mm measurement distance, 50 measurement points) were in agreement with orthogonal results from combined white light and laser interferometry and 3D-laser scanning microscopy. The investigated syringe batches (lots A and B) exhibited comparable longitudinal silicone oil layer thicknesses decreasing from 170-190 nm at the flange to 90-100 nm at the tip, and homogeneously distributed silicone layers over the syringe barrel circumference (110-135 nm). Empty break-loose (4-4.5 N) and gliding forces (2-2.5 N) were comparably low for both analyzed syringe lots. A silicone oil layer thickness of 100-200 nm was thus sufficient for adequate functionality in this particular study. Filling the syringe with a surrogate solution including short-term exposure and emptying did not significantly influence the silicone oil layer at the investigated silicone level. It thus appears reasonable to use this approach to characterize silicone oil layers in filled syringes over time. The developed method non-destructively characterizes the layer thickness and distribution of silicone oil in empty syringes and provides fast access to reliable results. The gained information can be further used to support optimization of siliconization processes and increase the understanding of syringe functionality. LAY ABSTRACT: Silicone oil layers as lubricant are required to ensure functionality of prefilled syringes. Methods evaluating these layers are limited, and systematic evaluation is missing. The aim of this study was to develop and assess white light interferometry as an analytical method to characterize sprayed-on silicone oil layers in 1 mL prefilled syringes. White light interferometry showed a good accuracy (93-99%) as well as instrument and analyst precision (0.5% and 4.1%, respectively). Different applied instrument parameters had no significant impact on the measured layer thickness. The obtained values from white light interferometry applying a fully developed method concurred with orthogonal results from 3D-laser scanning microscopy and combined white light and laser interferometry. The average layer thicknesses in the two investigated syringe lots gradually decreased from 170-190 nm at the flange to 90-100 nm at the needle side. 
The silicone layers were homogeneously distributed over the syringe barrel circumference (110-135 nm) for both lots. Empty break-loose (4-4.5 N) and gliding forces (2-2.5 N) were comparably low for both analyzed syringe lots. Syringe filling with a surrogate solution, including short-term exposure and emptying, did not significantly affect the silicone oil layer. The developed, non-destructive method provided reliable results to characterize the silicone oil layer thickness and distribution in empty siliconized syringes. This information can be further used to support optimization of siliconization processes and increase understanding of syringe functionality. © PDA, Inc. 2018.
Uncertainty Analysis of Simulated Hydraulic Fracturing
NASA Astrophysics Data System (ADS)
Chen, M.; Sun, Y.; Fu, P.; Carrigan, C. R.; Lu, Z.
2012-12-01
Artificial hydraulic fracturing is being used widely to stimulate production of oil, natural gas, and geothermal reservoirs with low natural permeability. Optimization of field design and operation is limited by the incomplete characterization of the reservoir, as well as the complexity of hydrological and geomechanical processes that control the fracturing. Thus, there are a variety of uncertainties associated with the pre-existing fracture distribution, rock mechanics, and hydraulic-fracture engineering that require evaluation of their impact on the optimized design. In this study, a multiple-stage scheme was employed to evaluate the uncertainty. We first define the ranges and distributions of 11 input parameters that characterize the natural fracture topology, in situ stress, geomechanical behavior of the rock matrix and joint interfaces, and pumping operation, to cover a wide spectrum of potential conditions expected for a natural reservoir. These parameters were then sampled 1,000 times in an 11-dimensional parameter space constrained by the specified ranges using the Latin-hypercube method. These 1,000 parameter sets were fed into the fracture simulators, and the outputs were used to construct three designed objective functions, i.e., fracture density, opened fracture length, and area density. Using PSUADE, three response surfaces (11-dimensional) of the objective functions were developed and global sensitivity was analyzed to identify the most sensitive parameters for the objective functions representing fracture connectivity, which are critical for sweep efficiency of the recovery process. The second-stage high-resolution response surfaces were constructed with dimension reduced to the number of the most sensitive parameters. An additional response surface, with respect to the objective function of the fractal dimension for fracture distributions, was constructed in this stage. Based on these response surfaces, comprehensive uncertainty analyses were conducted among input parameters and objective functions. In addition, reduced-order emulation models resulting from this analysis can be used for optimal control of hydraulic fracturing. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
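The first-stage sampling step (1,000 Latin-hypercube draws in an 11-dimensional box) is standard and easily reproduced. A minimal sketch with SciPy, using placeholder parameter ranges rather than the study's actual bounds:

```python
# Sketch: 1,000 Latin-hypercube samples in an 11-D parameter box.
# Ranges are placeholders, not the study's actual parameter bounds.
from scipy.stats import qmc

n_dim, n_samples = 11, 1000
sampler = qmc.LatinHypercube(d=n_dim, seed=0)
unit = sampler.random(n=n_samples)        # samples in [0, 1)^11
lo = [0.0] * n_dim                        # placeholder lower bounds
hi = [1.0] * n_dim                        # placeholder upper bounds
params = qmc.scale(unit, lo, hi)          # map to physical ranges
print(params.shape)                       # (1000, 11)
```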
NASA Technical Reports Server (NTRS)
1975-01-01
The SATIL 2 computer program was developed to assist with the programmatic evaluation of alternative approaches to establishing and maintaining a specified mix of operational sensors on spacecraft in an operational SEASAT system. The program computes the probability distributions of events (i.e., number of launch attempts, number of spacecraft purchased, etc.), annual recurring cost, and present value of recurring cost. This is accomplished for the specific task of placing a desired mix of sensors in orbit in an optimal fashion in order to satisfy a specified sensor demand function. Flow charts are shown, and printouts of the programs are given.
General formulation of long-range degree correlations in complex networks
NASA Astrophysics Data System (ADS)
Fujiki, Yuka; Takaguchi, Taro; Yakubo, Kousuke
2018-06-01
We provide a general framework for analyzing degree correlations between nodes separated by more than one step (i.e., beyond nearest neighbors) in complex networks. One joint and four conditional probability distributions are introduced to fully describe long-range degree correlations with respect to degrees k and k' of two nodes and shortest path length l between them. We present general relations among these probability distributions and clarify the relevance to nearest-neighbor degree correlations. Unlike nearest-neighbor correlations, some of these probability distributions are meaningful only in finite-size networks. Furthermore, as a baseline to determine the existence of intrinsic long-range degree correlations in a network other than inevitable correlations caused by the finite-size effect, the functional forms of these probability distributions for random networks are analytically evaluated within a mean-field approximation. The utility of our argument is demonstrated by applying it to real-world networks.
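In the notation sketched in the abstract, the central objects are a joint distribution over the two degrees at a given shortest path length and its conditionals, for example (rendering ours):

```latex
% Joint and conditional degree distributions at shortest path length l
% (rendering ours, following the abstract's description):
P(k, k' \mid \ell), \qquad
P(k' \mid k, \ell) \;=\;
\frac{P(k, k' \mid \ell)}{\sum_{k''} P(k, k'' \mid \ell)}
```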
Distributed Humidity Sensing in PMMA Optical Fibers at 500 nm and 650 nm Wavelengths.
Liehr, Sascha; Breithaupt, Mathias; Krebber, Katerina
2017-03-31
Distributed measurement of humidity is a sought-after capability for various fields of application, especially in the civil engineering and structural health monitoring sectors. This article presents a method for distributed humidity sensing along polymethyl methacrylate (PMMA) polymer optical fibers (POFs) by analyzing wavelength-dependent Rayleigh backscattering and attenuation characteristics at 500 nm and 650 nm wavelengths. Spatially resolved humidity sensing is obtained from backscatter traces of a dual-wavelength optical time domain reflectometer (OTDR). Backscatter dependence, attenuation dependence as well as the fiber length change are characterized as functions of relative humidity. Cross-sensitivity effects are discussed and quantified. The evaluation of the humidity-dependent backscatter effects at the two wavelength measurements allows for distributed and unambiguous measurement of relative humidity. The technique can be readily employed with low-cost standard polymer optical fibers and commercial OTDR devices.
Longo, Liam; Lee, Jihun; Blaber, Michael
2012-12-01
The acquisition of function is often associated with destabilizing mutations, giving rise to the stability-function tradeoff hypothesis. To test whether function is also accommodated at the expense of foldability, fibroblast growth factor-1 (FGF-1) was subjected to a comprehensive φ-value analysis at each of the 11 turn regions. FGF-1, a β-trefoil fold, represents an excellent model system with which to evaluate the influence of function on foldability: because of its threefold symmetric structure, analysis of FGF-1 allows for direct comparisons between symmetry-related regions of the protein that are associated with function to those that are not; thus, a structural basis for regions of foldability can potentially be identified. The resulting φ-value distribution of FGF-1 is highly polarized, with the majority of positions described as either folded-like or denatured-like in the folding transition state. Regions important for folding are shown to be asymmetrically distributed within the protein architecture; furthermore, regions associated with function (i.e., heparin-binding affinity and receptor-binding affinity) are localized to regions of the protein that fold after barrier crossing (late in the folding pathway). These results provide experimental support for the foldability-function tradeoff hypothesis in the evolution of FGF-1. Notably, the results identify the potential for folding redundancy in symmetric protein architecture with important implications for protein evolution and design. Copyright © 2012 The Protein Society.
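For context, a φ-value compares the mutational destabilization of the folding transition state with that of the native state (standard definition, not specific to this study); φ ≈ 1 reads as folded-like and φ ≈ 0 as denatured-like at the probed position:

```latex
% Standard phi-value: ratio of transition-state to native-state
% destabilization upon mutation, both measured relative to the unfolded state.
\phi \;=\; \frac{\Delta\Delta G_{\ddagger\text{-}U}}{\Delta\Delta G_{N\text{-}U}},
\qquad \phi \approx 1 \;\text{(folded-like)}, \quad
\phi \approx 0 \;\text{(denatured-like)}
```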
NASA Astrophysics Data System (ADS)
Cardarelli, Gene A.
The primary goal in radiation oncology is to deliver lethal radiation doses to tumors while minimizing dose to normal tissue. IMRT has the capability to increase the dose to targets and decrease the dose to normal tissue, increasing local control, decreasing toxicity, and allowing for effective dose escalation. This advanced technology does, however, present complex dose distributions that are not easily verified. Furthermore, the dose inhomogeneity caused by the non-uniform dose distributions seen in IMRT treatments has prompted the development of biological models attempting to characterize the dose-volume effect in the response of organized tissues to radiation. Dosimetry of small fields can be quite challenging when measuring dose distributions for the high-energy X-ray beams used in IMRT. The proper modeling of these small-field distributions is essential for reproducing accurate dose in IMRT. This evaluation was conducted to quantify the effects of small-field dosimetry on IMRT plan dose distributions and on four biological model parameters. The four biological models evaluated were: (1) the generalized Equivalent Uniform Dose (gEUD), (2) the Tumor Control Probability (TCP), (3) the Normal Tissue Complication Probability (NTCP), and (4) the Probability of uncomplicated Tumor Control (P+). These models are used to estimate local control, survival, complications, and uncomplicated tumor control. This investigation compares three distinct small-field dose algorithms. Dose algorithms were created using film, a small ion chamber, and a combination of ion chamber measurements and small-field fitting parameters. Due to the nature of uncertainties in small-field dosimetry and the dependence of biological models on dose-volume information, this examination quantifies the effects of small-field dosimetry techniques on radiobiological models and recommends pathways to reduce the errors in using these models to evaluate IMRT dose distributions. This study demonstrates the importance of valid physical dose modeling prior to the use of biological modeling. The success of using biological function data, such as hypoxia, in clinical IMRT planning will greatly benefit from the results of this study.
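Of the four models evaluated, the gEUD has the simplest closed form, and the Lyman-Kutcher-Burman NTCP is commonly written in terms of it (standard formulations, assumed here rather than quoted from the thesis):

```latex
% Standard gEUD over dose-volume bins (v_i, D_i), and the LKB NTCP
% written in terms of it; a, TD50 and m are fitted model parameters.
\mathrm{gEUD} = \Bigl(\sum_i v_i\, D_i^{\,a}\Bigr)^{1/a},
\qquad
\mathrm{NTCP} = \Phi\!\left(\frac{\mathrm{gEUD}-TD_{50}}{m\,TD_{50}}\right)
```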
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where the time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation-maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class likelihoods. The mixture model makes it possible to represent heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. Then we use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
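The class-conditional likelihood in such a mixture is the complex Wishart PDF for an L-look, q-dimensional sample covariance matrix C with class centre Σ (standard PolSAR form; the normalization constant is omitted here to avoid transcription error):

```latex
% Complex Wishart likelihood of an L-look, q x q sample covariance C
% given class centre Sigma (normalization K(L,q) omitted for brevity).
p(C \mid \Sigma) \;\propto\;
\frac{|C|^{\,L-q}\, \exp\!\left[-L\,\mathrm{tr}\!\left(\Sigma^{-1} C\right)\right]}
     {|\Sigma|^{\,L}}
```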
New device for accurate measurement of the x-ray intensity distribution of x-ray tube focal spots.
Doi, K; Fromes, B; Rossmann, K
1975-01-01
A new device has been developed with which the focal spot distribution can be measured accurately. The alignment and localization of the focal spot relative to the device are accomplished by adjustment of three micrometer screws in three orthogonal directions and by comparison of red reference light spots with green fluorescent pinhole images at five locations. The standard deviations for evaluating the reproducibility of the adjustments in the horizontal and vertical directions were 0.2 and 0.5 mm, respectively. Measurements were made of the pinhole images as well as of the line-spread functions (LSFs) and modulation transfer functions (MTFs) for an x-ray tube with focal spots of 1-mm and 50-µm nominal size. The standard deviations for the LSF and MTF of the 1-mm focal spot were 0.017 and 0.010, respectively.
A single molecule rectifier with strong push-pull coupling
NASA Astrophysics Data System (ADS)
Saraiva-Souza, Aldilene; Macedo de Souza, Fabricio; Aleixo, Vicente F. P.; Girão, Eduardo Costa; Filho, Josué Mendes; Meunier, Vincent; Sumpter, Bobby G.; Souza Filho, Antônio Gomes; Del Nero, Jordan
2008-11-01
We theoretically investigate the electronic charge transport in a molecular system composed of a donor group (dinitrobenzene) coupled to an acceptor group (dihydrophenazine) via a polyenic chain (unsaturated carbon bridge). Ab initio calculations based on the Hartree-Fock approximations are performed to investigate the distribution of electron states over the molecule in the presence of an external electric field. For small bridge lengths (n =0-3) we find a homogeneous distribution of the frontier molecular orbitals, while for n >3 a strong localization of the lowest unoccupied molecular orbital is found. The localized orbitals in between the donor and acceptor groups act as conduction channels when an external electric field is applied. We also calculate the rectification behavior of this system by evaluating the charge accumulated in the donor and acceptor groups as a function of the external electric field. Finally, we propose a phenomenological model based on nonequilibrium Green's function to rationalize the ab initio findings.
NASA Astrophysics Data System (ADS)
Wang, Aiwen; Chen, Hongyan; Hao, Yuxin; Zhang, Wei
2018-06-01
Free vibration and static bending of functionally graded (FG) graphene nanoplatelet (GPL) reinforced composite doubly-curved shallow shells with three distinguished distribution patterns are analyzed. Material properties varying gradually through the thickness are evaluated by the modified Halpin-Tsai model. The mathematical model of the simply supported doubly-curved shallow shells rests upon Hamilton's principle and a higher-order shear deformation theory (HSDT). The free vibration frequencies and bending deflections are obtained using the Navier technique. The agreement between the obtained results and ANSYS, as well as prior results in the open literature, verifies the accuracy of the theory in this article. Further, parametric studies are performed to highlight the significant influence of GPL distribution patterns and weight fraction, stratification number, and the dimensions of GPLs and shells on the mechanical behavior of the system.
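The Halpin-Tsai estimate underlying the material model takes the familiar form below; in the modified version commonly used for GPLs, longitudinal and transverse estimates are typically combined with 3/8 and 5/8 weights (standard micromechanics expressions, assumed rather than quoted from this paper; ξ is a geometry factor set by the GPL dimensions):

```latex
% Halpin-Tsai micromechanics (standard form): effective modulus of the
% GPL/matrix composite; xi depends on GPL geometry, V_GPL on content.
\frac{E}{E_m} \;=\; \frac{1+\xi\,\eta\,V_{\mathrm{GPL}}}{1-\eta\,V_{\mathrm{GPL}}},
\qquad
\eta \;=\; \frac{E_{\mathrm{GPL}}/E_m - 1}{E_{\mathrm{GPL}}/E_m + \xi}
```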
Collisionless distribution function for the relativistic force-free Harris sheet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stark, C. R.; Neukirch, T.
A self-consistent collisionless distribution function for the relativistic analogue of the force-free Harris sheet is presented. This distribution function is the relativistic generalization of the distribution function for the non-relativistic collisionless force-free Harris sheet recently found by Harrison and Neukirch [Phys. Rev. Lett. 102, 135003 (2009)], as it has the same dependence on the particle energy and canonical momenta. We present a detailed calculation which shows that the proposed distribution function generates the required current density profile (and thus magnetic field profile) in a frame of reference in which the electric potential vanishes identically. The connection between the parameters of the distribution function and the macroscopic parameters, such as the current sheet thickness, is discussed.
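The force-free Harris sheet field underlying the equilibrium has constant magnitude and a direction that rotates across the sheet (standard form from Harrison and Neukirch's non-relativistic work, quoted here for context):

```latex
% Force-free Harris sheet magnetic field (Harrison & Neukirch form):
% constant magnitude, rotating direction across a sheet of half-width L.
\mathbf{B}(z) \;=\; B_0\left[\tanh\!\left(\frac{z}{L}\right)\hat{\mathbf{x}}
 \;+\; \mathrm{sech}\!\left(\frac{z}{L}\right)\hat{\mathbf{y}}\right]
```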
A unified Bayesian semiparametric approach to assess discrimination ability in survival analysis
Zhao, Lili; Feng, Dai; Chen, Guoan; Taylor, Jeremy M.G.
2015-01-01
Summary The discriminatory ability of a marker for censored survival data is routinely assessed by the time-dependent ROC curve and the c-index. The time-dependent ROC curve evaluates the ability of a biomarker to predict whether a patient lives past a particular time t. The c-index measures the global concordance of the marker and the survival time regardless of the time point. We propose a Bayesian semiparametric approach to estimate these two measures. The proposed estimators are based on the conditional distribution of the survival time given the biomarker and the empirical biomarker distribution. The conditional distribution is estimated by a linear dependent Dirichlet process mixture model. The resulting ROC curve is smooth as it is estimated by a mixture of parametric functions. The proposed c-index estimator is shown to be more efficient than the commonly used Harrell's c-index since it uses all pairs of data rather than only informative pairs. The proposed estimators are evaluated through simulations and illustrated using a lung cancer dataset. PMID:26676324
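The two target quantities have simple population definitions (standard notation, with M the marker and T the survival time; the time-dependent ROC here uses the cumulative/dynamic convention):

```latex
% Global concordance (c-index) and the time-dependent true positive rate
% underlying the ROC curve, with marker M and survival time T.
c \;=\; \Pr\!\left(M_i > M_j \,\middle|\, T_i < T_j\right),
\qquad
\mathrm{TP}_t(m) \;=\; \Pr\!\left(M > m \,\middle|\, T \le t\right)
```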
An analysis of annual maximum streamflows in Terengganu, Malaysia using TL-moments approach
NASA Astrophysics Data System (ADS)
Ahmad, Ummi Nadiah; Shabri, Ani; Zakaria, Zahrahtul Amani
2013-02-01
The TL-moments approach has been used in an analysis to determine the best-fitting distributions to represent the annual series of maximum streamflow data over 12 stations in Terengganu, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the generalized Pareto (GPA), generalized logistic, and generalized extreme value distributions. The influence of TL-moments on the estimated probability distribution functions is examined by evaluating the relative root mean square error and relative bias of quantile estimates through Monte Carlo simulations. The boxplot is used to show the location of the median and the dispersion of the data, which helps in reaching decisive conclusions. For most of the cases, the results show that the TL-moments variant with the single smallest value trimmed from the conceptual sample (TL-moments (1,0)), fitted to the GPA distribution, was the most appropriate in the majority of the stations for describing the annual maximum streamflow series in Terengganu, Malaysia.
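For reference, the trimmed TL-moments underlying these fits are defined through expectations of order statistics; we believe the following matches the Elamir-Seheult definition the authors use, where TL(1,0) trims only the smallest observation:

```latex
% TL-moment of order r with trimming (t1, t2): expectations of order
% statistics from a conceptual sample of size r + t1 + t2.
\lambda_r^{(t_1,t_2)} \;=\; \frac{1}{r}\sum_{k=0}^{r-1}(-1)^k
\binom{r-1}{k}\,
\mathbb{E}\!\left[X_{\,r+t_1-k\,:\,r+t_1+t_2}\right]
```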
Gibbons, Richard A.; Dixon, Stephen N.; Pocock, David H.
1973-01-01
A specimen of intestinal glycoprotein isolated from the pig and two samples of dextran, all of which are polydisperse (that is, the preparations may be regarded as consisting of a continuous distribution of molecular weights), have been examined in the ultracentrifuge under meniscus-depletion conditions at equilibrium. They are compared with each other and with a glycoprotein from Cysticercus tenuicollis cyst fluid which is almost monodisperse. The quantity c^(−1/3) (c = concentration) is plotted against ξ (the reduced radius); this plot is linear when the molecular-weight distribution approximates the "most probable" distribution, i.e., when Mn : Mw : Mz : M(z+1) ... is as 1 : 2 : 3 : 4, etc. The use of this plot, and related procedures, to evaluate qualitatively and semi-quantitatively molecular-weight distribution functions where they can be realistically approximated to Schulz distributions is discussed. The theoretical basis is given in an Appendix. PMID:4778265
Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models
Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong
2015-01-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
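The three multivariate test statistics named above (Pillai-Bartlett trace, Hotelling-Lawley trace, Wilks's Lambda) are all available from standard MANOVA machinery. A minimal sketch with statsmodels on synthetic data; this is plain MANOVA for illustration, not the paper's functional linear model:

```python
# Sketch: Pillai, Hotelling-Lawley and Wilks statistics for several
# quantitative traits against a genetic predictor. Synthetic data only;
# this is plain MANOVA, not the paper's functional linear model.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 500
g = rng.integers(0, 3, n)                 # genotype coded 0/1/2
traits = rng.standard_normal((n, 3)) + 0.2 * g[:, None]
df = pd.DataFrame(traits, columns=["t1", "t2", "t3"])
df["g"] = g

fit = MANOVA.from_formula("t1 + t2 + t3 ~ g", data=df)
print(fit.mv_test())  # reports Wilks, Pillai, Hotelling-Lawley, Roy
```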
Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.
Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong
2015-05-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.
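The three test criteria named above are standard multivariate statistics; a toy sketch with statsmodels' MANOVA on simulated data shows how they are obtained in practice (this is not the authors' functional linear model, and the trait/genotype variable names are hypothetical).

```python
# Pillai's trace, Hotelling-Lawley trace and Wilks' lambda for a joint test
# of one predictor against three correlated quantitative traits.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 500
g = rng.integers(0, 3, n)                     # genotype coded 0/1/2 (additive)
df = pd.DataFrame({
    "g": g,
    "trait1": 0.3 * g + rng.normal(size=n),   # traits sharing a genetic effect
    "trait2": 0.2 * g + rng.normal(size=n),
    "trait3": rng.normal(size=n),             # unaffected trait
})
fit = MANOVA.from_formula("trait1 + trait2 + trait3 ~ g", data=df)
print(fit.mv_test())  # reports Wilks' lambda, Pillai's trace, Hotelling-Lawley
```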
NASA Astrophysics Data System (ADS)
Wismadi, Arif; Zuidgeest, Mark; Brussel, Mark; van Maarseveen, Martin
2014-01-01
To determine whether the inclusion of spatial neighbourhood comparison factors in Preference Modelling allows spatial decision support systems (SDSSs) to better address spatial equity, we introduce Spatial Preference Modelling (SPM). To evaluate the effectiveness of this model in addressing equity, various standardisation functions in both Non-Spatial Preference Modelling and SPM are compared. The evaluation involves applying the model to a resource location-allocation problem for transport infrastructure in the Special Province of Yogyakarta in Indonesia. We apply Amartya Sen's Capability Approach to define opportunity to mobility as a non-income indicator. Using the extended Moran's I interpretation for spatial equity, we evaluate the distribution output regarding, first, 'the spatial distribution patterns of priority targeting for allocation' (SPT) and, second, 'the effect of new distribution patterns after location-allocation' (ELA). The Moran's I index of the initial map and its comparison with six patterns for SPT as well as ELA consistently indicates that the SPM is more effective for addressing spatial equity. We conclude that the inclusion of spatial neighbourhood comparison factors in Preference Modelling improves the capability of SDSS to address spatial equity. This study thus proposes a new formal method for SDSS with specific attention on resource location-allocation to address spatial equity.
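For readers unfamiliar with the index used above, global Moran's I measures spatial autocorrelation of a variable given a spatial weights matrix; a minimal numpy sketch (with a hypothetical binary weights matrix) follows.

```python
# Global Moran's I: n/S0 * (z'Wz)/(z'z), where z are mean-centred values
# and W is a spatial weights matrix (here binary adjacency on a line).
import numpy as np

def morans_i(x: np.ndarray, W: np.ndarray) -> float:
    z = x - x.mean()
    return len(x) / W.sum() * (z @ W @ z) / (z @ z)

W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # 4 units, neighbours on a line
print(morans_i(np.array([1.0, 2.0, 3.0, 4.0]), W))  # > 0: similar values cluster
```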
Vercruysse, Jurgen; Toiviainen, Maunu; Fonteyne, Margot; Helkimo, Niko; Ketolainen, Jarkko; Juuti, Mikko; Delaet, Urbain; Van Assche, Ivo; Remon, Jean Paul; Vervaet, Chris; De Beer, Thomas
2014-04-01
Over the last decade, there has been increased interest in the application of twin screw granulation as a continuous wet granulation technique for pharmaceutical drug formulations. However, the mixing of granulation liquid and powder material during the short residence time inside the screw chamber and the atypical particle size distribution (PSD) of granules produced by twin screw granulation are not yet fully understood. Therefore, this study aims at visualizing the granulation liquid mixing and distribution during continuous twin screw granulation using NIR chemical imaging. First, the residence time of material inside the barrel was investigated as a function of screw speed and moisture content, followed by the visualization of the granulation liquid distribution as a function of different formulation and process parameters (liquid feed rate, liquid addition method, screw configuration, moisture content and barrel filling degree). The link between moisture uniformity and granule size distributions was also studied. For residence time analysis, increased screw speed and lower moisture content resulted in a shorter mean residence time and a narrower residence time distribution. In addition, the distribution of granulation liquid was more homogeneous at higher moisture content and with more kneading zones on the granulator screws. After optimization of the screw configuration, a two-level full factorial experimental design was performed to evaluate the influence of moisture content, screw speed and powder feed rate on the mixing efficiency of the powder and liquid phase. From these results, it was concluded that only increasing the moisture content significantly improved the granulation liquid distribution. This study demonstrates that NIR chemical imaging is a fast and adequate measurement tool for allowing process visualization and hence for providing better process understanding of a continuous twin screw granulation system. Copyright © 2013 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van den Heuvel, F; Hackett, S; Fiorini, F
Purpose: Currently, planning systems allow robustness calculations to be performed, but a generalized assessment methodology is not yet available. We introduce and evaluate a methodology to quantify the robustness of a plan on an individual patient basis. Methods: We introduce the notion of characterizing a treatment instance (i.e. one single fraction delivery) by describing the dose distribution within an organ as an alpha-stable distribution. The parameters of the distribution (shape(α), scale(γ), position(δ), and symmetry(β)) will vary continuously (in a mathematical sense) as the distributions change with the different positions. The rate of change of the parameters provides a measure of the robustness of the treatment. The methodology is tested in a planning study of 25 patients with known residual errors at each fraction. Each patient was planned using Eclipse with an IBA proton beam model. The residual error space for every patient was sampled 30 times, yielding 31 treatment plans for each patient and dose distributions in 5 organs. The parameters' change rate as a function of Euclidean distance from the original plan was analyzed. Results: More than 1,000 dose distributions were analyzed. For 4 of the 25 patients the change rate of the scale parameter (γ) was considerably higher than the lowest change rate, indicating a lack of robustness. The sign of the shape change rate (α) also seemed indicative, but the experiment lacked the power to prove significance. Conclusion: There are indications that this robustness measure is a valuable tool to allow a more patient-individualized approach to the determination of margins. In a further study we will also evaluate this robustness measure using photon treatments, and evaluate the impact of using breath-hold techniques and a Monte Carlo-based dose deposition calculation. A principal component analysis is also planned.
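A sketch of the central step, assuming synthetic per-voxel organ doses: fit an alpha-stable distribution and read off the four parameters whose change rates define the robustness measure. scipy's levy_stable.fit is a maximum-likelihood fit and can be slow for large samples; the dose values below are simulated, not clinical.

```python
# Characterize a per-fraction dose distribution by alpha-stable parameters.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
dose = levy_stable.rvs(1.8, 0.5, loc=60.0, scale=2.0,
                       size=200, random_state=rng)  # synthetic organ doses (Gy)

alpha, beta, loc, scale = levy_stable.fit(dose)     # MLE; may take a while
print(f"shape alpha={alpha:.2f}, symmetry beta={beta:.2f}, "
      f"position delta={loc:.2f}, scale gamma={scale:.2f}")
# Tracking how these four parameters drift across resampled plans gives the
# change-rate robustness measure described in the abstract.
```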
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning
McGregor, Heather R.; Mohatarem, Ayman
2017-01-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback. PMID:28753634
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.
Cashaback, Joshua G A; McGregor, Heather R; Mohatarem, Ayman; Gribble, Paul L
2017-07-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.
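A small numerical illustration of the dissociation reported above, under assumed parameters: with a skewed shift distribution, the aim point that minimizes squared error (an error-based loss) is the mean, while the aim point that maximizes the hit rate on a narrow target (a reinforcement-style loss) sits near the mode. The lognormal shift parameters and target half-width are hypothetical.

```python
# Mean-optimal vs mode-optimal aim points under a skewed shift distribution.
import numpy as np

rng = np.random.default_rng(7)
shifts = rng.lognormal(mean=0.0, sigma=0.6, size=100_000)  # skewed cursor shifts
half_width = 0.25                                          # hypothetical target size

aims = np.linspace(0.0, 3.0, 301)
mse = [np.mean((shifts - a) ** 2) for a in aims]                 # error-based loss
hits = [np.mean(np.abs(shifts - a) < half_width) for a in aims]  # hit/miss loss

print("mean shift         :", round(shifts.mean(), 3))
print("aim minimizing MSE :", aims[np.argmin(mse)])    # ~ mean
print("aim maximizing hits:", aims[np.argmax(hits)])   # ~ mode (< mean)
```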
NASA Astrophysics Data System (ADS)
Roa, Wilson; Xiong, Yeping; Chen, Jie; Yang, Xiaoyan; Song, Kun; Yang, Xiaohong; Kong, Beihua; Wilson, John; Xing, James Z.
2012-09-01
We synthesized a novel, multi-functional, radiosensitizing agent by covalently linking 6-fluoro-6-deoxy-d-glucose (6-FDG) to gold nanoparticles (6-FDG-GNPs) via a thiol functional group. We then assessed the bio-distribution and pharmacokinetic properties of 6-FDG-GNPs in vivo using a murine model. At 2 h following intravenous injection of 6-FDG-GNPs into the murine model, approximately 30% of the 6-FDG-GNPs were distributed to three major organs: the liver, the spleen and the kidney. PEGylation of the 6-FDG-GNPs was found to significantly improve the bio-distribution of 6-FDG-GNPs by avoiding unintentional uptake into these organs, while simultaneously doubling the cellular uptake of GNPs in implanted MCF-7 breast adenocarcinoma. When combined with radiation, PEG-6-FDG-GNPs were found to increase radiation-induced apoptosis of MCF-7 breast adenocarcinoma cells both in vitro and in vivo. Pharmacokinetic data indicate that GNPs reach their maximal concentrations in a time window of two to four hours post-injection, during which optimal radiation efficiency can be achieved. PEG-6-FDG-GNPs are thus novel nanoparticles that preferentially accumulate in targeted cancer cells where they act as potent radiosensitizing agents. Future research will aim to substitute the 18F atom into the 6-FDG molecule so that the PEG-6-FDG-GNPs can also function as radiotracers for use in positron emission tomography scanning to aid cancer diagnosis and image-guided radiation therapy planning.
Linking interseismic deformation with coseismic slip using dynamic rupture simulations
NASA Astrophysics Data System (ADS)
Yang, H.; He, B.; Weng, H.
2017-12-01
The largest earthquakes on earth occur at subduction zones, sometimes accompanied by devastating tsunamis. Reducing losses from megathrust earthquakes and tsunamis demands accurate estimates of rupture scenarios for future earthquakes. Interseismic locking distribution derived from geodetic observations is often used to qualitatively evaluate future earthquake potential. However, how to quantitatively estimate the coseismic slip from the locking distribution remains challenging. Here we derive the coseismic rupture process of the 2012 Mw 7.6 Nicoya, Costa Rica, earthquake from the interseismic locking distribution using spontaneous rupture simulation. We construct a three-dimensional elastic medium with a curved fault, which is governed by the linear slip-weakening law. The initial stress on the fault is set based on the build-up stress inferred from locking and the dynamic friction coefficient from fast-speed sliding experiments. Our numerical results for coseismic slip distribution, moment rate function and final earthquake moment are consistent with those derived from seismic and geodetic observations. Furthermore, we find that the epicentral locations affect rupture scenarios and may lead to various sizes of earthquakes given the heterogeneous stress distribution. In the Nicoya region, less than half of rupture initiation regions where the locking degree is greater than 0.6 can develop into large earthquakes (Mw > 7.2). The results of location-dependent earthquake magnitudes underscore the necessity of conducting a large number of simulations to quantitatively evaluate seismic hazard from interseismic locking models.
Particle acceleration very near an x-line in a collisionless plasma
NASA Technical Reports Server (NTRS)
Lyons, L. R.; Pridmore-Brown, D. C.
1995-01-01
In a previous paper, we applied a simplified model for particle motion in the vicinity of a magnetic X-line that had been introduced by Dungey. We used the model to quantitatively show that an electric force along an X-line can be balanced by the gyroviscous force associated with the off-diagonal elements of the pressure tensor. Distribution functions near the X-line were shown to be skewed in azimuth about the magnetic field and to include particles accelerated to very high energies. In the present paper, we apply the previous model and use the distribution functions to evaluate the energization that results from particle interactions with the X-line. We find that, in general, this interaction gives a spectrum of energized particles that can be represented by a Maxwellian distribution. A power-law, high-energy tail does not develop. The thermal energy, K, of the Maxwellian can be expressed simply in terms of the field parameters and particle mass and charge. It is independent of the thermal energy, K_i, of the particle distribution incident upon the region of the X-line, provided that K_i is less than K. Significant energization is not found for K_i greater than K.
Effects of payoff functions and preference distributions in an adaptive population
NASA Astrophysics Data System (ADS)
Yang, H. M.; Ting, Y. S.; Wong, K. Y. Michael
2008-03-01
Adaptive populations such as those in financial markets and distributed control can be modeled by the Minority Game. We consider how their dynamics depends on the agents' initial preferences of strategies, when the agents use linear or quadratic payoff functions to evaluate their strategies. We find that the fluctuations of the population making certain decisions (the volatility) depend on the diversity of the distribution of the initial preferences of strategies. When the diversity decreases, more agents tend to adapt their strategies together. In systems with linear payoffs, this results in dynamical transitions from vanishing volatility to a nonvanishing one. For low signal dimensions, the dynamical transitions for the different signals do not take place at the same critical diversity. Rather, a cascade of dynamical transitions takes place when the diversity is reduced. In contrast, no phase transitions are found in systems with the quadratic payoffs. Instead, a basin boundary of attraction separates two groups of samples in the space of the agents' decisions. Initial states inside this boundary converge to small volatility, while those outside diverge to a large one. Furthermore, when the preference distribution becomes more polarized, the dynamics becomes more erratic. All the above results are supported by good agreement between simulations and theory.
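A compact Minority Game simulation in the spirit of the system studied above, under illustrative assumptions: linear payoff, zero initial preferences (diversity would be introduced by randomizing the initial strategy scores), and toy parameter values.

```python
# Minimal Minority Game: N agents, S strategies each, memory M, linear payoff.
import numpy as np

rng = np.random.default_rng(3)
N, S, M, T = 301, 2, 3, 5000        # agents, strategies/agent, memory, rounds
P = 2 ** M                          # number of distinct history states
strategies = rng.choice([-1, 1], size=(N, S, P))
scores = np.zeros((N, S))           # initial preferences: all zero (no diversity)
hist = 0
A = np.empty(T)                     # attendance (aggregate decision)

for t in range(T):
    best = scores.argmax(axis=1)                     # play current best strategy
    actions = strategies[np.arange(N), best, hist]
    A[t] = actions.sum()
    scores += -strategies[:, :, hist] * A[t] / N     # linear payoff update
    hist = ((hist << 1) | int(A[t] > 0)) % P         # append winning bit

print("volatility sigma^2/N =", A.var() / N)
```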
Network topology and resilience analysis of South Korean power grid
NASA Astrophysics Data System (ADS)
Kim, Dong Hwan; Eisenberg, Daniel A.; Chun, Yeong Han; Park, Jeryang
2017-01-01
In this work, we present topological and resilience analyses of the South Korean power grid (KPG) across a broad range of voltage levels. While topological analysis of the KPG restricted to high-voltage infrastructure shows an exponential degree distribution, providing further empirical evidence of this power-grid topology, the inclusion of low-voltage components generates a distribution with a larger variance and a smaller average degree. This result suggests that the topology of a power grid may converge to a highly skewed degree distribution as more low-voltage data are considered. Moreover, when compared to Erdős–Rényi (ER) random and Barabási–Albert (BA) scale-free networks, the KPG has a lower efficiency and a higher clustering coefficient, implying that a highly clustered structure does not necessarily guarantee the functional efficiency of a network. Error and attack tolerance analysis, evaluated with efficiency, indicates that the KPG is more vulnerable to random or degree-based attacks than to betweenness-based intentional attacks. Cascading failure analysis with a recovery mechanism demonstrates that resilience of the network depends on both tolerance capacity and recovery initiation time. Also, when the two factors are fixed, the KPG is the most vulnerable among the three networks. Based on our analysis, we propose that the topology of power grids should be designed so that loads are homogeneously distributed, or functional hubs and their neighbors have high tolerance capacity, to enhance resilience.
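The efficiency-based tolerance analysis above can be sketched with networkx; since the KPG data are not public, a small Barabási-Albert graph stands in, and the number of removed hubs is arbitrary.

```python
# Global efficiency before and after a degree-based (hub-removal) attack.
import networkx as nx

G = nx.barabasi_albert_graph(200, 2, seed=1)   # stand-in for the power grid
print("baseline efficiency:", round(nx.global_efficiency(G), 3))
print("avg clustering     :", round(nx.average_clustering(G), 3))

H = G.copy()
hubs = sorted(H.degree, key=lambda kv: kv[1], reverse=True)[:10]
H.remove_nodes_from(n for n, _ in hubs)        # intentional degree-based attack
print("after hub removal  :", round(nx.global_efficiency(H), 3))
```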
Best Statistical Distribution of flood variables for Johor River in Malaysia
NASA Astrophysics Data System (ADS)
Salarpour Goodarzi, M.; Yusop, Z.; Yusof, F.
2012-12-01
A complex flood event is always characterized by a few characteristics such as flood peak, flood volume, and flood duration, which might be mutually correlated. This study explored the statistical distribution of peakflow, flood duration and flood volume at the Rantau Panjang gauging station on the Johor River in Malaysia. Hourly data were recorded for 45 years. The data were analysed based on the water year (July–June). Five distributions, namely log-normal, generalized Pareto, Log Pearson, normal and generalized extreme value (GEV), were used to model the distribution of all three variables. The Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests were used to evaluate the best fit. Goodness-of-fit tests at the 5% level of significance indicate that all the models can be used to model the distribution of peakflow, flood duration and flood volume. However, the generalized Pareto distribution was found to be the most suitable model when tested with the Anderson-Darling test, while the Kolmogorov-Smirnov test suggested that GEV is the best for peakflow. The results of this research can be used to improve flood frequency analysis. (Figure: comparison between the generalized extreme value, generalized Pareto and Log Pearson distributions in the cumulative distribution function of peakflow.)
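A hedged sketch of the fitting-and-testing step: fit GEV and generalized Pareto to an annual peakflow series and compare Kolmogorov-Smirnov statistics. The series below is synthetic; note that scipy's anderson test does not support these families, and KS p-values are optimistic when parameters are estimated from the same data.

```python
# Fit candidate flood distributions and compare KS goodness-of-fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
peakflow = stats.genextreme.rvs(-0.1, loc=300, scale=80,
                                size=45, random_state=rng)  # 45 water years

for name, dist in [("GEV", stats.genextreme), ("GPA", stats.genpareto)]:
    params = dist.fit(peakflow)
    ks = stats.kstest(peakflow, dist.cdf, args=params)
    print(f"{name}: KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```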
Trzepacz, Paula T; Hochstetler, Helen; Wang, Shufang; Walker, Brett; Saykin, Andrew J
2015-09-07
The Montreal Cognitive Assessment (MoCA) was developed to enable earlier detection of mild cognitive impairment (MCI) relative to familiar multi-domain tests like the Mini-Mental State Exam (MMSE). Clinicians need to better understand the relationship between MoCA and MMSE scores. For this cross-sectional study, we analyzed 219 healthy control (HC), 299 MCI, and 100 Alzheimer's disease (AD) dementia cases from the Alzheimer's Disease Neuroimaging Initiative (ADNI)-GO/2 database to evaluate MMSE and MoCA score distributions and select MoCA values to capture early and late MCI cases. Stepwise variable selection in logistic regression evaluated relative value of four test domains for separating MCI from HC. Functional Activities Questionnaire (FAQ) was evaluated as a strategy to separate dementia from MCI. Equi-percentile equating produced a translation grid for MoCA against MMSE scores. Receiver Operating Characteristic (ROC) analyses evaluated lower cutoff scores for capturing the most MCI cases. Most dementia cases scored abnormally, while MCI and HC score distributions overlapped on each test. Most MCI cases scored ≥ 17 on MoCA (96.3%) and ≥ 24 on MMSE (98.3%). The ceiling effect (28-30 points) for MCI and HC was less using MoCA (18.1%) versus MMSE (71.4%). MoCA and MMSE scores correlated most for dementia (r = 0.86; versus MCI r = 0.60; HC r = 0.43). Equi-percentile equating showed a MoCA score of 18 was equivalent to MMSE of 24. ROC analysis found MoCA ≥ 17 as the cutoff between MCI and dementia that emphasized high sensitivity (92.3%) to capture MCI cases. The core and orientation domains in both tests best distinguished HC from MCI groups, whereas comprehension/executive function and attention/calculation were not helpful. Mean FAQ scores were significantly higher and a greater proportion had abnormal FAQ scores in dementia than MCI and HC. MoCA and MMSE were more similar for dementia cases, but MoCA distributes MCI cases across a broader score range with less ceiling effect. A cutoff of ≥ 17 on the MoCA may help capture early and late MCI cases; depending on the level of sensitivity desired, ≥ 18 or 19 could be used. Functional assessment can help exclude dementia cases. MoCA scores are translatable to the MMSE to facilitate comparison.
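A minimal sketch of the equi-percentile equating used above: a MoCA score is mapped to the MMSE score that has the same percentile rank. The score samples here are synthetic stand-ins for the ADNI data.

```python
# Equi-percentile equating: match percentile ranks across two score scales.
import numpy as np

rng = np.random.default_rng(5)
moca = np.clip(rng.normal(22, 4, 600).round(), 0, 30)   # hypothetical samples
mmse = np.clip(rng.normal(26, 3, 600).round(), 0, 30)

def equate(score, from_scores, to_scores):
    pct = (from_scores <= score).mean() * 100    # percentile rank on the source
    return np.percentile(to_scores, pct)         # same percentile on the target

print("MoCA 18 maps to MMSE", round(equate(18, moca, mmse)))
```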
Tanaka, Rie; Sanada, Shigeru; Okazaki, Nobuo; Kobayashi, Takeshi; Fujimura, Masaki; Yasui, Masahide; Matsui, Takeshi; Nakayama, Kazuya; Nanbu, Yuko; Matsui, Osamu
2006-10-01
Dynamic flat panel detectors (FPD) permit acquisition of distortion-free radiographs with a large field of view and high image quality. The present study was performed to evaluate pulmonary function using breathing chest radiography with a dynamic FPD. We report primary results of a clinical study and computer algorithm for quantifying and visualizing relative local pulmonary airflow. Dynamic chest radiographs of 18 subjects (1 emphysema, 2 asthma, 4 interstitial pneumonia, 1 pulmonary nodule, and 10 normal controls) were obtained during respiration using an FPD system. We measured respiratory changes in distance from the lung apex to the diaphragm (DLD) and pixel values in each lung area. Subsequently, the interframe differences (D-frame) and difference values between maximum inspiratory and expiratory phases (D-max) were calculated. D-max in each lung represents relative vital capacity (VC) and regional D-frames represent pulmonary airflow in each local area. D-frames were superimposed on dynamic chest radiographs in the form of color display (fusion images). The results obtained using our methods were compared with findings on computed tomography (CT) images and pulmonary functional test (PFT), which were examined before inclusion in the study. In normal subjects, the D-frames were distributed symmetrically in both lungs throughout all respiratory phases. However, subjects with pulmonary diseases showed D-frame distribution patterns that differed from the normal pattern. In subjects with air trapping, there were some areas with D-frames near zero indicated as colorless areas on fusion images. These areas also corresponded to the areas showing air trapping on computed tomography images. In asthma, obstructive abnormality was indicated by areas continuously showing D-frame near zero in the upper lung. Patients with interstitial pneumonia commonly showed fusion images with an uneven color distribution accompanied by increased D-frames in the area identified as normal on computed tomography images. Furthermore, measurement of DLD was very effective for evaluating diaphragmatic kinetics. This is a rapid and simple method for evaluation of respiratory kinetics for pulmonary diseases, which can reveal abnormalities in diaphragmatic kinetics and regional lung ventilation. Furthermore, quantification and visualization of respiratory kinetics is useful as an aid in interpreting dynamic chest radiographs.
Dispersion in a thermal plasma including arbitrary degeneracy and quantum recoil.
Melrose, D B; Mushtaq, A
2010-11-01
The longitudinal response function for a thermal electron gas is calculated including two quantum effects exactly: degeneracy and the quantum recoil. The Fermi-Dirac distribution is expanded in powers of a parameter that is small in the nondegenerate limit, and the response function is evaluated in terms of the conventional plasma dispersion function to arbitrary order in this parameter. The infinite sum is performed in terms of polylogarithms in the long-wavelength and quasistatic limits, giving results that apply for arbitrary degeneracy. The results are applied to the dispersion relations for Langmuir waves and to screening, reproducing known results in the nondegenerate and completely degenerate limits, and generalizing them to arbitrary degeneracy.
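For reference, the conventional plasma dispersion function Z used above can be evaluated numerically from the Faddeeva function w via the identity Z(ζ) = i√π w(ζ); a minimal sketch:

```python
# Plasma dispersion function Z(zeta) = i*sqrt(pi)*w(zeta), with w the
# Faddeeva function (scipy.special.wofz handles the analytic continuation).
import numpy as np
from scipy.special import wofz

def plasma_dispersion(zeta):
    return 1j * np.sqrt(np.pi) * wofz(zeta)

print(plasma_dispersion(0.0))         # Z(0) = i*sqrt(pi) ~ 1.7725i
print(plasma_dispersion(1.0 + 0.5j))  # complex argument is fine
```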
NASA Technical Reports Server (NTRS)
Shyy, Dong-Jye; Redman, Wayne
1993-01-01
For the next-generation packet-switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-point services such as video distribution, and the large demand for video conferencing, distributed data processing, and network management, make the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast features give the satellite network an advantage over the terrestrial network in providing multicast services. This report evaluates alternate multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection will be based on the study performed in phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and other switch architectures which have become commercially available as large-scale integration (LSI) devices.
NASA Astrophysics Data System (ADS)
Ji, Chenxu; Zhang, Yuanzhi; Cheng, Qiuming; Tsou, JinYeu; Jiang, Tingchen; Liang, X. San
2018-06-01
In this study, we analyze spatial and temporal sea surface temperature (SST) and chlorophyll-a (Chl-a) concentration in the East China Sea (ECS) during the period 2003-2016. Level 3 (4 km) monthly SST and Chl-a data from the Moderate Resolution Imaging Spectroradiometer satellite (MODIS-Aqua) were reconstructed using the data interpolating empirical orthogonal function (DINEOF) method and used to evaluate the relationship between the two variables. The approaches employed included correlation analysis and regression analysis, among others. Our results show that certain strong oceanic SSTs affect Chl-a concentration, with particularly high correlation seen in the coastal areas of Jiangsu and Zhejiang provinces. The mean temperature of the highly correlated region was 18.67 °C. This finding suggests that SST has an important impact on the spatial distribution of Chl-a concentration in the ECS.
Double Wigner distribution function of a first-order optical system with a hard-edge aperture.
Pan, Weiqing
2008-01-01
The effect of an apertured optical system on the Wigner distribution can be expressed as a superposition integral of the input Wigner distribution function and the double Wigner distribution function of the apertured optical system. By expanding the hard aperture function into a finite sum of complex Gaussian functions, the double Wigner distribution functions of a first-order optical system with a hard aperture outside and inside it are derived. As an example of application, analytical expressions of the Wigner distribution for a Gaussian beam passing through a spatial filtering optical system with an internal hard aperture are obtained. The analytical results are also compared with numerical integration results, and the comparison shows that the analytical results are accurate and advantageous.
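A bare-bones discrete Wigner distribution of a one-dimensional field, the kind of object the aperture analysis above manipulates; no Gaussian aperture expansion is attempted, the beam is a simple sampled Gaussian, and the result is unnormalized.

```python
# Discrete Wigner distribution via an FFT over the correlation shift.
import numpy as np

def wigner(f: np.ndarray) -> np.ndarray:
    n = len(f)
    W = np.zeros((n, n))
    for x in range(n):
        smax = min(x, n - 1 - x)               # admissible symmetric shifts
        corr = np.zeros(n, dtype=complex)
        for s in range(-smax, smax + 1):
            corr[s % n] = f[x + s] * np.conj(f[x - s])
        W[x] = np.fft.fft(corr).real           # transform over the shift axis
    return W

x = np.linspace(-5, 5, 128)
W = wigner(np.exp(-x**2))                      # Gaussian beam amplitude
print(W.shape, round(W.max(), 3))
```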
Characteristics of the oil transport network in the South of Mexico
NASA Astrophysics Data System (ADS)
Juárez, R.; Fernández, I. Y.; Guzmán, L.
2017-01-01
We present a study of some organizational properties of the oil transport network of the Mexican oil company (PEMEX) in a region of the State of Tabasco. In particular, the generalized centrality and the distribution of connectivities are calculated in order to evaluate some aspects of the structure of the network. We find that the connectivities (k) are characterized by a degree distribution which follows a power-law function of the form P(k) ~ k^(−λ), with λ = 2.6. Moreover, our procedure permits evaluation of the importance of lines (ducts) and nodes, which can be wells, production headers, separation batteries and petrochemical complexes.
Bayesian inference for disease prevalence using negative binomial group testing
Pritchard, Nicholas A.; Tebbs, Joshua M.
2011-01-01
Group testing, also known as pooled testing, and inverse sampling are both widely used methods of data collection when the goal is to estimate a small proportion. Taking a Bayesian approach, we consider the new problem of estimating disease prevalence from group testing when inverse (negative binomial) sampling is used. Using different distributions to incorporate prior knowledge of disease incidence and different loss functions, we derive closed form expressions for posterior distributions and resulting point and credible interval estimators. We then evaluate our new estimators, on Bayesian and classical grounds, and apply our methods to a West Nile Virus data set. PMID:21259308
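A grid-approximation sketch in the spirit of the setup above, not the paper's closed forms: pools of size k are tested until r positive pools occur, y negative pools are observed along the way, a beta prior encodes low incidence, and the posterior over prevalence p follows from the pool-level positivity probability θ = 1 − (1 − p)^k. All counts and hyperparameters are illustrative.

```python
# Posterior for prevalence p under negative binomial (inverse) group testing.
import numpy as np
from scipy import stats

k, r, y = 10, 5, 42                  # pool size; positive pools; negative pools
a, b = 1.0, 9.0                      # beta prior reflecting low incidence

p = np.linspace(1e-6, 0.5, 5000)
theta = 1.0 - (1.0 - p) ** k         # probability a pool tests positive
post = stats.beta.pdf(p, a, b) * theta**r * (1.0 - theta)**y
post /= np.trapz(post, p)            # normalize on the grid

print("posterior mean  :", np.trapz(p * post, p))   # optimal under squared error
cdf = np.cumsum(post) * (p[1] - p[0])
print("posterior median:", p[np.searchsorted(cdf, 0.5)])  # optimal under abs error
```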
A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence
NASA Astrophysics Data System (ADS)
Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.
2018-04-01
Weak atmospheric turbulence conditions in optical wireless communication (OWC) are captured by the log-normal distribution. The analytical evaluation of the average bit error rate (BER) of an OWC system under weak turbulence is intractable, as it involves the statistical averaging of the Gaussian Q-function over the log-normal distribution. In this paper, a simple closed-form approximation for the BER of an OWC system under weak turbulence is given. Computation of BER for various modulation schemes is carried out using the proposed expression. The results obtained using the proposed expression compare favorably with those obtained using the Gauss-Hermite quadrature approximation and Monte Carlo simulations.
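The Gauss-Hermite baseline mentioned above is easy to reproduce: for X ~ N(μ, σ²), E[g(X)] ≈ (1/√π) Σ w_i g(μ + √2 σ x_i), with Hermite nodes x_i and weights w_i. The SNR and fading parameters below are illustrative.

```python
# Average BER: Gaussian Q-function averaged over log-normal fading via
# Gauss-Hermite quadrature.
import numpy as np
from scipy.special import erfc

Q = lambda x: 0.5 * erfc(x / np.sqrt(2.0))

mu, sigma = 0.0, 0.25                 # log-amplitude fading parameters
snr = 10.0                            # mean electrical SNR (linear)

nodes, weights = np.polynomial.hermite.hermgauss(30)
h = np.exp(mu + np.sqrt(2.0) * sigma * nodes)      # channel gain samples
ber = np.sum(weights * Q(np.sqrt(snr) * h)) / np.sqrt(np.pi)
print(f"average BER ~ {ber:.3e}")
```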
Lattice QCD Studies of Transverse Momentum-Dependent Parton Distribution Functions
NASA Astrophysics Data System (ADS)
Engelhardt, M.; Musch, B.; Hägler, P.; Negele, J.; Schäfer, A.
2015-09-01
Transverse momentum-dependent parton distributions (TMDs) relevant for semi-inclusive deep inelastic scattering and the Drell-Yan process can be defined in terms of matrix elements of a quark bilocal operator containing a staple-shaped gauge link. Such a definition opens the possibility of evaluating TMDs within lattice QCD. By parametrizing the aforementioned matrix elements in terms of invariant amplitudes, the problem can be cast in a Lorentz frame suited for the lattice calculation. Results for selected TMD observables are presented, including a particular focus on their dependence on a Collins-Soper-type evolution parameter, which quantifies proximity of the staple-shaped gauge links to the light cone.
Filling the gap in functional trait databases: use of ecological hypotheses to replace missing data.
Taugourdeau, Simon; Villerd, Jean; Plantureux, Sylvain; Huguenin-Elie, Olivier; Amiaud, Bernard
2014-04-01
Functional trait databases are powerful tools in ecology, though most of them contain large amounts of missing values. The goal of this study was to test the effect of imputation methods on the evaluation of trait values at species level and on the subsequent calculation of functional diversity indices at community level using functional trait databases. Two simple imputation methods (average and median), two methods based on ecological hypotheses, and one multiple imputation method were tested using a large plant trait database, together with the influence of the percentage of missing data and differences between functional traits. At community level, the complete-case approach and three functional diversity indices calculated from grassland plant communities were included. At the species level, one of the methods based on ecological hypotheses was more accurate for all traits than imputation with average or median values, but the multiple imputation method was superior for most of the traits. The method based on functional proximity between species was the best method for traits with an unbalanced distribution, while the method based on the existence of relationships between traits was the best for traits with a balanced distribution. The ranking of the grassland communities by their functional diversity indices was not robust with the complete-case approach, even for low percentages of missing data. With the imputation methods based on ecological hypotheses, functional diversity indices could be computed with a maximum of 30% of missing data without affecting the ranking between grassland communities. The multiple imputation method performed well, but not better than single imputation based on ecological hypotheses and adapted to the distribution of the trait values for the functional identity and range of the communities. Ecological studies using functional trait databases have to deal with missing data using imputation methods corresponding to their specific needs and making the most out of the information available in the databases. Within this framework, this study indicates the possibilities and limits of single imputation methods based on ecological hypotheses and concludes that they could be useful when studying the ranking of communities by their functional diversity indices.
Filling the gap in functional trait databases: use of ecological hypotheses to replace missing data
Taugourdeau, Simon; Villerd, Jean; Plantureux, Sylvain; Huguenin-Elie, Olivier; Amiaud, Bernard
2014-01-01
Functional trait databases are powerful tools in ecology, though most of them contain large amounts of missing values. The goal of this study was to test the effect of imputation methods on the evaluation of trait values at species level and on the subsequent calculation of functional diversity indices at community level using functional trait databases. Two simple imputation methods (average and median), two methods based on ecological hypotheses, and one multiple imputation method were tested using a large plant trait database, together with the influence of the percentage of missing data and differences between functional traits. At community level, the complete-case approach and three functional diversity indices calculated from grassland plant communities were included. At the species level, one of the methods based on ecological hypotheses was more accurate for all traits than imputation with average or median values, but the multiple imputation method was superior for most of the traits. The method based on functional proximity between species was the best method for traits with an unbalanced distribution, while the method based on the existence of relationships between traits was the best for traits with a balanced distribution. The ranking of the grassland communities by their functional diversity indices was not robust with the complete-case approach, even for low percentages of missing data. With the imputation methods based on ecological hypotheses, functional diversity indices could be computed with a maximum of 30% of missing data without affecting the ranking between grassland communities. The multiple imputation method performed well, but not better than single imputation based on ecological hypotheses and adapted to the distribution of the trait values for the functional identity and range of the communities. Ecological studies using functional trait databases have to deal with missing data using imputation methods corresponding to their specific needs and making the most out of the information available in the databases. Within this framework, this study indicates the possibilities and limits of single imputation methods based on ecological hypotheses and concludes that they could be useful when studying the ranking of communities by their functional diversity indices. PMID:24772273
Model to Test Electric Field Comparisons in a Composite Fairing Cavity
NASA Technical Reports Server (NTRS)
Trout, Dawn; Burford, Janessa
2012-01-01
Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained from computational electromagnetic 3D full-wave modeling and laboratory testing. This work is an extension of the bare aluminum fairing perfect electric conductor (PEC) model. Test and model data correlation is shown.
Model to Test Electric Field Comparisons in a Composite Fairing Cavity
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Burford, Janessa
2013-01-01
Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained from computational electromagnetic 3D full-wave modeling and laboratory testing. This work is an extension of the bare aluminum fairing perfect electric conductor (PEC) model. Test and model data correlation is shown.
Advanced flight control system study
NASA Technical Reports Server (NTRS)
Mcgough, J.; Moses, K.; Klafin, J. F.
1982-01-01
The architecture, requirements, and system elements of an ultrareliable, advanced flight control system are described. The basic criteria are a functional reliability of 10^(−10) per hour of flight and scheduled maintenance only every 6 months. A distributed system architecture is described, including a multiplexed communication system, a reliable bus controller, the use of skewed sensor arrays, and actuator interfaces. A test bed and flight evaluation program are proposed.
Modeling Operator Performance in Low Task Load Supervisory Domains
2011-06-01
PDF: Probability Distribution Function; SAFE: System for Aircrew Fatigue Evaluation; SAFTE: Sleep, Activity, Fatigue, and Task Effectiveness; SCT: ... attentional capacity due to high mental workload. In low task load settings, fatigue is mainly caused by lack of sleep and boredom experienced by ... performance decrements. Also, psychological fatigue is strongly correlated with lack of sleep. Not surprisingly, operators of the morning shift reported the
ERIC Educational Resources Information Center
Wei, Youhua; Morgan, Rick
2016-01-01
As an alternative to common-item equating when common items do not function as expected, the single-group growth model (SGGM) scaling uses common examinees or repeaters to link test scores on different forms. The SGGM scaling assumes that, for repeaters taking adjacent administrations, the conditional distribution of scale scores in later…
Statistics of the residual refraction errors in laser ranging data
NASA Technical Reports Server (NTRS)
Gardner, C. S.
1977-01-01
A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.
Mathematical model of ambulance resources in Saint-Petersburg
NASA Astrophysics Data System (ADS)
Shavidze, G. G.; Balykina, Y. E.; Lejnina, E. A.; Svirkin, M. V.
2016-06-01
The emergency medical system is one of the main elements of city infrastructure. The article contains an analysis of the existing system of ambulance resource distribution. The paper considers the idea of using multiperiodicity as a tool to increase the efficiency of the Emergency Medical Services. A program developed in the Matlab programming environment helps to evaluate the changes in the functioning of the emergency medical service system.
ERIC Educational Resources Information Center
Whitney, Jennifer D.
2007-01-01
Background: Almost every aspect of modern life is affected in some way by technology. Many people utilize technology from dawn to dusk to communicate; make decisions; reflect, gain, synthesize, evaluate or distribute information, among many other functions. One would be hard pressed to find a single professional, regardless of career field,…
Knight, Rodney R.; Murphy, Jennifer C.; Wolfe, William J.; Saylor, Charles F.; Wales, Amy K.
2014-01-01
Ecological limit functions relating streamflow and aquatic ecosystems remain elusive despite decades of research. We investigated functional relationships between species richness and changes in streamflow characteristics at 662 fish sampling sites in the Tennessee River basin. Our approach included the following: (1) a brief summary of relevant literature on functional relations between fish and streamflow, (2) the development of ecological limit functions that describe the strongest discernible relationships between fish species richness and streamflow characteristics, (3) the evaluation of proposed definitions of hydrologic reference conditions, and (4) an investigation of the internal structures of wedge-shaped distributions underlying ecological limit functions. Twenty-one ecological limit functions were developed across three ecoregions that relate the species richness of 11 fish groups and departures from hydrologic reference conditions using multivariate and quantile regression methods. Each negatively sloped function is described using up to four streamflow characteristics expressed in terms of cumulative departure from hydrologic reference conditions. Negative slopes indicate increased departure results in decreased species richness. Sites with the highest measured fish species richness generally had near-reference hydrologic conditions for a given ecoregion. Hydrology did not generally differ between sites with the highest and lowest fish species richness, indicating that other environmental factors likely limit species richness at sites with reference hydrology. Use of ecological limit functions to make decisions regarding proposed hydrologic regime changes, although commonly presented as a management tool, is not as straightforward or informative as often assumed. We contend that statistical evaluation of the internal wedge structure below limit functions may provide a probabilistic understanding of how aquatic ecology is influenced by altered hydrology and may serve as the basis for evaluating the potential effect of proposed hydrologic changes.
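A hedged sketch of the quantile-regression step used to trace an upper-bound (limit) function over a wedge-shaped scatter; the richness/departure data below are simulated, and the 90th percentile is an arbitrary choice of upper quantile.

```python
# Fit an upper-quantile "ecological limit function" with quantile regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 662
departure = rng.uniform(0, 1, n)              # cumulative hydrologic departure
ceiling = 30 * (1 - departure)                # declining richness ceiling
richness = rng.uniform(0, 1, n) * ceiling     # wedge-shaped scatter below it

df = pd.DataFrame({"departure": departure, "richness": richness})
fit = smf.quantreg("richness ~ departure", df).fit(q=0.9)  # 90th-percentile line
print(fit.params)   # negative slope: the richness ceiling falls with departure
```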
The Brain as a Distributed Intelligent Processing System: An EEG Study
da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo
2011-01-01
Background Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. Methodology and Principal Findings In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion The present results support these claims and the neural efficiency hypothesis. PMID:21423657
Phase pupil functions for focal-depth enhancement derived from a Wigner distribution function.
Zalvidea, D; Sicre, E E
1998-06-10
A method for obtaining phase-retardation functions, which give rise to an increase of the image focal depth, is proposed. To this end, the Wigner distribution function corresponding to a specific aperture that has an associated small depth of focus in image space is conveniently sheared in the phase-space domain to generate a new Wigner distribution function. From this new function a more uniform on-axis image irradiance can be accomplished. This approach is illustrated by comparison of the imaging performance of both the derived phase function and a previously reported logarithmic phase distribution.
NASA Astrophysics Data System (ADS)
Shevchenko, O. Yu.
2013-06-01
The formulas directly connecting parton distribution functions and fragmentation functions at the next-to-leading-order QCD with the same quantities at the leading order are derived. These formulas are universal, i.e., have the same form for all kinds of parton distribution functions and fragmentation functions, differing only in the respective splitting functions entering there.
Improving ATLAS grid site reliability with functional tests using HammerCloud
NASA Astrophysics Data System (ADS)
Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan
2012-12-01
With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performances. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thereby preventing user or production jobs from being sent to problematic sites.
de la Hera, Esther; Gomez, Manuel; Rosell, Cristina M
2013-10-15
Rice flour is becoming very attractive as a raw material, but there is a lack of information about the influence of particle size on its functional properties and starch digestibility. This study evaluates the degree of dependence of the rice flour functional properties, mainly derived from starch behavior, on the particle size distribution. Hydration properties of flours and gels and starch enzymatic hydrolysis of individual fractions were assessed. Particle size heterogeneity in rice flour significantly affected functional properties and starch features, at room temperature and also after gelatinization, and the extent of that effect was grain-type dependent. Particle size heterogeneity in rice flour induces different patterns of starch enzymatic hydrolysis, with the long grain having slower hydrolysis, as indicated by the rate constant (k). No correlation between starch digestibility and hydration properties or the protein content was observed. It seems that in intact granules interactions with other grain components must be taken into account. Overall, particle size fractionation of rice flour might be advisable for selecting specific physico-chemical properties. Copyright © 2013. Published by Elsevier Ltd.
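A sketch of how a hydrolysis rate constant (k) like the one mentioned above is commonly extracted, assuming a first-order model C(t) = C_inf(1 − e^(−kt)); the digestion data points below are synthetic, and the paper's exact model may differ.

```python
# Fit a first-order hydrolysis model to (synthetic) digestion data.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c_inf, k):
    return c_inf * (1.0 - np.exp(-k * t))

t = np.array([0, 10, 20, 30, 60, 90, 120, 180], float)   # minutes
c = np.array([0, 18, 31, 40, 58, 67, 72, 76], float)     # % starch hydrolysed

(c_inf, k), _ = curve_fit(first_order, t, c, p0=(80.0, 0.02))
print(f"C_inf = {c_inf:.1f} %, k = {k:.4f} 1/min")
```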
Tfayli, Ali; Bonnier, Franck; Farhane, Zeineb; Libong, Danielle; Byrne, Hugh J; Baillet-Guffroy, Arlette
2014-06-01
The use of animals for scientific research is increasingly restricted by legislation, increasing the demand for human skin models. These constructs present comparable bulk lipid content to human skin. However, their permeability is significantly higher, limiting their applicability as models of barrier function, although the molecular origins of this reduced barrier function remain unclear. This study analyses the stratum corneum (SC) of one such commercially available reconstructed skin model (RSM) compared with human SC by spectroscopic imaging and chromatographic profiling. Total lipid composition was compared by chromatographic analysis (HPLC). Raman spectroscopy was used to evaluate the conformational order, lateral packing and distribution of lipids in the surface and skin/RSM sections. Although HPLC indicates that all SC lipid classes are present, significant differences are observed in ceramide profiles. Raman imaging demonstrated that the RSM lipids are distributed in a non-continuous matrix, providing a better understanding of the limited barrier function. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
On the optimal identification of tag sets in time-constrained RFID configurations.
Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel
2011-01-01
In Radio Frequency Identification facilities, the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time for the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete-time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area only for a bounded time (the sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. Besides, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.
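A toy discrete-time Markov chain for the identification-time CDF idea above: the state is the number of unidentified tags, and each read cycle identifies one tag with an assumed success probability (the paper's chain, built from the actual anti-collision protocol, is more detailed).

```python
# CDF of the tag-set identification time from an absorbing Markov chain.
import numpy as np

N = 20                     # tags in the set
q = 0.35                   # assumed per-cycle probability of identifying a tag

P = np.zeros((N + 1, N + 1))          # states: number of unidentified tags
for n in range(1, N + 1):
    P[n, n - 1] = q                   # one tag identified this cycle
    P[n, n] = 1 - q                   # no identification this cycle
P[0, 0] = 1.0                         # all identified: absorbing

dist = np.zeros(N + 1)
dist[N] = 1.0                         # start with all N tags unidentified
sojourn = 120                         # cycles before the set leaves the area
for t in range(1, sojourn + 1):
    dist = dist @ P
    if t % 30 == 0:
        print(f"P(all tags identified by cycle {t:3d}) = {dist[0]:.3f}")
print("probability of losing tags:", round(1 - dist[0], 3))
```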
Assal, Timothy J.; Anderson, Patrick J.; Sibold, Jason
2015-01-01
The availability of land cover data at local scales is an important component in forest management and monitoring efforts. Regional land cover data seldom provide detailed information needed to support local management needs. Here we present a transferable framework to model forest cover by major plant functional type using aerial photos, multi-date Système Pour l’Observation de la Terre (SPOT) imagery, and topographic variables. We developed probability of occurrence models for deciduous broad-leaved forest and needle-leaved evergreen forest using logistic regression in the southern portion of the Wyoming Basin Ecoregion. The model outputs were combined into a synthesis map depicting deciduous and coniferous forest cover type. We evaluated the models and synthesis map using a field-validated, independent data source. Results showed strong relationships between forest cover and model variables, and the synthesis map was accurate with an overall correct classification rate of 0.87 and Cohen’s kappa value of 0.81. The results suggest our method adequately captures the functional type, size, and distribution pattern of forest cover in a spatially heterogeneous landscape.
Studies of the Intrinsic Complexities of Magnetotail Ion Distributions: Theory and Observations
NASA Technical Reports Server (NTRS)
Ashour-Abdalla, Maha
1998-01-01
This year we have studied the relationship between the structure seen in measured distribution functions and the detailed magnetospheric configuration. Results from our recent studies using time-dependent large-scale kinetic (LSK) calculations are used to infer the sources of the ions in the velocity distribution functions measured by a single spacecraft (Geotail). Our results strongly indicate that the different ion sources and acceleration mechanisms producing a measured distribution function can explain this structure. Moreover, individual structures within distribution functions were traced back to single sources. We also confirmed the fractal nature of ion distributions.
Use of anoctamin 1 (ANO1) to evaluate interstitial cells of Cajal in Hirschsprung's disease.
Coyle, David; Kelly, Danielle A M; O'Donnell, Anne Marie; Gillick, John; Puri, Prem
2016-02-01
Interstitial cells of Cajal (ICCs) are pacemaker cells involved in facilitating neurotransmission and the generation of slow electrical waves necessary for colonic peristalsis. Their distribution has been found to be abnormal in the aganglionic and ganglionic colon in Hirschsprung's disease (HSCR) using c-kit labelling. Anoctamin-1 (ANO1) is a Ca²⁺-activated Cl⁻ channel thought to be specifically expressed on ICCs. Unlike c-kit, it plays a key role in ICC pacemaker activity. We aimed to investigate the utility of ANO1 in evaluating the colonic ICC network in HSCR. We collected full-length pull-through specimens from children with HSCR (n = 10). Control colon specimens were collected at colostomy closure in children with anorectal malformation (n = 6). The distribution of ANO1 and c-kit expression was evaluated using immunofluorescence and confocal microscopy. ANO1 expression was quantified using Western blot analysis. ANO1 was not expressed on 23% of c-kit-immunopositive cells in the circular muscle; however, 100% of ANO1-positive ICCs were c-kit positive. The distribution of ANO1-positive ICCs was sparse in aganglionic colon, with a modest reduction in ICCs seen in the ganglionic colon in HSCR compared to controls (p = 0.044). ANO1 protein expression was reduced in aganglionic colon but similar in ganglionic colon relative to controls. ANO1 is preferable to c-kit for evaluating the ICC network in HSCR due to its specificity and functional importance. Abnormal distribution of ANO1-positive ICCs in the ganglionic colon in HSCR may contribute to persistent bowel symptoms in some patients after pull-through surgery.
Bernhardt, Peter
2016-01-01
Purpose To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour. These will be used to evaluate two different methods for computing oxygen distribution. Methods A vessel tree structure, and an associated tumour of 127 cm³, were generated using a stochastic method and Bresenham's line algorithm to develop trees on two different scales and fusing them together. The vessel dimensions were adjusted through convolution and thresholding, and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared to evaluate the methods. Results The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD ≈ 0.02) than the distributions of different samples using the CTM (0.001 < RMSD < 0.01). The deviations of the ITM from the CTM increase with lower oxygen values, resulting in the ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. Conclusions The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it leads to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why evaluation should be performed at high resolution with the CTM applied to the entire tumour. PMID:27861529
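The diffusion-consumption step lends itself to a small illustration. The sketch below is a toy 2D relaxation solver with a Michaelis-Menten oxygen sink and clamped vessel voxels; the paper itself uses a Green's function method on a 3D stochastic vessel tree, and every constant here (grid size, D, vmax, km, vessel oxygen value) is an illustrative assumption.

```python
# Toy 2D stand-in for the oxygen computation described above: vessel voxels
# hold a fixed oxygen value while diffusion and Michaelis-Menten consumption
# are relaxed to steady state on a grid (periodic boundaries, for simplicity).
# All parameter values are illustrative only.
import numpy as np

n, D, vmax, km, h = 64, 2e-5, 15.0, 1.0, 1e-3   # grid, cm^2/s, mmHg/s, mmHg, cm
rng = np.random.default_rng(1)
vessel = rng.random((n, n)) < 0.02               # sparse random "vessel" voxels
c = np.full((n, n), 20.0)                        # initial oxygen tension (mmHg)
c[vessel] = 60.0                                 # fixed vessel oxygen value

for _ in range(5000):                            # Jacobi relaxation
    nb = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
          np.roll(c, 1, 1) + np.roll(c, -1, 1))
    sink = vmax * c / (km + c)                   # Michaelis-Menten consumption
    c_new = (nb - h * h * sink / D) / 4.0
    c_new[vessel] = 60.0                         # vessels stay clamped
    c = np.clip(c_new, 0.0, None)

print("hypoxic fraction (<5 mmHg):", np.mean(c[~vessel] < 5.0))
```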
NASA Astrophysics Data System (ADS)
Demirel, Mehmet C.; Mai, Juliane; Mendiguren, Gorka; Koch, Julian; Samaniego, Luis; Stisen, Simon
2018-02-01
Satellite-based earth observations offer great opportunities to improve spatial model predictions by means of spatial-pattern-oriented model evaluations. In this study, observed spatial patterns of actual evapotranspiration (AET) are utilised for spatial model calibration tailored to target the pattern performance of the model. The proposed calibration framework combines temporally aggregated observed spatial patterns with a new spatial performance metric and a flexible spatial parameterisation scheme. The mesoscale hydrologic model (mHM) is used to simulate streamflow and AET and has been selected due to its soil parameter distribution approach based on pedo-transfer functions and its built-in multi-scale parameter regionalisation. In addition, two new spatial parameter distribution options have been incorporated in the model in order to increase the flexibility of the root fraction coefficient and potential evapotranspiration correction parameterisations, based on soil type and vegetation density. These parameterisations are utilised because they are the most relevant for the AET patterns simulated by the hydrologic model. Due to the fundamental challenges encountered when evaluating spatial pattern performance using standard metrics, we developed a simple but highly discriminative spatial metric, comprised of three easily interpretable components measuring co-location, variation and distribution of the spatial data. The study shows that when a flexible spatial model parameterisation is used in combination with the appropriate objective functions, the simulated spatial patterns of actual evapotranspiration become substantially more similar to the satellite-based estimates. Overall, 26 parameters are identified for calibration through a sequential screening approach based on a combination of streamflow and spatial pattern metrics. The robustness of the calibrations is tested using an ensemble of nine calibrations based on different seed numbers using the shuffled complex evolution optimiser. The calibration results reveal a limited trade-off between streamflow dynamics and spatial patterns, illustrating the benefit of combining separate observation types and objective functions. At the same time, the simulated spatial patterns of AET improved significantly when an objective function based on observed AET patterns and the novel spatial performance metric was included, compared to traditional streamflow-only calibration. Since the overall water balance is usually a crucial goal in hydrologic modelling, spatial-pattern-oriented optimisation should always be accompanied by traditional discharge measurements. In such a multi-objective framework, the current study promotes the use of a novel bias-insensitive spatial pattern metric, which exploits the key information contained in the observed patterns while allowing the water balance to be informed by discharge observations.
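The abstract does not spell the metric out, so the sketch below follows a SPAEF-style construction (Pearson correlation for co-location, ratio of coefficients of variation for variation, histogram overlap of z-scored fields for distribution); treating that as the intended form is an assumption, and the input fields are synthetic.

```python
# Sketch of a three-component spatial pattern metric of the kind described
# above (co-location, variation, distribution). This follows a SPAEF-style
# construction, which is an assumption here; the paper defines its own metric.
import numpy as np

def spatial_pattern_metric(sim, obs, bins=100):
    sim, obs = sim.ravel(), obs.ravel()
    alpha = np.corrcoef(sim, obs)[0, 1]                                 # co-location
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))  # variation
    zs = (sim - sim.mean()) / sim.std()                                 # distribution:
    zo = (obs - obs.mean()) / obs.std()                                 # histogram overlap
    lo, hi = min(zs.min(), zo.min()), max(zs.max(), zo.max())
    hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
    ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
    gamma = np.minimum(hs, ho).sum() / ho.sum()
    # 1 is a perfect match; the Euclidean distance from (1, 1, 1) penalises
    # deviation in any single component.
    return 1.0 - np.sqrt((alpha - 1)**2 + (beta - 1)**2 + (gamma - 1)**2)

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 1.5, size=(100, 100))                   # stand-in AET pattern
sim = obs + rng.normal(0, 0.5, size=obs.shape)               # noisy simulation
print(spatial_pattern_metric(sim, obs))
```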
Comprehensive evaluation index system of total supply capability in distribution network
NASA Astrophysics Data System (ADS)
Zhang, Linyao; Wu, Guilian; Yang, Jingyuan; Jia, Shuangrui; Zhang, Wei; Sun, Weiqing
2018-01-01
Aiming at the lack of a comprehensive evaluation of the distribution network, and building on the existing distribution network evaluation index system together with the basic principles of constructing evaluation indices, this paper puts forward a new evaluation index system for distribution network capacity. The system is based mainly on the total supply capability of the distribution network and combines single indices and various factors into a multi-index evaluation of the distribution network, thus forming a reasonable index system; rational quantification of the various indicators makes the evaluation results more intuitive. In order to reach a comprehensive judgment of the distribution network, this paper uses weights to analyse the importance of each index. The rationality of the index system is verified through an example, so that it can guide the direction of distribution network planning.
Functional safety for the Advanced Technology Solar Telescope
NASA Astrophysics Data System (ADS)
Bulau, Scott; Williams, Timothy R.
2012-09-01
Since inception, the Advanced Technology Solar Telescope (ATST) has planned to implement a facility-wide functional safety system to protect personnel from harm and prevent damage to the facility or environment. The ATST will deploy an integrated safety-related control system (SRCS) to achieve functional safety throughout the facility rather than relying on individual facility subsystems to provide safety functions on an ad hoc basis. The Global Interlock System (GIS) is an independent, distributed, facility-wide, safety-related control system, comprised of commercial off-the-shelf (COTS) programmable controllers that monitor, evaluate, and control hazardous energy and conditions throughout the facility that arise during operation and maintenance. The GIS has been designed to utilize recent advances in technology for functional safety plus revised national and international standards that allow for a distributed architecture using programmable controllers over a local area network instead of traditional hard-wired safety functions, while providing an equivalent or even greater level of safety. Programmable controllers provide an ideal platform for controlling the often complex interrelationships between subsystems in a modern astronomical facility, such as the ATST. A large, complex hard-wired relay control system is no longer needed. This type of system also offers greater flexibility during development and integration in addition to providing for expanded capability into the future. The GIS features fault detection, self-diagnostics, and redundant communications that will lead to decreased maintenance time and increased availability of the facility.
Eads, David A.; Jachowski, David S.; Biggins, Dean E.; Livieri, Travis M.; Matchett, Marc R.; Millspaugh, Joshua J.
2012-01-01
Wildlife-habitat relationships are often conceptualized as resource selection functions (RSFs)—models increasingly used to estimate species distributions and prioritize habitat conservation. We evaluated the predictive capabilities of 2 black-footed ferret (Mustela nigripes) RSFs developed on a 452-ha colony of black-tailed prairie dogs (Cynomys ludovicianus) in the Conata Basin, South Dakota. We used the RSFs to project the relative probability of occurrence of ferrets throughout an adjacent 227-ha colony. We evaluated performance of the RSFs using ferret space use data collected via postbreeding spotlight surveys June–October 2005–2006. In home ranges and core areas, ferrets selected the predicted "very high" and "high" occurrence categories of both RSFs. Count metrics also suggested selection of these categories; for each model in each year, approximately 81% of ferret locations occurred in areas of very high or high predicted occurrence. These results suggest usefulness of the RSFs in estimating the distribution of ferrets throughout a black-tailed prairie dog colony. The RSFs provide a fine-scale habitat assessment for ferrets that can be used to prioritize releases of ferrets and habitat restoration for prairie dogs and ferrets. A method to quickly inventory the distribution of prairie dog burrow openings would greatly facilitate application of the RSFs.
Beyond Zipf's Law: The Lavalette Rank Function and Its Properties.
Fontanelli, Oscar; Miramontes, Pedro; Yang, Yaning; Cocho, Germinal; Li, Wentian
Although Zipf's law is widespread in natural and social data, one often encounters situations where one or both ends of the ranked data deviate from the power-law function. Previously we proposed the Beta rank function to improve the fitting of data which does not follow a perfect Zipf's law. Here we show that when the two parameters in the Beta rank function take the same value, yielding the Lavalette rank function, the probability density function can be derived analytically. We also show, both computationally and analytically, that the Lavalette distribution is approximately equal, though not identical, to the lognormal distribution. We illustrate the utility of the Lavalette rank function in several datasets. We also address three analysis issues: the statistical testing of the Lavalette fitting function, the comparison between Zipf's law and the lognormal distribution through the Lavalette function, and the comparison between the lognormal distribution and the Lavalette distribution.
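For orientation, the Lavalette rank function is commonly written f(r) = A ((N - r + 1)/r)^b, i.e. the Beta rank function with both exponents equal. A minimal fitting sketch, assuming that form and synthetic lognormal data (apt here, given the abstract's point that the two distributions are close):

```python
# Sketch of fitting the Lavalette rank function f(r) = A * ((N - r + 1) / r)**b
# (the Beta rank function with both exponents equal) to ranked data by linear
# least squares in log space. The synthetic data below are illustrative.
import numpy as np

def fit_lavalette(values):
    """Return (A, b) for f(r) = A * ((N - r + 1) / r)**b, r = 1..N."""
    f = np.sort(np.asarray(values, dtype=float))[::-1]   # rank data descending
    N = len(f)
    r = np.arange(1, N + 1)
    x = np.log((N - r + 1) / r)
    b, logA = np.polyfit(x, np.log(f), 1)
    return np.exp(logA), b

rng = np.random.default_rng(3)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=2000)   # ~ Lavalette-like ranks
A, b = fit_lavalette(sample)
print(f"A = {A:.3f}, b = {b:.3f}")
```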
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Jeff; Rylander, Matthew; Boemer, Jens
The fourth solicitation of the California Solar Initiative (CSI) Research, Development, Demonstration and Deployment (RD&D) Program, established by the California Public Utilities Commission (CPUC), supported research by the Electric Power Research Institute (EPRI), the National Renewable Energy Laboratory (NREL), and Sandia National Laboratories (SNL), with data provided by Pacific Gas and Electric (PG&E), Southern California Edison (SCE), and San Diego Gas and Electric (SDG&E), to determine optimal default settings for distributed energy resource advanced inverter controls. The inverter functions studied are aligned with those developed by the California Smart Inverter Working Group (SIWG) and those being considered by the IEEE 1547 Working Group. The advanced inverter controls examined to improve the distribution system response included power factor, volt-var, and volt-watt. The advanced inverter controls examined to improve the transmission system response included frequency and voltage ride-through as well as Dynamic Voltage Support. This CSI RD&D project accomplished the task of developing methods to derive distribution-focused advanced inverter control settings, selecting a diverse set of feeders to evaluate the methods through detailed analysis, and evaluating the effectiveness of each method developed. Inverter settings focused on transmission system performance were also evaluated and verified. Based on the findings of this work, the suggested advanced inverter settings and methods to determine settings can be used to improve the accommodation of distributed energy resources (PV specifically). The voltage impact from PV can be mitigated using power factor, volt-var, or volt-watt control, while the bulk system impact can be improved with frequency/voltage ride-through.
Energy distribution functions of kilovolt ions in a modified Penning discharge
NASA Technical Reports Server (NTRS)
Roth, J. R.
1972-01-01
The distribution function of ion energy parallel to the magnetic field of a Penning discharge was measured with a retarding potential energy analyzer. Simultaneous measurements of the ion energy distribution function perpendicular to the magnetic field were made with a charge-exchange neutral detector. The ion energy distribution functions are approximately Maxwellian, and their kinetic temperatures are equal within experimental error. This suggests that turbulent processes previously observed Maxwellianize the velocity distribution along a radius in velocity space and result in an isotropic energy distribution. The kinetic temperatures are on the order of kilovolts, and the tails of the ion energy distribution functions are Maxwellian up to 7 e-folds in energy. When the distributions depart from Maxwellian, they are enhanced above the Maxwellian tail. Above densities of about 10^10 particles/cm³, this enhancement appears to be the result of a second, higher-temperature Maxwellian distribution. At these high particle energies, only the ions perpendicular to the magnetic field lines were investigated.
Dolbeth, M; Vendel, A L; Pessanha, A; Patrício, J
2016-11-15
The functional diversity of fish communities was studied along the salinity gradient of two estuaries in Northeast Brazil subjected to different anthropogenic pressures, to gain a better understanding of the response of fish communities to disturbance. We evaluated functional complementarity indices and redundancy, and analysed functional composition through functional groups based on combinations of different traits. The fish communities in both estuaries share similar functions performed by few functional groups. The upstream areas generally had lower taxonomic and functional diversity and lower redundancy, suggesting greater vulnerability to impacts caused by human activities. Biomass was slightly more evenly distributed among functional groups in the less disturbed estuary, but total biomass and redundancy were lower in comparison to the urbanized estuary. The present findings lend strength to the notion that the less disturbed estuary may be more susceptible to anthropogenic impacts, underscoring the need for more effective conservation measures directed at this estuary.
Clustering of galaxies with f(R) gravity
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; Faizal, Mir; Hameeda, Mir; Pourhassan, Behnam; Salzano, Vincenzo; Upadhyay, Sudhaker
2018-02-01
Based on thermodynamics, we discuss the galactic clustering of the expanding Universe by assuming gravitational interaction through the modified Newtonian potential given by f(R) gravity. We compute the corrected N-particle partition function analytically. The corrected partition function leads to more exact equations of state of the system. By assuming that the system follows quasi-equilibrium, we derive the exact distribution function that exhibits the f(R) correction. Moreover, we evaluate the critical temperature and discuss the stability of the system. We also observe the effects of the f(R) correction on the power-law behaviour of the particle-particle correlation function. In order to check the feasibility of an f(R) gravity approach to the clustering of galaxies, we compare our results with an observational galaxy cluster catalogue.
An estimation of distribution method for infrared target detection based on Copulas
NASA Astrophysics Data System (ADS)
Wang, Shuo; Zhang, Yiqun
2015-10-01
Track-before-detect (TBD) based target detection involves a hypothesis test of merit functions which measure each track as a possible target track. Its accuracy depends on the precision of the distribution of the merit functions, which determines the threshold for a test. Generally, merit functions are regarded as Gaussian, and the distribution is estimated on this basis, which holds for most methods such as multiple hypothesis tracking (MHT). However, merit functions for some other methods, such as the dynamic programming algorithm (DPA), are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure the correlation, the exact distribution can hardly be estimated. If merit functions are assumed Gaussian and independent, the error between the actual distribution and its approximation may occasionally exceed 30 percent, and it diverges under propagation. Hence, in this paper we propose a novel Copula-based method for estimating the distribution, by which the distribution can be estimated precisely, with error below 1 percent and no propagation. Moreover, the estimation depends only on the form of the merit functions and the structure of the tracking algorithm, and is invariant to measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time computation of distribution functions.
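The abstract does not name a copula family, so the sketch below uses a Gaussian copula purely for illustration: empirical marginals are transformed to uniforms, the copula correlation is fitted on normal scores, and a joint exceedance probability is compared against the independence assumption. The "merit function" samples are synthetic stand-ins.

```python
# Sketch of the idea behind a Copula-based estimate of the joint distribution
# of correlated merit functions. A Gaussian copula is used here purely for
# illustration; the paper does not commit to this family. Empirical marginal
# CDFs replace any Gaussian marginal assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Stand-in for two cross-correlated, non-Gaussian merit functions:
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=5000)
m1, m2 = np.exp(z[:, 0]), np.exp(0.5 * z[:, 1] + 0.3 * z[:, 0])

# 1. Transform to uniforms with empirical CDFs (probability integral transform).
u1 = stats.rankdata(m1) / (len(m1) + 1)
u2 = stats.rankdata(m2) / (len(m2) + 1)

# 2. Fit the Gaussian copula correlation on normal scores.
g = stats.norm.ppf(np.column_stack([u1, u2]))
rho = np.corrcoef(g.T)[0, 1]

# 3. Joint exceedance probability P(M1 > t1, M2 > t2) under the fitted copula,
#    versus the naive independence estimate.
t1, t2 = np.quantile(m1, 0.95), np.quantile(m2, 0.95)
a = stats.norm.ppf(1 - np.mean(m1 > t1))
b = stats.norm.ppf(1 - np.mean(m2 > t2))
mvn = stats.multivariate_normal([0, 0], [[1, rho], [rho, 1]])
p_joint = 1 - stats.norm.cdf(a) - stats.norm.cdf(b) + mvn.cdf([a, b])
p_indep = (1 - stats.norm.cdf(a)) * (1 - stats.norm.cdf(b))
print(f"copula: {p_joint:.4f}  independence assumption: {p_indep:.4f}")
```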
CARES/Life Ceramics Durability Evaluation Software Used for Mars Microprobe Aeroshell
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
1998-01-01
The CARES/Life computer program, which was developed at the NASA Lewis Research Center, predicts the probability of a monolithic ceramic component's failure as a function of time in service. The program has many features and options for materials evaluation and component design. It couples commercial finite element programs, which resolve a component's temperature and stress distribution, to reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. These routines are based on calculations of the probabilistic nature of the brittle material's strength. The capability, flexibility, and uniqueness of CARES/Life have attracted many users representing a broad range of interests and have resulted in numerous awards for technological achievements and technology transfer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houser, Kevin W.; Royer, Michael P.; David, Aurelien
A system for evaluating the color rendition of light sources was recently published as IES TM-30-15, IES Method for Evaluating Light Source Color Rendition. The system includes a fidelity index (Rf) to quantify similarity to a reference illuminant, a relative-gamut index (Rg) to quantify saturation relative to a reference illuminant, and a color vector icon that visually presents information about color rendition. The calculation employs CAM02-UCS and uses a newly developed set of reflectance functions, comprising 99 color evaluation samples (CES). The CES were down-selected from 105,000 real object samples and are uniformly distributed in color space (fairly representing different colors) and wavelength space (avoiding artificial increase of color rendition values by selective optimization).
Distributed Data Service for Data Management in Internet of Things Middleware
Cruz Huacarpuma, Ruben; de Sousa Junior, Rafael Timoteo; de Holanda, Maristela Terto; de Oliveira Albuquerque, Robson; García Villalba, Luis Javier; Kim, Tai-Hoon
2017-01-01
The development of the Internet of Things (IoT) is closely related to a considerable increase in the number and variety of devices connected to the Internet. Sensors have become a regular component of our environment, as well as smart phones and other devices that continuously collect data about our lives even without our intervention. With such connected devices, a broad range of applications has been developed and deployed, including those dealing with massive volumes of data. In this paper, we introduce a Distributed Data Service (DDS) to collect and process data for IoT environments. One central goal of this DDS is to enable multiple and distinct IoT middleware systems to share common data services from a loosely-coupled provider. In this context, we propose a new specification of functionalities for a DDS and the conception of the corresponding techniques for collecting, filtering and storing data conveniently and efficiently in this environment. Another contribution is a data aggregation component that is proposed to support efficient real-time data querying. To validate its data collecting and querying functionalities and performance, the proposed DDS is evaluated in two case studies regarding a simulated smart home system, the first case devoted to evaluating data collection and aggregation when the DDS is interacting with the UIoT middleware, and the second aimed at comparing the DDS data collection with this same functionality implemented within the Kaa middleware. PMID:28448469
NASA Astrophysics Data System (ADS)
Romanelli, N.; Mazelle, C.; Meziane, K.
2018-02-01
Seen from the solar wind (SW) reference frame, the presence of newborn planetary protons upstream from the Martian and Venusian bow shocks and of SW protons reflected from each of them constitutes two sources of nonthermal proton populations. In both cases, the resulting proton velocity distribution function is highly unstable and capable of giving rise to ultralow frequency quasi-monochromatic electromagnetic plasma waves. When these instabilities take place, the resulting nonlinear waves are convected by the SW and interact with nonthermal protons located downstream from the wave generation region (upstream from the bow shock), playing a predominant role in their dynamics. To improve our understanding of these phenomena, we study, from first principles, the interaction between a charged particle and a large-amplitude monochromatic circularly polarized electromagnetic wave propagating parallel to a background magnetic field. We determine the number of fixed points in velocity space, their stability, and their dependence on different wave-particle parameters. In particular, we determine the temporal evolution of a charged particle in the pitch angle-gyrophase velocity plane under nominal conditions expected for backstreaming protons in planetary foreshocks and for newborn planetary protons in the upstream regions of Venus and Mars. In addition, the inclusion of wave ellipticity effects provides an explanation for the pitch angle distributions of suprathermal protons observed at the Earth's foreshock, reported in previous studies. These analyses constitute a means to evaluate whether nonthermal proton velocity distribution functions observed in these plasma environments present signatures that can be understood in terms of nonlinear wave-particle processes.
NASA Astrophysics Data System (ADS)
Thomas, Zahra; Rousseau-Gueutin, Pauline; Kolbe, Tamara; Abbott, Ben; Marcais, Jean; Peiffer, Stefan; Frei, Sven; Bishop, Kevin; Le Henaff, Geneviève; Squividant, Hervé; Pichelin, Pascal; Pinay, Gilles; de Dreuzy, Jean-Raynald
2017-04-01
The distribution of groundwater residence times in a catchment provides synoptic information about catchment functioning (e.g. nutrient retention and removal, hydrograph flashiness). In contrast with interpreted model results, which are often not directly comparable between studies, the residence time distribution is a general output that could be used to compare catchment behaviors and test hypotheses about landscape controls on catchment functioning. To this end, we created a virtual observatory platform called the Catchment Virtual Observatory for Sharing Flow and Transport Model Outputs (COnSOrT). The main goal of COnSOrT is to collect outputs from calibrated groundwater models from a wide range of environments. By comparing a wide variety of catchments from different climatic, topographic and hydrogeological contexts, we expect to enhance understanding of catchment connectivity, resilience to anthropogenic disturbance, and overall functioning. The web-based observatory will also provide software tools to analyze model outputs. The observatory will enable modelers to test their models in a wide range of catchment environments to evaluate the generality of their findings and the robustness of their post-processing methods. Researchers with calibrated numerical models can benefit from the observatory by using the post-processing methods to implement new approaches to analyzing their data. Field scientists interested in contributing data could invite modelers associated with the observatory to test their models against observed catchment behavior. COnSOrT will allow meta-analyses with community contributions to generate new understanding and identify promising pathways for moving beyond single-catchment ecohydrology. Keywords: Residence time distribution, Model outputs, Catchment hydrology, Inter-catchment comparison
Translational MR Neuroimaging of Stroke and Recovery
Mandeville, Emiri T.; Ayata, Cenk; Zheng, Yi; Mandeville, Joseph B.
2016-01-01
Multiparametric magnetic resonance imaging (MRI) has become a critical clinical tool for diagnosing focal ischemic stroke severity, staging treatment, and predicting outcome. Imaging during the acute phase focuses on tissue viability in the stroke vicinity, while imaging during recovery requires the evaluation of distributed structural and functional connectivity. Preclinical MRI of experimental stroke models provides validation of non-invasive biomarkers in terms of cellular and molecular mechanisms, while also providing a translational platform for evaluation of prospective therapies. This brief review of translational stroke imaging discusses the acute to chronic imaging transition, the principles underlying common MRI methods employed in stroke research, and experimental results obtained by clinical and preclinical imaging to determine tissue viability, vascular remodeling, structural connectivity of major white matter tracts, and functional connectivity using task-based and resting-state fMRI during the stroke recovery process. PMID:27578048
3D Myocardial Elastography In Vivo.
Papadacci, Clement; Bunting, Ethan A; Wan, Elaine Y; Nauleau, Pierre; Konofagou, Elisa E
2017-02-01
Strain evaluation is of major interest in clinical cardiology as it can quantify cardiac function. Myocardial elastography, a radio-frequency (RF)-based cross-correlation method, has been developed to evaluate the local strain distribution in the heart in vivo. However, inhomogeneities such as RF ablation lesions or infarction require a three-dimensional approach to be measured accurately. In addition, acquisition at high volume rates is essential to evaluate cardiac strain in three dimensions. Conventional focused transmit schemes using 2D matrix arrays trade off volume rate against beam density or sector size when imaging rapidly moving structures such as the heart, which lowers the accuracy and precision of the strain estimation. In this study, we developed 3D myocardial elastography at high volume rates using diverging wave transmits to evaluate the local axial strain distribution in three dimensions in three open-chest canines before and after radio-frequency ablation. Acquisitions were performed with a fully programmable 2.5 MHz 2D matrix array used to emit 2000 diverging waves at 2000 volumes/s. Incremental displacements and strains enabled the visualization of rapid events during the QRS complex along with the different phases of the cardiac cycle in entire volumes. Cumulative displacement and strain volumes depict high contrast between non-ablated and ablated myocardium at the lesion location, mapping the tissue coagulation. 3D myocardial strain elastography could thus become an important technique for measuring the regional strain distribution in three dimensions in humans.
Imaging quality evaluation method of pixel coupled electro-optical imaging system
NASA Astrophysics Data System (ADS)
He, Xu; Yuan, Li; Jin, Chunqi; Zhang, Xiaohui
2017-09-01
With advancements in high-resolution imaging optical fiber bundle fabrication technology, traditional photoelectric imaging systems have become "flexible", with greatly reduced volume and weight. However, traditional image quality evaluation models are limited by the coupled discrete sampling effect of fiber-optic image bundles and charge-coupled device (CCD) pixels. This limitation substantially complicates the design, optimization, assembly, and image quality evaluation of the coupled discrete sampling imaging system. Based on the transfer process of a grayscale cosine-distributed optical signal through the fiber-optic image bundle and CCD, a mathematical model of the coupled modulation transfer function (coupled-MTF) is established. This model can be used as a basis for subsequent studies on the convergence and periodically oscillating characteristics of the function. We also propose the concept of the average coupled-MTF, which is consistent with the definition of the traditional MTF. Based on this concept, the relationships among core distance, core layer radius, and average coupled-MTF are investigated.
2014-03-27
[Thesis record; only front-matter residue survives in the source. Recoverable acronym definitions: WDF, Wigner Distribution Function; PES, Potential Energy Surface; DPAL (definition truncated).]
NASA Astrophysics Data System (ADS)
Zainudin, W. N. R. A.; Ramli, N. A.
2017-09-01
In 2010, the Energy Commission (EC) introduced Incentive Based Regulation (IBR) to ensure a sustainable Malaysian Electricity Supply Industry (MESI), promote transparent and fair returns, encourage maximum efficiency, and maintain a policy-driven end-user tariff. To cater for such a revolutionary transformation, a sophisticated system for generating a policy-driven electricity tariff structure is in great need. Hence, this study presents a data analytics framework that generates an altered revenue function based on a varying power consumption distribution and tariff charge function. For the purpose of this study, the power consumption distribution is proxied by the proportion of household consumption and electricity consumed in kWh, and the tariff charge function is proxied by a three-tiered increasing block tariff (IBT). The altered revenue function is useful for indicating whether changes in the power consumption distribution and tariff charges will have a positive or negative impact on the economy. The methodology begins by defining revenue to be a function of the power consumption distribution and the tariff charge function. Then, the proportion of household consumption and the tariff charge function are derived within certain intervals of electricity power. Any changes in those proportions are conjectured to contribute towards changes in the revenue function; thus, these changes can potentially indicate whether changes in the power consumption distribution and tariff charge function have a positive or negative impact on TNB revenue. Based on the findings of this study, major changes in the tariff charge function seem to affect the altered revenue function more than the power consumption distribution does. However, the paper concludes that the power consumption distribution and the tariff charge function can both influence TNB revenue to a great extent.
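A minimal sketch of the revenue construction, assuming a hypothetical three-block tariff; the block boundaries and rates below are not actual TNB tariff values:

```python
# Sketch of the altered-revenue idea: revenue as a function of a household
# power-consumption distribution and a three-tiered increasing block tariff
# (IBT). Block boundaries and rates are hypothetical.
import numpy as np

BLOCKS = [(0, 200, 0.22), (200, 600, 0.33), (600, np.inf, 0.52)]  # kWh, RM/kWh

def bill(kwh):
    """Charge for one household under the three-tiered IBT."""
    total = 0.0
    for lo, hi, rate in BLOCKS:
        total += rate * max(0.0, min(kwh, hi) - lo)
    return total

def revenue(consumption_kwh):
    """Total revenue over a sample drawn from the consumption distribution."""
    return sum(bill(k) for k in consumption_kwh)

rng = np.random.default_rng(5)
base = rng.gamma(shape=2.0, scale=150.0, size=10_000)   # stand-in distribution
shifted = base * 1.10                                   # 10% consumption growth
print("base revenue:   ", round(revenue(base)))
print("altered revenue:", round(revenue(shifted)))
```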
Two-Dimensional Automatic Measurement for Nozzle Flow Distribution Using Improved Ultrasonic Sensor
Zhai, Changyuan; Zhao, Chunjiang; Wang, Xiu; Wang, Ning; Zou, Wei; Li, Wei
2015-01-01
Spray deposition and distribution are affected by many factors, one of which is nozzle flow distribution. A two-dimensional automatic measurement system, which consisted of a conveying unit, a system control unit, an ultrasonic sensor, and a deposition collecting dish, was designed and developed. The system could precisely move an ultrasonic sensor above a pesticide deposition collecting dish to measure the nozzle flow distribution. A sensor sleeve with a PVC tube was designed for the ultrasonic sensor to limit its beam angle in order to measure the liquid level in the small troughs. System performance tests were conducted to verify the designed functions and measurement accuracy. A commercial spray nozzle was also used to measure its flow distribution. The test results showed that the relative error on volume measurement was less than 7.27% when the liquid volume was 2 mL in trough, while the error was less than 4.52% when the liquid volume was 4 mL or more. The developed system was also used to evaluate the flow distribution of a commercial nozzle. It was able to provide the shape and the spraying width of the flow distribution accurately. PMID:26501288
Energy distribution functions of kilovolt ions in a modified Penning discharge.
NASA Technical Reports Server (NTRS)
Roth, J. R.
1973-01-01
The distribution function of ion energy parallel to the magnetic field of a modified Penning discharge has been measured with a retarding potential energy analyzer. These ions escaped through one of the throats of the magnetic mirror geometry. Simultaneous measurements of the ion energy distribution function perpendicular to the magnetic field have been made with a charge-exchange neutral detector. The ion energy distribution functions are approximately Maxwellian, and the parallel and perpendicular kinetic temperatures are equal within experimental error. These results suggest that turbulent processes previously observed in this discharge Maxwellianize the velocity distribution along a radius in velocity space, and result in an isotropic energy distribution.
Dose Distribution in Cone-Beam Breast Computed Tomography: An Experimental Phantom Study
NASA Astrophysics Data System (ADS)
Russo, Paolo; Lauria, Adele; Mettivier, Giovanni; Montesi, Maria Cristina; Villani, Natalia
2010-02-01
We measured the spatial distribution of absorbed dose in a 14 cm diameter PMMA half-ellipsoid phantom simulating the uncompressed breast, using an X-ray cone-beam breast computed tomography apparatus, assembled for laboratory tests. Thermoluminescent dosimeters (TLD-100) were placed inside the phantom in six positions, both axially and at the phantom periphery. To study the dose distribution inside the PMMA phantom two experimental setups were adopted with effective energies in the range 28.7-44.4 keV. Different values of effective energies were obtained by combining different configurations of added Cu filtration (0.05 mm or 0.2 mm) and tube voltages (from 50 kVp to 80 kVp). Dose values obtained by TLDs in different positions inside the PMMA are reported. To evaluate the dose distribution in the breast shaped volume, the values measured were normalized to the one obtained in the inner position inside the phantom. Measurements with a low energy setup show a gradual increment of dose going from the "chest wall" to the "nipple" (63% more at the "nipple" compared to the central position). Likewise, a gradual increment is observed going from the breast axis toward the periphery (82% more at the "skin" compared to the central position). A more uniform distribution of dose inside the PMMA was obtained with a high energy setup (the maximum variation was 33% at 35.5 keV effective energy in the radial direction). The most uniform distribution is obtained at 44.4 keV. The results of this study show how the dose is distributed: it varies as a function of effective energy of the incident X-ray beam and as a function of the position inside the volume (axial or peripheral position).
NASA Astrophysics Data System (ADS)
Khajehei, Sepideh; Moradkhani, Hamid
2015-04-01
Producing reliable and accurate hydrologic ensemble forecasts is subject to various sources of uncertainty, including meteorological forcing, initial conditions, model structure, and model parameters. Producing reliable and skillful precipitation ensemble forecasts is one approach to reducing the total uncertainty in hydrological applications. Currently, Numerical Weather Prediction (NWP) models are developing ensemble forecasts for various temporal ranges. It has been shown that raw products from NWP models are biased in mean and spread. Given this, there is a need for methods that can generate reliable ensemble forecasts for hydrological applications. One common technique is to apply statistical procedures to generate ensemble forecasts from NWP-generated single-value forecasts. The procedure is based on the bivariate probability distribution between the observation and the single-value precipitation forecast. However, one of the assumptions of the current method is that Gaussian distributions fit the marginal distributions of the observed and modeled climate variable. Here, we describe and evaluate a Bayesian approach based on Copula functions to develop an ensemble precipitation forecast from the conditional distribution of single-value precipitation forecasts. Copula functions construct the multivariate joint distribution of univariate marginal distributions and are presented as an alternative procedure for capturing the uncertainties related to meteorological forcing. Copulas are capable of modeling the joint distribution of two variables with any level of correlation and dependency. This study is conducted over a sub-basin of the Columbia River Basin in the USA, using monthly precipitation forecasts from the Climate Forecast System (CFS) at 0.5° × 0.5° spatial resolution to reproduce the observations. The verification is conducted on a separate period, and the procedure is compared with the Ensemble Pre-Processor approach currently used by the National Weather Service River Forecast Centers in the USA.
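As a sketch of the general idea (not the paper's exact procedure), the code below fits gamma marginals and a Gaussian copula to historical forecast-observation pairs and then samples an ensemble from the conditional distribution of observations given a new single-value forecast; all data and distributional choices are illustrative assumptions.

```python
# Sketch of generating an ensemble from the conditional distribution of
# observations given a single-value forecast via a Gaussian copula. Gamma
# marginals are an illustrative choice; the actual CFS post-processing differs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Stand-in historical pairs (single-value forecast, observation):
obs = rng.gamma(2.0, 40.0, size=600)
fcst = 0.7 * obs + rng.gamma(2.0, 15.0, size=600)

# Fit gamma marginals and the Gaussian copula correlation on normal scores.
g_obs, g_fcst = stats.gamma.fit(obs, floc=0), stats.gamma.fit(fcst, floc=0)
z_obs = stats.norm.ppf(stats.gamma.cdf(obs, *g_obs))
z_fcst = stats.norm.ppf(stats.gamma.cdf(fcst, *g_fcst))
rho = np.corrcoef(z_obs, z_fcst)[0, 1]

def ensemble_given_forecast(f_new, n_members=50):
    """Sample obs | forecast from the conditional Gaussian copula."""
    zf = stats.norm.ppf(stats.gamma.cdf(f_new, *g_fcst))
    zo = rng.normal(rho * zf, np.sqrt(1 - rho**2), size=n_members)
    return stats.gamma.ppf(stats.norm.cdf(zo), *g_obs)

print(ensemble_given_forecast(120.0, n_members=5))
```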
Aaboud, M.; Aad, G.; Abbott, B.; ...
2017-02-23
Ratios of top-quark pair to Z-boson cross sections measured from proton-proton collisions at the LHC centre-of-mass energies of √s = 13 TeV, 8 TeV, and 7 TeV are presented by the ATLAS Collaboration. Single ratios, at a given √s for the two processes and at different √s for each process, as well as double ratios of the two processes at different √s, are evaluated. The ratios are constructed using previously published ATLAS measurements of the tt̄ and Z-boson production cross sections, corrected to a common phase space where required, and a new analysis of Z → ℓ⁺ℓ⁻, where ℓ = e, μ, at √s = 13 TeV performed with data collected in 2015 with an integrated luminosity of 3.2 fb⁻¹. Correlations of systematic uncertainties are taken into account when evaluating the uncertainties in the ratios. The correlation model is also used to evaluate the combined cross section of the Z → e⁺e⁻ and the Z → μ⁺μ⁻ channels for each √s value. The results are compared to calculations performed at next-to-next-to-leading-order accuracy using recent sets of parton distribution functions. The data demonstrate significant power to constrain the gluon distribution function for Bjorken-x values near 0.1 and the light-quark sea for x < 0.02.
A new family of distribution functions for spherical galaxies
NASA Astrophysics Data System (ADS)
Gerhard, Ortwin E.
1991-06-01
The present study describes a new family of anisotropic distribution functions for stellar systems designed to keep control of the orbit distribution at fixed energy. These are quasi-separable functions of energy and angular momentum, and they are specified in terms of a circularity function h(x) which fixes the distribution of orbits on the potential's energy surfaces outside some anisotropy radius. Detailed results are presented for a particular set of radially anisotropic circularity functions h_α(x). In the scale-free logarithmic potential, exact analytic solutions are shown to exist for all scale-free circularity functions. Intrinsic and projected velocity dispersions are calculated and the expected properties are presented in extensive tables and graphs. Several applications of the quasi-separable distribution functions are discussed. They include the effects of anisotropy or a dark halo on line-broadening functions, the radial orbit instability in anisotropic spherical systems, and violent relaxation in spherical collapse.
Thin-plate spline quadrature of geodetic integrals
NASA Technical Reports Server (NTRS)
Vangysen, Herman
1989-01-01
Thin-plate spline functions (known for their flexibility and fidelity in representing experimental data) are especially well suited for the numerical integration of geodetic integrals in the area where the integration is most sensitive to the data, i.e., in the immediate vicinity of the evaluation point. Spline quadrature rules are derived for the contribution of a circular innermost zone to Stokes' formula, to the formulae of Vening Meinesz, and to the recursively evaluated operator L(n) in the analytical continuation solution of Molodensky's problem. These rules are exact for interpolating thin-plate splines. In cases where the integration data are distributed irregularly, a system of linear equations needs to be solved for the quadrature coefficients. Formulae are given for the terms appearing in these equations. In case the data are regularly distributed, the coefficients may be determined once and for all. Examples are given of some fixed-point rules. With such rules, successive evaluation, within a circular disk, of the terms in Molodensky's series becomes relatively easy. The spline quadrature technique presented complements other techniques such as ring integration for intermediate integration zones.
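A compact illustration of the underlying machinery, assuming the standard thin-plate spline kernel φ(r) = r² log r with an affine part: interpolate irregularly distributed data by solving the usual linear system, then integrate the interpolant over a disk (numerically here; the paper derives quadrature coefficients analytically).

```python
# Sketch of thin-plate spline interpolation of scattered data followed by
# integration of the interpolant over a disk. The analytic quadrature rules
# of the paper are replaced by Monte Carlo integration for illustration.
import numpy as np

def tps_fit(pts, vals):
    """Solve for TPS weights w and affine part a: s(x) = sum_i w_i phi(|x-p_i|) + a0 + a1*x + a2*y."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    K = d**2 * np.log(np.where(d > 0, d, 1.0))       # phi(0) = 0 handled via log(1)
    P = np.hstack([np.ones((n, 1)), pts])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))
    return sol[:n], sol[n:]

def tps_eval(x, pts, w, a):
    d = np.linalg.norm(x[:, None, :] - pts[None, :, :], axis=2)
    phi = d**2 * np.log(np.where(d > 0, d, 1.0))
    return phi @ w + a[0] + x @ a[1:]

rng = np.random.default_rng(7)
pts = rng.uniform(-1, 1, size=(30, 2))
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) < 1]        # irregular data in a disk
vals = np.cos(pts[:, 0]) * pts[:, 1]**2              # stand-in "gravity" data

w, a = tps_fit(pts, vals)
q = rng.uniform(-1, 1, size=(200_000, 2))            # Monte Carlo points
q = q[np.hypot(q[:, 0], q[:, 1]) < 1]
print("disk integral of spline ~", np.pi * tps_eval(q, pts, w, a).mean())
```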
ERIC Educational Resources Information Center
Balasooriya, Uditha; Li, Jackie; Low, Chan Kee
2012-01-01
For any density function (or probability function), there always corresponds a "cumulative distribution function" (cdf). It is a well-known mathematical fact that the cdf is more general than the density function, in the sense that for a given distribution the former may exist without the existence of the latter. Nevertheless, while the…
dftools: Distribution function fitting
NASA Astrophysics Data System (ADS)
Obreschkow, Danail
2018-05-01
dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.
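dftools itself is an R package; the Python sketch below only illustrates the core idea for D = 1, maximizing a likelihood in which the DF is integrated against each object's Gaussian error kernel. The normal "generative" DF is a stand-in for, e.g., a galaxy mass function, and selection effects are ignored.

```python
# Illustration of fitting a D=1 distribution function when each object's
# observable carries its own Gaussian measurement uncertainty: the DF is
# convolved with each error kernel inside the likelihood.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(8)
true_mu, true_sig = 10.0, 0.8
x_true = rng.normal(true_mu, true_sig, size=400)        # latent true values
err = rng.uniform(0.1, 0.6, size=400)                   # per-object uncertainties
x_obs = x_true + rng.normal(0.0, err)                   # noisy observables

grid = np.linspace(6, 14, 400)                          # latent-value grid

def neg_loglike(theta):
    mu, sig = theta
    df = stats.norm.pdf(grid, mu, sig)                  # DF on the grid
    # Likelihood of each object: integral of the DF times its error kernel.
    kern = stats.norm.pdf(grid[None, :], x_obs[:, None], err[:, None])
    like = np.trapz(df[None, :] * kern, grid, axis=1)
    return -np.sum(np.log(like + 1e-300))

res = optimize.minimize(neg_loglike, x0=[9.0, 1.0], method="Nelder-Mead")
print("fitted mu, sigma:", res.x)   # a naive fit of x_obs would inflate sigma
```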
Liu, W; Mohan, R
2012-06-01
Proton dose distributions, and IMPT dose distributions in particular, are highly sensitive to setup and range uncertainties. We report a novel method, based on the per-voxel standard deviation (SD) of dose distributions, to evaluate the robustness of proton plans and to robustly optimize IMPT plans so as to render them less sensitive to uncertainties. For each optimization iteration, nine dose distributions are computed: the nominal one, and one each for ± setup uncertainties along the x, y and z axes and for ± range uncertainty. The SD of dose in each voxel is used to create an SD-volume histogram (SVH) for each structure. The SVH may be considered a quantitative representation of the robustness of the dose distribution. For optimization, the desired robustness may be specified in terms of an SD-volume (SV) constraint on the CTV and incorporated as a term in the objective function. Results of optimization with and without this constraint were compared in terms of plan optimality and robustness using the so-called 'worst case' dose distributions, which are obtained by assigning the lowest among the nine doses to each voxel in the clinical target volume (CTV) and the highest to normal tissue voxels outside the CTV. The SVH curve and the area under it for each structure were used as quantitative measures of robustness. The penalty parameter of the SV constraint may be varied to control the trade-off between robustness and plan optimality. We applied these methods to one case each of H&N and lung. In both cases, we found that imposing the SV constraint improved plan robustness, but at the cost of normal tissue sparing. SVH-based optimization and evaluation is an effective tool for robustness evaluation and robust optimization of IMPT plans. Studies need to be conducted to test the methods for larger cohorts of patients and for other sites. This research is supported by National Cancer Institute (NCI) grant P01CA021239, the University Cancer Foundation via the Institutional Research Grant program at the University of Texas MD Anderson Cancer Center, and MD Anderson's cancer center support grant CA016672.
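A small sketch of the SVH bookkeeping described above, with random arrays standing in for the nine computed dose distributions and a cubic placeholder CTV mask:

```python
# Sketch of the per-voxel SD evaluation: nine dose distributions (nominal plus
# +/- setup shifts in x, y, z and +/- range) give a per-voxel standard
# deviation, binned into an SD-volume histogram (SVH) for a structure.
# Dose arrays here are random placeholders, not real plan data.
import numpy as np

rng = np.random.default_rng(9)
nominal = rng.random((40, 40, 40)) * 60.0               # stand-in dose (Gy)
scenarios = [nominal] + [nominal + rng.normal(0, s, nominal.shape)
                         for s in (1, 1, 1.5, 1.5, 2, 2, 3, 3)]  # 9 total

sd = np.std(np.stack(scenarios), axis=0)                # per-voxel SD

ctv_mask = np.zeros_like(nominal, dtype=bool)
ctv_mask[10:30, 10:30, 10:30] = True                    # placeholder CTV

def svh(sd_map, mask, sd_axis):
    """Fraction of structure volume with per-voxel SD >= each threshold."""
    vals = sd_map[mask]
    return np.array([(vals >= t).mean() for t in sd_axis])

sd_axis = np.linspace(0, sd[ctv_mask].max(), 50)
curve = svh(sd, ctv_mask, sd_axis)
print("area under SVH (robustness measure):", np.trapz(curve, sd_axis))
```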
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrus, Jason P.; Pope, Chad; Toston, Mary
2016-12-01
Nonreactor nuclear facilities operating under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines or exceed the guidelines for members of the public or workers merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple-to-use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose distribution associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. Users can also specify custom distributions through a user-defined distribution option. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA, developed using the MATLAB coding framework, has a graphical user interface and can be installed on both Windows and Mac computers. SODA is a standalone software application and does not require MATLAB to function. SODA provides improved risk understanding, leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather, it is viewed as an easy-to-use supplemental tool to help improve risk understanding and support better informed decisions. The SODA development project was funded through a grant from the DOE Nuclear Safety Research and Development Program.
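A sketch of the general SODA-style workflow: sample every input variable of a point-estimate dose equation from a user-chosen distribution and propagate to a dose distribution. The simplified five-factor-style source-term model and all parameter values below are hypothetical, not SODA's actual equations.

```python
# Sketch of stochastic dose evaluation: each input variable of a simplified
# source-term/dose equation gets a distribution, is sampled, and the product
# yields a dose distribution rather than a point estimate. All models and
# values are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(10)
n = 100_000
mar = rng.triangular(50, 100, 200, n)          # material at risk (Ci)
dr = rng.uniform(1e-3, 1e-2, n)                # damage ratio
arf = rng.lognormal(np.log(1e-4), 0.5, n)      # airborne release fraction
rf = rng.uniform(0.2, 1.0, n)                  # respirable fraction
chi_q = rng.lognormal(np.log(1e-4), 0.8, n)    # atmospheric dispersion (s/m^3)
br = 3.3e-4                                    # breathing rate (m^3/s), fixed point value
dcf = 5.0e8                                    # dose conversion factor (rem/Ci), placeholder

dose = mar * dr * arf * rf * chi_q * br * dcf  # five-factor-style source term

print("mean dose:       %.3e rem" % dose.mean())
print("95th percentile: %.3e rem" % np.percentile(dose, 95))
```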
Nakamura, Yoshinori; Kanbara, Ryo; Ochiai, Kent T; Tanaka, Yoshinobu
2014-10-01
The mechanical evaluation of the function of partial removable dental prostheses with 3-dimensional finite element modeling requires the accurate assessment and incorporation of soft tissue behavior. The differential behaviors of the residual ridge mucosa and periodontal ligament tissues have been shown to exhibit nonlinear displacement. The mathematic incorporation of known values simulating nonlinear soft tissue behavior has not been investigated previously via 3-dimensional finite element modeling evaluation to demonstrate the effect of prosthesis design on the supporting tissues. The purpose of this comparative study was to evaluate the functional differences of 3 different partial removable dental prosthesis designs with 3-dimensional finite element analysis modeling and a simulated patient model incorporating known viscoelastic, nonlinear soft tissue properties. Three different designs of distal extension removable partial dental prostheses were analyzed. The stress distributions to the supporting abutments and soft tissue displacements of the designs tested were calculated and mechanically compared. Among the 3 dental designs evaluated, the RPI prosthesis demonstrated the lowest stress concentrations on the tissue supporting the tooth abutment and also provided wide mucosa-borne areas of support, thereby demonstrating a mechanical advantage and efficacy over the other designs evaluated. The data and results obtained from this study confirmed that the functional behavior of partial dental prostheses with supporting abutments and soft tissues is consistent with the conventional theories of design and clinical experience. The validity and usefulness of this testing method for future applications and testing protocols are shown.
Yu, Peng; Shaw, Chad A
2014-06-01
The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics, including applications to metagenomics data, transcriptomics and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for the large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood that solves the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method improves both the accuracy of log-likelihood evaluation and the runtime by several orders of magnitude, especially in the high-count situations that are common in deep sequencing data. Using real metagenomic data, our method achieves manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data.
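For orientation, the standard numerically stable evaluation of the DMN log-likelihood uses log-gamma functions, as sketched below; the paper's contribution is a further reformulation that stays accurate and fast as ψ approaches 0, which is not reproduced here.

```python
# Standard lgamma form of the Dirichlet-multinomial log-likelihood, shown for
# orientation only; the paper's improved formula for small overdispersion is
# not reproduced here.
import numpy as np
from scipy.special import gammaln

def dmn_loglike(counts, alpha):
    """log P(counts | alpha) for one sample, up to the multinomial coefficient."""
    counts, alpha = np.asarray(counts, float), np.asarray(alpha, float)
    n, a0 = counts.sum(), alpha.sum()
    return (gammaln(a0) - gammaln(n + a0)
            + np.sum(gammaln(counts + alpha) - gammaln(alpha)))

# Overdispersion parameterisation alpha = p * (1 - psi) / psi: small psi
# (alpha large) approaches the multinomial, the regime where naive formulas
# lose precision.
p = np.array([0.5, 0.3, 0.2])
x = np.array([520, 290, 190])
for psi in (0.1, 1e-3, 1e-8):
    print(psi, dmn_loglike(x, p * (1 - psi) / psi))
```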
Analytical dose evaluation of neutron and secondary gamma-ray skyshine from nuclear facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, K.; Nakamura, T.
1985-11-01
The skyshine dose distributions of neutrons and secondary gamma rays were calculated systematically using the Monte Carlo method for distances up to 2 km from the source. The energy of the source neutrons ranged from thermal to 400 MeV; their emission angle, from 0 to 90 deg from the vertical, was treated with a distribution of the direction cosine divided into five equal intervals. The calculated dose distributions D(r) were fitted to the formula D(r) = Q exp(−r/λ)/r. The values of Q and λ are slowly varying functions of energy. This formula was applied to benchmark problems of neutron skyshine from fission, fusion, and accelerator facilities, and good agreement was achieved. The formula will be quite useful for shielding designs of various nuclear facilities.
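The fitted formula in code form, with placeholder values for Q and λ (in the paper both are tabulated, slowly varying functions of source-neutron energy and emission angle):

```python
# The fitted skyshine formula D(r) = Q * exp(-r / lam) / r. Q and lam below
# are placeholders, not values from the paper's tables.
import numpy as np

def skyshine_dose(r_m, Q=1.0e-6, lam=600.0):
    """Dose per source neutron at ground distance r_m (metres)."""
    r = np.asarray(r_m, dtype=float)
    return Q * np.exp(-r / lam) / r

for r in (100, 500, 1000, 2000):
    print(r, skyshine_dose(r))
```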
Incompressible lifting-surface aerodynamics for a rotor-stator combination
NASA Technical Reports Server (NTRS)
Ramachandra, S. M.
1984-01-01
Current literature on the three-dimensional flow through compressor cascades deals with a row of rotor blades in isolation. Since the distance between the rotor and stator is usually 10 to 20 percent of the blade chord, the aerodynamic interference between them has to be considered for a proper evaluation of the aerothermodynamic performance of the stage. A unified approach to the aerodynamics of the incompressible flow through a stage is presented that uses lifting surface theory for a compressor cascade of arbitrary camber and thickness distribution. The effects of rotor-stator interference are represented as a linear function of the rotor and stator flows separately. The loading distributions on the rotor and stator blades and the interference factor are determined concurrently through a matrix iteration process.
2014-09-01
...has highlighted the need for physically consistent radiation pressure and bidirectional reflectance distribution function (BRDF) models. This paper seeks to evaluate the impact of BRDF-consistent radiation pressure models compared to changes in the other BRDF parameters. The differences in orbital position arising because of changes in the shape, attitude, angular rates, BRDF parameters, and radiation pressure model are plotted as a...
Evaluation of telemedicine centres in Madhya Pradesh, Central India.
Bali, Surya; Gupta, Arti; Khan, Asif; Pakhare, Abhijit
2016-04-01
In a developing country such as India, there is substantial inequality in health care distribution. Telemedicine facilities were established in Madhya Pradesh in 2007-2008. The purpose of this study was to evaluate the infrastructure, equipment, manpower, and functional status of Indian Space Research Organisation (ISRO) telemedicine nodes in Madhya Pradesh. All district hospitals and medical colleges with nodes were visited by a team of three members. The study was conducted from December 2013 to January 2014. The team recorded the structural facility situation and physical conditions on a predesigned pro forma. The team also conducted interviews with the nodal officers, data entry operators and other relevant people at these centres. Of the six specialist nodes, four were functional and two were non-functional. Of 10 patient nodes, two were functional, four were semi-functional and four were non-functional. Most of the non-functional centres were not working because of problems with their satellite modems. The overall condition of ISRO-run telemedicine centres in Madhya Pradesh was found to be poor. Most of these centres failed to provide telemedicine consultations. We recommend replacing this system with another cost-effective system available on the state wide area network (SWAN). We also suggest the concept of a virtual out-patient department.
Weymann, Alexander; Ali-Hasan-Al-Saegh, Sadeq; Sabashnikov, Anton; Popov, Aron-Frederik; Mirhosseini, Seyed Jalil; Nombela-Franco, Luis; Testa, Luca; Lotfaliani, Mohammadreza; Zeriouh, Mohamed; Liu, Tong; Dehghan, Hamidreza; Yavuz, Senol; de Oliveira Sá, Michel Pompeu Barros; Baker, William L.; Jang, Jae-Sik; Gong, Mengqi; Benedetto, Umberto; Dohmen, Pascal M.; D’Ascenzo, Fabrizio; Deshmukh, Abhishek J.; Biondi-Zoccai, Giuseppe; Calkins, Hugh; Stone, Gregg W.
2017-01-01
Background: This systematic review with meta-analysis aimed to determine the strength of evidence for evaluating the association of platelet cellular and functional characteristics, including platelet count (PC), mean platelet volume (MPV), platelet distribution width (PDW), platelet factor 4 (PF4), beta-thromboglobulin (BTG), and p-selectin, with the occurrence of atrial fibrillation (AF) and consequent stroke. Material/Methods: We conducted a meta-analysis of observational studies evaluating platelet characteristics in patients with paroxysmal, persistent, and permanent atrial fibrillation. A comprehensive subgroup analysis was performed to explore potential sources of heterogeneity. Results: A literature search of all major databases retrieved 1,676 studies. After screening, a total of 73 studies were identified. Pooled analysis showed significant differences in PC (weighted mean difference (WMD) = -26.93, p<0.001), MPV (WMD = 0.61, p<0.001), PDW (WMD = -0.22, p = 0.002), BTG (WMD = 24.69, p<0.001), PF4 (WMD = 4.59, p<0.001), and p-selectin (WMD = 4.90, p<0.001). Conclusions: Platelets play a critical and precipitating role in the occurrence of AF. Whereas the distribution width of platelets as well as markers of platelet activity were significantly greater in AF patients than in sinus rhythm (SR) patients, platelet count was significantly lower in AF patients.
NASA Astrophysics Data System (ADS)
Barrera, G.; Coisson, M.; Celegato, F.; Raghuvanshi, S.; Mazaleyrat, F.; Kane, S. N.; Tiberto, P.
2018-06-01
Co1-xZnxFe2O4 (0.08 ≤ x ≤ 0.56) powders prepared by a sol-gel auto-combustion method have been investigated through the combined use of structural and dc/ac-magnetization measurements over a wide range of applied magnetic field values. EDS spectra were acquired to evaluate the chemical composition of the samples, whereas X-ray diffraction measurements indicate the formation of the typical nanocrystalline mixed cubic spinel structure and allow determination of the cationic distribution as well as the lattice parameter and the oxygen position as a function of Zn content. Magnetic characterization improves understanding of the correlation between the structural properties and the magnetic behavior. The magnetization curves show hysteretic behavior at room temperature and are analyzed as a function of Zn content, taking into account the Yafet-Kittel model. The replacement of non-zero magnetic moment Co2+ ions with zero magnetic moment Zn2+ ions induces a gradual reduction of the magnetocrystalline anisotropy and a lowering of the magnetic coercivity. The energy lost in static and alternating magnetic fields (frequency of 69 kHz) at selected vertex field values has been calculated for the studied samples in order to evaluate their prospective use under different field conditions.
Santora, Jarrod A; Zeno, Ramona; Dorman, Jeffrey G; Sydeman, William J
2018-05-15
Submarine canyon systems are ubiquitous features of marine ecosystems, known to support high levels of biodiversity. Canyons may be important to benthic-pelagic ecosystem coupling, but their role in concentrating plankton and structuring pelagic communities is not well known. We hypothesize that at the scale of a large marine ecosystem, canyons provide a critical habitat network, which maintain energy flow and trophic interactions. We evaluate canyon characteristics relative to the distribution and abundance of krill, critically important prey in the California Current Ecosystem. Using a geological database, we conducted a census of canyon locations, evaluated their dimensions, and quantified functional relationships with krill hotspots (i.e., sites of persistently elevated abundance) derived from hydro-acoustic surveys. We found that 76% of krill hotspots occurred within and adjacent to canyons. Most krill hotspots were associated with large shelf-incising canyons. Krill hotspots and canyon dimensions displayed similar coherence as a function of latitude and indicate a potential regional habitat network. The latitudinal migration of many fish, seabirds and mammals may be enhanced by using this canyon-krill network to maintain foraging opportunities. Biogeographic assessments and predictions of krill and krill-predator distributions under climate change may be improved by accounting for canyons in habitat models.
Assurance Technology Challenges of Advanced Space Systems
NASA Technical Reports Server (NTRS)
Chern, E. James
2004-01-01
The initiative to explore space and extend a human presence across our solar system to revisit the moon and Mars poses enormous technological challenges to the nation's space agency and aerospace industry. Key areas of technology development needed to enable the endeavor include advanced materials, structures and mechanisms; micro/nano sensors and detectors; power generation, storage and management; advanced thermal and cryogenic control; guidance, navigation and control; command and data handling; advanced propulsion; advanced communication; on-board processing; advanced information technology systems; modular and reconfigurable systems; precision formation flying; solar sails; distributed observing systems; and space robotics. Quality assurance concerns such as functional performance, structural integrity, radiation tolerance, health monitoring, diagnosis, maintenance, calibration, and initialization can affect the performance of systems and subsystems. It is thus imperative to employ innovative nondestructive evaluation methodologies to ensure the quality and integrity of advanced space systems. Advancements in integrated multi-functional sensor systems, autonomous inspection approaches, distributed embedded sensors, roaming inspectors, and shape-adaptive sensors are sought. Concepts in computational models for signal processing and data interpretation to establish quantitative characterization and event determination are also of interest. Prospective evaluation technologies include ultrasonics, laser ultrasonics, optics and fiber optics, shearography, video optics and metrology, thermography, electromagnetics, acoustic emission, x-ray, data management, biomimetics, and nano-scale sensing approaches for structural health monitoring.
Spatially distributed effects of mental exhaustion on resting-state FMRI networks.
Esposito, Fabrizio; Otto, Tobias; Zijlstra, Fred R H; Goebel, Rainer
2014-01-01
Brain activity during rest is spatially coherent over functional connectivity networks called resting-state networks. In resting-state functional magnetic resonance imaging, independent component analysis yields spatially distributed network representations reflecting distinct mental processes, such as intrinsic (default) or extrinsic (executive) attention, and sensory inhibition or excitation. These aspects can be related to different treatments or subjective experiences. Among these, exhaustion is a common psychological state induced by prolonged mental performance. Using repeated functional magnetic resonance imaging sessions and spatial independent component analysis, we explored the effect of several hours of sustained cognitive performance on the resting human brain. Resting-state functional magnetic resonance imaging was performed on the same healthy volunteers on two days, one with and one without an intensive psychological treatment (skill training and sustained practice with a flight simulator), and before, during and after the treatment. After each scan, subjects rated their level of exhaustion and performed an N-back task to detect any decrease in cognitive performance. Spatial maps of selected resting-state network components were statistically evaluated across time points to detect possible changes induced by the sustained mental performance. The intensive treatment had a significant effect on exhaustion and effort ratings, but no effect on N-back performance. Significant changes in the most exhausted state were observed in the early visual processing and anterior default mode networks (enhancement) and in the fronto-parietal executive networks (suppression), suggesting that mental exhaustion is associated with a more idling brain state and that internal attention processes are facilitated to the detriment of more extrinsic processes. The described application may inspire future indicators of the level of fatigue in the neural attention system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, S.; Kroposki, B.; Kramer, W.
Integrating renewable energy and distributed generation into the Smart Grid architecture requires power electronics (PE) for energy conversion. The key to successful Smart Grid implementation is to develop interoperable, intelligent, and advanced PE technology that improves and accelerates the use of distributed energy resource systems. This report describes the simulation, design, and testing of a single-phase DC-to-AC inverter developed to operate in both islanded and utility-connected mode. It provides results on both the simulations and the experiments conducted, demonstrating the ability of the inverter to provide advanced control functions such as power flow and VAR/voltage regulation. This report also analyzes two different techniques used for digital signal processor (DSP) code generation. Initially, the DSP code was written in the C programming language using Texas Instruments' Code Composer Studio. In a later stage of the research, the Simulink DSP toolbox was used to generate code for the DSP automatically. The successful tests using Simulink-generated DSP code show promise for fast prototyping of PE controls.
Combining aesthetic with ecological values for landscape sustainability.
Yang, Dewei; Luo, Tao; Lin, Tao; Qiu, Quanyi; Luo, Yunjian
2014-01-01
Humans receive multiple benefits from various landscapes that foster ecological services and aesthetic attractiveness. In this study, a hybrid framework was proposed to evaluate the ecological and aesthetic values of five landscape types in the Houguanhu Region of central China. Data from a public aesthetic survey and a professional ecological assessment were converted into a two-dimensional coordinate system and distribution maps of landscape values. Results showed that natural landscapes (i.e. water body and forest) contributed more to both aesthetic and ecological values than did semi-natural and human-dominated landscapes (i.e. farmland and non-ecological land). The distribution maps of landscape values indicated that the aesthetic, ecological and integrated landscape values were significantly associated with landscape attributes and the intensity of human activity. To combine aesthetic preferences with ecological services, several methods (i.e. field survey, landscape value coefficients, a normalization method, a two-dimensional coordinate system, and landscape value distribution maps) were employed in the landscape assessment. Our results could help identify the underlying structure-function-value chain and improve the understanding of multiple functions in landscape planning. The situational context should also be emphasized to bring ecological and aesthetic goals into better alignment.
Shimabukuro, Marilia Kimie; Langhi, Larissa Gutman Paranhos; Cordeiro, Ingrid; Brito, José M.; Batista, Claudia Maria de Castro; Mattson, Mark P.; de Mello Coelho, Valeria
2016-01-01
We characterized cerebral Oil Red O-positive lipid-laden cells (LLC) of aging mice, evaluating their distribution, morphology, density, functional activities and inflammatory phenotype. We identified LLC in meningeal, cortical and neurogenic brain regions. The density of cerebral LLC increased with age. LLC presenting small lipid droplets were visualized adjacent to blood vessels or deeper in the brain cortical and striatal parenchyma of aging mice. LLC with larger droplets were asymmetrically distributed in the cerebral ventricle walls, mainly located in the lateral wall. We also found that LLC in the subventricular region co-expressed beclin-1 or LC3, markers for autophagosome or autophagolysosome formation, and perilipin (PLIN), a lipid droplet-associated protein, suggesting lipophagic activity. Some cerebral LLC exhibited β-galactosidase activity, indicating a senescence phenotype. Moreover, we detected production of the pro-inflammatory cytokine TNF-α in cortical PLIN+ LLC. Some cortical NeuN+ neurons, GFAP+ glia limitans astrocytes, Iba-1+ microglia and S100β+ ependymal cells expressed PLIN in the aging brain. Our findings suggest that cerebral LLC exhibit distinct cellular phenotypes and may participate in age-associated neuroinflammatory processes.
Ellipsoids for anomaly detection in remote sensing imagery
NASA Astrophysics Data System (ADS)
Grosklos, Guenchik; Theiler, James
2015-05-01
For many target and anomaly detection algorithms, a key step is the estimation of a centroid (relatively easy) and a covariance matrix (somewhat harder) that characterize the background clutter. For a background that can be modeled as a multivariate Gaussian, the centroid and covariance lead to an explicit probability density function that can be used in likelihood ratio tests for optimal detection statistics. But ellipsoidal contours can characterize a much larger class of multivariate density functions, and the ellipsoids that characterize the outer periphery of the distribution are most appropriate for detection in the low false alarm rate regime. Traditionally the sample mean and sample covariance are used to estimate ellipsoid location and shape, but these quantities are confounded both by large lever-arm outliers and by non-Gaussian distributions within the ellipsoid of interest. This paper compares a variety of centroid and covariance estimation schemes with the aim of characterizing the periphery of the background distribution. In particular, we consider a robust variant of the Khachiyan algorithm for the minimum-volume enclosing ellipsoid. The performance of these different approaches is evaluated on multispectral and hyperspectral remote sensing imagery using coverage plots of ellipsoid volume versus false alarm rate.
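As a point of comparison for the estimators discussed above, the baseline scheme (sample mean and sample covariance feeding a squared-Mahalanobis-distance anomaly score) can be sketched as follows; this is a generic illustration, not the paper's robust Khachiyan variant, and a robust estimator would simply replace mu and S here:

```python
import numpy as np

def mahalanobis_scores(X):
    """Squared Mahalanobis anomaly scores from the sample mean and
    covariance of background pixels X (n_samples x n_bands). These
    plain estimators are the ones the text notes are confounded by
    lever-arm outliers and non-Gaussian interiors."""
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    Sinv = np.linalg.inv(S)
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, Sinv, d)
```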
Laksmana, F L; Van Vliet, L J; Hartman Kok, P J A; Vromans, H; Frijlink, H W; Van der Voort Maarschalk, K
2009-04-01
This study aims to develop a characterization method for coating structure based on image analysis, which is particularly promising for the rational design of coated particles in the pharmaceutical industry. The method applies the MATLAB image processing toolbox to images of coated particles taken with confocal laser scanning microscopy (CLSM). The coating thicknesses were determined along the particle perimeter, from which a statistical analysis could be performed to obtain relevant thickness properties, e.g. the minimum coating thickness and the span of the thickness distribution. The characterization of the pore structure involved a proper segmentation of pores from the coating and a granulometry operation. The presented method facilitates the quantification of the porosity, thickness and pore size distribution of a coating. These parameters are considered the important coating properties that are critical to coating functionality. Additionally, the effect of coating process variations on coating quality can straightforwardly be assessed. By enabling a good characterization of coating quality, the presented method can be used as a fast and effective tool to predict coating functionality. This approach also enables the influence of different process conditions on coating properties to be effectively monitored, which ultimately enables process tailoring.
NASA Technical Reports Server (NTRS)
Mcclelland, J.; Silk, J.
1978-01-01
Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.
NASA Astrophysics Data System (ADS)
Jeon, Seong-Beom; Yi, Se Won; Samal, Monica; Park, Keun-Hong; Yun, Kyusik
2018-04-01
We investigated the biocompatibility of graphene quantum dots (GQDs) in terms of the cellular response, an aspect often overlooked. Herein, we synthesized two types of GQDs - Glu-GQDs (derived from glucose) and Gr-GQDs (derived from graphite) - with different functional groups on their surfaces. Both types of GQDs shared similar morphological features (shape and size distribution); the size distribution varied between 1.5 nm and 9.5 nm in both cases. Spectral analysis confirmed the difference in their chemical composition; the presence of nitrogen and chlorine in the Glu-GQDs is the major distinction between the two types. Fluorescence emission was observed at 480 nm for the Glu-GQDs and at 550 nm for the Gr-GQDs. Cytotoxicity in NHDF and HeLa cell lines was evaluated by a CCK-8 assay, which confirmed that cell viability remained above 80% even at a high concentration (1024 μg/mL) in both cases. The cellular response after GQD treatment differed from the control, but was not lethal in terms of cell viability. Furthermore, the potential of the GQDs as bio-imaging agents was examined using a fluorescence microscope and a laser scanning confocal microscope. The Glu-GQDs dispersed throughout the cells in both NHDF and HeLa cell lines, while the Gr-GQDs dispersed in the cytoplasm of the NHDF cells and were distributed throughout the cell in HeLa cells. This study demonstrates that GQDs have potential in biomedical applications, even though their functionalities may differ.
NASA Astrophysics Data System (ADS)
Hsu, H. T.; Lawrence, C. R.; Winnick, M.; Druhan, J. L.; Williams, K. H.; Maher, K.; Rainaldi, G. R.; McCormick, M. E.
2016-12-01
The cycling of carbon through soils is one of the least understood aspects of the global carbon cycle and represents a key uncertainty in the prediction of the land-surface response to global warming. Thus, there is an urgent need for advanced characterization of soil organic carbon (SOC) to develop and evaluate a new generation of soil carbon models. We hypothesize that shifts in SOC composition and spatial distribution as a function of soil depth can be used to constrain rates of transformation between the litter layer and the deeper subsoil (extending to a depth of approximately 1 m). To evaluate the composition and distribution of SOC, we collected soil samples from East River, a shale-dominated watershed near Crested Butte, CO, and characterized relative changes in SOC species as a function of depth using elemental analysis (EA), Fourier transform infrared spectroscopy (FT-IR) and bulk C X-ray absorption spectroscopy (XAS). Our results show that total organic carbon (TOC) decreases with depth, and high total inorganic carbon (TIC) content was found in deeper soils (below 75 cm), a characteristic of the shale bedrock. The distribution of aliphatic C relative to the parent material generally decreases with depth, and polysaccharides can be a substantial component of SOC at various depths. On the other hand, aromatic C, traditionally viewed as recalcitrant, makes up only a very small part of SOC regardless of depth. These observations confirm that molecular structure is not the only determinant of the SOC turnover rate. To study other contributors to SOC decomposition, we examined changes in the spatial correlation of SOC and minerals using X-ray fluorescence spectroscopy (XRF) and scanning transmission X-ray microscopy (STXM). We found that aromatics are mostly located on the surface of small soil aggregates (1-10 μm). Polysaccharides and proteins, both traditionally viewed as labile, are more evenly distributed over the interior of the particles, which could limit microbial access and thus decrease the decomposition rate. The speciation and spatial distribution results can be compared with field-measured CO2 fluxes, soil moisture, and radiocarbon data to assess the factors that control SOC turnover rates in different environments across the catchment and to enhance the development of SOC models.
Reliability and risk assessment of structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1991-01-01
Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.
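To illustrate the flavor of items (1) and (2), propagating uncertain primitive variables to a failure probability, here is a toy Monte Carlo sketch with assumed distributions and numbers (not NASA Lewis data or the center's actual probabilistic codes):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Assumed toy primitive variables: applied load L and member strength S
L = rng.normal(100.0, 15.0, N)                            # load (e.g., kN)
S = rng.lognormal(mean=np.log(150.0), sigma=0.1, size=N)  # strength (e.g., kN)

# Failure occurs when strength falls below load; the empirical CDF of
# the response variable (S - L) evaluated at 0 gives the failure probability.
print("estimated failure probability:", np.mean(S < L))
```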
A decentralized mechanism for improving the functional robustness of distribution networks.
Shi, Benyun; Liu, Jiming
2012-10-01
Most real-world distribution systems can be modeled as distribution networks, where a commodity can flow from source nodes to sink nodes through junction nodes. One of the fundamental characteristics of distribution networks is the functional robustness, which reflects the ability of maintaining its function in the face of internal or external disruptions. In view of the fact that most distribution networks do not have any centralized control mechanisms, we consider the problem of how to improve the functional robustness in a decentralized way. To achieve this goal, we study two important problems: 1) how to formally measure the functional robustness, and 2) how to improve the functional robustness of a network based on the local interaction of its nodes. First, we derive a utility function in terms of network entropy to characterize the functional robustness of a distribution network. Second, we propose a decentralized network pricing mechanism, where each node need only communicate with its distribution neighbors by sending a "price" signal to its upstream neighbors and receiving "price" signals from its downstream neighbors. By doing so, each node can determine its outflows by maximizing its own payoff function. Our mathematical analysis shows that the decentralized pricing mechanism can produce results equivalent to those of an ideal centralized maximization with complete information. Finally, to demonstrate the properties of our mechanism, we carry out a case study on the U.S. natural gas distribution network. The results validate the convergence and effectiveness of our mechanism when comparing it with an existing algorithm.
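A minimal sketch of an entropy-style utility over a node's outflows is shown below; the exact functional form of the paper's network-entropy utility is not reproduced here, so this is only an assumed stand-in to fix ideas:

```python
import numpy as np

def outflow_entropy(flows):
    """Shannon entropy of a node's normalized outflows (all flows >= 0,
    at least one positive). Higher entropy means more evenly spread flow,
    one intuition for robustness to the loss of a single downstream link."""
    p = np.asarray(flows, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log(0) contributes nothing
    return float(-(p * np.log(p)).sum())

print(outflow_entropy([5.0, 3.0, 2.0]))
```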
Software Modules for the Proximity-1 Space Link Interleaved Time Synchronization (PITS) Protocol
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Veregge, John R.; Gao, Jay L.; Clare, Loren P.; Mills, David
2012-01-01
The Proximity-1 Space Link Interleaved Time Synchronization (PITS) protocol provides time distribution and synchronization services for space systems. A software prototype implementation of the PITS algorithm has been developed that also provides the test harness to evaluate the key functionalities of PITS with a simulated data source and sink. PITS integrates time synchronization functionality into the link layer of the CCSDS Proximity-1 Space Link Protocol. The software prototype implements the network packet format, data structures, and transmit- and receive-timestamp functions for a time server and a client. The software also simulates the transmit- and receive-timestamp exchanges between a time server and a time client via a UDP (User Datagram Protocol) socket, and produces relative time offsets and delay estimates.
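For orientation, the classic NTP-style computation of a relative offset and round-trip delay from one transmit/receive timestamp exchange is sketched below; PITS carries its timestamp exchanges over the Proximity-1 link layer, so treat this as an assumed illustration of the arithmetic rather than the PITS specification:

```python
def offset_delay(t1, t2, t3, t4):
    """t1: client transmit, t2: server receive, t3: server transmit,
    t4: client receive. Returns (clock offset, round-trip delay)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

print(offset_delay(10.000, 10.120, 10.125, 10.260))
```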
Integrated health monitoring and controls for rocket engines
NASA Technical Reports Server (NTRS)
Merrill, W. C.; Musgrave, J. L.; Guo, T. H.
1992-01-01
Current research in intelligent control systems at the Lewis Research Center is described in the context of a functional framework. The framework is applicable to a variety of reusable space propulsion systems for existing and future launch vehicles. It provides a 'road map' for technology development to enable enhanced engine performance with increased reliability, durability, and maintainability. The framework hierarchy consists of a mission coordination level, a propulsion system coordination level, and an engine control level. Each level is described in the context of the Space Shuttle Main Engine. The concept of integrating diagnostics with control is discussed within the context of the functional framework. A distributed real-time simulation testbed is used to realize and evaluate the functionalities in closed loop.
Distribution Management System Volt/VAR Evaluation
NREL Grid Modernization
This project involves building a prototype distribution management system testbed that links a GE Grid Solutions distribution management system to power hardware-in-the-loop testing...
SU-F-J-194: Development of Dose-Based Image Guided Proton Therapy Workflow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, R; Sun, B; Zhao, T
Purpose: To implement image-guided proton therapy (IGPT) based on daily proton dose distribution. Methods: Unlike x-ray therapy, simple alignment based on anatomy cannot ensure proper dose coverage in proton therapy. Anatomy changes along the beam path may lead to underdosing the target or overdosing organs-at-risk (OARs). With an in-room mobile computed tomography (CT) system, we are developing a dose-based IGPT software tool that allows patient positioning and treatment adaptation based on daily dose distributions. During an IGPT treatment, daily CT images are acquired in treatment position. After initial positioning based on rigid image registration, the proton dose distribution is calculated on the daily CT images. The target and OARs are automatically delineated via deformable image registration. Dose distributions are evaluated to decide whether repositioning or plan adaptation is necessary in order to achieve proper coverage of the target and sparing of OARs. Besides online dose-based image guidance, the software tool can also map daily treatment doses to the treatment planning CT images for offline adaptive treatment. Results: An in-room helical CT system was commissioned for IGPT purposes. It produces accurate CT numbers that allow proton dose calculation. GPU-based deformable image registration algorithms were developed and evaluated for automatic ROI delineation and dose mapping. The online and offline IGPT functionalities were evaluated with daily CT images of proton patients. Conclusion: The online and offline IGPT software tool may improve the safety and quality of proton treatment by allowing dose-based IGPT and adaptive proton treatments. Research is partially supported by Mevion Medical Systems.
Kappa Distribution in a Homogeneous Medium: Adiabatic Limit of a Super-diffusive Process?
NASA Astrophysics Data System (ADS)
Roth, I.
2015-12-01
The classical statistical theory predicts that an ergodic, weakly interacting system, such as charged particles in the presence of electromagnetic fields performing Brownian motions (characterized by small-range deviations in phase space and short-term microscopic memory), converges to the Gibbs-Boltzmann statistics. Observation of distributions with kappa-power-law tails in homogeneous systems contradicts this prediction and necessitates a renewed analysis of the basic axioms of the diffusion process: the characteristics of the transition probability density function (pdf) for a single interaction, with the possibility of a non-Markovian process and non-local interaction. The non-local, Levy-walk deviation is related to the non-extensive statistical framework. Particles bouncing along (solar) magnetic field lines with evolving pitch angles, phases and velocities, as they interact resonantly with waves, undergo energy changes at undetermined time intervals, satisfying these postulates. The dynamic evolution of a general continuous-time random walk is determined by the pdfs of jumps and waiting times, resulting in a fractional Fokker-Planck equation with non-integer derivatives whose solution is given by a Fox H-function. The resulting procedure involves fractional calculus, which is known although not frequently used in physics, while the local, Markovian process recasts the evolution into the standard Fokker-Planck equation. Solution of the fractional Fokker-Planck equation with the help of the Mellin transform, and evaluation of its residues at the poles of its Gamma functions, results in a slowly converging sum with power-law tails. It is suggested that these tails form the kappa function. Gradual vs impulsive solar electron distributions serve as prototypes of this description.
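For reference, a commonly quoted space-time fractional Fokker-Planck form (a generic textbook version with a Riemann-Liouville fractional time derivative, not necessarily the author's exact equation) is

```latex
\frac{\partial f(v,t)}{\partial t}
  = {}_{0}D_{t}^{\,1-\beta}\, K_{\mu}\,
    \frac{\partial^{\mu} f(v,t)}{\partial |v|^{\mu}},
\qquad 0 < \beta \le 1, \quad 0 < \mu \le 2,
```

where the choice β = 1, μ = 2 recovers the standard Fokker-Planck equation, i.e., the local, Markovian limit.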
Abebe, Workineh; Collar, Concha; Ronda, Felicidad
2015-01-22
Tef grain is becoming very attractive in Western countries since it is a gluten-free grain with appreciated nutritional advantages. However, there is little information on its functional properties and starch digestibility, or on how they are affected by variety and particle size distribution. This work evaluates the effect of grain variety and the mill used on the physico-chemical and functional properties of tef flour, mainly those derived from starch behavior. In vitro starch digestibility of the flours was assessed by the Englyst method. Two types of mills were used to obtain whole flours of different granulation. Rice and wheat flours were analyzed as references. Protein molecular weight distribution and flour structure by SEM were also analyzed to explain some of the differences found among the cereals studied. Tef cultivar and mill type had important effects on granulation, bulk density and starch damage, affecting the processing performance of the flours and determining their hydration and pasting properties. The color was darker, although one of the white varieties had a lightness near that of the reference flours. Different granulations of tef flour induced different in vitro starch digestibility. The disc attrition mill led to a higher starch digestibility rate index and more rapidly available glucose, probably as a consequence of a higher damaged starch content. The results confirm the adequacy of tef flour as an ingredient in the formulation of new cereal-based foods and the importance of the variety and the mill on its functional properties.
Simulation study of entropy production in the one-dimensional Vlasov system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Zongliang, E-mail: liangliang1223@gmail.com; Wang, Shaojie
2016-07-15
The coarse-grain averaged distribution function of the one-dimensional Vlasov system is obtained by numerical simulation. The entropy production in the cases of a random field, linear Landau damping, and the bump-on-tail instability is computed with the coarse-grain averaged distribution function. The computed entropy production converges with increasing coarse-grain averaging length. When the distribution function differs only slightly from a Maxwellian distribution, the converged value agrees with the result computed using the definition of thermodynamic entropy. The choice of averaging length used to compute the coarse-grain averaged distribution function is discussed.
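A minimal sketch of the coarse-grain averaging step, block-averaging a gridded phase-space distribution and computing its Boltzmann entropy (whose growth in time is the entropy production), is given below; the discretization is assumed and this is not the paper's simulation code:

```python
import numpy as np

def coarse_grained_entropy(f, dx, dv, cell=4):
    """S = -sum g ln(g) dx dv, where g is f block-averaged over
    cell x cell phase-space patches (x along axis 0, v along axis 1)."""
    nx = (f.shape[0] // cell) * cell
    nv = (f.shape[1] // cell) * cell
    g = f[:nx, :nv].reshape(nx // cell, cell, nv // cell, cell).mean(axis=(1, 3))
    g = np.where(g > 0, g, 1.0)        # empty cells: g ln g -> 0
    return float(-np.sum(g * np.log(g)) * (dx * cell) * (dv * cell))
```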
A Multi-level Fuzzy Evaluation Method for Smart Distribution Network Based on Entropy Weight
NASA Astrophysics Data System (ADS)
Li, Jianfang; Song, Xiaohui; Gao, Fei; Zhang, Yu
2017-05-01
Smart distribution networks are considered the future trend of distribution networks. In order to comprehensively evaluate the construction level of smart distribution networks and give guidance to the practice of smart distribution construction, a multi-level fuzzy evaluation method based on entropy weight is proposed. Firstly, focusing on both the conventional characteristics of distribution networks and new characteristics of smart distribution networks, such as self-healing and interaction, a multi-level evaluation index system is established that covers power supply capability, power quality, economy, reliability and interaction. Then, a combination weighting method based on the Delphi method and the entropy weight method is put forward, which takes into account not only the importance of each evaluation index in the experts' subjective view, but also the objective information contained in the index values. Thirdly, a multi-level evaluation method based on fuzzy theory is put forward. Lastly, an example based on statistical data from several cities' distribution networks is presented, and the evaluation method is shown to be effective and rational.
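The entropy weight step mentioned above is standard and compact; a generic sketch (assuming a positive decision matrix of alternatives by indices, and omitting the Delphi-based subjective weights the paper combines it with) is:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: indices whose values differ more across
    alternatives carry more information and receive larger weights.
    X is an (m alternatives) x (n indices) matrix with positive entries."""
    P = X / X.sum(axis=0)                          # column-wise normalization
    m = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # per-index entropy in [0, 1]
    d = 1.0 - E                                    # divergence degree
    return d / d.sum()

X = np.array([[0.90, 120.0, 3.2], [0.80, 150.0, 2.9], [0.95, 110.0, 3.5]])
print(entropy_weights(X))
```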
Estrada, Luis; Torres, Abel; Garcia-Casado, Javier; Sarlabous, Leonardo; Prats-Boluda, Gema; Jane, Raimon
2016-08-01
The use of non-invasive methods for the study of respiratory muscle signals can provide clinical information for the evaluation of respiratory muscle function. The aim of this study was to evaluate time-frequency characteristics of the electrical activity of the sternocleidomastoid muscle recorded superficially by means of concentric ring electrodes (CREs) in a bipolar configuration. CREs enhance the spatial resolution, attenuate interferences such as cardiac activity, and simplify the orientation problem associated with electrode location. Five healthy subjects underwent a respiratory load test in which an inspiratory load was imposed during the inspiratory phase. During the test, the electromyographic signal of the sternocleidomastoid muscle (EMGsc) and the inspiratory mouth pressure (Pmouth) were acquired. Time-frequency characteristics of the EMGsc signal were analyzed by means of eight time-frequency representations (TFRs): the spectrogram (SPEC), the Morlet scalogram (SCAL), the Wigner-Ville distribution (WVD), the Choi-Williams distribution (CHWD), two generalized exponential distributions (GED1 and GED2), the Born-Jordan distribution (BJD) and the Cone-Kernel distribution (CKD). The instantaneous central frequency of the EMGsc increased over the inspiratory cycle and with increasing inspiratory load. The bilinear TFRs (WVD, CHWD, GEDs and BJD) were less sensitive to cardiac activity interference than the classical TFRs (SPEC and SCAL). The GED2 was the TFR that showed the best results for characterizing the instantaneous central frequency of the EMGsc.
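As a concrete illustration of one of the classical TFRs and of the instantaneous central frequency (first spectral moment) tracked in the study, the sketch below applies a spectrogram to a synthetic rising-frequency signal; the sampling rate, window settings, and test signal are assumptions, not the study's acquisition parameters:

```python
import numpy as np
from scipy.signal import chirp, spectrogram

fs = 2000.0                              # assumed sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
x = chirp(t, f0=50.0, t1=2.0, f1=250.0)  # rising frequency, loosely mimicking EMGsc under load

f, frames, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
fc = (f[:, None] * Sxx).sum(axis=0) / Sxx.sum(axis=0)  # central frequency per frame
print(fc[:5], fc[-5:])                   # increases across the record
```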
Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman
2017-01-01
Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and the power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference-to-focal group sample size ratio, magnitude of the uniform DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution decreased the power of the MIMIC model for detecting uniform DIF by 0.33% and 0.47%, respectively. The findings indicated that increasing the scale length, the number of response categories, and the DIF magnitude improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively, and decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.
Hyaluronan-Inorganic Nanohybrid Materials for Biomedical Applications.
Cai, Zhixiang; Zhang, Hongbin; Wei, Yue; Cong, Fengsong
2017-06-12
Nanomaterials, including gold, silver, and magnetic nanoparticles, carbon, and mesoporous materials, possess unique physiochemical and biological properties, thus offering promising applications in biomedicine, such as in drug delivery, biosensing, molecular imaging, and therapy. Recent advances in nanotechnology have improved the features and properties of nanomaterials. However, these nanomaterials are potentially cytotoxic and demonstrate a lack of cell-specific function. Thus, they have been functionalized with various polymers, especially polysaccharides, to reduce toxicity and improve biocompatibility and stability under physiological conditions. In particular, nanomaterials have been widely functionalized with hyaluronan (HA) to enhance their distribution in specific cells and tissues. This review highlights the most recent advances on HA-functionalized nanomaterials for biotechnological and biomedical applications, as nanocarriers in drug delivery, contrast agents in molecular imaging, and diagnostic agents in cancer therapy. A critical evaluation of barriers affecting the use of HA-functionalized nanomaterials is also discussed, and insights into the outlook of the field are explored.
Boyd, Charlotte; Castillo, Ramiro; Hunt, George L; Punt, André E; VanBlaricom, Glenn R; Weimerskirch, Henri; Bertrand, Sophie
2015-11-01
Understanding the ecological processes that underpin species distribution patterns is a fundamental goal in spatial ecology. However, developing predictive models of habitat use is challenging for species that forage in marine environments, as both predators and prey are often highly mobile and difficult to monitor. Consequently, few studies have developed resource selection functions for marine predators based directly on the abundance and distribution of their prey. We analysed contemporaneous data on the diving locations of two seabird species, the shallow-diving Peruvian Booby (Sula variegata) and deeper diving Guanay Cormorant (Phalacrocorax bougainvilliorum), and the abundance and depth distribution of their main prey, Peruvian anchoveta (Engraulis ringens). Based on this unique data set, we developed resource selection functions to test the hypothesis that the probability of seabird diving behaviour at a given location is a function of the relative abundance of prey in the upper water column. For both species, we show that the probability of diving behaviour is mostly explained by the distribution of prey at shallow depths. While the probability of diving behaviour increases sharply with prey abundance at relatively low levels of abundance, support for including abundance in addition to the depth distribution of prey is weak, suggesting that prey abundance was not a major factor determining the location of diving behaviour during the study period. The study thus highlights the importance of the depth distribution of prey for two species of seabird with different diving capabilities. The results complement previous research that points towards the importance of oceanographic processes that enhance the accessibility of prey to seabirds. The implications are that locations where prey is predictably found at accessible depths may be more important for surface foragers, such as seabirds, than locations where prey is predictably abundant. Analysis of the relative importance of abundance and accessibility is essential for the design and evaluation of effective management responses to reduced prey availability for seabirds and other top predators in marine systems.
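A resource selection function of the kind described, dive presence regressed on prey covariates, is typically fit as a logistic model of used versus available locations. The sketch below is a generic toy version with simulated data (assumed setup and variable names, not the authors' model or data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
prey_shallow = rng.gamma(2.0, 1.0, 500)               # prey abundance in upper water column
p_use = 1.0 / (1.0 + np.exp(-(prey_shallow - 2.0)))   # toy selection rule
used = rng.binomial(1, p_use)                         # 1 = dive location, 0 = background point

rsf = LogisticRegression().fit(prey_shallow[:, None], used)
print(rsf.coef_, rsf.intercept_)                      # positive coefficient = selection for prey
```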
NASA Astrophysics Data System (ADS)
Tsuzuki, Kentaro; Hasegawa, Hideyuki; Kanai, Hiroshi; Ichiki, Masataka; Tezuka, Fumiaki
2008-05-01
Pathologic changes in arterial walls significantly influence their mechanical properties. We have developed a correlation-based method, the phased tracking method [H. Kanai et al.: IEEE Trans. Ultrason. Ferroelectr. Freq. Control 43 (1996) 791], for measurement of the regional elasticity of the arterial wall. Using this method, the elasticity distributions of lipids, blood clots, fibrous tissue, and calcified tissue were measured in vitro in experiments on excised arteries (mean±SD: lipids 89±47 kPa, blood clots 131±56 kPa, fibrous tissue 1022±1040 kPa, calcified tissue 2267±1228 kPa) [H. Kanai et al.: Circulation 107 (2003) 3018; J. Inagaki et al.: Jpn. J. Appl. Phys. 44 (2005) 4593]. It was found that arterial tissues can be classified into soft tissues (lipids and blood clots) and hard tissues (fibrous tissue and calcified tissue) on the basis of their elasticity. However, there are large overlaps between the elasticity distributions of lipids and blood clots and between those of fibrous tissue and calcified tissue. Thus, it was difficult to differentiate lipids from blood clots, and fibrous tissue from calcified tissue, by simply thresholding the elasticity value. Therefore, we previously proposed a method that classifies the elasticity distribution in each region of interest (ROI) (not each single pixel) in an elasticity image into lipids, blood clots, fibrous tissue, or calcified tissue based on a likelihood function for each tissue [J. Inagaki et al.: Jpn. J. Appl. Phys. 44 (2006) 4732]. In our previous study, the optimum size of an ROI was determined to be 1,500 µm in the arterial radial direction and 1,500 µm in the arterial longitudinal direction [K. Tsuzuki et al.: Ultrasound Med. Biol. 34 (2008) 573]. In this study, the threshold for the likelihood function used in the tissue classification was set by evaluating the variance in the ultrasonic measurement of radial strain. The recognition rate was improved from 50% to 54% by the proposed thresholding.
NASA Technical Reports Server (NTRS)
Roth, R. J.
1973-01-01
The distribution function of ion energy parallel to the magnetic field of a modified Penning discharge has been measured with a retarding potential energy analyzer. These ions escaped through one of the throats of the magnetic mirror geometry. Simultaneous measurements of the ion energy distribution function perpendicular to the magnetic field have been made with a charge exchange neutral detector. The ion energy distribution functions are approximately Maxwellian, and the parallel and perpendicular kinetic temperatures are equal within experimental error. These results suggest that turbulent processes previously observed in this discharge Maxwellianize the velocity distribution along a radius in velocity space and cause an isotropic energy distribution. When the distributions depart from Maxwellian, they are enhanced above the Maxwellian tail.
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I0, the gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
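For readers working outside Fortran, several of the report's distribution routines have direct modern equivalents in scipy.stats (shown only for orientation; the USGS library itself is a set of Fortran subroutines called from user-written main programs):

```python
from scipy import stats

print(stats.norm.cdf(1.96))               # Gaussian (normal) CDF
print(stats.gamma.ppf(0.99, a=2.5))       # gamma quantile (inverse CDF)
print(stats.weibull_min.sf(2.0, c=1.5))   # Weibull survival function
print(stats.f.ppf(0.95, 3, 20))           # Snedecor's F quantile
```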
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I0, the gamma and log-gamma functions, error functions and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Intermodal transport and distribution patterns in ports relationship to hinterland
NASA Astrophysics Data System (ADS)
Dinu, O.; Dragu, V.; Ruscă, F.; Ilie, A.; Oprea, C.
2017-08-01
It is of great importance to examine all interactions between ports, terminals, intermodal transport and the logistic actors of distribution channels, as their optimization can lead to operational improvement. The paper starts with a brief overview of different goods types and the allocation of their logistic costs, with emphasis on the storage component. The present trend is to optimize storage costs by means of the buffer function of the port storage area, making the best use of the free storage time that most ports offer. As a research methodology, the starting point is to consider the cost structure of a generic intermodal transport (storage, handling and transport costs) and to link this to the intermodal distribution patterns most frequently used in a port's relationship to its hinterland. The next step is to evaluate the impact of storage costs on distribution pattern selection. For a given value of port free storage time, a corresponding value of total storage time in the distribution channel can be identified in order to substantiate a distribution pattern shift. Different scenarios for the variation of transport and handling costs, recorded when distribution patterns shift, are integrated in order to establish the reaction of the actors involved in port-related logistics, and the evolution of intermodal transport costs is analysed in order to optimize distribution pattern selection.
NASA Astrophysics Data System (ADS)
Cortez, E.; Remsen, E.; Chlanda, V.; Wideman, T.; Zank, G.; Carrol, P.; Sneddon, L.
1998-06-01
Boron nitride (BN) and composite SiNCB ceramic fibers are important structural materials because of their excellent thermal and oxidative stabilities. Consequently, polymeric materials as precursors to ceramic composites are receiving increasing attention. Characterization of these materials requires the ability to evaluate simultaneously the molecular weight and compositional heterogeneity within the polymer. Size exclusion chromatography equipped with viscometric and refractive index detection, as well as coupled to an LC-transform device for infrared absorption analysis, has been employed to examine these heterogeneities. Using these combined approaches, the solution properties and the relative amounts of individual functional groups distributed through the molecular weight distributions of SiNCB and BN polymeric precursors were characterized.
Application of spatial Poisson process models to air mass thunderstorm rainfall
NASA Technical Reports Server (NTRS)
Eagleson, P. S.; Fennessy, N. M.; Wang, Qinliang; Rodriguez-Iturbe, I.
1987-01-01
Eight years of summer storm rainfall observations from 93 stations in and around the 154 sq km Walnut Gulch catchment of the Agricultural Research Service, U.S. Department of Agriculture, in Arizona are processed to yield the total station depths of 428 storms. Statistical analysis of these random fields yields the first two moments, the spatial correlation and variance functions, and the spatial distribution of total rainfall for each storm. The absolute and relative worth of three Poisson models is evaluated by comparing their predictions of the spatial distribution of storm rainfall with observations from the second half of the sample. The effect of interstorm parameter variation is examined.
Comparison of information theoretic divergences for sensor management
NASA Astrophysics Data System (ADS)
Yang, Chun; Kadar, Ivan; Blasch, Erik; Bakich, Michael
2011-06-01
In this paper, we compare the information-theoretic metrics of the Kullback-Leibler (K-L) and Renyi (α) divergence formulations for sensor management. Information-theoretic metrics are well suited for sensor management because they afford comparisons between distributions resulting from different types of sensors under different actions. The difference between distributions can also be measured through entropy formulations to discern the communication channel capacity (i.e., the Shannon limit). We then formulate a sensor management scenario for target tracking and compare various metrics for performance evaluation as a function of the design parameter (α), so as to determine which measures might be appropriate for sensor management given the dynamics of the scenario and the design parameter.
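For reference, a minimal sketch of the two divergences for discrete distributions (not the paper's tracking scenario): the Renyi divergence of order α, with the standard property that it approaches the K-L divergence as α tends to 1.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def renyi_divergence(p, q, alpha):
    """Renyi divergence of order alpha (alpha > 0, alpha != 1):
    D_a(p || q) = log(sum p^a q^(1-a)) / (a - 1)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print("KL:", kl_divergence(p, q))
for a in (0.5, 0.99, 2.0):        # alpha -> 1 recovers the K-L value
    print(f"Renyi(alpha={a}):", renyi_divergence(p, q, a))
```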
Unstable density distribution associated with equatorial plasma bubble
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kherani, E. A., E-mail: esfhan.kherani@inpe.br; Meneses, F. Carlos de; Bharuthram, R.
2016-04-15
In this work, we present a simulation study of equatorial plasma bubble (EPB) in the evening-time ionosphere. The fluid simulation is performed with a high grid resolution, enabling us to probe the steepened updrafting density structures inside the EPB. Inside the density depletion that eventually evolves into an EPB, both density and updraft are functions of space, from which the density as an implicit function of updraft velocity, i.e., the density distribution function, is constructed. In the present study, this distribution function and the corresponding probability distribution function are found to evolve from Maxwellian to non-Maxwellian as the initial small depletion grows into an EPB. This non-Maxwellian distribution is of a gentle-bump type, in agreement with the recently reported distribution within EPBs from space-borne measurements, which offers favorable conditions for small-scale kinetic instabilities.
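The construction described, treating density as an implicit function of updraft velocity by binning grid cells by velocity and comparing the result with a Maxwellian, can be sketched as follows. The fields here are synthetic placeholders (a drifting bulk plus a weak fast bump), not the paper's EPB simulation output:

```python
import numpy as np

def density_distribution(velocity, density, bins=40):
    """Bin grid cells by updraft velocity; return bin centers and the
    density-weighted distribution f(v), normalized to unit area."""
    hist, edges = np.histogram(velocity.ravel(), bins=bins,
                               weights=density.ravel(), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist

def maxwellian(v, v0, vth):
    """Reference Maxwellian for comparison with the measured f(v)."""
    return np.exp(-((v - v0) / vth) ** 2) / (np.sqrt(np.pi) * vth)

# Synthetic stand-in fields: drifting bulk plus a weak fast 'bump'
rng = np.random.default_rng(1)
v = np.concatenate([rng.normal(100, 30, 9000), rng.normal(250, 20, 1000)])
n = rng.uniform(0.5, 1.0, v.size)
centers, f = density_distribution(v, n)
residual = f - maxwellian(centers, v.mean(), v.std() * np.sqrt(2))
print("max deviation from Maxwellian fit:", residual.max())
```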
NDL-v2.0: A new version of the numerical differentiation library for parallel architectures
NASA Astrophysics Data System (ADS)
Hadjidoukas, P. E.; Angelikopoulos, P.; Voglis, C.; Papageorgiou, D. G.; Lagaris, I. E.
2014-07-01
We present a new version of the numerical differentiation library (NDL) used for the numerical estimation of first and second order partial derivatives of a function by finite differencing. In this version we have restructured the serial implementation of the code so as to achieve optimal task-based parallelization. The pure shared-memory parallelization of the library has been based on the lightweight OpenMP tasking model, allowing for the full extraction of the available parallelism and efficient scheduling of multiple concurrent library calls. On multicore clusters, parallelism is exploited by means of TORC, an MPI-based multi-threaded tasking library. The new MPI implementation of NDL provides optimal performance in terms of function calls and, furthermore, supports asynchronous execution of multiple library calls within legacy MPI programs. In addition, a Python interface has been implemented for all cases, exporting the functionality of our library to sequential Python codes.
Catalog identifier: AEDG_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 63036
No. of bytes in distributed program, including test data, etc.: 801872
Distribution format: tar.gz
Programming language: ANSI Fortran-77, ANSI C, Python.
Computer: Distributed systems (clusters), shared memory systems.
Operating system: Linux, Unix.
Has the code been vectorized or parallelized?: Yes.
RAM: The library uses O(N) internal storage, N being the dimension of the problem. It can use up to O(N^2) internal storage for Hessian calculations, if a task throttling factor has not been set by the user.
Classification: 4.9, 4.14, 6.5.
Catalog identifier of previous version: AEDG_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1404
Does the new version supersede the previous version?: Yes
Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, and sensitivity analysis. For a large number of scientific and engineering applications, the underlying functions correspond to simulation codes for which analytical estimation of derivatives is difficult or almost impossible. A parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems.
Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries.
Reasons for new version: The updated version was motivated by our endeavors to extend a parallel Bayesian uncertainty quantification framework [1] by incorporating higher order derivative information, as in most state-of-the-art stochastic simulation methods such as Stochastic Newton MCMC [2] and Riemannian Manifold Hamiltonian MC [3]. The function evaluations are simulations with significant time-to-solution, which also varies with the input parameters, as in [1, 4]. The runtime of the N-body type of problem changes considerably with the introduction of a longer cut-off between the bodies.
In the first version of the library, the OpenMP-parallel subroutines spawn a new team of threads and distribute the function evaluations with a PARALLEL DO directive. This limits the functionality of the library, as multiple concurrent calls require nested parallelism support from the OpenMP environment; otherwise, either the function evaluations will be serialized or processor oversubscription is likely to occur due to the increased number of OpenMP threads. In addition, the Hessian calculations include two explicit parallel regions that compute first the diagonal and then the off-diagonal elements of the array. Due to the barrier between the two regions, the parallelism of the calculations is not fully exploited. These issues have been addressed in the new version by first restructuring the serial code and then running the function evaluations in parallel using OpenMP tasks. Although the MPI-parallel implementation of the first version is capable of fully exploiting the task parallelism of the PNDL routines, it does not utilize the caching mechanism of the serial code and therefore performs some redundant function evaluations in the Hessian and Jacobian calculations. This can lead to: (a) higher execution times if the number of available processors is lower than the total number of tasks, and (b) significant energy consumption due to wasted processor cycles. Overcoming these drawbacks, which become critical as the time of a single function evaluation increases, was the primary goal of this new version. Due to the code restructuring, the MPI-parallel implementation (and the OpenMP-parallel one in accordance) avoids redundant calls, providing optimal performance in terms of the number of function evaluations. Another limitation of the library was that its subroutines were collective, synchronous calls. In the new version, each MPI process can issue any number of subroutines for asynchronous execution. We introduce two library calls that provide global and local task synchronization, similar to the BARRIER and TASKWAIT directives of OpenMP. The new MPI implementation is based on TORC, a new tasking library for multicore clusters [5-7]. TORC improves the portability of the software, as it relies exclusively on the POSIX-Threads and MPI programming interfaces. It allows MPI processes to utilize multiple worker threads, offering a hybrid programming and execution environment similar to MPI+OpenMP, in a completely transparent way. Finally, to further improve the usability of our software, a Python interface has been implemented on top of both the OpenMP and MPI versions of the library. This allows sequential Python codes to exploit shared and distributed memory systems.
Summary of revisions: The revised code improves the performance of both parallel (OpenMP and MPI) implementations. The functionality and the user interface of the MPI-parallel version have been extended to support the asynchronous execution of multiple PNDL calls, issued by one or multiple MPI processes. A new underlying tasking library increases portability and allows MPI processes to have multiple worker threads. For both implementations, an interface to the Python programming language has been added.
Restrictions: The library uses only double precision arithmetic. The MPI implementation assumes the homogeneity of the execution environment provided by the operating system. Specifically, the processes of a single MPI application must have identical address spaces, and each user function must reside at the same virtual address. In addition, address space layout randomization should not be used for the application.
Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives; given the level of the desired accuracy, the proper formula is automatically employed.
Running time: Running time depends on the function's complexity. The test run took 23 ms for the serial distribution, 25 ms for OpenMP with 2 threads, and 53 ms and 1.01 s for the MPI parallel distribution using 2 threads and 2 processes respectively, with the yield-time for idle workers equal to 10 ms.
References:
[1] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework, J. Chem. Phys. 137 (14).
[2] H.P. Flath, L.C. Wilcox, V. Akcelik, J. Hill, B. van Bloemen Waanders, O. Ghattas, Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations, SIAM J. Sci. Comput. 33 (1) (2011) 407-432.
[3] M. Girolami, B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, J. R. Stat. Soc. Ser. B (Stat. Methodol.) 73 (2) (2011) 123-214.
[4] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Data driven, predictive molecular dynamics for nanoscale flow simulations under uncertainty, J. Phys. Chem. B 117 (47) (2013) 14808-14816.
[5] P.E. Hadjidoukas, E. Lappas, V.V. Dimakopoulos, A runtime library for platform-independent task parallelism, in: PDP, IEEE, 2012, pp. 229-236.
[6] C. Voglis, P.E. Hadjidoukas, D.G. Papageorgiou, I. Lagaris, A parallel hybrid optimization algorithm for fitting interatomic potentials, Appl. Soft Comput. 13 (12) (2013) 4481-4492.
[7] P.E. Hadjidoukas, C. Voglis, V.V. Dimakopoulos, I. Lagaris, D.G. Papageorgiou, Supporting adaptive and irregular parallelism for non-linear numerical optimization, Appl. Math. Comput. 231 (2014) 544-559.
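The solution method quoted above, finite differencing with a step chosen to balance truncation against round-off error, can be sketched in a few lines of Python (the library itself is Fortran/C; the step constant below is the textbook choice for central differences, not necessarily NDL's):

```python
import math

EPS = 2.0 ** -52  # double-precision machine epsilon

def gradient(f, x, rel_step=EPS ** (1.0 / 3.0)):
    """Central-difference gradient; h ~ eps^(1/3) * |x_i| roughly balances
    the O(h^2) truncation error against the O(eps/h) round-off error."""
    g = []
    for i, xi in enumerate(x):
        h = rel_step * max(abs(xi), 1.0)
        xp = list(x); xp[i] = xi + h
        xm = list(x); xm[i] = xi - h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

rosenbrock = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
print(gradient(rosenbrock, [1.2, 1.0]))  # analytic gradient: [211.6, -88.0]
```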
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.
2013-02-22
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar (GPR) travel time data through Bayesian inversion, which is integrated with entropy, memory function, and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and its variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in inverting GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSIM) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
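The likelihood step described, turning forward-model misfits over a set of prior samples into posterior weights with a Gaussian kernel, can be sketched as follows. The linear forward model, noise level, and sample counts are placeholders, not the curved-ray GPR code or the Hanford data:

```python
import numpy as np

def posterior_weights(prior_samples, forward_model, observed, sigma):
    """Gaussian likelihood from travel-time misfits, normalized to
    posterior (importance) weights over the prior sample set."""
    misfits = np.array([np.linalg.norm(forward_model(m) - observed)
                        for m in prior_samples])
    log_like = -0.5 * (misfits / sigma) ** 2
    w = np.exp(log_like - log_like.max())   # stabilize before normalizing
    return w / w.sum()

# Placeholder linear 'forward model' and synthetic observations
rng = np.random.default_rng(2)
G = rng.normal(size=(20, 5))                # rays x pilot-point permittivities
m_true = rng.normal(size=5)
d_obs = G @ m_true + rng.normal(scale=0.05, size=20)
samples = rng.normal(size=(500, 5))         # stand-in for QMC prior samples
w = posterior_weights(samples, lambda m: G @ m, d_obs, sigma=0.05)
print("posterior mean estimate:", samples.T @ w)
```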
Estimations of expectedness and potential surprise in possibility theory
NASA Technical Reports Server (NTRS)
Prade, Henri; Yager, Ronald R.
1992-01-01
This note investigates how various ideas of 'expectedness' can be captured in the framework of possibility theory. In particular, we are interested in trying to introduce estimates of the kind of lack of surprise expressed by people when saying 'I would not be surprised that...' before an event takes place, or by saying 'I knew it' after its realization. In possibility theory, a possibility distribution is supposed to model the relative possibility levels of mutually exclusive alternatives in a set, or equivalently, the alternatives are assumed to be rank-ordered according to their level of possibility of taking place. Four basic set-functions associated with a possibility distribution, including the standard possibility and necessity measures, are discussed from the point of view of what they estimate when applied to potential events. Extensions of these estimates based on the notions of Q-projection or OWA operators are proposed for the case where only significant parts of the possibility distribution are retained in the evaluation. The case of partially known possibility distributions is also considered. Some potential applications are outlined.
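Two of the four set-functions mentioned, the standard possibility and necessity measures, are simple enough to sketch over a finite set of mutually exclusive alternatives (the distribution values are invented for illustration, not from the paper):

```python
def possibility(pi, event):
    """Pi(A) = max of the possibility distribution over A."""
    return max(pi[x] for x in event)

def necessity(pi, event):
    """N(A) = 1 - Pi(complement of A): A is certain only to the
    degree that every alternative outside A is impossible."""
    complement = set(pi) - set(event)
    return 1.0 - (max(pi[x] for x in complement) if complement else 0.0)

# Illustrative rank-ordered alternatives (not from the paper)
pi = {"sunny": 1.0, "cloudy": 0.7, "rain": 0.3, "snow": 0.0}
event = {"sunny", "cloudy"}
print("Pi(A) =", possibility(pi, event))   # 1.0: fully possible
print("N(A)  =", necessity(pi, event))     # 0.7: fairly certain
```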
Audit of availability and distribution of paediatric cardiology services and facilities in Nigeria
Ekure, Ekanem N; Sadoh, Wilson E; Bode-Thomas, Fidelia; Yilgwan, Christopher S; Orogade, Adeola A; Animasahun, Adeola B; Ogunkunle, Oluwatoyin O; Omokhodion, Samuel I; Babaniyi, Iretiola; Anah, Maxwell U; Otaigbe, Barbara E; Olowu, Adebiyi; Okpokowuruk, Frances; Maduka, Ogechi C; Onakpoya, Uvie U; Adiele, Daberechi K; Sani, Usman. M; Asani, Mustapha; Daniels, Queennette; Uzodimma, Chinyere C; Duru, Chika O; Abdulkadir, Mohammad B; Afolabi, Joseph K; Okeniyi, John A
2017-01-01
Background: Paediatric cardiac services in Nigeria have been perceived to be inadequate, but no formal documentation of the availability and distribution of facilities and services has been done.
Objective: To evaluate and document the currently available paediatric cardiac services in Nigeria.
Methods: In this questionnaire-based, cross-sectional descriptive study, an audit was undertaken from January 2010 to December 2014 of the personnel and infrastructure, with their distributions according to the geopolitical zones of Nigeria.
Results: Forty-eight centres participated in the study, with 33 paediatric cardiologists and 31 cardiac surgeons. Echocardiography, electrocardiography and pulse oximetry were available in 45 (93.8%) centres, while paediatric intensive care units were available in 23 (47.9%). Open-heart surgery was performed in six (12.5%) centres. The South-West zone had the majority of centres (20; 41.7%).
Conclusions: Available paediatric cardiac services in Nigeria are grossly inadequate and poorly distributed. Efforts should be intensified to upgrade existing facilities, establish new and functional centres, and train personnel.
PMID: 27701490