Automated basin delineation from digital terrain data
NASA Technical Reports Server (NTRS)
Marks, D.; Dozier, J.; Frew, J.
1983-01-01
While digital terrain grids are now in wide use, accurate delineation of drainage basins from these data is difficult to automate efficiently. A recursive, order-N solution to this problem is presented. The algorithm is fast because no point in the basin is checked more than once, and no points outside the basin are considered. Two terrain-analysis applications and one remote-sensing application, on a high-relief basin in the Sierra Nevada, illustrate the method. This technique for automated basin delineation will enhance the utility of digital terrain analysis for hydrologic modeling and remote sensing.
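The recursive idea described in this abstract can be sketched as follows. This is an illustrative reading, not the authors' code; the flow-direction representation (a `flow_dir` mapping from each cell to its downstream neighbour, as produced by a D8-style analysis) is an assumption.

```python
def delineate_basin(flow_dir, outlet):
    """Return the set of cells draining to `outlet`.

    flow_dir: dict mapping (row, col) -> (row, col) of the downstream cell.
    """
    # Invert the drainage relation: downstream cell -> list of upstream cells.
    upstream = {}
    for cell, down in flow_dir.items():
        upstream.setdefault(down, []).append(cell)

    basin = set()

    def visit(cell):
        if cell in basin:          # each cell is checked at most once
            return
        basin.add(cell)
        for up in upstream.get(cell, []):
            visit(up)              # recurse only into cells that drain here

    visit(outlet)
    return basin

# Tiny 3x3 example: every cell drains toward (2, 2) except (0, 2), which
# drains off-grid and therefore lies outside the basin.
flow = {(0, 0): (1, 1), (0, 1): (1, 1), (1, 0): (2, 1), (1, 1): (2, 2),
        (2, 0): (2, 1), (2, 1): (2, 2), (0, 2): (-1, 3), (1, 2): (2, 2)}
basin = delineate_basin(flow, (2, 2))
```

Because the recursion only ascends into cells that drain toward the outlet, every basin cell is visited once and exterior cells are never touched, which is what makes an order-N traversal possible.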
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jodda, A; Piotrowski, T
2014-06-01
Purpose: The intra- and inter-observer variability in delineation of the parotids on kilo-voltage computed tomography (kVCT) and mega-voltage computed tomography (MVCT) was examined to establish its impact on dose calculation during adaptive head and neck helical tomotherapy (HT). Methods: Three observers delineated the left and right parotids for ten randomly selected patients with oropharynx cancer treated on HT. The pre-treatment kVCT and the MVCT from the first fraction of irradiation were selected for delineation. The delineation procedure was repeated three times by each observer. The parotids were delineated according to the institutional protocol. The analyses included intra-observer reproducibility and inter-structure, -observer and -modality variability of the volume and dose. Results: The differences between the left and right parotid outlines were not statistically significant (p>0.3). The reproducibility of the delineation was confirmed for each observer on the kVCT (p>0.2) and on the MVCT (p>0.1). The inter-observer variability of the outlines was significant (p<0.001), as was the inter-modality variability (p<0.006). The parotids delineated on the MVCT were 10% smaller than on the kVCT. The inter-observer variability of the parotid delineation did not affect the average dose (p=0.096 on the kVCT and p=0.176 on the MVCT). The dose calculated on the MVCT was 3.3% higher than the dose from the kVCT (p=0.009). Conclusion: Use of institutional protocols for parotid delineation reduces intra-observer variability and increases reproducibility of the outlines. These protocols do not eliminate delineation differences between observers, but these differences are not clinically significant and do not affect average doses in the parotids. The volumes of the parotids delineated on the MVCT are smaller than on the kVCT, which affects the differences in the calculated doses.
Mini-DNA barcode in identification of the ornamental fish: A case study from Northeast India.
Dhar, Bishal; Ghosh, Sankar Kumar
2017-09-05
Ornamental fishes are exported under trade or generic names, creating problems in species identification. In this regard, DNA barcoding could effectively elucidate the actual species status. However, problems arise when a specimen is subject to taxonomic dispute, is falsified by trade or generic names, etc. On the other hand, barcoding archival museum specimens would be of great benefit in addressing such issues, as it would create a firm, error-free reference database for rapid identification of any species. This can be achieved only by generating short sequences, as DNA from chemically preserved specimens is mostly degraded. Here we aimed to identify a short stretch of informative sites within the full-length barcode segment, capable of delineating a diverse group of ornamental fish species commonly traded from NE India. We analyzed 287 full-length barcode sequences from the major fish orders, compared the interspecific K2P distance with nucleotide substitution patterns, and found a strong correlation of interspecies distance with transversions (0.95, p<0.001). We therefore proposed a short, transversion-rich stretch of 171 bp as a mini-barcode. The proposed segment was compared with the full-length barcodes and found to delineate the species effectively. Successful PCR amplification and sequencing of the 171 bp segment using primers designed for different orders validated it as a mini-barcode for ornamental fishes. Thus, our findings would help strengthen the global database with sequences of archived fish species and provide an effective identification tool for traded ornamental fish species, as a less time-consuming, cost-effective, field-based application. Copyright © 2017 Elsevier B.V. All rights reserved.
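For readers unfamiliar with the distance metric involved, a minimal sketch of the Kimura two-parameter (K2P) distance on a pair of aligned sequences follows; the function name and example sequences are illustrative, not taken from the study.

```python
import math

PURINES = {"A", "G"}  # pyrimidines are C and T

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned sequences."""
    assert len(seq1) == len(seq2)
    n = len(seq1)
    transitions = transversions = 0
    for a, b in zip(seq1, seq2):
        if a == b:
            continue
        # transition: purine<->purine or pyrimidine<->pyrimidine
        if (a in PURINES) == (b in PURINES):
            transitions += 1
        else:
            transversions += 1
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# one transition (T -> C, both pyrimidines) in 10 aligned sites
d = k2p_distance("ACGTACGTAC", "ACGTACGCAC")
```

Transitions and transversions are counted separately in the formula, which is exactly the split the study correlates with interspecific distance when selecting a transversion-rich segment.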
Willert, Jeffrey; Park, H.; Taitano, William
2015-11-01
High-order/low-order (or moment-based acceleration) algorithms have been used to significantly accelerate the solution to the neutron transport k-eigenvalue problem over the past several years. Recently, the nonlinear diffusion acceleration algorithm has been extended to solve fixed-source problems with anisotropic scattering sources. In this paper, we demonstrate that we can extend this algorithm to k-eigenvalue problems in which the scattering source is anisotropic and a significant acceleration can be achieved. Lastly, we demonstrate that the low-order, diffusion-like eigenvalue problem can be solved efficiently using a technique known as nonlinear elimination.
Growing Season Definition and Use in Wetland Delineation: A Literature Review
2010-08-01
[Fragmentary excerpt from the review: a table of first-flower dates by location, source, and species (e.g., eastern Massachusetts, Debbie Flanders, personal communication, 1998: Acer rubrum, April 8-14); a note of obvious stunting of growth but no mortality, with a most-to-least recovery ordering of wetland bottomland trees beginning with river birch (Betula ...); and reference fragments, including a study of silver birch (Betula pendula) ecotypes, Physiologia Plantarum 117: 206-212, and Lipson, D. A., and R. K. Monson. 1998. Plant-microbe competition for ...]
Finding fixed satellite service orbital allotments with a k-permutation algorithm
NASA Technical Reports Server (NTRS)
Reilly, Charles H.; Mount-Campbell, Clark A.; Gonsalvez, David J. A.
1990-01-01
A satellite system synthesis problem, the satellite location problem (SLP), is addressed. In SLP, orbital locations (longitudes) are allotted to geostationary satellites in the fixed satellite service. A linear mixed-integer programming model is presented that views SLP as a combination of two problems: the problem of ordering the satellites and the problem of locating the satellites given some ordering. A special-purpose heuristic procedure, a k-permutation algorithm, has been developed to find solutions to SLPs. Solutions to small sample problems are presented and analyzed on the basis of calculated interferences.
De Silvestro, A; Martini, K; Becker, A S; Kim-Nguyen, T D L; Guggenberger, R; Calcagni, M; Frauenfelder, T
2018-02-01
To prospectively investigate digital tomosynthesis (DTS) as an alternative to digital radiography (DR) for postoperative imaging of orthopaedic hardware after trauma or arthrodesis in the hand and wrist. Thirty-six consecutive patients (12 female, median age 36 years, range 19-86 years) were included in this institutional review board approved clinical trial. Imaging was performed with DTS in dorso-palmar projection and with DR in dorso-palmar, lateral, and oblique views. Images were evaluated by two independent radiologists for qualitative and diagnosis-related imaging parameters using a four-point Likert scale (1 = excellent, 4 = not diagnostic) and a nominal scale. Interobserver agreement between the two readers was assessed with Cohen's kappa (K). Differences between DTS and DR were tested with Wilcoxon's signed-rank test. A p-value <0.05 was considered statistically significant. Regarding image quality, interobserver agreement was higher for DTS than for DR, especially for fracture-related parameters (delineation of osteosynthesis material [OSM]: K DTS = 0.96 versus K DR = 0.45; delineation of fracture margins: K DTS = 0.78 versus K DR = 0.35). Delineation of fracture margins and delineation of adjacent joint spaces scored significantly better for DTS than for DR (delineation of fracture margins: DTS = 1.54, DR = 2.28, p = 0.001; delineation of adjacent joint spaces: DTS = 1.31, DR = 2.24, p = 0.001). Regarding diagnosis-related findings, interobserver agreement was almost equal. DTS showed significantly higher sharpness of fracture margins (DTS = 1.94, DR = 2.33, p = 0.04). Mean dose area product (DAP) for DTS was significantly higher than for DR (mean DR = 0.219 Gy·cm², mean DTS = 0.903 Gy·cm², p = 0.001). Fracture healing is more visible and interobserver agreement is higher for DTS compared to DR in the postoperative assessment of orthopaedic hardware in the hand and wrist. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Silvoniemi, Antti; Din, Mueez U; Suilamo, Sami; Shepherd, Tony; Minn, Heikki
2016-11-01
Delineation of gross tumour volume in 3D is a critical step in radiotherapy (RT) treatment planning for oropharyngeal cancer (OPC). Static [18F]-FDG PET/CT imaging has been suggested as a method to improve the reproducibility of tumour delineation, but it suffers from low specificity. We undertook this pilot study in which dynamic features in time-activity curves (TACs) of [18F]-FDG PET/CT images were applied to help discriminate tumour from inflammation and adjacent normal tissue. Five patients with OPC underwent dynamic [18F]-FDG PET/CT imaging in treatment position. Voxel-by-voxel analysis was performed to evaluate seven dynamic features developed with knowledge of the differences in glucose metabolism in different tissue types and visual inspection of TACs. The Gaussian mixture model and K-means algorithms were used to evaluate the performance of the dynamic features in discriminating tumour voxels compared to the performance of standardized uptake values obtained from static imaging. Some dynamic features showed a trend towards discrimination of different metabolic areas, but the lack of consistency means that clinical application is not recommended based on these results alone. The impact of inflammatory tissue remains a problem for volume delineation in RT of OPC, but a simple dynamic imaging protocol proved practicable and enabled simple data analysis techniques that show promise for complementing the information in static uptake values.
Solving ODE Initial Value Problems With Implicit Taylor Series Methods
NASA Technical Reports Server (NTRS)
Scott, James R.
2000-01-01
In this paper we introduce a new class of numerical methods for integrating ODE initial value problems. Specifically, we propose an extension of the Taylor series method which significantly improves its accuracy and stability while also increasing its range of applicability. To advance the solution from t_n to t_{n+1}, we expand a series about the intermediate point t_{n+mu} := t_n + mu*h, where h is the stepsize and mu is an arbitrary parameter called an expansion coefficient. We show that, in general, a Taylor series of degree k has exactly k expansion coefficients which raise its order of accuracy. The accuracy is raised by one order if k is odd, and by two orders if k is even. In addition, if k is three or greater, local extrapolation can be used to raise the accuracy two additional orders. We also examine stability for the problem y' = lambda*y, Re(lambda) < 0, and identify several A-stable schemes. Numerical results are presented for both fixed and variable stepsizes. It is shown that implicit Taylor series methods provide an effective integration tool for most problems, including stiff systems and ODEs with a singular point.
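For the linear test problem y' = lambda*y, one concrete reading of this construction (an assumption based on the abstract, not the paper's implementation) is the rational update y_{n+1} = y_n * S_k((1-mu)*lambda*h) / S_k(-mu*lambda*h), where S_k is the degree-k truncated exponential. With k = 1, mu = 0 this reduces to explicit Euler (order 1), while mu = 1/2 yields the A-stable trapezoidal rule (order 2), illustrating the claimed one-order gain for odd k.

```python
import math

def trunc_exp(z, k):
    """S_k(z) = sum_{j=0}^{k} z^j / j!  (degree-k truncated exponential)."""
    term, total = 1.0, 1.0
    for j in range(1, k + 1):
        term *= z / j
        total += term
    return total

def integrate_linear(lam, y0, t_end, n_steps, k, mu):
    """Apply the rational update factor n_steps times for y' = lam * y."""
    h = t_end / n_steps
    factor = trunc_exp((1 - mu) * lam * h, k) / trunc_exp(-mu * lam * h, k)
    y = y0
    for _ in range(n_steps):
        y *= factor
    return y

exact = math.exp(-1.0)
# k = 1, mu = 0 is explicit Euler (order 1); mu = 1/2 gives the trapezoidal
# rule (order 2, A-stable), so its error should be far smaller.
err_euler = abs(integrate_linear(-1.0, 1.0, 1.0, 100, k=1, mu=0.0) - exact)
err_mid = abs(integrate_linear(-1.0, 1.0, 1.0, 100, k=1, mu=0.5) - exact)
```

With 100 steps the mu = 1/2 variant is several orders of magnitude more accurate than explicit Euler on this problem, consistent with the order increase described in the abstract.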
NASA Astrophysics Data System (ADS)
Wasserman, Richard Marc
The radiation therapy treatment planning (RTTP) process may be subdivided into three planning stages: gross tumor delineation, clinical target delineation, and modality dependent target definition. The research presented will focus on the first two planning tasks. A gross tumor target delineation methodology is proposed which focuses on the integration of MRI, CT, and PET imaging data towards the generation of a mathematically optimal tumor boundary. The solution to this problem is formulated within a framework integrating concepts from the fields of deformable modelling, region growing, fuzzy logic, and data fusion. The resulting fuzzy fusion algorithm can integrate both edge and region information from multiple medical modalities to delineate optimal regions of pathological tissue content. The subclinical boundaries of an infiltrating neoplasm cannot be determined explicitly via traditional imaging methods and are often defined to extend a fixed distance from the gross tumor boundary. In order to improve the clinical target definition process an estimation technique is proposed via which tumor growth may be modelled and subclinical growth predicted. An in vivo, macroscopic primary brain tumor growth model is presented, which may be fit to each patient undergoing treatment, allowing for the prediction of future growth and consequently the ability to estimate subclinical local invasion. Additionally, the patient specific in vivo tumor model will be of significant utility in multiple diagnostic clinical applications.
A k-permutation algorithm for Fixed Satellite Service orbital allotments
NASA Technical Reports Server (NTRS)
Reilly, Charles H.; Mount-Campbell, Clark A.; Gonsalvez, David J. A.
1988-01-01
A satellite system synthesis problem, the satellite location problem (SLP), is addressed in this paper. In SLP, orbital locations (longitudes) are allotted to geostationary satellites in the Fixed Satellite Service. A linear mixed-integer programming model is presented that views SLP as a combination of two problems: (1) the problem of ordering the satellites and (2) the problem of locating the satellites given some ordering. A special-purpose heuristic procedure, a k-permutation algorithm, developed to find solutions to SLPs formulated in this manner, is described. Solutions to small example problems are presented and analyzed.
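The abstract does not spell out the k-permutation algorithm itself. Purely as a hedged illustration of the general idea such a heuristic suggests (re-permuting windows of k items in a candidate ordering to reduce a toy pairwise-"interference" objective), one might write:

```python
import itertools

def total_cost(order, cost):
    # toy objective: summed pairwise "interference" between neighbouring slots
    return sum(cost[a][b] for a, b in zip(order, order[1:]))

def k_permutation_search(order, cost, k=3):
    """Local search: try all k! permutations of each window of k items."""
    order = list(order)
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - k + 1):
            window = order[i:i + k]
            best = min(itertools.permutations(window),
                       key=lambda w: total_cost(order[:i] + list(w) + order[i + k:], cost))
            if total_cost(order[:i] + list(best) + order[i + k:], cost) < total_cost(order, cost):
                order[i:i + k] = best   # accept only strict improvements
                improved = True
    return order

# Symmetric toy interference matrix: satellites 0/2 and 1/3 tolerate adjacency.
cost = [[0, 5, 1, 5],
        [5, 0, 5, 1],
        [1, 5, 0, 5],
        [5, 1, 5, 0]]
best_order = k_permutation_search([0, 1, 2, 3], cost, k=3)
```

Accepting only strict improvements guarantees termination; the real SLP heuristic works against a mixed-integer interference model rather than this toy adjacency cost.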
NASA Astrophysics Data System (ADS)
Vezzoli, G. Christopher; Chen, Michaeline F.; Burke, Terence; Rosen, Carol
1996-08-01
Data are presented herein that support a phase boundary or quasi-phase-boundary delineating in Y1Ba2Cu3O7-δ and in Bi2Sr2Ca2Cu3O10 ceramic materials a transition from a vortex solid lattice to a line-flux disordered state that has been referred to as representing flux lattice melting to a flux liquid, but herein is interpreted not in terms of a liquid but in the form of a less-mobile `polymer'-like or entangled chain species. These data are derived from electrical resistance (r) versus applied magnetic field (H) measurements at various isotherms (T) corresponding to the zero resistance state of yttrium-barium-cuprate, and the mixed state foot regime of bismuth-strontium-calcium-cuprate. We interpret significant slope changes in r versus B at constant T in these materials to be indicative of the H-T conditions for a second-order or weakly first-order phase transition delineating the disordering of a flux lattice vortex solid. We believe that this technique is in ways more direct and at least as accurate as the conventional mechanical oscillator and vibrating magnetometer method to study the flux state. Additional very-low-field studies in Gd1Ba2(Fe0.02Cu0.98)3O7-δ, from 1 to 1000 mT, yield indication for what appears to be a magnetic transition at ca. 77 K at 575 mT, and possibly a second transition at 912 mT (also at ca. 77 K). These data points correspond well with the extrapolated low-field experimental magnetic phase transition boundary curve described at higher field herein (and by others using the mechanical technique), and also correspond well to theoretically predicted work regarding transition involving the vortex state.
Song, Qi; Chen, Mingqing; Bai, Junjie; Sonka, Milan; Wu, Xiaodong
2011-01-01
Multi-object segmentation with mutual interaction is a challenging task in medical image analysis. We report a novel solution to a segmentation problem in which target objects of arbitrary shape mutually interact with terrain-like surfaces, a configuration that arises widely in medical imaging. The approach incorporates context information during simultaneous segmentation of multiple objects. The object-surface interaction information is encoded by adding weighted inter-graph arcs to our graph model. A globally optimal solution is achieved by solving a single maximum flow problem in low-order polynomial time. The performance of the method was evaluated on robust delineation of lung tumors in megavoltage cone-beam CT images in comparison with an expert-defined independent standard. The evaluation showed that our method generated highly accurate tumor segmentations. Compared with the conventional graph-cut method, our new approach provided significantly better results (p < 0.001). The Dice coefficient obtained by the conventional graph-cut approach (0.76 +/- 0.10) improved to 0.84 +/- 0.05 when employing our new method for pulmonary tumor segmentation.
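The single-max-flow idea can be illustrated on a deliberately tiny problem: a binary graph-cut labelling of a 1-D signal with data costs plus a smoothness penalty. This is a generic sketch of graph-cut segmentation, not the authors' multi-object, surface-interacting model; the cost scale (values in 0..10) is an assumption of the toy.

```python
from collections import deque

def max_flow_min_cut(capacity, s, t):
    """Edmonds-Karp max flow; returns the source side of the minimum cut."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]

    def bfs():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if v not in parent and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    while (parent := bfs()) is not None:
        # augment along the shortest residual path
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
    # source side of the cut = nodes reachable in the residual graph
    side, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in side and capacity[u][v] - flow[u][v] > 0:
                side.add(v)
                q.append(v)
    return side

def segment(signal, smooth=2):
    """Binary labelling of a 1-D signal (values in 0..10) via one min cut."""
    n = len(signal)  # nodes 0..n-1 are pixels; n is the source, n+1 the sink
    cap = [[0] * (n + 2) for _ in range(n + 2)]
    for i, val in enumerate(signal):
        cap[n][i] = val            # cost of labelling a bright pixel background
        cap[i][n + 1] = 10 - val   # cost of labelling a dark pixel object
        if i + 1 < n:
            cap[i][i + 1] = cap[i + 1][i] = smooth  # smoothness between neighbours
    side = max_flow_min_cut(cap, n, n + 1)
    return [1 if i in side else 0 for i in range(n)]

labels = segment([9, 8, 9, 1, 0, 2, 8, 9])
```

A single max-flow computation yields the globally optimal labelling for this energy, which is the property the paper's (much richer) graph construction also relies on.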
Colony image acquisition and segmentation
NASA Astrophysics Data System (ADS)
Wang, W. X.
2007-12-01
For the counting of both colonies and plaques there is a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have worked on systems of this kind. Investigation shows that some existing systems have problems, mainly in image acquisition and image segmentation. In order to acquire colony images of good quality, an illumination box was constructed: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, lighting is uniform, and the colony dish can be placed in the same position every time, which makes image processing easier. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.
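A hedged sketch of the grey-level-similarity and colony-exclusion stages, using thresholding plus connected-component labelling (NumPy/SciPy assumed available; this is an illustration of the general approach, not the paper's algorithm):

```python
import numpy as np
from scipy import ndimage

def count_colonies(image, threshold):
    """Count bright blobs, excluding tiny specks that are unlikely colonies."""
    mask = image > threshold              # grey-level similarity: bright regions
    labels, n = ndimage.label(mask)       # connected-component delineation
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = sizes >= 2                     # colony exclusion: drop 1-pixel noise
    return int(keep.sum())

img = np.zeros((10, 10))
img[1:3, 1:3] = 1.0    # colony 1
img[6:9, 5:8] = 1.0    # colony 2
img[0, 9] = 1.0        # single-pixel speck, excluded
n_colonies = count_colonies(img, 0.5)
```

The size filter stands in for the paper's colony-exclusion step; a real system would also use boundary tracing and shape information before accepting a blob as a colony.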
Shephard, David A.E.; Grogono, Basil J.S.
2002-01-01
A casebook written by Dr. John Mackieson (1795–1885), of Charlottetown, contains the records of 49 surgical cases he managed between 1826 and 1857. In view of the rarity of first-hand accounts of surgical practice in Canada in the mid-19th century, Mackieson’s case records are a significant source of information. These cases are discussed in order to delineate Mackieson’s approach to the surgical problems he faced in his general practice. His case records also illustrate some of the general problems that beset surgeons in that era. PMID:11939660
NASA Astrophysics Data System (ADS)
Rodriguez-Pretelin, A.; Nowak, W.
2017-12-01
For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporal variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional angle of flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors that require larger WHPAs. We hypothesize that WHPA programs that integrate adaptive and optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand for well protection under transient conditions. The main goal of this study is to present a novel management framework that optimizes pumping schemes dynamically, in order to minimize the impact triggered by transient conditions on WHPA delineation. For optimizing pumping schemes, we consider three objectives: (1) to minimize the risk of pumping water from outside a given WHPA, (2) to maximize the groundwater supply and (3) to minimize the involved operating costs. Transient groundwater flow is solved with an available transient groundwater model coupled to Lagrangian particle tracking. The optimization problem is formulated as a dynamic programming problem. Two optimization approaches are explored: (I) single-objective optimization under objective (1) only, and (II) multiobjective optimization under all three objectives, where compromise pumping rates are selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible yet allow the optimization problem to find the most suitable solutions.
30 CFR 282.14 - Noncompliance, remedies, and penalties.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Section 282.14 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, REGULATION, AND ENFORCEMENT... Delineation, Testing, or Mining Plan; or the Director's orders or instructions, and the Director determines... requirements of an approved Delineation, Testing, or Mining Plan; or the Director's orders or instructions, and...
Enhancements to TauDEM to support Rapid Watershed Delineation Services
NASA Astrophysics Data System (ADS)
Sazib, N. S.; Tarboton, D. G.
2015-12-01
Watersheds are widely recognized as the basic functional unit for water resources management studies and are important for a variety of problems in hydrology, ecology, and geomorphology. Nevertheless, delineating a watershed spread across a large region is still cumbersome due to the processing burden of working with large Digital Elevation Models (DEMs). Terrain Analysis Using Digital Elevation Models (TauDEM) software supports the delineation of watersheds and stream networks from within desktop Geographic Information Systems, and a rich set of watershed and stream network attributes is computed. However, the TauDEM desktop tools have limitations: (1) only one raster data type (TIFF) is supported; (2) installation of software for parallel processing is required; and (3) data must be in a projected coordinate system. This paper presents enhancements to TauDEM that have been developed to extend its generality and support web-based watershed delineation services. The enhancements include (1) reading and writing raster data with the open-source Geospatial Data Abstraction Library (GDAL), no longer limited to the TIFF format, and (2) support for both geographic and projected coordinates. To support web services for rapid watershed delineation, a procedure has been developed for subsetting the domain based on sub-catchments, with preprocessed data prepared and stored for each catchment. This allows the watershed delineation to function locally, while extending to the full extent of watersheds using the preprocessed information. Additional capabilities of the program include computation of average watershed properties and geomorphic and channel network variables such as drainage density, shape factor, relief ratio and stream ordering. The updated version of TauDEM increases its practical applicability in terms of raster data type, size and coordinate system.
The watershed delineation web service functionality is useful for web-based software-as-a-service deployments that alleviate the need for users to install and work with desktop GIS software.
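The geomorphic summary variables mentioned above have standard textbook forms, sketched below. Definitions vary between sources (Horton's form factor is used here for "shape factor"), and these may differ from TauDEM's exact formulas; the numeric inputs are illustrative.

```python
def drainage_density(total_stream_length_km, basin_area_km2):
    """Total channel length per unit basin area (km / km^2)."""
    return total_stream_length_km / basin_area_km2

def relief_ratio(max_elev_m, outlet_elev_m, basin_length_m):
    """Basin relief divided by basin length (dimensionless)."""
    return (max_elev_m - outlet_elev_m) / basin_length_m

def form_factor(basin_area_km2, basin_length_km):
    """Horton's form factor: basin area over squared basin length."""
    return basin_area_km2 / basin_length_km ** 2

dd = drainage_density(120.0, 80.0)          # 120 km of channels in 80 km^2
rr = relief_ratio(2200.0, 400.0, 12000.0)   # 1800 m of relief over 12 km
ff = form_factor(80.0, 15.0)                # 80 km^2 basin, 15 km long
```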
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coplen, T.B.
1973-10-01
Preliminary studies indicate that the Imperial Valley has a large geothermal potential. In order to delineate additional geothermal systems, a chemical and isotopic investigation of samples from water wells, springs, and geothermal wells in the Imperial Valley and Yuma areas was conducted. Na, K, and Ca concentrations of nearly 200 well water, spring water, hot spring, and geothermal fluid samples from the Imperial Valley area were measured by atomic absorption spectrophotometry. Fournier and Truesdell's function was determined for each water sample. Suspected geothermal areas are identified. Hydrogen and oxygen isotope abundances were determined in order to identify the source of the water in the Mesa geothermal system. (JGB)
Saporito, Salvatore; Van Riper, David; Wakchaure, Ashwini
2017-01-01
The School Attendance Boundary Information System (SABINS) is a social science data infrastructure project that assembles, processes, and distributes spatial data delineating K through 12th grade school attendance boundaries for thousands of school districts in the U.S. Although geography is a fundamental organizing feature of K to 12 education, until now school attendance boundary data have not been made readily available on a massive basis and in an easy-to-use format. SABINS removes these barriers by linking spatial data delineating school attendance boundaries with tabular data describing the demographic characteristics of populations living within those boundaries. This paper explains why a comprehensive GIS database of K through 12 school attendance boundaries is valuable, how original spatial information delineating school attendance boundaries is collected from local agencies, and techniques for modeling and storing the data so they provide maximum flexibility to the user community. An important goal of this paper is to share the techniques used to assemble the SABINS database so that local and state agencies apply a standard set of procedures and models as they gather data for their regions. PMID:29151773
Cosmological models in energy-momentum-squared gravity
NASA Astrophysics Data System (ADS)
Board, Charles V. R.; Barrow, John D.
2017-12-01
We study the cosmological effects of adding terms of higher order in the usual energy-momentum tensor to the matter Lagrangian of general relativity. This is in contrast to most studies of higher-order gravity, which focus on generalizing the Einstein-Hilbert curvature contribution to the Lagrangian. The resulting cosmological theories give rise to field equations of similar form to several particular theories with different fundamental bases, including bulk viscous cosmology, loop quantum gravity, k-essence, and brane-world cosmologies. We find a range of exact solutions for isotropic universes, discuss their behaviors with reference to the early- and late-time evolution, accelerated expansion, and the occurrence or avoidance of singularities. We briefly discuss extensions to anisotropic cosmologies and delineate the situations where the higher-order matter terms will dominate over anisotropies on approach to cosmological singularities.
Engineering calculations for solving the orbital allotment problem
NASA Technical Reports Server (NTRS)
Reilly, C.; Walton, E. K.; Mount-Campbell, C.; Caldecott, R.; Aebker, E.; Mata, F.
1988-01-01
Four approaches for calculating downlink interferences for shaped-beam antennas are described. An investigation of alternative mixed-integer programming models for satellite synthesis is summarized. Plans for coordinating the various programs developed under this grant are outlined. Two procedures for ordering satellites to initialize the k-permutation algorithm are proposed. Results are presented for the k-permutation algorithms. Feasible solutions are found for 5 of the 6 problems considered. Finally, it is demonstrated that the k-permutation algorithm can be used to solve arc allotment problems.
Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data
NASA Astrophysics Data System (ADS)
Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam
2018-04-01
Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models. This is achieved by learning spatial patterns from training data, e.g., k-SVD sparse learning or the traditional Principal Component Analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.
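A minimal sketch of the variable-splitting iteration described here, on a toy problem with an identity forward operator (pure denoising). A real calibration would use the flow model's sensitivities in place of `A`; `rho`, the facies values, and the iteration count are illustrative.

```python
import numpy as np

def project_discrete(x, values):
    """Project each entry of x onto the nearest admissible facies value."""
    return values[np.argmin(np.abs(x[:, None] - values[None, :]), axis=1)]

def calibrate_discrete(A, b, values, rho=0.2, iters=50):
    """Alternate a discreteness projection (z-step) with a closed-form
    regularized least-squares solve (x-step) for min ||Ax-b||^2 s.t. x discrete."""
    AtA, Atb = A.T @ A, A.T @ b
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        z = project_discrete(x, values)                             # z-step
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * z)   # x-step
    return project_discrete(x, values)

# Toy two-facies example: recover a discrete pattern from noisy observations.
x_true = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
noise = 0.2 * np.array([1.0, -1.0, 1.0, -1.0, 1.0])
x_hat = calibrate_discrete(np.eye(5), x_true + noise, np.array([0.0, 1.0]))
```

The x-step pulls the estimate toward the data while the z-step enforces the piecewise-constant facies structure, which is the split the paper's alternating-directions scheme exploits.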
Fully Decomposable Split Graphs
NASA Astrophysics Data System (ADS)
Broersma, Hajo; Kratsch, Dieter; Woeginger, Gerhard J.
We discuss various questions around partitioning a split graph into connected parts. Our main result is a polynomial time algorithm that decides whether a given split graph is fully decomposable, i.e., whether it can be partitioned into connected parts of order α_1, α_2, ..., α_k for every α_1, α_2, ..., α_k summing up to the order of the graph. In contrast, we show that the decision problem whether a given split graph can be partitioned into connected parts of order α_1, α_2, ..., α_k for a given partition α_1, α_2, ..., α_k of the order of the graph is NP-hard.
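For intuition, the decision problem can be checked by exhaustive search on tiny graphs. This brute force is exponential; the paper's point is precisely that split graphs admit a polynomial-time algorithm for full decomposability.

```python
from itertools import combinations

def is_connected(adj, verts):
    """Is the subgraph induced by `verts` connected?"""
    verts = set(verts)
    if not verts:
        return True
    seen, stack = set(), [next(iter(verts))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend((adj[v] & verts) - seen)
    return seen == verts

def has_connected_partition(adj, sizes, verts=None):
    """Can `verts` be partitioned into connected parts of the given orders?"""
    if verts is None:
        verts = frozenset(adj)
    if not verts:
        return not sizes
    if not sizes:
        return False
    for comb in combinations(sorted(verts), sizes[0]):
        part = set(comb)
        if is_connected(adj, part) and has_connected_partition(adj, sizes[1:], verts - part):
            return True
    return False

# Split graph: clique {0, 1, 2} plus independent set {3, 4}, edges 3-0 and 4-1.
adj = {0: {1, 2, 3}, 1: {0, 2, 4}, 2: {0, 1}, 3: {0}, 4: {1}}
ok_split = has_connected_partition(adj, [2, 3])   # e.g. {0, 3} and {1, 2, 4}

# A star K_{1,4}: any 2+3 split strands some leaves without the centre.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
ok_star = has_connected_partition(star, [2, 3])
```

Full decomposability would require running such a check for every composition of the graph's order, which is what the polynomial algorithm avoids for split graphs.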
Surveying unsteady flows by means of movie sequences - A case study
NASA Astrophysics Data System (ADS)
Freymuth, P.; Bank, W.; Finaish, F.
Photographic surveying techniques and their results are presented for vortical pattern development in unsteady two-dimensional flows, a development that depends on a multitude of parameters and has heretofore hampered broad investigation; the aim is to delineate the more important parametric dependencies. Samples are given from 100 films representing over 2000 sequences consisting of 400,000 photographic frames. Attention is given to the problems posed by resolution of time and lateral dimensions, spanwise vortical structure, and the dependence of vortical development on angle of attack, Reynolds number, and flow geometry.
NASA Astrophysics Data System (ADS)
Saco, P. M.; Moreno de las Heras, M.; Willgoose, G. R.
2014-12-01
Watersheds are widely recognized as the basic functional unit for water resources management studies and are important for a variety of problems in hydrology, ecology, and geomorphology. Nevertheless, delineating a watershed spread across a large region is still cumbersome due to the processing burden of working with large Digital Elevation Models (DEMs). Terrain Analysis Using Digital Elevation Models (TauDEM) software supports the delineation of watersheds and stream networks from within desktop Geographic Information Systems, and a rich set of watershed and stream network attributes are computed. However, the TauDEM desktop tools have three limitations: (1) they support only one raster format (TIFF), (2) they require installation of software for parallel processing, and (3) data must be in a projected coordinate system. This paper presents enhancements to TauDEM that have been developed to extend its generality and support web based watershed delineation services. The enhancements include (1) reading and writing raster data with the open-source Geospatial Data Abstraction Library (GDAL), no longer limited to the TIFF format, and (2) support for both geographic and projected coordinates. To support web services for rapid watershed delineation, a procedure has been developed for subsetting the domain based on sub-catchments, with preprocessed data stored for each catchment. This allows the watershed delineation to function locally, while extending to the full extent of watersheds using the preprocessed information. Additional capabilities of the program include computation of average watershed properties and geomorphic and channel network variables such as drainage density, shape factor, relief ratio, and stream ordering. The updated version of TauDEM increases its practical applicability in terms of raster data type, size, and coordinate system.
The watershed delineation web service functionality is useful for web-based software-as-a-service deployments that alleviate the need for users to install and work with desktop GIS software.
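The grid-based delineation at the heart of tools like TauDEM can be sketched in two steps: assign each DEM cell a D8 flow direction (steepest descent among its eight neighbours), then walk upstream from an outlet so that every basin cell is visited exactly once. This toy list-of-lists version is our own illustration, not TauDEM's parallel implementation.

```python
def d8_flowdir(dem):
    """For each cell, the neighbour of steepest descent (None = pit/outlet)."""
    nr, nc = len(dem), len(dem[0])
    fd = [[None] * nc for _ in range(nr)]
    for r in range(nr):
        for c in range(nc):
            best, best_drop = None, 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < nr and 0 <= cc < nc:
                        # drop per unit distance (diagonals are sqrt(2) long)
                        drop = (dem[r][c] - dem[rr][cc]) / (dr * dr + dc * dc) ** 0.5
                        if drop > best_drop:
                            best_drop, best = drop, (rr, cc)
            fd[r][c] = best
    return fd

def basin(fd, outlet):
    """Upstream search from the outlet; each basin cell is visited once."""
    nr, nc = len(fd), len(fd[0])
    seen, stack = {outlet}, [outlet]
    while stack:
        cell = stack.pop()
        r, c = cell
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nr and 0 <= cc < nc and \
                        (rr, cc) not in seen and fd[rr][cc] == cell:
                    seen.add((rr, cc))
                    stack.append((rr, cc))
    return seen
```

On a DEM tilted toward one corner, the basin of that corner is the whole grid.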
Reversed-phase high-performance liquid chromatography of unsubstituted aminobenzoic acids
Abidi, S.L.
1989-01-01
High-performance liquid chromatographic (HPLC) characteristics of three position isomers of aminobenzoic acids (potential metabolites of important anesthetic drugs) were delineated with respect to their interactions with various mobile phases and stationary phases. HPLC with five stationary phases, β-cyclodextrin silica (CDS), Macrophase MP-1 polymer (MP), macroporous polystyrene/divinylbenzene (MPD), octadecylsilica (ODS), and propylphenylsilica (PPS), yielded results explicable in terms of substituent effects derived from the bifunctional amino and carboxy groups. For cases where mobile phases contained sulfonates or quaternary ammonium salts, both having longer-chain alkyls, retention of analytes on all but CDS appeared to proceed predominantly via an ion-pairing mechanism. The extent of the corresponding counter-ion effects decreased in the order MPD > ODS > PPS > MP, while the analyte retention order paralleled their pK2 values. On the other hand, an inverse relationship between the magnitude of capacity factors (k') and pK1 values of the title compounds was observed in experiments that produced retention data incompatible with ion-pair interaction rationales. The unique HPLC results obtained with the CDS phase are compared with those obtained with the other phases.
Flood-plain delineation for Accotink Creek Basin, Fairfax County, Virginia
Soule, Pat L.
1977-01-01
Water-surface profiles of the 25-year and 100-year floods, and maps on which the 25-, 50-, and 100-year flood limits are delineated for streams in the Accotink Creek basin, are presented in this report. Excluded are segments of Accotink Creek within the Fort Belvoir Military Reservation. The techniques used in the computation of the flood profiles and delineation of flood limits are presented, and specific hydraulic problems encountered within the study area are also discussed.
Oversight of human participants research: identifying problems to evaluate reform proposals.
Emanuel, Ezekiel J; Wood, Anne; Fleischman, Alan; Bowen, Angela; Getz, Kenneth A; Grady, Christine; Levine, Carol; Hammerschmidt, Dale E; Faden, Ruth; Eckenwiler, Lisa; Muse, Carianne Tucker; Sugarman, Jeremy
2004-08-17
The oversight of research involving human participants is widely believed to be inadequate. The U.S. Congress, national commissions, the Department of Health and Human Services, the Institute of Medicine, numerous professional societies, and others are proposing remedies based on the assumption that the main problems are researchers' conflict of interest, lack of institutional review board (IRB) resources, and the volume and complexity of clinical research. Developing appropriate reform proposals requires carefully delineating the problems of the current system to know what reforms are needed. To stimulate a more informed and meaningful debate, we delineate 15 current problems and group them into 3 broad categories. First, structural problems encompass 8 specific problems related to the way the research oversight system is organized. Second, procedural problems constitute 5 specific problems related to the operations of IRB review. Finally, performance assessment problems include 2 problems related to the absence of systematic assessment of the outcomes of the oversight system. We critically assess proposed reforms, such as accreditation and central IRBs, according to how well they address these 15 problems. None of the reforms addresses all 15 problems. Indeed, most focus on the procedural problems, failing to address either the structure or the performance assessment problems. Finally, on the basis of the delineation of problems, we outline components of a more effective reform proposal, including bringing all research under federal oversight, a permanent advisory committee to address recurrent ethical issues in clinical research, mandatory single-time review for multicenter research protocols, additional financial support for IRB functions, and a standardized system for collecting and disseminating data on both adverse events and the performance assessment of IRBs.
Riera, Thomas V.; Zheng, Lianqing; Josephine, Helen R.; Min, Donghong; Yang, Wei; Hedstrom, Lizbeth
2011-01-01
Allosteric activators are generally believed to shift the equilibrium distribution of enzyme conformations to favor a catalytically productive structure; the kinetics of conformational exchange is seldom addressed. Several observations suggested that the usual allosteric mechanism might not apply to the activation of IMP dehydrogenase (IMPDH) by monovalent cations. Therefore we investigated the mechanism of K+ activation in IMPDH by delineating the kinetic mechanism in the absence of monovalent cations. Surprisingly, the K+-dependence of kcat derives from the rate of flap closure, which increases by ≥65-fold in the presence of K+. We performed both alchemical free energy simulations and potential of mean force calculations using the orthogonal space random walk strategy to computationally analyze how K+ accelerates this conformational change. The simulations recapitulate the preference of IMPDH for K+, validating the computational models. When K+ is replaced with a dummy ion, the residues of the K+ binding site relax into ordered secondary structure, creating a barrier to conformational exchange. K+ mobilizes these residues by providing alternate interactions for the main chain carbonyls. Potential of mean force calculations indicate that K+ changes the shape of the energy well, shrinking the reaction coordinate by shifting the closed conformation toward the open state. This work suggests that allosteric regulation can be under kinetic as well as thermodynamic control. PMID:21870820
NASA Technical Reports Server (NTRS)
Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Votava, Petr; Roy, Anshuman; Mukhopadhyay, Supratik; Nemani, Ramakrishna
2015-01-01
Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets, which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.
NASA Astrophysics Data System (ADS)
Basu, S.; Ganguly, S.; Michaelis, A.; Votava, P.; Roy, A.; Mukhopadhyay, S.; Nemani, R. R.
2015-12-01
Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.
The Erdős-Rothschild problem on edge-colourings with forbidden monochromatic cliques
NASA Astrophysics Data System (ADS)
Pikhurko, Oleg; Staden, Katherine; Yilma, Zelealem B.
2017-09-01
Let $\mathbf{k} := (k_1,\dots,k_s)$ be a sequence of natural numbers. For a graph $G$, let $F(G;\mathbf{k})$ denote the number of colourings of the edges of $G$ with colours $1,\dots,s$ such that, for every $c \in \{1,\dots,s\}$, the edges of colour $c$ contain no clique of order $k_c$. Write $F(n;\mathbf{k})$ to denote the maximum of $F(G;\mathbf{k})$ over all graphs $G$ on $n$ vertices. This problem was first considered by Erdős and Rothschild in 1974, but it has been solved only for a very small number of non-trivial cases. We prove that, for every $\mathbf{k}$ and $n$, there is a complete multipartite graph $G$ on $n$ vertices with $F(G;\mathbf{k}) = F(n;\mathbf{k})$. Also, for every $\mathbf{k}$ we construct a finite optimisation problem whose maximum is equal to the limit of $\log_2 F(n;\mathbf{k})/{n\choose 2}$ as $n$ tends to infinity. Our final result is a stability theorem for complete multipartite graphs $G$, describing the asymptotic structure of such $G$ with $F(G;\mathbf{k}) = F(n;\mathbf{k}) \cdot 2^{o(n^2)}$ in terms of solutions to the optimisation problem.
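The quantity F(G; k) can be computed by brute force for very small graphs, which makes the definition concrete (the general problem is of course infeasible this way; graph encoding below is our own). For the triangle with k = (3, 3), the count is 2^3 minus the two monochromatic colourings, i.e., 6.

```python
from itertools import combinations, product

def F(vertices, edges, ks):
    """Count edge colourings with s = len(ks) colours such that colour c
    spans no clique of order ks[c]."""
    edges = [frozenset(e) for e in edges]
    count = 0
    for colouring in product(range(len(ks)), repeat=len(edges)):
        ok = True
        for c, k in enumerate(ks):
            mono = {e for e, col in zip(edges, colouring) if col == c}
            # a monochromatic clique of order k: all pairs present in colour c
            if any(all(frozenset(p) in mono for p in combinations(q, 2))
                   for q in combinations(vertices, k)):
                ok = False
                break
        if ok:
            count += 1
    return count
```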
Visualizing geoelectric - Hydrogeological parameters of Fadak farm at Najaf Ashraf by using 2D spa
NASA Astrophysics Data System (ADS)
Al-Khafaji, Wadhah Mahmood Shakir; Al-Dabbagh, Hayder Abdul Zahra
2016-12-01
A geophysical survey was carried out to produce an electrical resistivity grid of 23 Schlumberger Vertical Electrical Sounding (VES) points distributed across the area of Fadak farm in Najaf Ashraf province, Iraq. The current research applies six interpolation methods to delineate subsurface groundwater aquifer properties, one example being the delineation of zones of high and low groundwater hydraulic conductivity (K). Such methods could be useful in predicting high-K zones and groundwater flow directions within the studied aquifer. The interpolation methods were helpful in predicting some hydrogeological and structural characteristics of the aquifer, and the results support some important conclusions for future groundwater development.
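Inverse-distance weighting is one of the simplest interpolation methods typically compared in such studies; the abstract does not name its six methods, so the sketch below is a generic, assumed example for gridding scattered point measurements (e.g., K values at VES sounding locations).

```python
import math

def idw(sample_points, values, x, y, power=2):
    """Inverse-distance-weighted estimate at (x, y) from scattered samples."""
    num = den = 0.0
    for (px, py), v in zip(sample_points, values):
        d = math.hypot(x - px, y - py)
        if d < 1e-12:
            return v               # exactly at a measured point
        w = 1.0 / d ** power       # closer samples get larger weight
        num += w * v
        den += w
    return num / den
```

At a sample point the method returns the measured value; midway between two samples with equal weights it returns their mean.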
Remote sensing of wet lands in irrigated areas
NASA Technical Reports Server (NTRS)
Ham, H. H.
1972-01-01
The use of airborne remote sensing techniques to: (1) detect drainage problem areas, (2) delineate the problem in terms of areal extent, depth to the water table, and presence of excessive salinity, and (3) evaluate the effectiveness of existing subsurface drainage facilities, is discussed. Experimental results show that remote sensing, as demonstrated in this study and as presently constituted and priced, does not represent a practical alternative as a management tool to presently used visual and conventional photographic methods in the systematic and repetitive detection and delineation of wetlands.
Development and Evaluation of a Casualty Evacuation Model for a European Conflict.
1985-12-01
EVAC, the computer code which implements our technique, has been used to solve a series of test problems in less time and requiring less memory than ... the order of 1/K the amount of main memory for a K-commodity problem, so it can solve significantly larger problems than MCNF. ... technique may require only half the memory of the general L.P. package [6]. These advances are due to the efficient data structures which have been ...
A Direct Mapping of Max k-SAT and High Order Parity Checks to a Chimera Graph
Chancellor, N.; Zohren, S.; Warburton, P. A.; Benjamin, S. C.; Roberts, S.
2016-01-01
We demonstrate a direct mapping of max k-SAT problems (and weighted max k-SAT) to a Chimera graph, which is the non-planar hardware graph of the devices built by D-Wave Systems Inc. We further show that this mapping can be used to map a similar class of maximum satisfiability problems where the clauses are replaced by parity checks over potentially large numbers of bits. The latter is of specific interest for applications in decoding for communication. We discuss an example in which the decoding of a turbo code, which has been demonstrated to perform near the Shannon limit, can be mapped to a Chimera graph. The weighted max k-SAT problem is the most general class of satisfiability problems, so our result effectively demonstrates how any satisfiability problem may be directly mapped to a Chimera graph. Our methods faithfully reproduce the low energy spectrum of the target problems, and therefore may also be used for maximum entropy inference. PMID:27857179
NASA Astrophysics Data System (ADS)
Mihálka, Zsuzsanna É.; Surján, Péter R.
2017-12-01
The method of analytic continuation is applied to estimate eigenvalues of linear operators from finite order results of perturbation theory, even in cases when the latter is divergent. Given a finite number of terms E(k), k = 1, 2, ..., M, resulting from a Rayleigh-Schrödinger perturbation calculation, scaling these numbers by μ^k (μ being the perturbation parameter) we form the sum E(μ) = ∑_k μ^k E(k) for small μ values for which the finite series is convergent to a certain numerical accuracy. Extrapolating the function E(μ) to μ = 1 yields an estimation of the exact solution of the problem. For divergent series, this procedure may serve as a resummation tool provided the perturbation problem has a nonzero radius of convergence. As illustrations, we treat the anharmonic (quartic) oscillator and an example from the many-electron correlation problem.
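The procedure can be demonstrated numerically on a toy series: take coefficients E(k) = (-2)^k, whose sum has radius of convergence 1/2 and so diverges at μ = 1 (the underlying function -2μ/(1+2μ) equals -2/3 there), evaluate partial sums only at small μ, fit a low-order rational function, and extrapolate to μ = 1. The rational ansatz stands in for whatever extrapolant the authors use; the series is invented for the demonstration.

```python
import numpy as np

# toy "perturbation series": E(k) = (-2)**k; exact E(mu) = -2*mu/(1 + 2*mu)
M = 40
coeff = [(-2.0) ** k for k in range(1, M + 1)]

mus = np.linspace(0.05, 0.30, 20)   # small mu: partial sums converge here
E = np.array([sum(c * mu ** k for k, c in enumerate(coeff, start=1))
              for mu in mus])

# rational (Pade-like) fit r(mu) = (a0 + a1*mu)/(1 + b1*mu), linearised as
#   a0 + a1*mu - b1*mu*E = E   (a least-squares problem in a0, a1, b1)
A = np.column_stack([np.ones_like(mus), mus, -mus * E])
a0, a1, b1 = np.linalg.lstsq(A, E, rcond=None)[0]

E_extrapolated = (a0 + a1) / (1 + b1)   # evaluate the fit at mu = 1
```

The fitted rational function recovers the exact value -2/3 to high accuracy even though the raw series diverges at μ = 1.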
A Simple Algorithm for the Metric Traveling Salesman Problem
NASA Technical Reports Server (NTRS)
Grimm, M. J.
1984-01-01
An algorithm was designed for a wire list net sort problem; to solve it, a branch and bound algorithm for the metric traveling salesman problem is presented. The algorithm is a best bound first recursive descent where the bound is based on the triangle inequality. The bounded subsets are defined by the relative order of the first K of the N cities (i.e., a K-city subtour). When K equals N, the bound is the length of the tour. The algorithm is implemented as a one page subroutine written in the C programming language for the VAX 11/750. Average execution times for randomly selected planar points using the Euclidean metric are 0.01, 0.05, 0.42, and 3.13 seconds for ten, fifteen, twenty, and twenty-five cities, respectively. Maximum execution times for a hundred cases are less than eleven times the averages. The speed of the algorithm is due to an initial ordering algorithm that is an N squared operation. The algorithm also solves the related problem where the tour does not return to the starting city and the starting and/or ending cities may be specified. It is possible to extend the algorithm to solve a nonsymmetric problem satisfying the triangle inequality.
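The bounding idea generalizes neatly: by the triangle inequality, the length of the cycle through the first K chosen cities (in their relative order) is a lower bound for every tour that extends them, so a branch can be pruned as soon as that subtour already exceeds the best tour found. The sketch below is a plain depth-first variant, omitting the paper's best-bound-first ordering and initial N² sort, so it is slower but shows the pruning rule.

```python
import math

def cycle_len(order, d):
    """Length of the closed cycle visiting `order` in sequence."""
    return sum(d[order[i]][order[(i + 1) % len(order)]]
               for i in range(len(order)))

def metric_tsp(d):
    """Branch and bound for metric TSP; bound = K-city subtour length."""
    n = len(d)
    best = [math.inf, None]

    def extend(sub):
        bound = cycle_len(sub, d)   # valid lower bound only for metric d
        if bound >= best[0]:
            return                  # prune: no completion can beat `best`
        if len(sub) == n:
            best[0], best[1] = bound, sub[:]
            return
        for c in range(n):
            if c not in sub:
                extend(sub + [c])

    extend([0])                     # fix city 0 to break rotational symmetry
    return best[0], best[1]
```

For a handful of random planar points the result matches exhaustive enumeration of all tours.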
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, C; Noid, G; Dalah, E
2015-06-15
Purpose: It has been reported recently that the change of CT number (CTN) during and after radiation therapy (RT) may be used to assess RT response. The purpose of this work is to develop a tool to automatically segment the regions with differentiating CTN and/or with change of CTN in a series of CTs. Methods: A software tool was developed to identify regions with differentiating CTN using K-means clustering of CT numbers and to automatically delineate these regions using a convex hull enclosing method. Pre- and post-RT CT, PET, or MRI images acquired for sample lung and pancreatic cancer cases were used to test the software tool. K-means clustering of CT numbers within the gross tumor volumes (GTVs) delineated based on PET SUV (standardized uptake value of fludeoxyglucose) and/or the MRI ADC (apparent diffusion coefficient) map was analyzed. The cluster centers with higher value were considered as active tumor volumes (ATVs). The convex hull contours enclosing preset clusters were used to delineate these ATVs with color-washed displays. The CTN-defined ATVs were compared with the SUV- or ADC-defined ATVs. Results: CTN stability of the CT scanner used to acquire the CTs in this work is less than 1.5 Hounsfield Units (HU) of variation annually. K-means cluster centers in the GTV differ by ~20 HU, much larger than the variation due to CTN stability, for the lung cancer cases studied. The Dice coefficient between the ATVs delineated based on convex hull enclosure of high-CTN centers and the PET-defined GTVs based on an SUV cutoff value of 2.5 was 90(±5)%. Conclusion: A software tool was developed using K-means clustering and convex hull contours to automatically segment high-CTN regions which may not be identifiable using a simple threshold method. These CTN regions reasonably overlapped with the PET- or MRI-defined GTVs.
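The clustering step can be sketched with a two-cluster Lloyd's algorithm on scalar CT numbers: initialize the centers at the minimum and maximum HU values, alternate assignment and mean-update until convergence, then flag voxels nearest the higher center as the candidate active-tumor region. The HU values below are synthetic; this is an assumed illustration of the approach, not the authors' tool.

```python
def kmeans_1d(values, iters=100):
    """Two-cluster Lloyd's algorithm on scalar values (deterministic init)."""
    centers = [float(min(values)), float(max(values))]
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            # assign each value to the nearest of the two centers
            groups[0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1].append(v)
        new = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
        if new == centers:
            break                 # converged
        centers = new
    return centers                # [low-CTN center, high-CTN center]

def high_ctn_mask(values, centers):
    """Voxels nearer the high cluster center (candidate active tumor)."""
    return [abs(v - centers[1]) < abs(v - centers[0]) for v in values]
```

A real pipeline would then wrap the flagged voxels in a convex hull contour, as the abstract describes.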
Using Technology To Enhance Problem Solving and Critical Thinking Skills.
ERIC Educational Resources Information Center
Mingus, Tabitha; Grassl, Richard
1997-01-01
Secondary mathematics teachers participated in a problem-solving course in which technology became a means to develop as teachers and as problem solvers. Findings indicate a delineation between technical competence and metatechnology--thinking about how and when to apply technology to particular problems. (PVD)
Bakhshipour, Zeinab; Huat, Bujang B K; Ibrahim, Shaharin; Asadi, Afshin; Kura, Nura Umar
2013-01-01
This work describes the application of the electrical resistivity (ER) method to delineating subsurface structures and cavities in Kuala Lumpur Limestone within the Batu Cave area of Selangor Darul Ehsan, Malaysia. In all, 17 ER profiles were measured by using a Wenner electrode configuration with 2 m spacing. The field survey was accompanied by laboratory work, which involves taking resistivity measurements of rock, soil, and water samples taken from the field to obtain the formation factor. The relationship between resistivity and the formation factor and porosity for all the samples was established. The porosity values were plotted and contoured. A 2-dimensional and 3-dimensional representation of the subsurface topography of the area was prepared through use of commercial computer software. The results show the presence of cavities and sinkholes in some parts of the study area. This work could help engineers and environmental managers by providing the information necessary to produce a sustainable management plan in order to prevent catastrophic collapses of structures and other related geohazard problems.
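The formation-factor-to-porosity step can be made concrete with Archie's law, F = ρ_bulk/ρ_water = a·φ^(−m), which relates the measured formation factor to porosity. The cementation parameters a and m below are generic assumed values, not the paper's calibrated ones.

```python
def porosity_from_resistivity(rho_bulk, rho_water, a=1.0, m=2.0):
    """Archie's law: formation factor F = rho_bulk / rho_water = a * phi**(-m),
    solved for porosity phi. Parameters a, m are assumed, rock-dependent."""
    F = rho_bulk / rho_water
    return (a / F) ** (1.0 / m)
```

For example, a bulk resistivity of 250 ohm-m with 10 ohm-m pore water gives F = 25 and, with a = 1 and m = 2, a porosity of 0.2.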
Bakhshipour, Zeinab; Huat, Bujang B. K.; Ibrahim, Shaharin; Asadi, Afshin
2013-01-01
This work describes the application of the electrical resistivity (ER) method to delineating subsurface structures and cavities in Kuala Lumpur Limestone within the Batu Cave area of Selangor Darul Ehsan, Malaysia. In all, 17 ER profiles were measured by using a Wenner electrode configuration with 2 m spacing. The field survey was accompanied by laboratory work, which involves taking resistivity measurements of rock, soil, and water samples taken from the field to obtain the formation factor. The relationship between resistivity and the formation factor and porosity for all the samples was established. The porosity values were plotted and contoured. A 2-dimensional and 3-dimensional representation of the subsurface topography of the area was prepared through use of commercial computer software. The results show the presence of cavities and sinkholes in some parts of the study area. This work could help engineers and environmental managers by providing the information necessary to produce a sustainable management plan in order to prevent catastrophic collapses of structures and other related geohazard problems. PMID:24501583
A swallowtail catastrophe model for the emergence of leadership in coordination-intensive groups.
Guastello, Stephen J; Bond, Robert W
2007-04-01
This research extended the previous studies concerning the swallowtail catastrophe model for leadership emergence to coordination-intensive groups. Thirteen 4-person groups composed of undergraduates played an Intersection coordination (card game) task and were allowed to talk while performing it; 13 other groups worked nonverbally. A questionnaire measured leadership emergence at the end of the game along with other social contributions to the groups' efforts. The swallowtail catastrophe model that was evident in previous leadership emergence phenomena in creative problem solving and production groups was found here also. All three control parameters were identified: a general participation variable that was akin to K in the rugged landscape model of self-organization, task control, and whether the groups worked verbally or nonverbally. Several new avenues for future research were delineated.
Teacher Performance Appraisal in Thailand: Poison or Panacea?
ERIC Educational Resources Information Center
Pimpa, Nattavud
2005-01-01
This research focuses on the examination of problems related to the national teacher performance appraisal system by the Thai Ministry of Education. It highlights major problems of the current performance appraisal system by delineating the weaknesses and pitfalls of the current appraisal system. The findings indicate problems in three major…
Using high hydraulic conductivity nodes to simulate seepage lakes
Anderson, Mary P.; Hunt, Randall J.; Krohelski, James T.; Chung, Kuopo
2002-01-01
In a typical ground water flow model, lakes are represented by specified head nodes requiring that lake levels be known a priori. To remove this limitation, previous researchers assigned high hydraulic conductivity (K) values to nodes that represent a lake, under the assumption that the simulated head at the nodes in the high-K zone accurately reflects lake level. The solution should also produce a constant water level across the lake. We developed a model of a simple hypothetical ground water/lake system to test whether solutions using high-K lake nodes are sensitive to the value of K selected to represent the lake. Results show that the larger the contrast between the K of the aquifer and the K of the lake nodes, the smaller the error tolerance required for the solution to converge. For our test problem, a contrast of three orders of magnitude produced a head difference across the lake of 0.005 m under a regional gradient of the order of 10^-3 m/m, while a contrast of four orders of magnitude produced a head difference of 0.001 m. The high-K method was then used to simulate lake levels in Pretty Lake, Wisconsin. Results for both the hypothetical system and the application to Pretty Lake compared favorably with results using a lake package developed for MODFLOW (Merritt and Konikow 2000). While our results demonstrate that the high-K method accurately simulates lake levels, this method has more cumbersome postprocessing and longer run times than the same problem simulated using the lake package.
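A 1-D finite-difference sketch reproduces the qualitative sensitivity result above: the larger the K contrast between "lake" nodes and the aquifer, the smaller the simulated head difference across the lake. The grid, boundary heads, and contrast values are invented for the demonstration and are not the paper's model.

```python
import numpy as np

def heads_1d(K, h_left, h_right):
    """Steady 1-D groundwater flow, unit grid spacing, Dirichlet end heads."""
    n = len(K)
    # interface conductances: harmonic means of neighbouring cell K values
    C = [2.0 * K[i] * K[i + 1] / (K[i] + K[i + 1]) for i in range(n - 1)]
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = h_left, h_right
    for i in range(1, n - 1):     # mass balance at interior nodes
        A[i, i - 1], A[i, i + 1] = C[i - 1], C[i]
        A[i, i] = -(C[i - 1] + C[i])
    return np.linalg.solve(A, b)

K_aquifer = 1.0
head_drops = []
for contrast in (1e3, 1e4):
    # 10 aquifer cells, 10 high-K "lake" cells, 10 aquifer cells
    K = [K_aquifer] * 10 + [K_aquifer * contrast] * 10 + [K_aquifer] * 10
    h = heads_1d(K, 10.0, 0.0)
    head_drops.append(h[10] - h[19])   # head difference across the "lake"
```

Raising the contrast from three to four orders of magnitude shrinks the head difference across the lake by about a factor of ten, mirroring the trend reported in the abstract.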
Project Thrive. Ways and Means: Strategies for Solving Classroom Problems. Volume I.
ERIC Educational Resources Information Center
Richards, Merle; Biemiller, Andrew
Strategies are delineated for solving elementary school classroom problems. After an introductory chapter, chapter 2 reviews problems cited by 24 kindergarten, Grade 1, and Grade 2 teachers and the strategies chosen as likely solutions to the problems. Strategies later found to be unsuccessful are discussed if they illustrate the nature of the…
NASA Astrophysics Data System (ADS)
Pratavieira, S.; Santos, P. L. A.; Bagnato, V. S.; Kurachi, C.
2009-06-01
Oral and skin cancers constitute a major global health problem with great impact on patients. The most common screening method for oral cancer is visual inspection and palpation of the mouth. Visual examination relies heavily on the experience and skills of the physician to identify and delineate early premalignant and cancerous changes, which is not simple due to the similar characteristics of early stage cancers and benign lesions. Optical imaging has the potential to address these clinical challenges. Contrast between normal and neoplastic areas may be increased, relative to conventional white light, by using appropriate illumination and detection conditions. Reflectance imaging can detect local changes in tissue scattering and absorption, and fluorescence imaging can probe changes in the biochemical composition; these changes have been shown to be indicative of malignant progression. Widefield optical imaging systems are interesting because they may enhance the screening ability in large regions, allowing the discrimination and delineation of neoplastic and potentially occult lesions. Digital image processing allows the combination of autofluorescence and reflectance images in order to objectively identify and delineate the peripheral extent of neoplastic lesions in the skin and oral cavity. Combining information from different imaging modalities has the potential of increasing diagnostic performance, due to the distinct information each provides. A simple widefield imaging device based on fluorescence and reflectance modes, together with digital image processing, was assembled and its performance tested in an animal study.
NASA Astrophysics Data System (ADS)
Albrecht, F.; Hölbling, D.; Friedl, B.
2017-09-01
Landslide mapping benefits from the ever increasing availability of Earth Observation (EO) data resulting from programmes like the Copernicus Sentinel missions and improved infrastructure for data access. However, there arises the need for improved automated landslide information extraction processes from EO data while the dominant method is still manual delineation. Object-based image analysis (OBIA) provides the means for the fast and efficient extraction of landslide information. To prove its quality, automated results are often compared to manually delineated landslide maps. Although there is awareness of the uncertainties inherent in manual delineations, there is a lack of understanding how they affect the levels of agreement in a direct comparison of OBIA-derived landslide maps and manually derived landslide maps. In order to provide an improved reference, we present a fuzzy approach for the manual delineation of landslides on optical satellite images, thereby making the inherent uncertainties of the delineation explicit. The fuzzy manual delineation and the OBIA classification are compared by accuracy metrics accepted in the remote sensing community. We have tested this approach for high resolution (HR) satellite images of three large landslides in Austria and Italy. We were able to show that the deviation of the OBIA result from the manual delineation can mainly be attributed to the uncertainty inherent in the manual delineation process, a relevant issue for the design of validation processes for OBIA-derived landslide maps.
Magnetic refrigeration for low-temperature applications
NASA Technical Reports Server (NTRS)
Barclay, J. A.
1985-01-01
The application of refrigeration at low temperatures, ranging from production of liquid helium for medical imaging systems to cooling of infrared sensors on surveillance satellites, is discussed. Cooling below about 15 K with regenerative refrigerators is difficult because of the decreasing thermal mass of the regenerator compared to that of the working material. In order to overcome this difficulty with helium gas as the working material, a heat exchanger plus a Joule-Thomson or other expander is used. Regenerative magnetic refrigerators with magnetic solids as the working material have the same regenerator problem as gas refrigerators. This problem provides motivation for the development of nonregenerative magnetic refrigerators that span approximately 1 K to approximately 20 K. Particular emphasis is placed on high reliability and high efficiency, and calculations indicate considerable promise in this area. The principles, the potential, the problems, and the progress towards development of successful 4 to 20 K magnetic refrigerators are discussed.
A 3-D turbulent flow analysis using finite elements with k-ɛ model
NASA Astrophysics Data System (ADS)
Okuda, H.; Yagawa, G.; Eguchi, Y.
1989-03-01
This paper describes the finite element turbulent flow analysis, which is suitable for three-dimensional large scale problems. The k-ɛ turbulence model as well as the conservation equations of mass and momentum are discretized in space using rather low order elements. Resulting coefficient matrices are evaluated by one-point quadrature in order to reduce the computational storage and the CPU cost. The time integration scheme based on the velocity correction method is employed to obtain steady state solutions. For the verification of this FEM program, two-dimensional plenum flow is simulated and compared with experiment. As the application to three-dimensional practical problems, the turbulent flows in the upper plenum of the fast breeder reactor are calculated for various boundary conditions.
Why Inquiry Is Inherently Difficult...and Some Ways to Make It Easier
ERIC Educational Resources Information Center
Meyer, Daniel Z.; Avery, Leanne M.
2010-01-01
In this article, the authors offer a framework that identifies two critical problems in designing inquiry-based instruction and suggests three models for developing instruction that overcomes those problems. The Protocol Model overcomes the Getting on Board Problem by providing students an initial experience through clearly delineated steps with a…
The design and simulation of UHF RFID microstrip antenna
NASA Astrophysics Data System (ADS)
Chen, Xiangqun; Huang, Rui; Shen, Liman; Liu, Liping; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Renheng, Xu
2018-02-01
At present, China has delineated the UHF RFID communication frequency ranges as 840-845 MHz and 920-925 MHz, but most UHF microstrip antennas do not comply with this standard, which leads to radio-frequency pollution. To solve this problem, a method combining theory and simulation is adopted. Using a new ceramic material, a 922.5 MHz RFID microstrip antenna is designed, optimized and simulated with HFSS software. The results show that the VSWR of this RFID microstrip antenna is relatively small in the vicinity of 922.5 MHz and the gain is 2.1 dBi, so the antenna can be widely used in China's UHF RFID communication equipment.
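As an aside, the VSWR figure of merit quoted above relates to the antenna's reflection coefficient by the standard formula VSWR = (1 + |Γ|)/(1 − |Γ|). A minimal sketch; the S11 value below is a hypothetical example, not a number from the paper:

```python
def vswr_from_s11_db(s11_db: float) -> float:
    """Voltage standing-wave ratio from a return loss (S11) given in dB."""
    gamma = 10 ** (s11_db / 20)  # |reflection coefficient|
    return (1 + gamma) / (1 - gamma)

# A well-matched antenna (S11 = -20 dB) has a VSWR close to 1.
print(round(vswr_from_s11_db(-20.0), 3))
```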
Off-Shell Persistence of Composite Pions and Kaons
Qin, Si -Xue; Chen, Chen; Mezrag, Cedric; ...
2018-01-17
In order for a Sullivan-like process to provide reliable access to a meson target as t becomes spacelike, the pole associated with that meson should remain the dominant feature of the quark-antiquark scattering matrix and the wave function describing the related correlation must evolve slowly and smoothly. Using continuum methods for the strong-interaction bound-state problem, we explore and delineate the circumstances under which these conditions are satisfied: for the pion, this requires -t ≲ 0.6 GeV², whereas -t ≲ 0.9 GeV² will suffice for the kaon. Furthermore, these results should prove useful in evaluating the potential of numerous experiments at existing and proposed facilities.
A new method of cardiographic image segmentation based on grammar
NASA Astrophysics Data System (ADS)
Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed H.; Alimi, Adel M.
2011-10-01
The measurement of the most common ultrasound parameters, such as aortic area, mitral area and left ventricle (LV) volume, requires the delineation of the organ in order to estimate the area. In terms of medical image processing this translates into the need to segment the image and define the contours as accurately as possible. The aim of this work is to segment an image and make an automated area estimation based on grammar. The entity "language" will be projected to the entity "image" to perform structural analysis and parsing of the image. We will show how the idea of segmentation and grammar-based area estimation is applied to real problems of cardio-graphic image processing.
Time Investment in Drug Supply Problems by Flemish Community Pharmacies.
De Weerdt, Elfi; Simoens, Steven; Casteels, Minne; Huys, Isabelle
2017-01-01
Introduction: Drug supply problems are a known problem for pharmacies. Community and hospital pharmacies do everything they can to minimize impact on patients. This study aims to quantify the time spent by Flemish community pharmacies on drug supply problems. Materials and Methods: During 18 weeks, employees of 25 community pharmacies filled in a template with the total time spent on drug supply problems. The template stated all the steps community pharmacies could undertake to manage drug supply problems. Results: Considering the median over the study period, the median time spent on drug supply problems was 25 min per week, with a minimum of 14 min per week and a maximum of 38 min per week. After calculating the median of each pharmacy, large differences were observed between pharmacies: about 25% spent less than 15 min per week and one-fifth spent more than 1 h per week. The steps on which community pharmacists spent most time are: (i) "check missing products from orders," (ii) "contact wholesaler/manufacturers regarding potential drug shortages," and (iii) "communicating to patients." These three steps account for about 50% of the total time spent on drug supply problems during the study period. Conclusion: Community pharmacies spend about half an hour per week on drug supply problems. Although 25 min per week does not seem that much, the time spent is not delineated and community pharmacists are constantly confronted with drug supply problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Guoping; Zheng, Chunmiao
Two biodegradation models are developed to represent natural attenuation of fuel-hydrocarbon contaminants as observed in a comprehensive natural-gradient tracer test in a heterogeneous aquifer on the Columbus Air Force Base in Mississippi, USA. The first, a first-order mass loss model, describes the irreversible losses of BTEX and its individual components, i.e., benzene (B), toluene (T), ethyl benzene (E), and xylene (X). The second, a reactive pathway model, describes sequential degradation pathways for BTEX utilizing multiple electron acceptors, including oxygen, nitrate, iron and sulfate, and via methanogenesis. The heterogeneous aquifer is represented by multiple hydraulic conductivity (K) zones delineated on the basis of numerous flowmeter K measurements. A direct propagation artificial neural network (DPN) is used as an inverse modeling tool to estimate the biodegradation rate constants associated with each of the K zones. In both the mass loss model and the reactive pathway model, the biodegradation rate constants show an increasing trend with the hydraulic conductivity. The finding of correlation between biodegradation kinetics and hydraulic conductivity distributions is of general interest and relevance to characterization and modeling of natural attenuation of hydrocarbons in other petroleum-product contaminated sites.
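The first-order mass loss model mentioned above has a simple closed-form solution; a minimal sketch (the rate constant and concentrations are made up for illustration, not taken from the study):

```python
import math

def first_order_loss(c0: float, k: float, t: float) -> float:
    """First-order irreversible mass loss: dC/dt = -k*C gives C(t) = C0*exp(-k*t)."""
    return c0 * math.exp(-k * t)

# With rate constant k, the concentration halves every ln(2)/k time units.
half_life = math.log(2) / 0.05
print(round(first_order_loss(100.0, 0.05, half_life), 9))
```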
Permeability profiles in granular aquifers using flowmeters in direct-push wells.
Paradis, Daniel; Lefebvre, René; Morin, Roger H; Gloaguen, Erwan
2011-01-01
Numerical hydrogeological models should ideally be based on the spatial distribution of hydraulic conductivity (K), a property rarely defined on the basis of sufficient data due to the lack of efficient characterization methods. Electromagnetic borehole flowmeter measurements during pumping in uncased wells can effectively provide a continuous vertical distribution of K in consolidated rocks. However, relatively few studies have used the flowmeter in screened wells penetrating unconsolidated aquifers, and tests conducted in gravel-packed wells have shown that flowmeter data may yield misleading results. This paper describes the practical application of flowmeter profiles in direct-push wells to measure K and delineate hydrofacies in heterogeneous unconsolidated aquifers having low-to-moderate K (10(-6) to 10(-4) m/s). The effect of direct-push well installation on K measurements in unconsolidated deposits is first assessed based on the previous work indicating that such installations minimize disturbance to the aquifer fabric. The installation and development of long-screen wells are then used in a case study validating K profiles from flowmeter tests at high-resolution intervals (15 cm) with K profiles derived from multilevel slug tests between packers at identical intervals. For 119 intervals tested in five different wells, the difference in log K values obtained from the two methods is consistently below 10%. Finally, a graphical approach to the interpretation of flowmeter profiles is proposed to delineate intervals corresponding to distinct hydrofacies, thus providing a method whereby both the scale and magnitude of K contrasts in heterogeneous unconsolidated aquifers may be represented. Journal compilation © 2010 National Ground Water Association. No claim to original US government works.
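A common interpretation scheme behind such flowmeter K profiles scales the well-average K by each layer's share of inflow per unit thickness. The sketch below illustrates that standard scheme, not necessarily the authors' exact procedure; all numbers are hypothetical:

```python
def layer_conductivity(delta_q: float, delta_z: float,
                       q_total: float, screen_length: float,
                       k_mean: float) -> float:
    """Layer K from flowmeter data: scale the well-average K by the layer's
    inflow per unit thickness relative to the well-average inflow per unit
    thickness (a standard flowmeter interpretation)."""
    return k_mean * (delta_q / delta_z) / (q_total / screen_length)

# A 15 cm interval taking 3% of total inflow along a 5 m screen,
# in a well with an average K of 1e-5 m/s:
print(layer_conductivity(0.03, 0.15, 1.0, 5.0, 1e-5))
```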
363. A.C.S., Delineator March 1934 STATE OF CALIFORNIA; DEPARTMENT OF ...
363. A.C.S., Delineator March 1934 STATE OF CALIFORNIA; DEPARTMENT OF PUBLIC WORKS; SAN FRANCISCO - OAKLAND BAY BRIDGE; CONTRACT NO. 6A; SUPERSTRUCTURE - WEST BAY CROSSING; SAN FRANCISCO ANCHORAGE; AMERICAN BRIDGE CO.; AMBRIDGE PLANT; ORDER NO. G4866; SHEET NO E3 - San Francisco Oakland Bay Bridge, Spanning San Francisco Bay, San Francisco, San Francisco County, CA
NASA Astrophysics Data System (ADS)
Abo-Ezz, E. R.; Essa, K. S.
2016-04-01
A new linear least-squares approach is proposed to interpret magnetic anomalies of buried structures by using a new magnetic anomaly formula. This approach depends on solving different sets of algebraic linear equations in order to invert the depth (z), amplitude coefficient (K), and magnetization angle (θ) of buried structures from magnetic data. The utility and validity of the new approach have been demonstrated on various reliable synthetic data sets with and without noise. In addition, the method has been applied to field data sets from the USA and India. The best-fitted anomaly has been delineated by estimating the root-mean-square (rms) error, and the approach is judged by comparing the obtained results with other available geological or geophysical information.
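The linear least-squares step described above amounts to solving an overdetermined linear system and checking the rms misfit. A toy sketch with a model that is linear in its unknowns; the symbols K and z below are placeholders for illustration, not the paper's actual parametrization:

```python
# Hypothetical noise-free "anomaly" from a model y = K + z*x that is
# linear in its unknowns, fitted by closed-form least squares.
xs = [float(i) for i in range(-5, 6)]
ys = [2.0 - 0.5 * x for x in xs]

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
z = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope estimate
K = (sy - z * sx) / n                          # intercept estimate
rms = (sum((K + z * x - y) ** 2 for x, y in zip(xs, ys)) / n) ** 0.5
print(round(K, 3), round(z, 3), round(rms, 6))
```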
Analysis and optimization of cyclic methods in orbit computation
NASA Technical Reports Server (NTRS)
Pierce, S.
1973-01-01
The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.
Problems of the Child's Mental Development.
ERIC Educational Resources Information Center
Davydov, V. V.
1988-01-01
Chapter two from V. V. Davydov's "Problems of Developmental Teaching" (1986) is excerpted. Analyzes Piagetian theory, finding it neglects the influence of teaching and child rearing on mental development. Builds from Lev S. Vygotsky's theories to delineate stages in personality and mental development during childhood. (CH)
No-Go Theorem for k-Essence Dark Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonvin, Camille; Caprini, Chiara; Durrer, Ruth
We demonstrate that if k-essence can solve the coincidence problem and play the role of dark energy in the Universe, the fluctuations of the field have to propagate superluminally at some stage. We argue that this implies that successful k-essence models violate causality. It is not possible to define a time ordered succession of events in a Lorentz invariant way. Therefore, k-essence cannot arise as a low energy effective field theory of a causal, consistent high energy theory.
Belli, Maria Luisa; Mori, Martina; Broggi, Sara; Cattaneo, Giovanni Mauro; Bettinardi, Valentino; Dell'Oca, Italo; Fallanca, Federico; Passoni, Paolo; Vanoli, Emilia Giovanna; Calandrino, Riccardo; Di Muzio, Nadia; Picchio, Maria; Fiorino, Claudio
2018-05-01
To investigate the robustness of PET radiomic features (RF) against tumour delineation uncertainty in two clinically relevant situations. Twenty-five head-and-neck (HN) and 25 pancreatic cancer patients previously treated with 18 F-Fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT)-based planning optimization were considered. Seven FDG-based contours were delineated for tumour (T) and positive lymph nodes (N, for HN patients only) following manual (2 observers), semi-automatic (based on SUV maximum gradient: PET_Edge) and automatic (40%, 50%, 60%, 70% SUV_max thresholds) methods. Seventy-three RF (14 of first order and 59 of higher order) were extracted using the CGITA software (v.1.4). The impact of delineation on volume agreement and RF was assessed by DICE and Intra-class Correlation Coefficients (ICC). A large disagreement between manual and SUV_max method was found for thresholds ≥50%. Inter-observer variability showed median DICE values between 0.81 (HN-T) and 0.73 (pancreas). Volumes defined by PET_Edge were better consistent with the manual ones compared to SUV40%. Regarding RF, 19%/19%/47% of the features showed ICC < 0.80 between observers for HN-N/HN-T/pancreas, mostly in the Voxel-alignment matrix and in the intensity-size zone matrix families. RFs with ICC < 0.80 against manual delineation (taking the worst value) increased to 44%/36%/61% for PET_Edge and to 69%/53%/75% for SUV40%. About 80%/50% of 72 RF were consistent between observers for HN/pancreas patients. PET_edge was sufficiently robust against manual delineation while SUV40% showed a worse performance. This result suggests the possibility to replace manual with semi-automatic delineation of HN and pancreas tumours in studies including PET radiomic analyses. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
A critical evaluation of two-equation models for near wall turbulence
NASA Technical Reports Server (NTRS)
Speziale, Charles G.; Abid, Ridha; Anderson, E. Clay
1990-01-01
A variety of two-equation turbulence models, including several versions of the K-epsilon model as well as the K-omega model, are analyzed critically for near-wall turbulent flows from a theoretical and computational standpoint. It is shown that the K-epsilon model has two major problems associated with it: the lack of natural boundary conditions for the dissipation rate and the appearance of higher-order correlations in the balance of terms for the dissipation rate at the wall. In so far as the former problem is concerned, either physically inconsistent boundary conditions have been used or the boundary conditions for the dissipation rate have been tied to higher-order derivatives of the turbulent kinetic energy, which leads to numerical stiffness. The K-omega model can alleviate these problems since the asymptotic behavior of omega is known in more detail and since its near-wall balance involves only exact viscous terms. However, the modeled form of the omega equation that is used in the literature is incomplete: an exact viscous term is missing, which causes the model to behave in an asymptotically inconsistent manner. By including this viscous term and by introducing new wall damping functions with improved asymptotic behavior, a new K-tau model (where tau ≡ 1/omega is a turbulent time scale) is developed. It is demonstrated that this new model is computationally robust and yields improved predictions for turbulent boundary layers.
Statistical dynamo theory: Mode excitation.
Hoyng, P
2009-04-01
We compute statistical properties of the lowest-order multipole coefficients of the magnetic field generated by a dynamo of arbitrary shape. To this end we expand the field in a complete biorthogonal set of base functions, viz. B = Σ_k a_k(t) b_k(r). The properties of these biorthogonal function sets are treated in detail. We consider a linear problem, and the statistical properties of the fluid flow are supposed to be given. The turbulent convection may have an arbitrary distribution of spatial scales. The time evolution of the expansion coefficients a_k is governed by a stochastic differential equation from which we infer their averages ⟨a_k⟩, autocorrelation functions ⟨a_k(t) a_k*(t+τ)⟩, and an equation for the cross correlations ⟨a_k a_l*⟩. The eigenfunctions of the dynamo equation (with eigenvalues λ_k) turn out to be a preferred set in terms of which our results assume their simplest form. The magnetic field of the dynamo is shown to consist of transiently excited eigenmodes whose frequency and coherence time are given by Im λ_k and -1/Re λ_k, respectively. The relative rms excitation level of the eigenmodes, and hence the distribution of magnetic energy over spatial scales, is determined by linear theory. An expression is derived for ⟨|a_k|²⟩/⟨|a_0|²⟩ in case the fundamental mode b_0 has a dominant amplitude, and we outline how this expression may be evaluated. It is estimated that ⟨|a_k|²⟩/⟨|a_0|²⟩ ≈ 1/N, where N is the number of convective cells in the dynamo. We show that the old problem of a short correlation time (or first-order smoothing approximation) has been partially eliminated. Finally we prove that for a simple statistically steady dynamo with finite resistivity all eigenvalues obey Re λ_k < 0.
NASA Astrophysics Data System (ADS)
Agrawal, Ritu; Sharma, Manisha; Singh, Bikesh Kumar
2018-04-01
Manual segmentation and analysis of lesions in medical images is time-consuming and subject to human error. Automated segmentation has thus gained significant attention in recent years. This article presents a hybrid approach for brain lesion segmentation in different imaging modalities by combining a median filter, k-means clustering, Sobel edge detection and morphological operations. The median filter is an essential pre-processing step used to remove impulsive noise from the acquired brain images; it is followed by k-means segmentation, Sobel edge detection and morphological processing. The performance of the proposed automated system is tested on standard datasets using performance measures such as segmentation accuracy and execution time. The proposed method achieves a high accuracy of 94% when compared with manual delineation performed by an expert radiologist. Furthermore, statistical significance tests between lesions segmented using the automated approach and by expert delineation, using ANOVA and the correlation coefficient, achieved high significance values of 0.986 and 1 respectively. The experimental results obtained are discussed in light of some recently reported studies.
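A minimal, dependency-free sketch of the intensity k-means step of such a pipeline (two clusters, Lloyd's algorithm); this is a toy stand-in for illustration, not the authors' implementation:

```python
def kmeans_intensities(pixels, iters=25):
    """Two-cluster k-means (Lloyd's algorithm) on scalar pixel intensities.
    Returns one label per pixel; label 1 marks the brighter cluster."""
    # Deterministic initial centres: darkest and brightest intensity.
    c = [float(min(pixels)), float(max(pixels))]
    for _ in range(iters):
        groups = ([], [])
        for p in pixels:
            # bool indexes the tuple: False -> cluster 0, True -> cluster 1
            groups[abs(p - c[0]) > abs(p - c[1])].append(p)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return [int(abs(p - c[0]) > abs(p - c[1])) for p in pixels]

# Bright ("lesion-like") pixels separate cleanly from the dark background.
print(kmeans_intensities([10, 12, 11, 200, 198, 13, 205]))
```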
Flat-panel-detector chest radiography: effect of tube voltage on image quality.
Uffmann, Martin; Neitzel, Ulrich; Prokop, Mathias; Kabalan, Nahla; Weber, Michael; Herold, Christian J; Schaefer-Prokop, Cornelia
2005-05-01
To compare the visibility of anatomic structures in direct-detector chest radiographs acquired with different tube voltages at equal effective doses to the patient. The study protocol was approved by the institutional internal review board, and written informed consent was obtained from all patients. Posteroanterior chest radiographs of 48 consecutively selected patients were obtained at 90, 121, and 150 kVp by using a flat-panel-detector unit that was based on cesium iodide technology and automated exposure control. Monte Carlo simulations were used to verify that the effective dose for all kilovoltage settings was equal. Five radiologists subjectively and independently rated the delineation of anatomic structures on hard-copy images by using a five-point scale. They also ranked image quality in a blinded side-by-side comparison. Average ranking scores were compared by using one-way analysis of variance with repeated measures. Data were analyzed for the entire patient group and for two patient subgroups that were formed according to body mass index (BMI). The visibility scores of most anatomic structures were significantly superior with the 90-kVp images (mean score, 3.11), followed by the 121-kVp (mean score, 2.95) and 150-kVp images (mean score, 2.80). Differences did not reach significance (P > .05) only for the delineation of the peripheral vessels, the heart contours, and the carina. This was also true for the subgroup of patients (n = 24) with a BMI greater than and the subgroup of patients (n = 24) with a BMI less than the mean BMI (26.9 kg/m²). At side-by-side comparison, the readers rated 90-kVp images as having superior image quality in the majority of image triplets; the percentage of 90-kVp images rated as "first choice" ranged from 60% (29 of 48 patients) to 90% (43 of 48 patients), with a median of 88% (42 of 48 patients), among the readers.
Delineation of most anatomic structures and overall image quality were ranked superior in digital radiographs acquired with lower kilovoltage at a constant effective patient dose. (c) RSNA, 2005.
NASA Astrophysics Data System (ADS)
Rubeaux, Mathieu; Simon, Antoine; Gnep, Khemara; Colliaux, Jérémy; Acosta, Oscar; de Crevoisier, Renaud; Haigron, Pascal
2013-03-01
Image-Guided Radiation Therapy (IGRT) aims at increasing the precision of radiation dose delivery. In the context of prostate cancer, a planning Computed Tomography (CT) image with manually defined prostate and organs-at-risk (OAR) delineations is usually associated with daily Cone Beam Computed Tomography (CBCT) follow-up images. The CBCT images allow the prostate position to be visualized and the patient to be repositioned accordingly. They should also be used to evaluate the dose received by the organs at each fraction of the treatment. To do so, the first step is prostate and OAR segmentation on the daily CBCTs, which is very time-consuming. To simplify this task, CT-to-CBCT non-rigid registration can be used to propagate the original CT delineations to the CBCT images. To this end, we compared several non-rigid registration methods. They are all based on the Mutual Information (MI) similarity measure and use a B-spline transformation model, but we add different constraints to this global scheme in order to evaluate their impact on the final results. These algorithms are investigated on two real datasets, representing a total of 70 CBCTs on which a reference delineation has been produced. The evaluation is carried out using the Dice Similarity Coefficient (DSC) as a quality criterion. The experiments show that a rigid penalty term on the bones improves the final registration result, providing high-quality propagated delineations.
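The Dice Similarity Coefficient used as the quality criterion above is straightforward to compute on voxel sets: DSC = 2|A∩B|/(|A|+|B|). A small sketch; the voxel coordinates are made up:

```python
def dice(a, b) -> float:
    """Dice Similarity Coefficient between two voxel sets: 2|A∩B| / (|A|+|B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty delineations agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

manual = {(0, 0), (0, 1), (1, 0), (1, 1)}      # hypothetical reference delineation
propagated = {(0, 1), (1, 0), (1, 1), (2, 1)}  # hypothetical propagated contour
print(dice(manual, propagated))
```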
364. J.G.M., Delineator February 1934 STATE OF CALIFORNIA; DEPARTMENT OF ...
364. J.G.M., Delineator February 1934 STATE OF CALIFORNIA; DEPARTMENT OF PUBLIC WORKS; SAN FRANCISCO - OAKLAND BAY BRIDGE; CONTRACT NO. 6; SUPERSTRUCTURE - WEST BAY CROSSING; SAN FRANCISCO ANCHORAGE CABLE BENT CASTING; AMERICAN BRIDGE CO.; AMBRIDGE PLANT; ORDER NO. G 4852 C; SHEET NO. 100 - San Francisco Oakland Bay Bridge, Spanning San Francisco Bay, San Francisco, San Francisco County, CA
378. A.C.S., Delineator March 1933 STATE OF CALIFORNIA; DEPARTMENT OF ...
378. A.C.S., Delineator March 1933 STATE OF CALIFORNIA; DEPARTMENT OF PUBLIC WORKS; SAN FRANCISCO - OAKLAND BAY BRIDGE; CONTRACT NO. 6A; SUPERSTRUCTURE - WEST BAY CROSSING; YERBA BUENA ANCHORAGE & CABLE BENT. AMERICAN BRIDGE CO.; AMBRIDGE PLANT; ORDER NO. G 4866; SHEET NO. E4 - San Francisco Oakland Bay Bridge, Spanning San Francisco Bay, San Francisco, San Francisco County, CA
Approximating smooth functions using algebraic-trigonometric polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharapudinov, Idris I
2011-01-14
The problem under consideration is that of approximating classes of smooth functions by algebraic-trigonometric polynomials of the form p_n(t) + τ_m(t), where p_n(t) is an algebraic polynomial of degree n and τ_m(t) = a_0 + Σ_{k=1}^{m} (a_k cos kπt + b_k sin kπt) is a trigonometric polynomial of order m. The precise order of approximation by such polynomials in the classes W^r_∞(M) and an upper bound for similar approximations in the class W^r_p(M) with 4/3
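For concreteness, an algebraic-trigonometric polynomial of the stated form can be evaluated directly; a minimal sketch with invented coefficients:

```python
import math

def alg_trig(t, p_coeffs, a0, a, b):
    """Evaluate p_n(t) + tau_m(t), where p_n is an algebraic polynomial and
    tau_m(t) = a0 + sum_{k=1}^{m} (a_k cos(k*pi*t) + b_k sin(k*pi*t))."""
    p = sum(c * t ** i for i, c in enumerate(p_coeffs))
    tau = a0 + sum(
        a[k] * math.cos((k + 1) * math.pi * t) + b[k] * math.sin((k + 1) * math.pi * t)
        for k in range(len(a))
    )
    return p + tau

# Degree-1 algebraic part 1 + 2t plus an order-1 trigonometric part:
print(alg_trig(0.0, [1.0, 2.0], 0.5, [0.25], [0.0]))  # → 1 + 0.5 + 0.25 = 1.75
```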
Oehler, Christoph; Lang, Stephanie; Dimmerling, Peter; Bolesch, Christian; Kloeck, Stephan; Tini, Alessandra; Glanzmann, Christoph; Najafi, Yousef; Studer, Gabriela; Zwahlen, Daniel R
2014-11-11
To evaluate PTV margins for hypofractionated IGRT of the prostate comparing kV/kV imaging with CBCT. Between 2009 and 2012, 20 patients with low- (LR), intermediate- (IR) and high-risk (HR) prostate cancer were treated with VMAT in supine position with fiducial markers (FM), an endorectal balloon (ERB) and full bladder. CBCTs and kV/kV imaging were performed before and additional CBCTs after treatment to assess intra-fraction motion. CTV-P for 5 patients with LR and CTV-PSV for 5 patients with IR/HR prostate cancer were contoured independently by 3 radiation oncologists using MRI. The van Herk formula (PTV margin = 2.5Σ + 0.7σ) was applied to calculate PTV margins of the prostate/seminal vesicles (P/PSV) using CBCT or FM. 172 and 52 CBCTs before and after RT and 507 kV/kV images before RT were analysed. Differences between FM in CBCT or in planar kV image pairs were below 1 mm. Accounting for both random and systematic uncertainties, anisotropic PTV margins were 5-8 mm for P (LR) and 6-11 mm for PSV (IR/HR). Random uncertainties such as intra-fraction and inter-fraction (setup) uncertainties were of similar magnitude (0.9-1.4 mm). The largest uncertainty was introduced by CTV delineation (LR: 1-2 mm, IR/HR: 1.6-3.5 mm). Patient positioning using bone matching or ERB matching resulted in larger PTV margins. For IGRT, CBCT and kV/kV image pairs with FM are interchangeable with respect to accuracy. Especially for hypofractionated RT, PTV margins can be kept in the range of 5 mm or below if stringent daily IGRT, ideally including prostate tracking, is applied. MR-based CTV delineation optimization is recommended.
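The margin recipe in the abstract combines systematic (Σ) and random (σ) uncertainties linearly; a minimal sketch (the millimetre inputs below are hypothetical, not values from the study):

```python
def van_herk_margin(sigma_systematic_mm: float, sigma_random_mm: float) -> float:
    """van Herk margin recipe: PTV margin = 2.5*Sigma + 0.7*sigma (mm)."""
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

# Systematic uncertainty dominates: 2 mm systematic vs 1.2 mm random.
print(round(van_herk_margin(2.0, 1.2), 2))
```

Note how the 2.5 weight on Σ makes systematic errors (e.g. delineation) far more costly than random day-to-day variation, consistent with delineation being the largest uncertainty reported above.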
Methods for Instructional Diagnosis with Limited Available Resources.
ERIC Educational Resources Information Center
Gillmore, Gerald M.; Clark, D. Joseph
College teaching should be approached with the same careful delineation of problems and systematic attempts to find solutions which characterize research. Specific methods for the diagnosis of instructional problems include audio-video taping, use of teaching assistants, colleague assistance, classroom tests, student projects in and out of class,…
Horner, Richard L
2001-01-01
Obstructive sleep apnoea is a common and serious breathing problem that is caused by effects of sleep on pharyngeal muscle tone in individuals with narrow upper airways. There has been increasing focus on delineating the brain mechanisms that modulate pharyngeal muscle activity in the awake and asleep states in order to understand the pathogenesis of obstructive apnoeas and to develop novel neurochemical treatments. Although initial clinical studies have met with only limited success, it is proposed that more rational and realistic approaches may be devised for neurochemical modulation of pharyngeal muscle tone as the relevant neurotransmitters and receptors that are involved in sleep-dependent modulation are identified following basic experiments. PMID:11686898
A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem
NASA Astrophysics Data System (ADS)
Willert, Jeffrey; Park, H.; Knoll, D. A.
2014-10-01
Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrite the multi-group k-eigenvalue problem as a nonlinear system of equations and solve the resulting system using either a Jacobian-Free Newton-Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilize Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
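As a baseline for the dominant-eigenvalue problem that these acceleration schemes target, plain power iteration can be sketched in a few lines; this is a generic illustration on a small dense matrix, not the transport solver itself:

```python
def power_iteration(matrix, iters=200):
    """Dominant eigenvalue (in magnitude) by plain power iteration:
    the slowly converging baseline that JFNK/NKA and HOLO schemes accelerate."""
    n = len(matrix)
    v = [1.0, 0.5] if n == 2 else [1.0] * n  # arbitrary nonzero start vector
    lam = 0.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # normalize by the largest component
        v = [x / lam for x in w]
    return lam

# Symmetric 2x2 test matrix with eigenvalues 3 and 1: the iteration
# converges to the dominant eigenvalue 3.
print(round(power_iteration([[2.0, 1.0], [1.0, 2.0]]), 6))
```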
Thermochemistry of dense hydrous magnesium silicates
NASA Technical Reports Server (NTRS)
Bose, Kunal; Burnley, Pamela; Navrotsky, Alexandra
1994-01-01
Recent experimental investigations under mantle conditions have identified a suite of dense hydrous magnesium silicate (DHMS) phases that could be conduits to transport water to at least the 660 km discontinuity via mature, relatively cold, subducting slabs. Water released from successive dehydration of these phases during subduction could be responsible for deep focus earthquakes, mantle metasomatism and a host of other physico-chemical processes central to our understanding of the earth's deep interior. In order to construct a thermodynamic data base that can delineate and predict the stability ranges for DHMS phases, reliable thermochemical and thermophysical data are required. One of the major obstacles in calorimetric studies of phases synthesized under high pressure conditions has been the limitation imposed by the small (less than 5 mg) sample mass. Our refinement of calorimeter techniques now allows precise determination of enthalpies of solution of less than 5 mg samples of hydrous magnesium silicates. For example, high temperature solution calorimetry of natural talc (Mg(0.99) Fe(0.01)Si4O10(OH)2), periclase (MgO) and quartz (SiO2) yields enthalpies of drop solution at 1044 K of 592.2 (2.2), 52.01 (0.12) and 45.76 (0.4) kJ/mol, respectively. The corresponding enthalpy of formation from oxides at 298 K for talc is −5908.2 kJ/mol, agreeing to within 0.1 percent with literature values.
Numerical computations on one-dimensional inverse scattering problems
NASA Technical Reports Server (NTRS)
Dunn, M. H.; Hariharan, S. I.
1983-01-01
An approximate method to determine the index of refraction of a dielectric obstacle is presented. For simplicity, one-dimensional models of electromagnetic scattering are treated. The governing equations yield a second-order boundary value problem in which the index of refraction appears as a functional parameter. The availability of reflection coefficients yields two additional boundary conditions. The index of refraction is approximated by a k-th order spline, which can be written as a linear combination of B-splines. For N distinct reflection coefficients, the resulting N boundary value problems yield a system of N nonlinear equations in N unknowns, which are the coefficients of the B-splines.
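The representation step, writing an unknown profile as a linear combination of B-splines and solving for the coefficients, can be sketched as follows. Here the coefficients are recovered from sampled values by linear least squares rather than from reflection coefficients, and the knot vector and refraction profile are illustrative assumptions, not from the paper.

```python
import numpy as np

def bspline_basis(j, k, t, x):
    """Cox-de Boor recursion: B-spline basis function B_{j,k} on knots t,
    evaluated at the points x (half-open intervals, so keep x < t[-1])."""
    if k == 0:
        return ((t[j] <= x) & (x < t[j + 1])).astype(float)
    out = np.zeros_like(x)
    if t[j + k] > t[j]:                       # guard repeated knots (0/0)
        out = out + (x - t[j]) / (t[j + k] - t[j]) * bspline_basis(j, k - 1, t, x)
    if t[j + k + 1] > t[j + 1]:
        out = out + (t[j + k + 1] - x) / (t[j + k + 1] - t[j + 1]) \
              * bspline_basis(j + 1, k - 1, t, x)
    return out

deg = 3                                       # cubic (k-th order) spline
t = np.array([0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1], float)  # clamped knots
n_coef = len(t) - deg - 1                     # 7 basis functions

x = np.linspace(0.0, 0.999, 80)               # stay inside the last interval
n_true = 1.0 + 0.5 * np.sin(np.pi * x)        # hypothetical index profile

# Design matrix: column j holds B_j evaluated at the sample points
B = np.column_stack([bspline_basis(j, deg, t, x) for j in range(n_coef)])
coef, *_ = np.linalg.lstsq(B, n_true, rcond=None)
err = np.max(np.abs(B @ coef - n_true))
```

In the paper's inverse problem the same coefficients would instead be the unknowns of the N nonlinear equations supplied by the reflection coefficients; this sketch only shows the B-spline parameterization itself.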
372. J.W.M., Delineator August 1934 STATE OF CALIFORNIA; DEPARTMENT OF ...
372. J.W.M., Delineator August 1934 STATE OF CALIFORNIA; DEPARTMENT OF PUBLIC WORKS; SAN FRANCISCO - OAKLAND BAY BRIDGE; CONTRACT NO. 6; SUPERSTRUCTURE - WEST BAY CROSSING; OUTSIDE ELEVATION OF HOUSING; CENTER ANCHORAGE - PIER NO. 4; AMERICAN BRIDGE CO.; AMBRIDGE PLANT; ORDER NO. G 4854-XI; SHEET NO. E8 - San Francisco Oakland Bay Bridge, Spanning San Francisco Bay, San Francisco, San Francisco County, CA
Numerical solution of turbulent flow past a backward facing step using a nonlinear K-epsilon model
NASA Technical Reports Server (NTRS)
Speziale, C. G.; Ngo, Tuan
1987-01-01
The problem of turbulent flow past a backward facing step is important in many technological applications and has been used as a standard test case to evaluate the performance of turbulence models in the prediction of separated flows. It is well known that the commonly used K-epsilon (and K-l) models of turbulence yield inaccurate predictions for the reattachment points in this problem. By an analysis of the mean vorticity transport equation, it will be argued that the intrinsically inaccurate prediction of normal Reynolds stress differences by the K-epsilon and K-l models is a major contributor to this problem. Computations using a new nonlinear K-epsilon model (which alleviates this deficiency) are made with the TEACH program. Comparisons are made between the improved results predicted by this nonlinear K-epsilon model and those obtained from the linear K-epsilon model as well as from second-order closure models.
Group Solutions, Too! More Cooperative Logic Activities for Grades K-4. Teacher's Guide. LHS GEMS.
ERIC Educational Resources Information Center
Goodman, Jan M.; Kopp, Jaine
There is evidence that structured cooperative logic is an effective way to introduce or reinforce mathematics concepts, explore thinking processes basic to both math and science, and develop the important social skills of cooperative problem-solving. This book contains a number of cooperative logic activities for grades K-4 in order to improve…
Maritime zones delimitation - Problems and solutions
NASA Astrophysics Data System (ADS)
Kastrisios, Christos; Tsoulos, Lysandros
2018-05-01
The delimitation of maritime zones and boundaries foreseen by the United Nations Convention on the Law of the Sea (UNCLOS) is a factor in economic growth, effective management of the coastal and ocean environment, and the cornerstone of maritime spatial planning. Maritime zones and boundaries form the outermost limits of coastal states, and their accurate delineation and cartographic portrayal is a matter of national priority. Although UNCLOS is a legal document, its implementation is, in the first place, purely technical and requires, among other things, a theoretical and applied background in Geodesy, Cartography and Geographic Information Systems (GIS) for those involved. This paper provides a brief historical background of the evolution of UNCLOS, presents the various concepts of the Convention, and identifies the problems inherent in the maritime delimitation process. Furthermore, it presents solutions that will facilitate the cartographer's work in order to achieve unquestionable results. Through the paper it becomes evident that the role of the cartographer and the GIS expert is critical for the successful implementation of maritime delimitation.
NASA Astrophysics Data System (ADS)
Alsaqqa, Ali; Kilcoyne, Colin; Singh, Sujay; Horrocks, Gregory; Marley, Peter; Banerjee, Sarbajit; Sambandamurthy, G.
Vanadium dioxide (VO2) is a strongly correlated material that exhibits a sharp thermally driven metal-insulator transition at Tc ~ 340 K. The transition can also be triggered by a DC voltage in the insulating phase, with a threshold (Vth) behavior. The mechanisms behind these transitions are actively debated, and resistance noise spectroscopy is a suitable tool to delineate different transport mechanisms in correlated systems. We present results from a systematic study of the low-frequency (1 mHz < f < 10 Hz) noise behavior in VO2 nanobeams across the thermally and electrically driven transitions. In the thermal transition, the power spectral density (PSD) of the resistance noise is unchanged as we approach Tc from 300 K; an abrupt drop in magnitude is seen above Tc, and the PSD then remains unchanged up to 400 K. However, the noise behavior in the electrically driven case is distinctly different: as the voltage is ramped from zero, the PSD gradually increases by an order of magnitude before reaching Vth, and an abrupt increase is seen at Vth. The noise magnitude decreases above Vth, approaching the V = 0 value. The individual roles of percolation, Joule heating and signatures of correlated behavior will be discussed. This work is supported by NSF DMR 0847324.
Enhanced delineation of degradation in aortic walls through OCT
NASA Astrophysics Data System (ADS)
Real, Eusebio; Val-Bernal, José Fernando; Revuelta, José M.; Pontón, Alejandro; Calvo Díez, Marta; Mayorga, Marta; López-Higuera, José M.; Conde, Olga M.
2015-03-01
Degradation of the wall of the human ascending thoracic aorta has been assessed through Optical Coherence Tomography (OCT). OCT images of the media layer of the aortic wall exhibit micro-structure degradation in diseased aortas from aneurysmal vessels or in aortas prone to aortic dissection. The degeneration in vessel walls appears as low-reflectivity areas due to the invasive appearance of acidic polysaccharides and mucopolysaccharides within the typically ordered microstructure of parallel lamellae of smooth muscle cells, elastin and collagen fibers. An OCT indicator of wall degradation can be generated upon the spatial quantification of the extension of degraded areas, in a similar way as conventional histopathology. This proposed OCT marker offers real-time clinical insight into the vessel status to help cardiovascular surgeons in vessel repair interventions. However, the delineation of degraded areas on the B-scan image from OCT is sometimes difficult due to the presence of speckle noise, variable SNR conditions in the measurement process, etc. Degraded areas could be outlined by basic thresholding techniques taking advantage of evidence of disorder in B-scan images, but this delineation is not always optimal and requires complex additional processing stages. This work proposes an optimized delineation of degraded spots in vessel walls, robust to noisy environments, based on the analysis of the second-order variation of the backreflected image intensity to determine the type of local structure. The results improve the delineation of wall anomalies, providing a deeper physiological perception of the vessel wall condition. These achievements could also be transferred to other clinical scenarios: carotid arteries, aorto-iliac or ilio-femoral sections, intracranial vessels, etc.
Child Abuse and Neglect: A Shared Community Concern. Revised.
ERIC Educational Resources Information Center
National Center on Child Abuse and Neglect (DHHS/OHDS), Washington, DC.
The purpose of this publication is to help the reader understand the problems of child abuse and neglect and become familiar with prevention and intervention efforts. Introductory pages define child abuse and neglect, suggest the scope of the problem, delineate reasons for its occurrence, and explain how to recognize abuse or neglect. This section…
NASA Technical Reports Server (NTRS)
Christenson, J. W.; Lachowski, H. M.
1977-01-01
LANDSAT digital multispectral scanner data, in conjunction with supporting ground truth, were investigated to determine their utility in the delineation of urban-rural boundaries. The digital data for the metropolitan areas of Washington, D.C.; Austin, Texas; and Seattle, Washington were processed using an interactive image processing system. Processing focused on identification of the major land cover types typical of the zone of transition from urban to rural landscape, and definition of their spectral signatures. Census tract boundaries were input into the interactive image processing system along with the LANDSAT single-date and overlaid multiple-date MSS data. Results of this investigation indicate that satellite-collected information has a practical application to the problem of urban area delineation and to change detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schrander-Stumpel, C.; Hoeweler, C.; Jones, M.
X-linked hydrocephalus (HSAS) (MIM *307000), MASA syndrome (MIM *303350), and complicated spastic paraplegia (SPG1) (MIM *312900) are closely related. Soon after delineation, SPG1 was incorporated into the spectrum of MASA syndrome. HSAS and MASA syndrome show great clinical overlap; DNA linkage analysis places the loci at Xq28. In an increasing number of families with MASA syndrome or HSAS, mutations in L1CAM, a gene located at Xq28, have been reported. In order to further delineate the clinical spectrum, we studied 6 families with male patients presenting with MASA syndrome, HSAS, or a mixed phenotype. We summarized data from previous reports and compared them with our data. Clinical variability appears to be great, even within families. Problems in genetic counseling and prenatal diagnosis, the possible overlap with X-linked corpus callosum agenesis and FG syndrome, and the different forms of X-linked complicated spastic paraplegia are discussed. Since adducted thumbs and spastic paraplegia are found in 90% of the patients, the condition may be present in males with nonspecific mental retardation. We propose to abandon the designation MASA syndrome and use the term HSAS/MASA spectrum, incorporating SPG1. 79 refs., 6 figs., 2 tabs.
Homaeinezhad, M R; Erfanianmoshiri-Nejad, M; Naseri, H
2014-01-01
The goal of this study is to introduce a simple, standard and safe procedure to detect and delineate the P and T waves of the electrocardiogram (ECG) signal in real conditions. The proposed method consists of four major steps: (1) a secure QRS detection and delineation algorithm, (2) a pattern recognition algorithm designed to distinguish the various ECG clusters which take place between consecutive R-waves, (3) extraction of a template of the dominant events of each cluster waveform and (4) application of correlation analysis in order to automatically delineate the P- and T-waves in noisy conditions. The performance characteristics of the proposed P and T detection-delineation algorithm are evaluated on various ECG signals whose quality is altered from the best to the worst case based on random-walk noise theory. The method is also applied to the MIT-BIH Arrhythmia and the QT databases to compare parts of its performance characteristics with a number of P and T detection-delineation algorithms. The conducted evaluations indicate that in a signal with a low quality value of about 0.6, the proposed method detects the P and T events with sensitivity Se=85% and positive predictive value P+=89%, respectively. In addition, at the same quality, the average delineation errors associated with those ECG events are 45 and 63 ms, respectively. Stable delineation error, high detection accuracy and high noise tolerance were the most important aspects considered during development of the proposed method. © 2013 Elsevier Ltd. All rights reserved.
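Step (4) of the pipeline above, correlation analysis against an event template, can be sketched as a sliding normalized cross-correlation. The threshold and the synthetic wave below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def correlate_delineate(sig, template, threshold=0.8):
    """Slide a wave template over the signal and flag start positions where
    the normalized correlation coefficient exceeds `threshold`.
    A minimal stand-in for the paper's correlation-analysis step."""
    m = len(template)
    zt = (template - template.mean()) / template.std()  # z-scored template
    hits = []
    for s in range(len(sig) - m + 1):
        w = sig[s:s + m]
        sd = w.std()
        if sd == 0.0:
            continue                     # flat segment, no wave to match
        r = np.mean((w - w.mean()) / sd * zt)   # Pearson correlation
        if r > threshold:
            hits.append(s)
    return hits

# Synthetic strip: one Gaussian "T wave" at sample 90 on a flat baseline
wave = np.exp(-((np.arange(21) - 10) / 3.0) ** 2)
sig = np.zeros(200)
sig[90:111] += wave
hits = correlate_delineate(sig, wave)
```

At the true onset the window equals the template, so the correlation is exactly 1 and the position is flagged; a real delineator would additionally refine onsets/offsets and handle noise, as the abstract describes.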
NASA Astrophysics Data System (ADS)
Mangiarotti, S.; Muddu, S.; Sharma, A. K.; Corgne, S.; Ruiz, L.; Hubert-Moy, L.
2015-12-01
Groundwater is one of the main water reservoirs used for irrigation in regions of scarce water resources. For this reason, crop irrigation is expected to have a direct influence on this reservoir. To understand the time evolution of the groundwater table and its storage changes, it is important to delineate irrigated crops, whose evaporative demand is relatively higher. Such delineation may be performed based on classical classification approaches using optical remote sensing. However, it remains a difficult problem in regions where plots do not exceed a few hectares and exhibit a very heterogeneous pattern with multiple crops. This difficulty is emphasized in South India where two or three months of cloudy conditions during the monsoon period can hide crop growth during the year. An alternative approach is introduced here that takes advantage of such scarce signal. Ten different crops are considered in the present study. A bank of crop models is first established based on the global modeling technique [1]. These models are then tested using original time series (from which models were obtained) in order to evaluate the information that can be deduced from these models in an inverse approach. The approach is then tested on an independent data set and is finally applied to a large ensemble of 10,000 time series of plot data extracted from the Berambadi catchment (AMBHAS site) part of the Kabini River basin CZO, South India. Results show that despite the important two-month gap in satellite observations in the visible band, interpolated vegetation index remains an interesting indicator for identification of crops in South India. [1] S. Mangiarotti, R. Coudret, L. Drapeau, & L. Jarlan, Polynomial search and global modeling: Two algorithms for modeling chaos, Phys. Rev. E, 86(4), 046205 (2012).
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Vazifeh-Noshafagh, Samira; Taleizadeh, Ata Allah; Hajipour, Vahid; Mahmoudi, Amin
2017-01-01
This article presents a new multi-objective model for a facility location problem with congestion and pricing policies. This model considers situations in which immobile service facilities are congested by a stochastic demand following M/M/m/k queues. The presented model belongs to the class of mixed-integer nonlinear programming models and NP-hard problems. To solve such a hard model, a new multi-objective optimization algorithm based on vibration theory, namely multi-objective vibration damping optimization (MOVDO), is developed. In order to tune the algorithm's parameters, the Taguchi approach using a response metric is implemented. The computational results are compared with those of the non-dominated ranking genetic algorithm and the non-dominated sorting genetic algorithm. The outputs demonstrate the robustness of the proposed MOVDO on large-sized problems.
A Multivariate Randomization Test of Association Applied to Cognitive Test Results
NASA Technical Reports Server (NTRS)
Ahumada, Albert; Beard, Bettina
2009-01-01
Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
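The procedure described, randomly re-ordering k-1 of the variables and taking the largest eigenvalue of the correlation matrix as the criterion, is straightforward to sketch. The permutation count, sample size, and toy data below are illustrative assumptions.

```python
import numpy as np

def largest_corr_eig(X):
    """Criterion variable: largest eigenvalue of the correlation matrix."""
    R = np.corrcoef(X, rowvar=False)
    return float(np.linalg.eigvalsh(R)[-1])   # eigvalsh sorts ascending

def randomization_test(X, n_perm=500, seed=0):
    """Significance of association among the k columns of X.

    Null distribution: independently re-order k-1 of the variables
    (the first column stays fixed) and recompute the statistic.
    """
    rng = np.random.default_rng(seed)
    observed = largest_corr_eig(X)
    exceed = 0
    for _ in range(n_perm):
        Xp = X.copy()
        for col in range(1, X.shape[1]):      # shuffle k-1 columns
            Xp[:, col] = rng.permutation(Xp[:, col])
        if largest_corr_eig(Xp) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)

# Toy data: four variables sharing a common component, so association exists
rng = np.random.default_rng(42)
z = rng.standard_normal(100)
X = np.column_stack([z + 0.5 * rng.standard_normal(100) for _ in range(4)])
obs, p = randomization_test(X)
```

With genuinely associated columns the observed largest eigenvalue sits far above the shuffled null distribution, so the permutation p-value is small; with independent columns it hovers near 1 plus sampling noise.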
ERIC Educational Resources Information Center
White, April D., Ed.
2008-01-01
This first issue of "Blueprint" focuses on standards that states have adopted to delineate what students should know at each grade level of the K-12 system. Textbooks, teacher training, professional development, and assessments are built upon education standards. To generate information that will help state leaders improve their…
Heritability of personality disorder traits: a twin study.
Jang, K L; Livesley, W J; Vernon, P A; Jackson, D N
1996-12-01
Genetic and non-genetic influences on the hierarchy of traits that delineate personality disorder as measured by the Dimensional Assessment of Personality Problems (DAPP-DQ) scale were examined using data from a sample of 483 volunteer twin pairs (236 monozygotic pairs and 247 dizygotic pairs). The DAPP-DQ assesses four higher-order factors, 18 basic dimensions and 69 facet traits of personality disorder. The correlation coefficients for monozygotic and dizygotic twin pairs ranged from 0.26 to 0.56 and from 0.03 to 0.41, respectively. Broad heritability estimates ranged from 0 to 58% (median value 45%). Additive genetic effects and unique environmental effects emerged as the primary influences on these scales, with unique environmental influences accounting for the largest proportion of the variance for most traits at all levels of the hierarchy.
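The study above estimated heritability by formal biometric model fitting; Falconer's classical formula h2 = 2(r_MZ - r_DZ) gives only a back-of-envelope check on such twin correlations. A tiny sketch, where pairing the end points of the reported correlation ranges is purely illustrative (the ranges span different scales):

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's rough heritability estimate from twin correlations:
    h2 = 2 * (r_MZ - r_DZ). Formal model fitting (as in the study)
    partitions variance more carefully; this is the classical shortcut."""
    return 2.0 * (r_mz - r_dz)

# Illustrative pairings of the range end points quoted in the abstract
h2_low_ends = falconer_h2(0.26, 0.03)    # lower ends of the MZ and DZ ranges
h2_high_ends = falconer_h2(0.56, 0.41)   # upper ends
```

Both illustrative values fall inside the 0-58% range of broad heritability estimates the abstract reports, consistent with its median of 45%.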
Empowering Women for Equity: A Counseling Approach.
ERIC Educational Resources Information Center
Aspy, Cheryl Blalock; Sandhu, Daya Singh
The purpose of this book is to describe the process through which women can achieve equity and to delineate the skills by which counselors can assist them. It is organized into five sections and provides a developmental look at the problem, its manifestations, remedies, and the processes through which the problem can be vanquished. Section 1,…
ERIC Educational Resources Information Center
Suor, Jennifer H.; Sturge-Apple, Melissa L.; Davies, Patrick T.; Cicchetti, Dante
2017-01-01
Harsh environments are known to predict deficits in children's cognitive abilities. Life history theory approaches challenge this interpretation, proposing stressed children's cognition becomes specialized to solve problems in fitness-enhancing ways. The goal of this study was to examine associations between early environmental harshness and…
NASA Astrophysics Data System (ADS)
Mattonen, Sarah A.; Palma, David A.; Haasbeek, Cornelis J. A.; Senan, Suresh; Ward, Aaron D.
2014-03-01
Benign radiation-induced lung injury is a common finding following stereotactic ablative radiotherapy (SABR) for lung cancer, and is often difficult to differentiate from a recurring tumour due to the ablative doses and highly conformal treatment with SABR. Current approaches to treatment response assessment have shown limited ability to predict recurrence within 6 months of treatment. The purpose of our study was to evaluate the accuracy of second order texture statistics for prediction of eventual recurrence based on computed tomography (CT) images acquired within 6 months of treatment, and compare with the performance of first order appearance and lesion size measures. Consolidative and ground-glass opacity (GGO) regions were manually delineated on post-SABR CT images. Automatic consolidation expansion was also investigated to act as a surrogate for GGO position. The top features for prediction of recurrence were all texture features within the GGO and included energy, entropy, correlation, inertia, and first order texture (standard deviation of density). These predicted recurrence with 2-fold cross validation (CV) accuracies of 70-77% at 2-5 months post-SABR, with energy, entropy, and first order texture having leave-one-out CV accuracies greater than 80%. Our results also suggest that automatic expansion of the consolidation region could eliminate the need for manual delineation, and produced reproducible results when compared to manually delineated GGO. If validated on a larger data set, this could lead to a clinically useful computer-aided diagnosis system for prediction of recurrence within 6 months of SABR and allow for early salvage therapy for patients with recurrence.
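The predictive features named above (energy, entropy, correlation, inertia) are standard second-order grey-level co-occurrence statistics. A minimal generic sketch for a single one-pixel horizontal offset; the quantization level, offset, and synthetic patches are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Second-order (co-occurrence) texture statistics for a 1-pixel
    horizontal offset: energy, entropy, inertia (contrast), correlation.
    `img` is a 2-D array with values in [0, 1)."""
    q = np.minimum((np.asarray(img, float) * levels).astype(int), levels - 1)
    P = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        P[a, b] += 1.0                       # count co-occurring grey pairs
    P /= P.sum()                             # joint probability p(i, j)
    i, j = np.indices(P.shape)
    energy = float(np.sum(P ** 2))
    nz = P[P > 0]
    entropy = float(-np.sum(nz * np.log2(nz)))
    inertia = float(np.sum((i - j) ** 2 * P))
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    si = np.sqrt(np.sum((i - mu_i) ** 2 * P))
    sj = np.sqrt(np.sum((j - mu_j) ** 2 * P))
    corr = 0.0 if si == 0 or sj == 0 else \
        float(np.sum((i - mu_i) * (j - mu_j) * P) / (si * sj))
    return {"energy": energy, "entropy": entropy,
            "inertia": inertia, "correlation": corr}

rng = np.random.default_rng(1)
f_flat = glcm_features(np.full((16, 16), 0.5))   # uniform patch
f_noisy = glcm_features(rng.random((16, 16)))    # speckled patch
```

A uniform patch concentrates all co-occurrence mass in one cell (energy 1, entropy and inertia 0), while a speckled patch spreads it out; this is the contrast such features exploit when separating injury patterns from recurrence.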
Kinetics of nucleation and crystallization in poly(ε-caprolactone) (PCL)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuravlev, Evgeny; Schmelzer, Jurn; Wunderlich, Bernhard
2011-01-01
The recently developed differential fast scanning calorimetry (DFSC) is used for a new look at the crystal growth of poly(ε-caprolactone) (PCL) from 185 K, below the glass transition temperature, to 330 K, close to the equilibrium melting temperature. The DFSC allows temperature control of the sample and determination of its heat capacity using heating rates from 50 to 50,000 K/s. The crystal nucleation and crystallization halftimes were determined simultaneously. The obtained halftimes cover a range from 3×10^2 s (nucleation at 215 K) to 3×10^9 s (crystallization at 185 K). After attempting to analyze the experiments with the classical nucleation and growth model, developed for systems consisting of small molecules, a new methodology is described which addresses the specific problems of crystallization of flexible linear macromolecules. The key problems to be resolved concern the differences between the structures of the various entities identified and their specific role in the mechanism of growth. The structures range from configurations having practically unmeasurable latent heats of ordering (nuclei) to clearly recognizable, ordered species with rather sharp disordering endotherms in the temperature range from the glass transition to equilibrium melting for increasingly perfect and larger crystals. The mechanisms and kinetics of growth also involve a detailed understanding of the interaction with the surrounding rigid-amorphous fraction (RAF) in dependence on crystal size and perfection.
Comparison of turbulence models and CFD solution options for a plain pipe
NASA Astrophysics Data System (ADS)
Canli, Eyub; Ates, Ali; Bilir, Sefik
2018-06-01
The present paper is in part a status report on ongoing PhD work concerning turbulent flow in a thick-walled pipe, undertaken to analyze conjugate heat transfer. This CFD investigation, which uses cylindrical coordinates and dimensionless governing equations, is placed in context with a literature review. The PhD work will be conducted using an in-house developed code, which, however, needs preliminary evaluation against the commercial codes available in the field. Accordingly, ANSYS CFD was utilized to evaluate mesh structure requirements and to assess the turbulence models and solution options in terms of computational cost versus the significance of the resulting differences. The present work contains a literature survey, an arrangement of the governing equations of the PhD work, the CFD essentials of the preliminary analysis, and findings about the mesh structure and solution options. The mesh element number was varied between 5,000 and 320,000. The k-ɛ, k-ω, Spalart-Allmaras and Viscous-Laminar models were compared. The Reynolds number was varied between 1,000 and 50,000. As expected from the literature, k-ɛ yields more favorable results near the pipe axis and k-ω yields more convenient results near the wall. However, k-ɛ is found sufficient to capture the turbulent structures for a conjugate heat transfer problem in a thick-walled plain pipe.
Zhao, Pengxiang; Zhou, Suhong
2018-01-01
Traditionally, static units of analysis such as administrative units are used when studying obesity. However, using these fixed contextual units ignores environmental influences experienced by individuals in areas beyond their residential neighborhood and may render the results unreliable. This problem has been articulated as the uncertain geographic context problem (UGCoP). This study investigates the UGCoP through exploring the relationships between the built environment and obesity based on individuals’ activity space. First, a survey was conducted to collect individuals’ daily activity and weight information in Guangzhou in January 2016. Then, the data were used to calculate and compare the values of several built environment variables based on seven activity space delineations, including home buffers, workplace buffers (WPB), fitness place buffers (FPB), the standard deviational ellipse at two standard deviations (SDE2), the weighted standard deviational ellipse at two standard deviations (WSDE2), the minimum convex polygon (MCP), and road network buffers (RNB). Lastly, we conducted comparative analysis and regression analysis based on different activity space measures. The results indicate that significant differences exist between variables obtained with different activity space delineations. Further, regression analyses show that the activity space delineations used in the analysis have a significant influence on the results concerning the relationships between the built environment and obesity. The study sheds light on the UGCoP in analyzing the relationships between obesity and the built environment. PMID:29439392
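Of the activity-space delineations compared above, the (weighted) standard deviational ellipse has a simple closed form: the weighted covariance of the point coordinates, with axes and orientation from its eigendecomposition. A sketch under that common convention; the synthetic points and the two-standard-deviation scaling mirror the SDE2/WSDE2 measures named in the abstract, but all data here are invented.

```python
import numpy as np

def sde_params(xy, n_std=2.0, w=None):
    """Standard deviational ellipse at n_std standard deviations:
    returns (center, (major, minor) semi-axes, major-axis angle).
    Passing weights w gives the weighted variant (WSDE)."""
    xy = np.asarray(xy, float)
    w = np.ones(len(xy)) if w is None else np.asarray(w, float)
    w = w / w.sum()
    center = w @ xy                       # weighted mean point
    d = xy - center
    cov = (w[:, None] * d).T @ d          # weighted covariance matrix
    evals, evecs = np.linalg.eigh(cov)    # eigenvalues ascending
    angle = float(np.arctan2(evecs[1, -1], evecs[0, -1]))  # major axis
    semi = n_std * np.sqrt(evals[::-1])   # (major, minor) semi-axes
    return center, semi, angle

# Synthetic activity points: an elongated east-west cloud
rng = np.random.default_rng(42)
pts = np.column_stack([3.0 * rng.standard_normal(2000),
                       rng.standard_normal(2000)])
center, semi, angle = sde_params(pts)
```

For this cloud the ellipse is roughly axis-aligned with a major semi-axis near twice the larger standard deviation, which is exactly the kind of summary geometry the UGCoP comparison contrasts against buffers and convex polygons.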
NASA Astrophysics Data System (ADS)
Liu, J.; Michael, F.; Hager, B. H.
2013-12-01
Iceland, the on-land continuation of the Mid-Atlantic Ridge, is the result of the interaction between the Mid-Atlantic Ridge and the North-Atlantic mantle plume. The superposition and relative motion of the spreading plate boundary over the mantle plume are manifested by the volcanism and seismicity in Iceland. The Krýsuvík geothermal field is one of the most active geothermal fields in southwest Iceland. In 2010, Massachusetts Institute of Technology, Reykjanes University, Uppsala University, and the Iceland Geosurvey (ISOR) deployed 38 temporary seismic stations on the Reykjanes Peninsula. Using data from 18 of the temporary seismic stations and from 5 stations of the South Iceland Lowland network around Krýsuvík, we captured an earthquake swarm that occurred between November, 2010 and February, 2011. We applied double difference tomography to relocate the events and determine the velocity structure in the region. Activity is clustered around the center of the Krýsuvík volcano system. Our seismic tomography result indicates a low velocity zone at a depth of about 6 km, right under the earthquake swarm. We consider that this low velocity zone contains some crustal magma that may be the thermal source for the geothermal field. At the same time, our relocated events delineate faults above and around this magma chamber. The system is within the stress field of a combination of left-lateral shear and extension; the majority of the strike-slip faults are right-lateral and most dip-slip faults are normal faults. Published geodetic measurements for the time period between 2009 and 2012 show a few centimeters uplift and extension in the area of the seismic swarm. We modeled the observed deformation using the Coulomb 3.3 software (U.S. Geological Survey). Our result indicates that a Mogi source of about 20×10^6 m^3 at a depth of about 6 km, consistent with the location and size of the tomography result, can explain the main deformation. 
The normal and the right-lateral strike-slip faults that are delineated by the earthquake relocations, consistent with the local stress induced by magma intrusion and the regional stress field caused by the interaction of the spreading plate boundary and mantle plume, explain the observed second order deformation. Our results are consistent with the hypothesis that seismicity in the Krýsuvík area in 2010-2011 might be triggered by magmatic activity.
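The Mogi point source used in the deformation modeling above has a closed-form surface displacement. A sketch using one common form of the formula, uz = (1 - nu) * dV * d / (pi * (r^2 + d^2)^(3/2)) with nu = 0.25; the source volume and depth are taken from the abstract, and everything else is illustrative.

```python
import numpy as np

def mogi_uplift(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source in an elastic
    half-space (a common textbook form): uz = (1-nu)*dV*d / (pi*R^3),
    with R = sqrt(r^2 + d^2). r, depth in meters; dV in m^3."""
    R3 = (r ** 2 + depth ** 2) ** 1.5
    return (1.0 - nu) * dV * depth / (np.pi * R3)

# Source roughly as inferred in the study: ~20e6 m^3 at ~6 km depth
r = np.linspace(0.0, 20e3, 201)       # radial distance from source axis
uz = mogi_uplift(r, depth=6e3, dV=20e6)
```

The predicted uplift peaks directly above the source (on the order of a decimeter for these parameters) and decays with the cube of distance, so most of the signal is confined to within a couple of source depths of the axis.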
A comparison of zero-order, first-order, and Monod biotransformation models
Bekins, B.A.; Warren, E.; Godsy, E.M.
1998-01-01
Under some conditions, a first-order kinetic model is a poor representation of biodegradation in contaminated aquifers. Although it is well known that the assumption of first-order kinetics is valid only when the substrate concentration, S, is much less than the half-saturation constant, Ks, this assumption is often made without verification of this condition. We present a formal error analysis showing that the relative error in the first-order approximation is S/Ks and in the zero-order approximation the error is Ks/S. We then examine the problems that arise when the first-order approximation is used outside the range for which it is valid. A series of numerical simulations comparing results of first- and zero-order rate approximations to Monod kinetics for a real data set illustrates that if concentrations observed in the field are higher than Ks, it may be better to model degradation using a zero-order rate expression. Compared with Monod kinetics, extrapolation of a first-order rate to lower concentrations under-predicts the biotransformation potential, while extrapolation to higher concentrations may grossly over-predict the transformation rate. A summary of solubilities and Monod parameters for aerobic benzene, toluene, and xylene (BTX) degradation shows that the a priori assumption of first-order degradation kinetics at sites contaminated with these compounds is not valid. In particular, out of six published values of Ks for toluene, only one is greater than 2 mg/L, indicating that when toluene is present in concentrations greater than about a part per million, the assumption of first-order kinetics may be invalid.
Finally, we apply an existing analytical solution for steady-state one-dimensional advective transport with Monod degradation kinetics to a field data set.
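The error analysis summarized above can be checked numerically: relative to the Monod rate v = vmax*S/(Ks + S), the first-order approximation overshoots by exactly S/Ks and the zero-order approximation by exactly Ks/S. A sketch with hypothetical parameter values:

```python
def monod_rate(S, vmax, Ks):
    """Monod biotransformation rate: v = vmax * S / (Ks + S)."""
    return vmax * S / (Ks + S)

vmax, Ks = 1.0, 2.0            # hypothetical parameters (e.g. Ks in mg/L)
for S in (0.02, 2.0, 200.0):   # S << Ks, S = Ks, S >> Ks
    v_monod = monod_rate(S, vmax, Ks)
    v_first = vmax / Ks * S    # first-order limit, valid for S << Ks
    v_zero = vmax              # zero-order limit, valid for S >> Ks
    rel_first = (v_first - v_monod) / v_monod   # algebraically equals S/Ks
    rel_zero = (v_zero - v_monod) / v_monod     # algebraically equals Ks/S
```

At S = 200 with Ks = 2 the first-order extrapolation over-predicts the rate a hundred-fold, matching the abstract's warning about extrapolating first-order kinetics to high concentrations.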
Moschella, Phillip C.; Rao, Vijay U.; McDermott, Paul J.; Kuppuswamy, Dhandapani
2007-01-01
Activation of both mTOR and its downstream target, S6K1 (p70 S6 kinase), has been implicated in cardiac hypertrophy. Our earlier work, in a feline model of 1–48 h pressure overload, demonstrated that mTOR/S6K1 activation occurred primarily through a PKC/c-Raf pathway. To further delineate the role of specific PKC isoforms in mTOR/S6K1 activation, we utilized primary cultures of adult feline cardiomyocytes in vitro and stimulated them with endothelin-1 (ET-1), phenylephrine (PE), TPA, or insulin. All agonist treatments resulted in S2448 phosphorylation of mTOR and T389 and T421/S424 phosphorylation of S6K1; however, only ET-1 and TPA-stimulated mTOR/S6K1 activation was abolished with infection of a dominant negative adenoviral c-Raf (DN-Raf) construct. Expression of DN-PKCε blocked ET-1-stimulated mTOR S2448 and S6K1 T421/S424 and T389 phosphorylation but had no effect on insulin-stimulated S6K1 phosphorylation. Expression of DN-PKCδ or pretreatment of cardiomyocytes with rottlerin, a PKCδ specific inhibitor, blocked both ET-1 and insulin stimulated mTOR S2448 and S6K1 T389 phosphorylation. However, treatment with Gö6976, a specific classical PKC (cPKC) inhibitor, did not affect mTOR/S6K1 activation. These data indicate that: (i) PKCε is required for ET-1-stimulated T421/S424 phosphorylation of S6K1, (ii) both PKCε and PKCδ are required for ET-1-stimulated mTOR S2448 and S6K1 T389 phosphorylation, (iii) PKCδ is also required for insulin-stimulated mTOR S2448 and S6K1 T389 phosphorylation. Together, these data delineate both distinct and combinatorial roles of specific PKC isoforms in mTOR and S6K1 activation in adult cardiac myocytes following hypertrophic stimulation. PMID:17976640
Mythic Evolution of "The New Frontier" in Mass Mediated Rhetoric.
ERIC Educational Resources Information Center
Rushing, Janice Hocker
1986-01-01
Combines "rhetorical narration" with K. Burke's dramatistic pentad to argue that definitional cultural myths are rhetorically meaningful in relation to social consciousness if both evolved teleologically. Delineates two phases in America's frontier myth associated with recent space fiction films' representation of a pentadic term's…
NASA Astrophysics Data System (ADS)
Farlin, J.; Drouet, L.; Gallé, T.; Pittois, D.; Bayerle, M.; Braun, C.; Maloszewski, P.; Vanderborght, J.; Elsner, M.; Kies, A.
2013-06-01
A simple method to delineate the recharge areas of a series of springs draining a fractured aquifer is presented. Instead of solving the flow and transport equations, the delineation is reformulated as a mass balance problem assigning arable land in proportion to the pesticide mass discharged annually in a spring at minimum total transport cost. The approach was applied to the Luxembourg Sandstone, a fractured-rock aquifer supplying half of the drinking water for Luxembourg, using the herbicide atrazine. Predictions of the recharge areas were most robust in situations of strong competition by neighbouring springs while the catchment boundaries for isolated springs were extremely sensitive to the parameter controlling flow direction. Validation using a different pesticide showed the best agreement with the simplest model used, whereas using historical crop-rotation data and spatially distributed soil-leaching data did not improve predictions. The whole approach presents the advantage of integrating objectively information on land use and pesticide concentration in spring water into the delineation of groundwater recharge zones in a fractured-rock aquifer.
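The mass-balance reformulation described above can be caricatured in a few lines: arable cells are assigned to springs in proportion to each spring's annual pesticide mass, preferring low transport cost. The greedy pass below stands in for the paper's minimum-total-transport-cost optimization, and all names and numbers are invented:

```python
def assign_cells(cells, springs, demand):
    """Greedy allocation of arable cells to spring recharge areas.
    cells:   {cell_id: (x, y)} candidate arable-land cells
    springs: {spring_id: (x, y)} spring locations
    demand:  {spring_id: n_cells}, proportional to annual pesticide mass
    Returns {cell_id: spring_id}."""
    pairs = []
    for c, (cx, cy) in cells.items():
        for s, (sx, sy) in springs.items():
            # transport cost approximated by Euclidean distance
            cost = ((cx - sx) ** 2 + (cy - sy) ** 2) ** 0.5
            pairs.append((cost, c, s))
    assignment = {}
    load = {s: 0 for s in springs}
    # cheapest feasible (cell, spring) pairs first
    for cost, c, s in sorted(pairs):
        if c not in assignment and load[s] < demand[s]:
            assignment[c] = s
            load[s] += 1
    return assignment
```

A spring discharging twice the pesticide mass of its neighbour thus claims twice the arable area, and competition between neighbouring springs constrains the boundary, mirroring the robustness the authors report for competing catchments.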
Accuracy of the weighted essentially non-oscillatory conservative finite difference schemes
NASA Astrophysics Data System (ADS)
Don, Wai-Sun; Borges, Rafael
2013-10-01
In the reconstruction step of (2r-1) order weighted essentially non-oscillatory conservative finite difference schemes (WENO) for solving hyperbolic conservation laws, nonlinear weights αk and ωk, such as the WENO-JS weights by Jiang et al. and the WENO-Z weights by Borges et al., are designed to recover the formal (2r-1) order (optimal order) of the upwinded central finite difference scheme when the solution is sufficiently smooth. The smoothness of the solution is determined by the lower order local smoothness indicators βk in each substencil. These nonlinear weight formulations share two important free parameters: the power p, which controls the amount of numerical dissipation, and the sensitivity ε, which is added to βk to avoid division by zero in the denominator of αk. However, ε also affects the order of accuracy of WENO schemes, especially in the presence of critical points. It was recently shown that, for any design order (2r-1), ε should be of Ω(Δx^2) (Ω(Δx^m) means that ε ⩾ CΔx^m for some C independent of Δx, as Δx → 0) for the WENO-JS scheme to achieve the optimal order, regardless of critical points. In this paper, we derive an alternative proof of the sufficient condition using special properties of βk. Moreover, it was unknown whether the WENO-Z scheme should obey the same condition on ε. Here, using the same special properties of βk, we prove that the optimal order of the WENO-Z scheme can in fact be guaranteed with a much weaker condition, ε = Ω(Δx^m), where m(r,p) ⩾ 2 is the optimal sensitivity order, regardless of critical points. Both theoretical results are confirmed numerically on smooth functions with arbitrary order of critical points. This is a highly desirable feature, as illustrated with the Lax problem and the Mach 3 shock-density wave interaction of the one-dimensional Euler equations, since a smaller ε allows better essentially non-oscillatory shock capturing, as it does not dominate the size of βk.
We also show that numerical oscillations can be further attenuated by increasing the power parameter 2⩽p⩽r-1, at the cost of increased numerical dissipation. Compact formulas of βk for WENO schemes are also presented.
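The role of the free parameters ε and p in the WENO-Z weights can be illustrated for r = 3 (the fifth-order scheme) with the standard Jiang-Shu smoothness indicators. This is a sketch of the weight computation only, not the authors' code:

```python
def wenoz_weights(f, eps, p=2):
    """WENO-Z nonlinear weights for a 5-point stencil f[0..4] (r = 3).
    b0..b2 are the Jiang-Shu smoothness indicators of the three substencils;
    tau5 = |b0 - b2| is the global indicator that defines WENO-Z."""
    b0 = 13/12*(f[0] - 2*f[1] + f[2])**2 + 1/4*(f[0] - 4*f[1] + 3*f[2])**2
    b1 = 13/12*(f[1] - 2*f[2] + f[3])**2 + 1/4*(f[1] - f[3])**2
    b2 = 13/12*(f[2] - 2*f[3] + f[4])**2 + 1/4*(3*f[2] - 4*f[3] + f[4])**2
    d = [0.1, 0.6, 0.3]                      # optimal linear weights
    tau5 = abs(b0 - b2)
    # eps guards the denominator; its size relative to beta_k sets accuracy
    alpha = [dk * (1.0 + (tau5 / (bk + eps)) ** p)
             for dk, bk in zip(d, [b0, b1, b2])]
    s = sum(alpha)
    return [a / s for a in alpha]
```

On smooth data the weights collapse to the optimal d, restoring fifth order; across a discontinuity the substencils containing the jump are suppressed, and a smaller ε (e.g. ε = Δx^m) sharpens that suppression, which is the trade-off the paper analyzes.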
Agati, Giovanni; Soudani, Kamel; Tuccio, Lorenza; Fierini, Elisa; Ben Ghozlen, Naïma; Fadaili, El Mostafa; Romani, Annalisa; Cerovic, Zoran G
2018-06-13
We analyzed the potential of non-destructive optical sensing of grape skin anthocyanins for selective harvesting in precision viticulture. We measured anthocyanins by a hand-held fluorescence optical sensor on a 7 ha Sangiovese vineyard plot in central Italy. Optical indices obtained by the sensor were calibrated for the transformation in units of anthocyanins per berry mass, i.e., milligrams per gram of berry fresh weight. A full protocol for optimal data filtration, interpolation, and homogeneous zone delineation based on a very large number of optical measurements is proposed. Both the single signal-based fluorescence index (ANTH_R) and the two signal ratio-based index (ANTH_RG) can be used for Sangiovese grapes. Significant separations of grape-quality batches were obtained by several methods of data classification and zone delineations. Basic statistical criteria were as efficient as the K-means clustering. The best separations were obtained for three classes of grape skin anthocyanin.
A first-order k-space model for elastic wave propagation in heterogeneous media.
Firouzi, K; Cox, B T; Treeby, B E; Saffari, N
2012-09-01
A pseudospectral model of linear elastic wave propagation is described based on the first-order stress-velocity equations of elastodynamics. k-space adjustments to the spectral gradient calculations are derived from the dyadic Green's function solution to the second-order elastic wave equation and used to (a) ensure the solution is exact for homogeneous wave propagation for timesteps of arbitrarily large size, and (b) allow larger time steps without loss of accuracy in heterogeneous media. The formulation in k-space allows the wavefield to be split easily into compressional and shear parts. A perfectly matched layer (PML) absorbing boundary condition was developed to effectively impose a radiation condition on the wavefield. The staggered grid, which is essential for accurate simulations, is described, along with other practical details of the implementation. The model is verified through comparison with exact solutions for canonical examples, and further examples are given to show the efficiency of the method for practical problems. The efficiency of the model is by virtue of the reduced points-per-wavelength requirement, the use of the fast Fourier transform (FFT) to calculate the gradients in k space, and the larger time steps made possible by the k-space adjustments.
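The core spectral-gradient step, multiplication by ik in Fourier space, can be sketched with a plain DFT; the full k-space scheme additionally scales the gradient by a sinc factor of the temporal sampling, which is omitted here. Function names are our own:

```python
import cmath
import math

def dft(a, sign):
    """Naive DFT (sign = -1 forward, +1 inverse without the 1/n factor)."""
    n = len(a)
    return [sum(a[j] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for j in range(n)) for k in range(n)]

def spectral_gradient(f, dx):
    """Differentiate periodic samples f by multiplying by ik in Fourier
    space: this is the (unadjusted) pseudospectral gradient, accurate to
    machine precision at far fewer points per wavelength than finite
    differences."""
    n = len(f)
    F = dft(f, -1)
    # physical wavenumbers, with the upper half of the spectrum wrapped
    ks = [2 * math.pi / (n * dx) * (k if k < n / 2 else k - n)
          for k in range(n)]
    dF = [1j * kk * Fk for kk, Fk in zip(ks, F)]
    return [v.real / n for v in dft(dF, +1)]
```

In a production code the naive DFT would be an FFT, which is where the efficiency claim in the abstract comes from.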
Groundwater contaminant plume maps and volumes, 100-K and 100-N Areas, Hanford Site, Washington
Johnson, Kenneth H.
2016-09-27
This study provides an independent estimate of the areal and volumetric extent of groundwater contaminant plumes which are affected by waste disposal in the 100-K and 100-N Areas (study area) along the Columbia River Corridor of the Hanford Site. The Hanford Natural Resource Trustee Council requested that the U.S. Geological Survey perform this interpolation to assess the accuracy of delineations previously conducted by the U.S. Department of Energy and its contractors, in order to assure that the Natural Resource Damage Assessment could rely on these analyses. This study is based on previously existing chemical (or radionuclide) sampling and analysis data downloaded from publicly available Hanford Site Internet sources, geostatistically selected and interpreted as representative of current (from 2009 through part of 2012) but average conditions for groundwater contamination in the study area. The study is limited in scope to five contaminants: hexavalent chromium, tritium, nitrate, strontium-90, and carbon-14, all detected at concentrations greater than regulatory limits in the past. All recent analytical concentrations (or activities) for each contaminant, adjusted for radioactive decay, non-detections, and co-located wells, were converted to log-normal distributions and these transformed values were averaged for each well location. The log-normally linearized well averages were spatially interpolated on a 50 × 50-meter (m) grid extending across the combined 100-N and 100-K Areas study area but limited to avoid unrepresentative extrapolation, using the minimum curvature geostatistical interpolation method provided by SURFER® data analysis software. Plume extents were interpreted by interpolating the log-normally transformed data, again using SURFER®, along lines of equal contaminant concentration at an appropriate established regulatory concentration. Total areas for each plume were calculated as an indicator of relative environmental damage.
These plume extents are shown graphically and in tabular form for comparison to previous estimates. Plume data also were interpolated to a finer grid (10 × 10 m) for some processing, particularly to estimate volumes of contaminated groundwater. However, hydrogeologic transport modeling was not considered for the interpolation. The compilation of plume extents for each contaminant also allowed estimates of overlap of the plumes or areas with more than one contaminant above regulatory standards. A mapping of saturated aquifer thickness also was derived across the 100-K and 100-N study area, based on the vertical difference between the groundwater level (water table) at the top and the altitude of the top of the Ringold Upper Mud geologic unit, considered the bottom of the uppermost unconfined aquifer. Saturated thickness was calculated for each cell in the finer (10 × 10 m) grid. The summation of the cells’ saturated thickness values within each polygon of plume regulatory exceedance provided an estimate of the total volume of contaminated aquifer, and the results also were checked using a SURFER® volumetric integration procedure. The total volume of contaminated groundwater in each plume was derived by multiplying the aquifer saturated thickness volume by a locally representative value of porosity (0.3). Estimates of the uncertainty of the plume delineation also are presented. “Upper limit” plume delineations were calculated for each contaminant using the same procedure as the “average” plume extent except with values at each well that are set at a 95-percent upper confidence limit around the log-normally transformed mean concentrations, based on the standard error for the distribution of the mean value in that well; “lower limit” plumes are calculated at a 5-percent confidence limit around the geometric mean.
These upper- and lower-limit estimates are considered unrealistic because the statistics were increased or decreased at each well simultaneously and were not adjusted for correlation among the well distributions (i.e., it is not realistic that all wells would be high simultaneously). Sources of the variability in the distributions used in the upper- and lower-extent maps include time-varying concentrations and analytical errors. The plume delineations developed in this study are similar to the previous plume descriptions developed by the U.S. Department of Energy and its contractors. The differences are primarily due to data selection and interpolation methodology. The differences in delineated plumes are not sufficient to result in the Hanford Natural Resource Trustee Council adjusting its understanding of contaminant impact or remediation.
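The volume step described in the report reduces to a sum over grid cells. A back-of-envelope sketch (assumed names; the actual grids and volumetric integration in the study come from SURFER®):

```python
def plume_water_volume(sat_thickness_m, cell_area_m2=100.0, porosity=0.3):
    """Contaminated-groundwater volume for one plume.
    sat_thickness_m: saturated-thickness values (m) for each 10 x 10 m grid
    cell inside the plume's regulatory-exceedance polygon.
    Volume = sum(thickness) * cell area * porosity."""
    aquifer_volume_m3 = sum(sat_thickness_m) * cell_area_m2
    return porosity * aquifer_volume_m3
```

For example, two 10 × 10 m cells with 2 m and 3 m of saturated thickness hold 0.3 × 500 = 150 m³ of contaminated groundwater.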
78 FR 26485 - Community Programs Guaranteed Loans
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-07
... community facility such as childcare, educational, or health care facilities; and amending Sec. 3575.25... manner delineated in 7 CFR part 3015, subpart V. Executive Order 12988, Civil Justice Reform This rule has been reviewed under Executive Order 12988, Civil Justice Reform. In accordance with this rule: (1...
NASA Astrophysics Data System (ADS)
Alperovich, Leonid; Averbuch, Amir; Eppelbaum, Lev; Zheludev, Valery
2013-04-01
Karst areas occupy about 14% of the world's land surface. Karst terranes of different origins create difficult conditions for construction, industrial activity, and tourism, and pose a heightened danger to the environment. Mapping of karst (sinkhole) hazards will clearly be one of the most significant problems of engineering geophysics in the 21st century. Given the complexity of geological media, unfavourable environments, and the known ambiguity of geophysical data analysis, examination by a single geophysical method may be insufficient. Wavelet methodology as a whole has a significant impact on cardinal problems of geophysical signal processing, such as denoising, signal enhancement, distinguishing signals with closely related characteristics, and integrated analysis of different geophysical fields (satellite, airborne, surface, or underground observations). We developed a three-phase approach to the integrated geophysical localization of subsurface karsts (the same approach could be used for subsequent monitoring of karst dynamics). The first phase consists of modeling to compute the various geophysical effects characterizing karst phenomena. The second phase develops signal-processing approaches for analyzing profile or areal geophysical observations. Finally, the third phase integrates these methods to create a new method for the combined interpretation of different geophysical data. Our combined geophysical analysis builds on modern developments in wavelet techniques for signal and image processing. The development of this integrated methodology of geophysical field examination will enable recognition of karst terranes even at small signal-to-noise ratios in complex geological environments. For analyzing the geophysical data, we used a technique based on an algorithm that characterizes a geophysical image by a limited number of parameters.
This set of parameters serves as a signature of the image and is used to discriminate images containing a karst cavity (K) from images not containing karst (N). The algorithm consists of the following main phases: (a) collection of the database, (b) characterization of geophysical images, and (c) dimensionality reduction. Each image is then characterized by the histogram of its coherency directions. As a result of these steps, we obtain two sets, K and N, of signature vectors for images from sections containing a karst cavity and from non-karst subsurface, respectively.
Parallel k-Means Clustering for Quantitative Ecoregion Delineation Using Large Data Sets
Jitendra Kumar; Richard T. Mills; Forrest M Hoffman; William W Hargrove
2011-01-01
Identification of geographic ecoregions has long been of interest to environmental scientists and ecologists for identifying regions of similar ecological and environmental conditions. Such classifications are important for predicting suitable species ranges, for stratification of ecological samples, and to help prioritize habitat preservation and remediation efforts....
Fine Arts: Music Core Curriculum, Grades 7-12.
ERIC Educational Resources Information Center
Utah State Office of Education, Salt Lake City.
This guide delineates Utah's secondary school music course curricula. The introductory section, "Music Connections," contains a music achievement portfolio for a general music course. The guide explains that "Music Connections" is an extension of the K-6 music core and includes concepts and skills to integrate music into…
The Practical Enactment of Adventure Learning: Where Will You AL@?
ERIC Educational Resources Information Center
Miller, Brant G.; Hougham, R. Justin; Eitel, Karla Bradley
2013-01-01
The Adventure Learning (AL) approach to designing and implementing learning experiences has great potential for practitioners. This manuscript delineates the practical enactment of AL to support the K-12 community, teacher educators, and residential environmental science program providers in the conceptualization and delivery of their own AL…
ERIC Educational Resources Information Center
Larmar, Stephen; Gatfield, Terry
2007-01-01
The Early Impact (EI) program is an early intervention and prevention program for reducing the incidence of conduct problems in pre-school aged children. The EI intervention framework is ecological in design and includes universal and indicated components. This paper delineates key principles and associated strategies that underpin the EI program.…
Review of Recent Literature on Figure Drawing Tests as Related to Research Problems in Art Education
ERIC Educational Resources Information Center
McWhinnie, Harold J.
1971-01-01
McFee's perception-delineation theory is supported. Major methodological problems of the psychological research presented are in the area of set and the control of specific art materials. Among the conclusions: figure drawing may not be culture fair; a person trained in visual arts should be employed in research using figure drawing tests. (VW)
ERIC Educational Resources Information Center
Kroneman, Leoniek M.; Hipwell, Alison E.; Loeber, Rolf; Koot, Hans M.; Pardini, Dustin A.
2011-01-01
Background: The presence of callous-unemotional (CU) features may delineate a severe and persistent form of conduct problems in children with unique developmental origins. Contextual risk factors such as poor parenting, delinquent peers, or neighborhood risk are believed to influence the development of conduct problems primarily in children with…
An analytically iterative method for solving problems of cosmic-ray modulation
NASA Astrophysics Data System (ADS)
Kolesnyk, Yuriy L.; Bobik, Pavol; Shakhov, Boris A.; Putis, Marian
2017-09-01
The development of an analytically iterative method for solving steady-state as well as unsteady-state problems of cosmic-ray (CR) modulation is proposed. Iterations for obtaining the solutions are constructed for the spherically symmetric form of the CR propagation equation. The main solution of the considered problem consists of the zero-order solution that is obtained during the initial iteration and amendments that may be obtained by subsequent iterations. Finding the zero-order solution is based on CR isotropy during propagation in space, whereas the anisotropy is taken into account when finding the subsequent amendments. To begin with, the method is applied to solve the problem of CR modulation where the diffusion coefficient κ and the solar wind speed u are constants with a Local Interstellar Spectrum (LIS). The solution obtained with two iterations was compared with an analytical solution and with numerical solutions. Finally, solutions that have only one iteration for two problems of CR modulation with u = constant and the same form of LIS were obtained and tested against numerical solutions. For the first problem, κ is proportional to the momentum of the particle p, so it has the form κ = k0η, where η = p/(m_0 c). For the second problem, the diffusion coefficient is given in the form κ = k0βη, where β = v/c is the particle speed relative to the speed of light. There was good agreement of the obtained solutions with the numerical solutions, as well as with the analytical solution for the problem where κ = constant.
Magnetocaloric properties of rare-earth substituted DyCrO{sub 3}
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDannald, A.; Jain, M., E-mail: menka.jain@uconn.edu; Department of Physics, University of Connecticut, Storrs, Connecticut 06269
Recently, there has been a focus on the need for efficient refrigeration technology without the use of expensive or harmful working fluids, especially at temperatures below 30 K. Solid-state refrigeration, based on the magnetocaloric effect, provides a possible solution to this problem. The rare-earth chromites (RCrO{sub 3}), especially DyCrO{sub 3} with its large-magnetic-moment dysprosium ion, are potential candidates for such an application. The Dy{sup 3+} ordering transition at low temperatures (<10 K) likely causes a large magnetocaloric response in this material. This study investigates the possibility of tuning the magnetocaloric properties through rare-earth substitution. Both Y{sup 3+} and Ho{sup 3+} substitutions were found to decrease the magnetocaloric response by disrupting the R{sup 3+} ordering, whereas Er{sup 3+} substitution was found to increase it, likely due to an increase in the R{sup 3+} ordering temperature. The large magnetocaloric entropy change of Er{sup 3+}-substituted DyCrO{sub 3} (10.92 J/kg K with a relative cooling power of 237 J/kg at 40 kOe and 5 K) indicates that this material system is well suited for low-temperature (<30 K) solid-state refrigeration applications.
NASA Astrophysics Data System (ADS)
Hooshyar, Milad; Wang, Dingbao; Kim, Seoyoung; Medeiros, Stephen C.; Hagen, Scott C.
2016-10-01
A method for automatic extraction of valley and channel networks from high-resolution digital elevation models (DEMs) is presented. This method utilizes both positive (i.e., convergent topography) and negative (i.e., divergent topography) curvature to delineate the valley network. The valley and ridge skeletons are extracted using the pixels' curvature and the local terrain conditions. The valley network is generated by checking the terrain for the existence of at least one ridge between two intersecting valleys. The transition from unchannelized to channelized sections (i.e., channel head) in each first-order valley tributary is identified independently by categorizing the corresponding contours using an unsupervised approach based on k-means clustering. The method does not require a spatially constant channel initiation threshold (e.g., curvature or contributing area). Moreover, instead of a point attribute (e.g., curvature), the proposed clustering method utilizes the shape of contours, which reflects the entire cross-sectional profile including possible banks. The method was applied to three catchments: Indian Creek and Mid Bailey Run in Ohio and Feather River in California. The accuracy of channel head extraction from the proposed method is comparable to state-of-the-art channel extraction methods.
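The channel-head step above relies on unsupervised k-means clustering of contour shapes. A minimal 1-D Lloyd's-algorithm sketch over a single hypothetical contour feature (the paper clusters whole cross-sectional profiles rather than a scalar, so this only illustrates the clustering step):

```python
def kmeans_1d(values, k=2, iters=100):
    """Plain Lloyd's algorithm on scalar features.
    Returns (labels, centers); labels[i] is the cluster index of values[i]."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]  # spread init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[j].append(v)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:      # converged
            break
        centers = new_centers
    labels = [min(range(k), key=lambda i: abs(v - centers[i])) for v in values]
    return labels, centers
```

With k = 2 the two clusters play the role of "unchannelized" and "channelized" contours, so no global curvature or contributing-area threshold is needed, which is the point the abstract emphasizes.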
Sensitivity Functions and Their Uses in Inverse Problems
2007-07-21
Σ0 is used in formulating the standard errors for our estimates θ̂n; these are given by SE_k = √((Σ0)_kk), k = 1, 2, ..., p. (5) Because θ0 in (4) is ... standard formula SE_k = √(σ̂²((χᵀχ)⁻¹)_kk), k = 1, 2, ..., p, (7) with χ(θ) an n × p sensitivity matrix for our model given by χ_jk(θ) = ∂f(t_j, θ)/∂θ_k. (8) ... Note that since θ = (K, r, x0), the standard error for K is indicated as the first entry in each of the ordered sets in each table, i.e., SE_K = SE_θ1.
Time accurate application of the MacCormack 2-4 scheme on massively parallel computers
NASA Technical Reports Server (NTRS)
Hudson, Dale A.; Long, Lyle N.
1995-01-01
Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson-type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
Improving the performance of minimizers and winnowing schemes
Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl
2017-01-01
Motivation: The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. Results: We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of their worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles, in the negative, a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. Availability and Implementation: The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. Contact: gmarcais@cs.cmu.edu or carlk@cs.cmu.edu PMID:28881970
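To make the minimizers scheme concrete: the selector below picks the minimum k-mer from every window of w consecutive k-mers, parameterized by the k-mer ordering. Swapping the default lexicographic order for a randomized or universal-hitting-set-based one is exactly the change the authors recommend. This is a sketch, not the paper's implementation:

```python
def minimizer_positions(seq, w, k, order=None):
    """Return sorted start positions of the (w,k)-minimizers of seq.
    Each window of w consecutive k-mers contributes the k-mer that is
    minimal under `order` (ties broken toward the leftmost position)."""
    order = order or (lambda kmer: kmer)     # default: lexicographic order
    chosen = set()
    n_kmers = len(seq) - k + 1
    for i in range(n_kmers - w + 1):
        window = [(order(seq[j:j + k]), j) for j in range(i, i + w)]
        chosen.add(min(window)[1])           # min over (key, position)
    return sorted(chosen)
```

The density len(positions) / n_kmers is the quantity the paper studies; consecutive windows usually share their minimizer, which is why the scheme samples far fewer than one k-mer per position.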
A critical evaluation of two-equation models for near wall turbulence
NASA Technical Reports Server (NTRS)
Speziale, Charles G.; Anderson, E. Clay; Abid, Ridha
1990-01-01
A basic theoretical and computational study of two-equation models for near-wall turbulent flows was conducted. Two major problems established for the K-epsilon model are discussed: the lack of natural boundary conditions for the dissipation rate, and the appearance of higher-order correlations in the balance of terms for the dissipation rate at the wall. The K-omega equation is shown to have two problems also: an exact viscous term is missing, and the destruction of the dissipation term is not properly damped near the wall. A new K-tau model (where tau = 1/omega is the turbulent time scale) was developed by inclusion of the exact viscous term, and by introduction of new wall damping functions with improved asymptotic behavior. A preliminary test of the new model yields improved predictions for the flat-plate turbulent boundary layer.
Delineation and management of sulfidic materials in Virginia highway corridors.
DOT National Transportation Integrated Search
2002-01-01
Excavation through sulfidic geologic materials during road construction has resulted in acid drainage related problems at numerous discrete locations across Virginia. Barren acidic roadbanks, and acidic runoff and fill seepage clearly cause local env...
NASA Astrophysics Data System (ADS)
Enzenhoefer, R.; Rodriguez-Pretelin, A.; Nowak, W.
2012-12-01
"From an engineering standpoint, the quantification of uncertainty is extremely important not only because it allows estimating risk but mostly because it allows taking optimal decisions in an uncertain framework" (Renard, 2007). The most common way to account for uncertainty in the field of subsurface hydrology and wellhead protection is to randomize spatial parameters, e.g. the log-hydraulic conductivity or porosity. This enables water managers to take robust decisions in delineating wellhead protection zones with rationally chosen safety margins in the spirit of probabilistic risk management. Probabilistic wellhead protection zones are commonly based on steady-state flow fields. However, several past studies showed that transient flow conditions may substantially influence the shape and extent of catchments. Therefore, we believe they should be accounted for in the probabilistic assessment and in the delineation process. The aim of our work is to show the significance of flow transients and to investigate the interplay between spatial uncertainty and flow transients in wellhead protection zone delineation. To this end, we advance our concept of probabilistic capture zone delineation (Enzenhoefer et al., 2012) that works with capture probabilities and other probabilistic criteria for delineation. The extended framework is able to evaluate the time fraction that any point on a map falls within a capture zone. In short, we separate capture probabilities into spatial/statistical and time-related frequencies. This will provide water managers additional information on how to manage a well catchment in the light of possible hazard conditions close to the capture boundary under uncertain and time-variable flow conditions. In order to save computational costs, we take advantage of super-positioned flow components with time-variable coefficients. 
We assume an instantaneous development of steady-state flow conditions after each temporal change in driving forces, following an idea by Festger and Walter, 2002. These quasi steady-state flow fields are cast into a geostatistical Monte Carlo framework to admit and evaluate the influence of parameter uncertainty on the delineation process. Furthermore, this framework enables conditioning on observed data with any conditioning scheme, such as rejection sampling, Ensemble Kalman Filters, etc. To further reduce the computational load, we use the reverse formulation of advective-dispersive transport. We simulate the reverse transport by particle-tracking random walk, which avoids numerical dispersion, and use it to account for well arrival times.
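The reverse particle-tracking random walk mentioned above can be sketched in one dimension: advection is run backward from the well and dispersion enters as Gaussian increments, so no transport grid (and hence no numerical dispersion) is involved. All parameter names and values below are illustrative, not from the study:

```python
import math
import random

def reverse_particle_positions(x_well, v, D, dt, n_steps, n_particles, seed=1):
    """1-D backward advective-dispersive random walk from a well.
    Each particle steps upstream by v*dt and receives a Gaussian dispersion
    increment with variance 2*D*dt; the final positions sample where water
    arriving at the well after n_steps*dt originated."""
    rng = random.Random(seed)
    positions = []
    for _ in range(n_particles):
        x = x_well
        for _ in range(n_steps):
            x -= v * dt                                   # reversed advection
            x += rng.gauss(0.0, math.sqrt(2 * D * dt))    # dispersion step
        positions.append(x)
    return positions
```

Binning the final positions over many Monte Carlo realizations of the flow field yields the capture probabilities on which the probabilistic delineation criteria operate.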
Reduced-Order Direct Numerical Simulation of Solute Transport in Porous Media
NASA Astrophysics Data System (ADS)
Mehmani, Yashar; Tchelepi, Hamdi
2017-11-01
Pore-scale models are an important tool for analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Current direct numerical simulation (DNS) techniques, while very accurate, are computationally prohibitive for sample sizes that are statistically representative of the porous structure. Reduced-order approaches such as pore-network models (PNM) aim to approximate the pore-space geometry and physics to remedy this problem. Predictions from current techniques, however, have not always been successful. This work focuses on single-phase transport of a passive solute under advection-dominated regimes and delineates the minimum set of approximations that consistently produce accurate PNM predictions. Novel network extraction (discretization) and particle simulation techniques are developed and compared to high-fidelity DNS simulations for a wide range of micromodel heterogeneities and a single sphere pack. Moreover, common modeling assumptions in the literature are analyzed and shown to lead to first-order errors under advection-dominated regimes. This work has implications for optimizing material design and operations in manufactured (electrodes) and natural (rocks) porous media pertaining to energy systems. This work was supported by the Stanford University Petroleum Research Institute for Reservoir Simulation (SUPRI-B).
High-order centered difference methods with sharp shock resolution
NASA Technical Reports Server (NTRS)
Gustafsson, Bertil; Olsson, Pelle
1994-01-01
In this paper we consider high-order centered finite difference approximations of hyperbolic conservation laws. We propose different ways of adding artificial viscosity to obtain sharp shock resolution. For the Riemann problem we give simple explicit formulas for obtaining stationary one- and two-point shocks. This can be done for any order of accuracy. It is shown that the addition of artificial viscosity is equivalent to ensuring the Lax k-shock condition. We also present numerical experiments that verify the theoretical results.
Knick, Steven T; Hanser, Steven E; Preston, Kristine L
2013-06-01
Greater sage-grouse Centrocercus urophasianus (Bonaparte) currently occupy approximately half of their historical distribution across western North America. Sage-grouse are a candidate for endangered species listing due to habitat and population fragmentation coupled with inadequate regulation to control development in critical areas. Conservation planning would benefit from accurate maps delineating required habitats and movement corridors. However, developing a species distribution model that incorporates the diversity of habitats used by sage-grouse across their widespread distribution has statistical and logistical challenges. We first identified the ecological minimums limiting sage-grouse, mapped similarity to the multivariate set of minimums, and delineated connectivity across a 920,000 km2 region. We partitioned a Mahalanobis D2 model of habitat use into k separate additive components each representing independent combinations of species-habitat relationships to identify the ecological minimums required by sage-grouse. We constructed the model from abiotic, land cover, and anthropogenic variables measured at leks (breeding) and surrounding areas within 5 km. We evaluated model partitions using a random subset of leks and historic locations and selected D2 (k = 10) for mapping a habitat similarity index (HSI). Finally, we delineated connectivity by converting the mapped HSI to a resistance surface. Sage-grouse required sagebrush-dominated landscapes containing minimal levels of human land use. Sage-grouse used relatively arid regions characterized by shallow slopes, even terrain, and low amounts of forest, grassland, and agriculture in the surrounding landscape. Most populations were interconnected although several outlying populations were isolated because of distance or lack of habitat corridors for exchange.
Land management agencies currently are revising land-use plans and designating critical habitat to conserve sage-grouse and avoid endangered species listing. Our results identifying attributes important for delineating habitats or modeling connectivity will facilitate conservation and management of landscapes important for supporting current and future sage-grouse populations.
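The Mahalanobis D2 statistic at the core of this habitat model measures how far a site's covariate vector lies from the multivariate mean of used sites, scaled by the covariance structure. A minimal sketch in pure Python (the two-variable example and identity covariance are illustrative assumptions, not the paper's leks data, and the k-component partitioning of D2 is not reproduced here):

```python
def mahalanobis_sq(x, mean, cov_inv):
    """Squared Mahalanobis distance: D2 = (x - m)^T S^{-1} (x - m)."""
    d = [xi - mi for xi, mi in zip(x, mean)]
    n = len(d)
    return sum(d[i] * cov_inv[i][j] * d[j] for i in range(n) for j in range(n))

# Hypothetical example: two standardized habitat variables (say, sagebrush
# cover and human land use) with identity covariance, so D2 reduces to the
# squared Euclidean distance from the multivariate mean of used sites.
site = (1.0, 2.0)
mean = (0.0, 0.0)
cov_inv = [[1.0, 0.0], [0.0, 1.0]]
d2 = mahalanobis_sq(site, mean, cov_inv)  # 1^2 + 2^2 = 5.0
```

A habitat similarity index is then some monotone decreasing transform of D2 (smaller D2 = closer to the ecological minimums), which is what gets converted to a resistance surface for the connectivity analysis.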
How to cluster in parallel with neural networks
NASA Technical Reports Server (NTRS)
Kamgar-Parsi, Behzad; Gualtieri, J. A.; Devaney, Judy E.; Kamgar-Parsi, Behrooz
1988-01-01
Partitioning a set of N patterns in a d-dimensional metric space into K clusters - such that patterns in a given cluster are more similar to each other than to the rest - is a problem of interest in astrophysics, image analysis and other fields. As there are approximately K^N/K! possible ways of partitioning the patterns among K clusters, finding the best solution is beyond exhaustive search when N is large. Researchers show that this problem can be formulated as an optimization problem for which very good, but not necessarily optimal, solutions can be found by using a neural network. To do this the network must start from many randomly selected initial states. The network is simulated on the MPP (a 128 x 128 SIMD array machine), where researchers use the massive parallelism not only in solving the differential equations that govern the evolution of the network, but also by starting the network from many initial states at once, thus obtaining many solutions in one run. Researchers obtain speedups of two to three orders of magnitude over serial implementations and the promise, through analog VLSI implementations, of speedups commensurate with human perceptual abilities.
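The partition count cited above is exactly the Stirling number of the second kind S(N, K), which K^N/K! approximates for large N. A short sketch of the standard recurrence, with illustrative values, shows why exhaustive search is hopeless even for modest N:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """S(n, k): ways to partition n labeled patterns into k nonempty clusters.
    Recurrence: the nth pattern either joins one of k existing clusters or
    starts a new one -> S(n, k) = k*S(n-1, k) + S(n-1, k-1)."""
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

N, K = 20, 3
exact = stirling2(N, K)             # already ~5.8e8 partitions for 20 patterns
approx = K**N // math.factorial(K)  # the K^N/K! estimate used in the abstract
```

For N = 20, K = 3 the exact count is 580,606,446 and K^N/K! gives 581,130,733, so the approximation is within 0.1% even at this small size; the gap shrinks further as N grows.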
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letlow, K.; Lopreato, S.C.; Meriwether, M.
The institutional aspect of the study attempts to identify possible effects of geothermal research, development, and utilization on the area and its inhabitants in three chapters. Chapters I and II address key socio-economic and demographic variables. The initial chapter provides an overview of the area where the resource is located. Major data are presented that can be used to establish a baseline description of the region for comparison over time and to delineate crucial areas for future study with regard to geothermal development. The chapter highlights some of the variables that reflect the cultural nature of the Gulf Coast, its social characteristics, labor force, and services in an attempt to delineate possible problems with and barriers to the development of geothermal energy in the region. The following chapter focuses on the local impacts of geothermal wells and power-generating facilities using data on such variables as size and nature of construction and operating crews. Data are summarized for the areas studied. A flow chart is utilized to describe research that is needed in order to exploit the resource as quickly and effectively as possible. Areas of interface among various parts of the research that will include exchange of data between the social-cultural group and the institutional, legal, environmental, and resource utilization groups are identified. (MCW)
Psychoanalytic theory in times of terror.
Connolly, Angela
2003-09-01
Recent events have underlined in the most tragic and dramatic way the need for depth psychology to turn its attention to the psychology of terror. The present paper attempts to distinguish between the psychological modes of horror and terror and explores the different theoretical approaches of Burke, Freud, Kristeva and Jung to this problem in order to cast light on the individual and collective functions that horror and terror play. While all these authors stress that terror and horror play a role in structuring the sense of identity and in strengthening community bonds, Freud and Kristeva believe that the experience of horror works to increase the exclusion of otherness through mechanisms of repression or foreclosure while Burke and Jung see in the encounter with the Negative Sublime or with the Shadow the possibility of widening the boundaries of ego consciousness and of integration of 'otherness'. The paper then uses the analysis of two horror movies and of a particular socio-cultural context to illustrate these different functions of horror and terror and to delineate possible solutions to the problems facing society.
Strengthening Hope and Purpose in Community College Futures through Strategic Marketing Planning.
ERIC Educational Resources Information Center
Scigliano, John A.
1981-01-01
After defining marketing, describes the application of strategic marketing planning to community college funding problems. Delineates alternative sources of funding and creative techniques for tapping them. A marketing index for higher education is appended. (AYC)
Coupling of activation and inactivation gate in a K+-channel: potassium and ligand sensitivity
Ader, Christian; Schneider, Robert; Hornig, Sönke; Velisetty, Phanindra; Vardanyan, Vitya; Giller, Karin; Ohmert, Iris; Becker, Stefan; Pongs, Olaf; Baldus, Marc
2009-01-01
Potassium (K+)-channel gating is choreographed by a complex interplay between external stimuli, K+ concentration and lipidic environment. We combined solid-state NMR and electrophysiological experiments on a chimeric KcsA–Kv1.3 channel to delineate K+, pH and blocker effects on channel structure and function in a membrane setting. Our data show that pH-induced activation is correlated with protonation of glutamate residues at or near the activation gate. Moreover, K+ and channel blockers distinctly affect the open probability of both the inactivation gate comprising the selectivity filter of the channel and the activation gate. The results indicate that the two gates are coupled and that effects of the permeant K+ ion on the inactivation gate modulate activation-gate opening. Our data suggest a mechanism for controlling coordinated and sequential opening and closing of activation and inactivation gates in the K+-channel pore. PMID:19661921
Direct discontinuous Galerkin method and its variations for second order elliptic equations
Huang, Hongying; Chen, Zheng; Li, Jin; ...
2016-08-23
In this paper, we study the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for 2nd order elliptic problems. A priori error estimates under the energy norm are established for all four methods. Optimal error estimates under the L2 norm are obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically we obtain optimal (k+1)th order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal (k+1)th order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.
Assessment of Growth Problems in Adolescents
Baker, F.W.
1986-01-01
Investigation of an adolescent growth problem consists of taking an adequate history and doing a complete physical examination. This procedure, along with a calculation of bone age and height/weight age, will allow family physicians to decide on the cause of the growth variance in most patients. Relatively simple studies can be done in the family physician's office to delineate the major causes of growth problems; the majority will be unrelated to the endocrine system. Further studies may be needed in a hospital-based setting. PMID:21267222
Homogenization of Winkler-Steklov spectral conditions in three-dimensional linear elasticity
NASA Astrophysics Data System (ADS)
Gómez, D.; Nazarov, S. A.; Pérez, M. E.
2018-04-01
We consider a homogenization Winkler-Steklov spectral problem that consists of the elasticity equations for a three-dimensional homogeneous anisotropic elastic body which has a plane part of the surface subject to alternating boundary conditions on small regions periodically placed along the plane. These conditions are of the Dirichlet type and of the Winkler-Steklov type, the latter containing the spectral parameter. The rest of the boundary of the body is fixed, and the period and size of the regions, where the spectral parameter arises, are of order ε. For fixed ε, the problem has a discrete spectrum, and we address the asymptotic behavior of the eigenvalues {β_k^ε}_{k=1}^∞ as ε → 0. We show that β_k^ε = O(ε^{-1}) for each fixed k, and we observe a common limit point for all the rescaled eigenvalues ε β_k^ε, while we make it evident that, although the periodicity of the structure only affects the boundary conditions, a band-gap structure of the spectrum is inherited asymptotically. Also, we provide the asymptotic behavior for certain "groups" of eigenmodes.
Kernel temporal enhancement approach for LORETA source reconstruction using EEG data.
Torres-Valencia, Cristian A; Santamaria, M Claudia Joana; Alvarez, Mauricio A
2016-08-01
Reconstruction of brain sources from magnetoencephalography and electroencephalography (M/EEG) data is a well known problem in the neuroengineering field. An inverse problem must be solved, and several methods have been proposed. Low Resolution Electromagnetic Tomography (LORETA) and its proposed variations, standardized LORETA (sLORETA) and standardized weighted LORETA (swLORETA), solve the inverse problem following a non-parametric approach, that is, by setting dipoles in the whole brain domain in order to estimate the dipole positions from the M/EEG data while assuming some spatial priors. Errors in the reconstruction of sources arise due to the low spatial resolution of the LORETA framework and the influence of noise in the observable data. In this work a kernel temporal enhancement (kTE) is proposed as a preprocessing stage of the data that, in combination with the swLORETA method, improves the source reconstruction. The results are quantified in terms of three dipole localization error metrics, and the swLORETA + kTE strategy obtained the best results across different signal-to-noise ratios (SNR) in random-dipole simulations from synthetic EEG data.
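LORETA-type solvers belong to the family of regularized minimum-norm inverses: with leadfield matrix L (sensors x sources) and sensor data v, the basic estimate is j = L^T (L L^T + λI)^{-1} v, to which sLORETA/swLORETA add standardization and weighting steps. A toy sketch of that core step in pure Python (the 2x2 leadfield is a hypothetical placeholder; this is not the authors' kTE preprocessing):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def minimum_norm(L, v, lam=0.01):
    """Regularized minimum-norm estimate: j = L^T (L L^T + lam*I)^{-1} v."""
    m, s = len(L), len(L[0])
    G = [[sum(L[i][k] * L[j][k] for k in range(s)) + (lam if i == j else 0.0)
          for j in range(m)] for i in range(m)]
    w = solve(G, v)  # G w = v
    return [sum(L[i][q] * w[i] for i in range(m)) for q in range(s)]

# Hypothetical 2-sensor, 2-source identity leadfield: with lam=0 the data are
# reproduced exactly; lam > 0 shrinks the estimate toward zero.
j = minimum_norm([[1.0, 0.0], [0.0, 1.0]], [1.0, 2.0], lam=0.0)
```

The regularization parameter λ trades data fit against source energy; the low spatial resolution the abstract mentions is inherent to this family, since the minimum-norm criterion spreads activity across correlated sources.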
Attractor scenarios and superluminal signals in k-essence cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Jin U; Arnold Sommerfeld Center, Department of Physics, Ludwig-Maximilians University, Theresienstrasse 37, 80333 Munich; Vanchurin, Vitaly
Cosmological scenarios with k-essence are invoked in order to explain the observed late-time acceleration of the Universe. These scenarios avoid the need for fine-tuned initial conditions (the 'coincidence problem') because of the attractorlike dynamics of the k-essence field φ. It was recently shown that all k-essence scenarios with Lagrangians p = L(X)φ^{-2}, where X ≡ (1/2)φ_{,μ}φ^{,μ}, necessarily involve an epoch where perturbations of φ propagate faster than light (the 'no-go theorem'). We carry out a comprehensive study of attractorlike cosmological solutions ('trackers') involving a k-essence scalar field φ and another matter component. The result of this study is a complete classification of k-essence Lagrangians that admit asymptotically stable tracking solutions, among all Lagrangians of the form p = K(φ)L(X). Using this classification, we select the class of models that describe the late-time acceleration and avoid the coincidence problem through the tracking mechanism. An analogous 'no-go theorem' still holds for this class of models, indicating the existence of a superluminal epoch. In the context of k-essence cosmology, the superluminal epoch does not lead to causality violations. We discuss the implications of superluminal signal propagation for possible causality violations in Lorentz-invariant field theories.
US Army Medical Bioengineering Research and Development Laboratory Annual Progress Report for FY 83.
1983-10-01
Army consumes is chlorinated. Also, the water from Army wastewater treatment plants is chlorinated before it is returned to the environment.I Because...12K; CY - K; BY - OK PROBLEM DEFINITION: Chlorination of drinking water and of effluents from wastewater treatment plants is standard practice employed...FY81 indicated no research - activity on this type of photocell. Materials and chemicals have been ordered and assembled. A preliminary cell has been
Spacecraft Charging Calculations: NASCAP-2K and SEE Spacecraft Charging Handbook
NASA Technical Reports Server (NTRS)
Davis, V. A.; Neergaard, L. F.; Mandell, M. J.; Katz, I.; Gardner, B. M.; Hilton, J. M.; Minor, J.
2002-01-01
For fifteen years the NASA and Air Force Charging Analyzer Program for Geosynchronous Orbits (NASCAP/GEO) has been the workhorse of spacecraft charging calculations. Two new tools, the Space Environment and Effects (SEE) Spacecraft Charging Handbook (recently released) and Nascap-2K (under development), use improved numeric techniques and modern user interfaces to tackle the same problem. The SEE Spacecraft Charging Handbook provides first-order, lower-resolution solutions while Nascap-2K provides higher resolution results appropriate for detailed analysis. This paper illustrates how the improvements in the numeric techniques affect the results.
Application of clustering for customer segmentation in private banking
NASA Astrophysics Data System (ADS)
Yang, Xuan; Chen, Jin; Hao, Pengpeng; Wang, Yanbo J.
2015-07-01
With fierce competition in the banking industry, more and more banks have realised that accurate customer segmentation is of fundamental importance, especially for the identification of high-value customers. In order to solve this problem, we collected real data about private banking customers of a commercial bank in China and conducted an empirical analysis by applying the K-means clustering technique. When determining the K value, we propose a mechanism that meets both academic requirements and practical needs. Through K-means clustering, we successfully segmented the customers into three categories, and the features of each group are illustrated in detail.
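The segmentation step is standard Lloyd's K-means: alternate assigning each customer to the nearest centroid and recomputing centroids as cluster means. A minimal pure-Python sketch (the feature values and the evenly spaced initialization are illustrative assumptions, not the bank's data or the authors' K-selection mechanism):

```python
import math

def kmeans(points, k, iters=50):
    """Lloyd's algorithm: alternate assignment and centroid-update steps."""
    step = max(1, len(points) // k)
    centroids = [points[i * step] for i in range(k)]   # spread-out init (assumption)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):                 # assignment step
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        for c in range(k):                             # update step
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(v) / len(members) for v in zip(*members))
    return centroids, labels

# Hypothetical standardized features per customer: (assets, transaction frequency)
customers = [(0.10, 0.20), (0.15, 0.25), (2.0, 2.1), (2.2, 1.9), (5.0, 4.8), (5.1, 5.2)]
cents, labs = kmeans(customers, k=3)   # labs -> [0, 0, 1, 1, 2, 2]
```

In practice K is chosen by combining a statistical criterion (e.g. the within-cluster inertia "elbow") with business constraints such as how many tiers the bank can actually service, which is the kind of dual requirement the abstract's mechanism addresses.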
Genetic assessment of leech species from yak (Bos grunniens) in the tract of Northeast India.
Chatterjee, Nilkantha; Dhar, Bishal; Bhattarcharya, Debasis; Deori, Sourabh; Doley, Juwar; Bam, Joken; Das, Pranab J; Bera, Asit K; Deb, Sitangshu M; Devi, Ningthoujam Neelima; Paul, Rajesh; Malvika, Sorokhaibam; Ghosh, Sankar Kumar
2018-01-01
Yak is an iconic symbol of Tibet and the high altitudes of Northeast India. It is highly cherished for milk, meat, and skin. However, yaks suffer drastic changes in milk production, weight loss, etc., when infested by parasites. Among these, infestation by leeches is a serious problem in the Himalayan belt of Northeast India. The parasite feeds on blood externally or from body orifices, like the nasopharynx, mouth, rectum, etc. But there have been limited data about the leech species infesting the yak in that region because of the difficulties in morphological identification due to plasticity of the body and changes in shape and surface structure, which warrants the molecular characterization of leech. In anticipation, this study should be influential in the proper identification of leech species infesting yak in that tract and also helpful in inventorying the leech species of Northeast India. Here, we investigated, through a combined approach of molecular markers and morphological parameters, the identification of leech species infesting yak. The DNA sequences of the COI barcode fragment and the 18S and 28S rDNA were analyzed for species identification. The generated sequences were subjected to similarity matching in the global database and analyzed further through Neighbour-Joining (K2P distance-based) as well as Maximum Likelihood approaches. Among the three markers, only COI was successful in delineating species, whereas the 18S and 28S failed to delineate the species. Our study confirmed the presence of species from the genera Hirudinaria, Haemadipsa, and Whitmania, and one species, Myxobdella annandalae, which has not been previously reported from this region.
Informed choice and deaf children: underpinning concepts and enduring challenges.
Young, Alys; Carr, Gwen; Hunt, Ros; McCracken, Wendy; Skipp, Amy; Tattersall, Helen
2006-01-01
This article concerns the first stage of a research and development project that aimed to produce both parent and professional guidelines on the promotion and provision of informed choice for families with deaf children. It begins with a theoretical discussion of the problems associated with the concept of informed choice and deaf child services and then focuses specifically on why a metastudy approach was employed to address both the overcontextualized debate about informed choice when applied to deaf children and the problems associated with its investigation in practice with families and professionals. It presents a detailed analysis of the conceptual relevance of a range of identified studies "outside" the field of deafness. These are ordered according to 2 main conceptual categories and 7 subcategories: (a) the nature of information: "information that is evaluative, not just descriptive"; "the difficulties of information for a purpose"; "the origins and status of information"; and "informed choice and knowledge, not informed choice and information"; and (b) parameters and definitions of choice: "informed choice as absolute and relative concept", "preferences and presumptions of rationality", and "informed choice for whom?" Relevant deaf child literature is integrated into the discussion of each conceptual debate in order both to expand and challenge current usage of informed choice as applied to deaf children and families and to delineate possible directions in the planning of the next stage of the main project aimed at producing parent/professional guidelines.
Physiological Imaging-Defined, Response-Driven Subvolumes of a Tumor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farjam, Reza; Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan; Tsien, Christina I.
2013-04-01
Purpose: To develop an image analysis framework to delineate the physiological imaging-defined subvolumes of a tumor in relation to treatment response and outcome. Methods and Materials: Our proposed approach delineates the subvolumes of a tumor based on its heterogeneous distributions of physiological imaging parameters. The method assigns each voxel a probabilistic membership function belonging to the physiological parameter classes defined in a sample of tumors, and then calculates the related subvolumes in each tumor. We applied our approach to regional cerebral blood volume (rCBV) and Gd-DTPA transfer constant (K^trans) images of patients who had brain metastases and were treated by whole-brain radiation therapy (WBRT). A total of 45 lesions were included in the analysis. Changes in the rCBV (or K^trans)-defined subvolumes of the tumors from pre-RT to 2 weeks after the start of WBRT (2W) were evaluated for differentiation of responsive, stable, and progressive tumors using the Mann-Whitney U test. Performance of the newly developed metrics for predicting tumor response to WBRT was evaluated by receiver operating characteristic (ROC) curve analysis. Results: The percentage decrease in the high-CBV-defined subvolumes of the tumors from pre-RT to 2W was significantly greater in the group of responsive tumors than in the group of stable and progressive tumors (P<.007). The change in the high-CBV-defined subvolumes of the tumors from pre-RT to 2W was a predictor for post-RT response significantly better than change in the gross tumor volume observed during the same time interval (P=.012), suggesting that the physiological change occurs before the volumetric change. Also, K^trans did not add significant discriminatory information for assessing response with respect to rCBV.
Conclusion: The physiological imaging-defined subvolumes of the tumors delineated by our method could be candidates for boost targets, for which further development and evaluation is warranted.
Wu, Gang; Liu, Wen; Berka, Vladimir; Tsai, Ah-Lim
2017-09-01
To delineate the commonalities and differences in gaseous ligand discrimination among the heme-based sensors with the Heme Nitric oxide/OXygen binding protein (H-NOX) scaffold, the binding kinetic parameters for the gaseous ligands NO, CO, and O2, including K_D, k_on, and k_off, of Shewanella oneidensis H-NOX (So H-NOX) were characterized in detail in this study and compared to those of previously characterized H-NOXs from Clostridium botulinum (Cb H-NOX), Nostoc sp. (Ns H-NOX), Thermoanaerobacter tengcongensis (Tt H-NOX), Vibrio cholerae (Vc H-NOX), and human soluble guanylyl cyclase (sGC), an H-NOX analogue. The K_D(NO) and K_D(CO) of each bacterial H-NOX or sGC follow the "sliding scale rule"; the affinities of the bacterial H-NOXs for NO and CO vary in a small range but are stronger than those of sGC by at least two orders of magnitude. On the other hand, each bacterial H-NOX exhibits different characteristics in the stability of its 6c NO complex, reactivity with secondary NO, stability of the oxyferrous heme, and autoxidation to ferric heme. A facile access channel for gaseous ligands is also identified, implying that ligand access has only minimal effect on the gaseous ligand selectivity of H-NOXs or sGC. This comparative study of the binding parameters of the bacterial H-NOXs and sGC provides a basis to guide future new structural and functional studies of each specific heme sensor with the H-NOX protein fold. Copyright © 2017 Elsevier B.V. and Société Française de Biochimie et Biologie Moléculaire (SFBBM). All rights reserved.
On-line Tools for Assessing Petroleum Releases
The Internet tools described in this report provide methods and models for evaluation of contaminated sites. Two problems are addressed by models. The first is the placement of wells for correct delineation of contaminant plumes. Because aquifer recharge can displace plumes dow...
Homosexual Behavior and the School Counselor.
ERIC Educational Resources Information Center
Powell, Robert Earl
1987-01-01
Examines some of the problems and issues that confront adolescent gay and lesbian students in the school environment and focuses on an understanding of the sexual preference of these youths as a means of delineating roles for the school counselor. (Author/ABB)
Improving the performance of minimizers and winnowing schemes.
Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl
2017-07-15
The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of its worse behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. gmarcais@cs.cmu.edu or carlk@cs.cmu.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
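The scheme itself is simple to state: slide a window of w consecutive k-mers along the sequence and, in each window, keep the position of the smallest k-mer under some total order. What the paper varies is that order. A toy sketch contrasting the lexicographic order with a randomized order (the sequence and parameters are illustrative, not the paper's benchmarks):

```python
import random

def minimizer_positions(seq, k, w, order):
    """Positions selected as window minimizers under the given k-mer order.
    Ties are broken by the leftmost position, a common convention."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    selected = set()
    for start in range(len(kmers) - w + 1):
        window = range(start, start + w)
        selected.add(min(window, key=lambda i: (order(kmers[i]), i)))
    return sorted(selected)

seq = "ACGTACGTTGCAACGGTACGATCGAACGT"
lex = minimizer_positions(seq, k=4, w=5, order=lambda s: s)       # lexicographic
rng = random.Random(1)
salt = {m: rng.random() for m in {seq[i:i + 4] for i in range(len(seq) - 3)}}
rnd = minimizer_positions(seq, k=4, w=5, order=salt.__getitem__)  # randomized order
```

Both orders guarantee the coverage property (every window contributes a selected position), but the density of selected positions, i.e. the fraction of k-mer positions kept, depends on the order, which is exactly the quantity the paper analyzes when arguing for universal-hitting-set or randomized orders over the lexicographic one.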
ERIC Educational Resources Information Center
Norberg, Melissa M.; Battisti, Robert A.; Copeland, Jan; Hermens, Daniel F.; Hickie, Ian B.
2012-01-01
The aim of the current study was to delineate the psychiatric profile of cannabis dependent young people (14-29 years old) with mental health problems (N = 36) seeking treatment via a research study. To do so, the Structured Clinical Interview for DSM-IV-TR Axis I Disorders and the Structured Clinical Interview for DSM-IV Childhood Diagnoses were…
Phenotypic and genetic structure of traits delineating personality disorder.
Livesley, W J; Jang, K L; Vernon, P A
1998-10-01
The evidence suggests that personality traits are hierarchically organized with more specific or lower-order traits combining to form more generalized higher-order traits. Agreement exists across studies regarding the lower-order traits that delineate personality disorder but not the higher-order traits. This study seeks to identify the higher-order structure of personality disorder by examining the phenotypic and genetic structures underlying lower-order traits. Eighteen lower-order traits were assessed using the Dimensional Assessment of Personality Disorder-Basic Questionnaire in samples of 656 personality disordered patients, 939 general population subjects, and a volunteer sample of 686 twin pairs. Principal components analysis yielded 4 components, labeled Emotional Dysregulation, Dissocial Behavior, Inhibitedness, and Compulsivity, that were similar across the 3 samples. Multivariate genetic analyses also yielded 4 genetic and environmental factors that were remarkably similar to the phenotypic factors. Analysis of the residual heritability of the lower-order traits when the effects of the higher-order factors were removed revealed a substantial residual heritable component for 12 of the 18 traits. The results support the following conclusions. First, the stable structure of traits across clinical and nonclinical samples is consistent with dimensional representations of personality disorders. Second, the higher-order traits of personality disorder strongly resemble dimensions of normal personality. This implies that a dimensional classification should be compatible with normative personality. Third, the residual heritability of the lower-order traits suggests that the personality phenotypes are based on a large number of specific genetic components.
The Local Discontinuous Galerkin Method for Time-Dependent Convection-Diffusion Systems
NASA Technical Reports Server (NTRS)
Cockburn, Bernardo; Shu, Chi-Wang
1997-01-01
In this paper, we study the Local Discontinuous Galerkin methods for nonlinear, time-dependent convection-diffusion systems. These methods extend the Runge-Kutta Discontinuous Galerkin methods for purely hyperbolic systems to convection-diffusion systems and share with those methods their high parallelizability, their high-order formal accuracy, and their easy handling of complicated geometries for convection-dominated problems. It is proven that for scalar equations, the Local Discontinuous Galerkin methods are L(sup 2)-stable in the nonlinear case. Moreover, in the linear case, it is shown that if polynomials of degree k are used, the methods are k-th order accurate for general triangulations; although this order of convergence is suboptimal, it is sharp for the LDG methods. Preliminary numerical examples displaying the performance of the method are shown.
Rapacchi, Stanislas; Han, Fei; Natsuaki, Yutaka; Kroeker, Randall; Plotnik, Adam; Lehman, Evan; Sayre, James; Laub, Gerhard; Finn, J Paul; Hu, Peng
2014-01-01
Purpose: We propose a compressed-sensing (CS) technique based on magnitude image subtraction for high spatial and temporal resolution dynamic contrast-enhanced MR angiography (CE-MRA). Methods: Our technique integrates the magnitude difference image into the CS reconstruction to promote subtraction sparsity. Fully sampled Cartesian 3D CE-MRA datasets from 6 volunteers were retrospectively under-sampled and three reconstruction strategies were evaluated: k-space subtraction CS, independent CS, and magnitude subtraction CS. The techniques were compared in image quality (vessel delineation, image artifacts, and noise) and image reconstruction error. Our CS technique was further tested on 7 volunteers using a prospectively under-sampled CE-MRA sequence. Results: Compared with k-space subtraction and independent CS, our magnitude subtraction CS provides significantly better vessel delineation and less noise at 4X acceleration, and significantly less reconstruction error at 4X and 8X (p<0.05 for all). On a 1–4 point image quality scale of vessel delineation, our technique scored 3.8±0.4 at 4X, 2.8±0.4 at 8X, and 2.3±0.6 at 12X acceleration. Using our CS sequence at 12X acceleration, we were able to acquire dynamic CE-MRA with higher spatial and temporal resolution than the current clinical TWIST protocol while maintaining comparable image quality (2.8±0.5 vs. 3.0±0.4, p=NS). Conclusion: Our technique is promising for dynamic CE-MRA. PMID:23801456
NASA Astrophysics Data System (ADS)
Caceres, Jhon
Three-dimensional (3D) models of urban infrastructure comprise critical data for planners working on problems in wireless communications, environmental monitoring, civil engineering, and urban planning, among other tasks. Photogrammetric methods have been the most common approach to date to extract building models. However, Airborne Laser Swath Mapping (ALSM) observations offer a competitive alternative because they overcome some of the ambiguities that arise when trying to extract 3D information from 2D images. Regardless of the source data, the building extraction process requires segmentation and classification of the data and building identification. In this work, approaches for classifying ALSM data, separating building and tree points, and delineating ALSM footprints from the classified data are described. Digital aerial photographs are used in some cases to verify results, but the objective of this work is to develop methods that can work on ALSM data alone. A robust approach for separating tree and building points in ALSM data is presented. The method is based on supervised learning of the classes (tree vs. building) in a high dimensional feature space that yields good class separability. Features used for classification are based on the generation of local mappings, from three-dimensional space to two-dimensional space, known as "spin images" for each ALSM point to be classified. The method discriminates ALSM returns in compact spaces and even where the classes are very close together or overlapping spatially. A modified algorithm of the Hough Transform is used to orient the spin images, and the spin image parameters are specified such that the mutual information between the spin image pixel values and class labels is maximized. This new approach to ALSM classification allows us to fully exploit the 3D point information in the ALSM data while still achieving good class separability, which has been a difficult trade-off in the past. 
Supported by the spin image analysis for obtaining an initial classification, an automatic approach for delineating accurate building footprints is presented. It has long been recognized that laser pulses striking building edges can produce very different first and last return elevations. However, in older-generation ALSM systems (<50 kHz pulse rates) such points were too few and far between to delineate building footprints precisely. Furthermore, without robust separation of nearby trees and vegetation from the buildings, simply extracting ALSM shots where the elevation of the first return was much higher than the elevation of the last return was not a reliable means of identifying building footprints. With the advent of ALSM systems with pulse rates in excess of 100 kHz, and by using spin-image-based segmentation, it is now possible to extract building edges from the point cloud. A refined classification that incorporates this "on-edge" information is developed for obtaining quadrangular footprints. The footprint fitting process involves line generalization, least-squares-based clustering, and dominant-point finding for segmenting individual building edges. In addition, an algorithm for fitting complex footprints using the segmented edges and the data inside footprints is also proposed.
DELINEATING TOXIC AREAS BY CANINE OLFACTION
A research project was undertaken to learn how the highly acute olfactory sensitivity of the canine could be applied with advantage to environmental problems. The objectives were to determine how dogs could be trained to detect hazardous and toxic pollutants in the environment an...
A Fuzzy Logic Optimal Control Law Solution to the CMMCA Tracking Problem
1993-03-01
or from a transfer function. Many times, however, the resulting algorithms are so complex as to be completely or essentially useless. Applications...implemented in a nearly real time computer simulation. Located within the LQ framework are all the performance data for both the CMMCA and the CX...required nor desired. A more general and less exacting framework was used. In order to concentrate on the theory and problem solution, it was
ERIC Educational Resources Information Center
Maryland State Dept. of Education, Baltimore.
The Maryland School Performance Program for 1992 puts forward social studies outcomes and indicators for grades K-3, grades 4-5, and grades 6-8. Specific indicators for each grade grouping further delineate the following seven individual outcomes: (1) political systems--students will demonstrate an understanding of the historical development and…
Sutphin, David M.; Hammarstrom, Jane M.; Drew, Lawrence J.; Large, Duncan E.; Berger, Byron R.; Dicken, Connie L.; DeMarr, Michael W.; with contributions from Billa, Mario; Briskey, Joseph A.; Cassard, Daniel; Lips, Andor; Pertold, Zdeněk; Roşu, Emilian
2013-01-01
The assessment includes an overview with summary tables. Detailed descriptions of each tract, including the rationales for delineation and assessment, are given in appendixes A–G. Appendix H describes a geographic information system (GIS) that includes tract boundaries and point locations of known porphyry copper deposits and significant prospects.
ERIC Educational Resources Information Center
Shapiro, Edward S., Ed.; Kratochwill, Thomas R., Ed.
This guide brings educational practitioners up to date on how to administer and interpret a wide range of assessment methods for students presenting with emotional and behavioral difficulties. It offers insights and tools for K-12 practitioners and trainees in regular and special education. Delineating a concise conceptual framework, the first…
Physical Fitness in the K-12 Curriculum. Some Defensible Solutions to Perennial Problems.
ERIC Educational Resources Information Center
Corbin, Charles B.
1987-01-01
Appropriate regular physical activity produces significant health benefits. Physical education can promote such activity, but for lifetime fitness, people must move to higher-order objectives, such as establishing personal exercise programs. Ways physical educators can motivate students to enjoy a lifetime of fitness are presented. (MT)
Robust object matching for persistent tracking with heterogeneous features.
Guo, Yanlin; Hsu, Steve; Sawhney, Harpreet S; Kumar, Rakesh; Shan, Ying
2007-05-01
This paper addresses the problem of matching vehicles across multiple sightings under variations in illumination and camera poses. Since multiple observations of a vehicle are separated in large temporal and/or spatial gaps, thus prohibiting the use of standard frame-to-frame data association, we employ features extracted over a sequence during one time interval as a vehicle fingerprint that is used to compute the likelihood that two or more sequence observations are from the same or different vehicles. Furthermore, since our domain is aerial video tracking, in order to deal with poor image quality and large resolution and quality variations, our approach employs robust alignment and match measures for different stages of vehicle matching. Most notably, we employ a heterogeneous collection of features such as lines, points, and regions in an integrated matching framework. Heterogeneous features are shown to be important. Line and point features provide accurate localization and are employed for robust alignment across disparate views. The challenges of change in pose, aspect, and appearances across two disparate observations are handled by combining a novel feature-based quasi-rigid alignment with flexible matching between two or more sequences. However, since lines and points are relatively sparse, they are not adequate to delineate the object and provide a comprehensive matching set that covers the complete object. Region features provide a high degree of coverage and are employed for continuous frames to provide a delineation of the vehicle region for subsequent generation of a match measure. Our approach reliably delineates objects by representing regions as robust blob features and matching multiple regions to multiple regions using Earth Mover's Distance (EMD). 
Extensive experimentation under a variety of real-world scenarios and over hundreds of thousands of Confirmatory Identification (CID) trials has demonstrated about 95 percent accuracy in vehicle reacquisition with both visible and Infrared (IR) imaging cameras.
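The role of the Earth Mover's Distance in the region-matching step can be illustrated in one dimension, where EMD between normalized histograms reduces to the L1 distance between their cumulative distributions. This is a hedged sketch, not the paper's matching of 2D blob signatures; the toy histograms below are invented:

```python
import numpy as np

def emd_1d(p, q):
    """EMD between two 1-D histograms; after normalization to unit mass,
    the 1-D EMD equals the L1 distance between the cumulative distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

# Toy intensity histograms of a tracked vehicle region in two sightings.
sighting_a = [0, 3, 7, 5, 1, 0, 0, 0]
sighting_b = [0, 2, 8, 4, 2, 0, 0, 0]  # same vehicle, mild appearance change
other      = [0, 0, 0, 1, 4, 8, 3, 0]  # a different vehicle
d_same = emd_1d(sighting_a, sighting_b)
d_diff = emd_1d(sighting_a, other)
# A small EMD supports the "same vehicle" hypothesis; a large one argues against it.
```

EMD's advantage over bin-by-bin measures is that mass moved to a nearby bin (a mild illumination shift) costs little, while mass moved far (a genuinely different appearance) costs a lot.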
NASA Astrophysics Data System (ADS)
Haslauer, C. P.; Bárdossy, A.; Sudicky, E. A.
2017-09-01
This paper demonstrates quantitative reasoning for separating a dataset of spatially distributed variables into distinct entities and subsequently characterizing their geostatistical properties. The main contribution is a statistics-based algorithm that reproduces the results of manual distinction; the algorithm works from measured data and is generally applicable. It is successfully applied to two datasets of saturated hydraulic conductivity (K) measured at the Borden (Canada) and Lauswiesen (Germany) aquifers. The boundary layer was successfully delineated at Borden despite its only mild heterogeneity and the small statistical differences between the divided units. The methods are verified with the more heterogeneous Lauswiesen aquifer K dataset, where a boundary layer had previously been delineated. Within the microscale structure, both Gaussian and non-Gaussian models of the spatial dependence of K are evaluated. The effects of heterogeneity on both the macro- and the microscale are analysed using numerical solute tracer experiments based on four scenarios: including or not including the macroscale structures, and optimally fitting a Gaussian or a non-Gaussian model for the spatial dependence in the microstructure. The paper shows that both micro- and macroscale structures are important, as solute transport behaviour differs meaningfully in each of the four geostatistical scenarios.
Global Phosphoproteomic Analysis of Insulin/Akt/mTORC1/S6K Signaling in Rat Hepatocytes.
Zhang, Yuanyuan; Zhang, Yajie; Yu, Yonghao
2017-08-04
Insulin resistance is a hallmark of type 2 diabetes. Although multiple genetic and physiological factors interact to cause insulin resistance, deregulated signaling by phosphorylation is a common underlying mechanism. In particular, the specific phosphorylation-dependent regulatory mechanisms and signaling outputs of insulin are poorly understood in hepatocytes, which represent one of the most important insulin-responsive cell types. Using primary rat hepatocytes as a model system, we performed reductive dimethylation (ReDi)-based quantitative mass spectrometric analysis and characterized the phosphoproteome that is regulated by insulin as well as its key downstream kinases, including Akt, mTORC1, and S6K. We identified a total of 12,294 unique, confidently localized phosphorylation sites and 3,805 phosphorylated proteins in this single cell type. Detailed bioinformatic analysis of each individual dataset identified both known and previously unrecognized targets of this key insulin downstream effector pathway. Furthermore, integrated analysis of the hepatic Akt/mTORC1/S6K signaling axis allowed delineation of the substrate specificity of several closely related kinases within the insulin signaling pathway. We expect that these datasets will serve as an invaluable resource, providing the foundation for future hypothesis-driven research that helps delineate the molecular mechanisms underlying the pathogenesis of type 2 diabetes and related metabolic syndrome.
Zhou, Wu
2014-01-01
The accurate contour delineation of the target and/or organs at risk (OAR) is essential in treatment planning for image-guided radiation therapy (IGRT). Although many automatic contour delineation approaches have been proposed, few of them fulfill the needs of clinical applications in terms of accuracy and efficiency. Moreover, clinicians would like to analyze the characteristics of regions of interest (ROI) and adjust contours manually during IGRT, so an interactive tool for contour delineation is necessary in such cases. In this work, a novel approach of curve fitting for interactive contour delineation is proposed. It allows users to quickly improve contours with a simple mouse click. Initially, a region containing the object of interest is selected in the image; the program then automatically selects important control points from the region boundary, and the method of Hermite cubic curves is used to fit the control points. The fitted curve can then be revised by moving its control points interactively. Several curve fitting methods are presented for comparison. Finally, in order to improve the accuracy of contour delineation, a curve refinement process based on the maximum gradient magnitude is proposed: all points on the curve are revised automatically towards the positions of maximum gradient magnitude. Experimental results show that Hermite cubic curves and the gradient-based curve refinement perform well on the proposed platform in terms of accuracy, robustness, and computation time. Experimental results on real medical images demonstrate the efficiency, accuracy, and robustness of the proposed process in clinical applications. PACS number: 87.53.Tf PMID:24423846
Commowick, Olivier; Warfield, Simon K
2010-01-01
In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time-consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation decreases. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures, or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading expert performance parameters. We present a new algorithm that allows fusion with partial delineation and which can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE. PMID:20879379
Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs
Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang
2015-01-01
Compound comparison is an important task in computational chemistry: from the comparison results, potential inhibitors can be found and then used in pharmacological experiments. The time complexity of a pairwise compound comparison is O(n^2), where n is the maximal length of the compounds. In general, compound lengths are in the tens to hundreds, so a single comparison is fast. However, ever more compounds have been synthesized and extracted, now numbering in the tens of millions, so comparison against a large set of compounds (the multiple compound comparison problem, abbreviated MCC) remains time-consuming. The intrinsic time complexity of the MCC problem is O(k^2 n^2) for k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for the MCC problem, called CUDA-MCC, for single and multiple GPUs. Four LINGO-based load-balancing strategies are considered in CUDA-MCC in order to accelerate computation across thread blocks on GPUs. CUDA-MCC was implemented in C+OpenMP+CUDA. In our experiments, CUDA-MCC ran 45 times and 391 times faster than its CPU version on a single NVIDIA Tesla K20m GPU card and a dual NVIDIA Tesla K20m GPU card, respectively. PMID:26491652
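The all-pairs MCC kernel that CUDA-MCC accelerates can be sketched on a CPU using LINGO profiles (overlapping q-character substrings of a SMILES string) and Tanimoto similarity. This is a hedged illustration, not the paper's GPU code; the LINGO length q = 4 and the SMILES strings are assumptions:

```python
from collections import Counter

def lingos(smiles, q=4):
    """Multiset of overlapping q-character substrings (LINGOs) of a SMILES string."""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingo_tanimoto(a, b, q=4):
    """Tanimoto similarity over two LINGO multisets (0.0 when both are empty)."""
    la, lb = lingos(a, q), lingos(b, q)
    inter = sum((la & lb).values())   # multiset intersection
    union = sum((la | lb).values())   # multiset union
    return inter / union if union else 0.0

def multiple_compound_comparison(compounds):
    """The O(k^2 n^2) all-pairs kernel: every compound against every other."""
    k = len(compounds)
    return {(i, j): lingo_tanimoto(compounds[i], compounds[j])
            for i in range(k) for j in range(i + 1, k)}

scores = multiple_compound_comparison(["CCOC(=O)C", "CCOC(=O)N", "c1ccccc1O", "CCO"])
```

Each of the k(k-1)/2 pairs is independent, which is what makes the problem amenable to block-level GPU parallelism and to the load-balancing strategies the paper studies.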
Clustering Millions of Faces by Identity.
Otto, Charles; Wang, Dayong; Jain, Anil K
2018-02-01
Given a large collection of unlabeled face images, we address the problem of clustering faces into an unknown number of identities. This problem is of interest in social media, law enforcement, and other applications, where the number of faces can be on the order of hundreds of millions, while the number of identities (clusters) can range from a few thousand to millions. To address the challenges of run-time complexity and cluster quality, we present an approximate Rank-Order clustering algorithm that performs better than popular clustering algorithms (k-means and spectral clustering). Our experiments include clustering up to 123 million face images into over 10 million clusters. Clustering results are analyzed in terms of external (known face labels) and internal (unknown face labels) quality measures, and run-time. Our algorithm achieves an F-measure of 0.87 on the LFW benchmark (13K faces of 5,749 individuals), which drops to 0.27 on the largest dataset considered (13K faces in LFW + 123M distractor images). Additionally, we show that frames in the YouTube benchmark can be clustered with an F-measure of 0.71. An internal per-cluster quality measure is developed to rank individual clusters for manual exploration of high-quality clusters that are compact and isolated.
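A simplified sketch of the rank-order distance at the core of such clustering, computed from top-k nearest-neighbor lists (the paper's approximate algorithm and its exact normalization differ in detail; the neighbor lists below are invented):

```python
def rank_order_distance(nn_a, nn_b):
    """Asymmetric distance: sum over a's neighbors of their rank in b's
    neighbor list; neighbors absent from b's top-k list incur the maximum rank k."""
    k = len(nn_b)
    rank_in_b = {face: r for r, face in enumerate(nn_b)}
    return sum(rank_in_b.get(face, k) for face in nn_a)

def normalized_rank_order(nn_a, nn_b):
    """Symmetrized and normalized; small values suggest the same identity."""
    d = rank_order_distance(nn_a, nn_b) + rank_order_distance(nn_b, nn_a)
    return d / min(len(nn_a), len(nn_b))

# Face 0's neighbors overlap face 1's almost entirely, but not face 2's.
close = normalized_rank_order([7, 3, 9], [7, 9, 3])
far   = normalized_rank_order([7, 3, 9], [1, 4, 6])
# Pairs below a distance threshold are merged transitively into clusters.
```

The intuition is that two images of the same person tend to share nearest neighbors, so shared neighbors at low ranks drive the distance down even when the raw feature distance is unreliable.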
Corrosion and scaling in solar heating systems
NASA Astrophysics Data System (ADS)
Foresti, R. J., Jr.
1981-12-01
Corrosion, as experienced in solar heating systems, is described in simple terms to familiarize designers and installers with potential problems and their solutions. The role of a heat transfer fluid in a solar system is briefly discussed, and the choice of an aqueous solution is justified. The complexities of the multiple chemical and physical reactions are discussed so that uncertainties in corrosion behavior can be anticipated. Some basic theories of corrosion are described, aggressive environments for some common metals are identified, and the role of corrosion inhibitors is delineated. The similarities of thermal and material characteristics of a solar system and an automotive cooling system are discussed. Based on the many years of experience with corrosion in automotive systems, it is recommended that similar antifreezes and corrosion inhibitors be used in solar systems. The importance of good solar system design and fabrication is stressed, and specific characteristics that affect corrosion are identified.
The contribution of clinical leadership to service redesign: a naturalistic inquiry.
Storey, John; Holti, Richard
2012-08-01
Numerous policy papers and academic contributions across a range of countries emphasize the importance of clinical leadership in health services. This is seen as especially vital at a time of simultaneous resource constraints and rising demand. Most of the literature in this topic area concerns itself with conceptual clarification of types of leadership and with delineation of requisite competences. But other work on leadership has emphasized the importance of attending to practice in concrete situations in order to identify the dynamics at play and the nature of the challenges. The purpose of this article is to contribute to this latter task by drawing upon a set of data which reveals crucial aspects of the problems facing potential clinical leaders of service redesign. This paper reports on the nature and extent of the challenges as identified by clinicians of different types as well as managers and commissioners.
The effects of wavelet compression on Digital Elevation Models (DEMs)
Oimoen, M.J.
2004-01-01
This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6) and were made sparse by setting the smallest 95 percent of the wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
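The coefficient-thresholding experiment can be sketched with a single-level 2D Haar transform standing in for the paper's third-order Daubechies (DAUB6) filters, and a synthetic ramp standing in for NED data, to keep the sketch self-contained; here the approximation band is kept intact and only detail coefficients are thresholded:

```python
import numpy as np

def haar2d(x):
    """One level of the 2D Haar wavelet transform (rows, then columns)."""
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row averages
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row details
    rows = np.hstack([a, d])
    a2 = (rows[0::2, :] + rows[1::2, :]) / 2.0
    d2 = (rows[0::2, :] - rows[1::2, :]) / 2.0
    return np.vstack([a2, d2])

def ihaar2d(c):
    """Exact inverse of haar2d."""
    h, w = c.shape
    a2, d2 = c[:h // 2, :], c[h // 2:, :]
    rows = np.empty((h, w))
    rows[0::2, :], rows[1::2, :] = a2 + d2, a2 - d2
    a, d = rows[:, :w // 2], rows[:, w // 2:]
    x = np.empty((h, w))
    x[:, 0::2], x[:, 1::2] = a + d, a - d
    return x

def sparsify_details(coeffs, keep=0.05):
    """Zero the smallest detail coefficients by magnitude, keeping the
    approximation (top-left) band plus the largest `keep` fraction of details."""
    h, w = coeffs.shape
    out = coeffs.copy()
    mask = np.ones((h, w), dtype=bool)
    mask[:h // 2, :w // 2] = False        # protect the approximation band
    thresh = np.quantile(np.abs(coeffs[mask]), 1.0 - keep)
    out[mask & (np.abs(coeffs) < thresh)] = 0.0
    return out

# Synthetic DEM: a gentle tilted plane plus noise (stand-in for real terrain).
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
dem = 100 + 0.5 * xx + 0.2 * yy + rng.normal(0, 0.1, (64, 64))
recon = ihaar2d(sparsify_details(haar2d(dem)))
residual = np.abs(dem - recon)   # elevation residuals, as in the paper's analysis
```

From `recon` one could go on to derive slope and aspect and compare them against the originals, which is the step the paper identifies as most sensitive to the nulled coefficients.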
Neural Stem Cells: Historical Perspective and Future Prospects
Breunig, Joshua J.; Haydar, Tarik F.; Rakic, Pasko
2011-01-01
How a single fertilized cell generates diverse neuronal populations has been a fundamental biological problem since the 19th century. Classical histological methods revealed that post-mitotic neurons are produced in a precise temporal and spatial order from germinal cells lining the cerebral ventricles. In the 20th century DNA labeling and histo- and immuno-histochemistry helped to distinguish the subtypes of dividing cells and delineate their locations in the ventricular and subventricular zones. Recently, genetic and cell biological methods have provided insights into sequential gene expression and molecular and cellular interactions that generate heterogeneous populations of NSCs leading to specific neuronal classes. This precisely regulated developmental process does not tolerate significant in vivo deviation, making replacement of adult neurons by NSCs during pathology a colossal challenge. In contrast, utilizing the trophic factors emanating from the NSC or their derivatives to slow down deterioration or prevent death of degenerating neurons may be a more feasible strategy. PMID:21609820
Runge Kutta Algorithm applied to a Hydrology Problem
NASA Astrophysics Data System (ADS)
Narayanan, M.
2003-12-01
In this paper, the author utilizes a fourth-order Runge-Kutta algorithm to solve a design problem in hydrology and fluid mechanics. Principles of fuzzy logic design methodologies were utilized to analyze the problem and arrive at an appropriate solution. The problem posed was to examine the depletion of water from a reservoir. A suitable model was to be created to represent the different parameters that contribute to the depletion, such as evaporation, drainage and seepage, irrigation channels, city water supply pipes, etc. The reservoir is fed by natural sources such as rain, streams, and rivers. A model of a catchment area and a reservoir lake is simulated as a tank, and the exit discharge is represented as fluid output via a long pipe. The input to the reservoir is assumed to be continuous and time-varying; in other words, the flow rate of fluid input is presumed to change with time. The objective is to maintain a predetermined level of water in the reservoir, regardless of input conditions. This is accomplished by adjusting the depletion rate, which means that some of the irrigation channels may have to be closed or some of the city water supply lines shut off. The differential equation governing the system can be derived using Bernoulli's equation. If hd is the desired height of water in the reservoir, h(t) the height of water at any given time, and K a positive constant, then dh/dt + K[h(t) - hd] = 0. The closed-loop system is simulated using the fourth-order Runge-Kutta algorithm, and the controller output u(t) can be calculated from the above equation. The Runge-Kutta algorithm is a very popular method, widely used for obtaining a numerical solution to a given differential equation.
The Runge-Kutta algorithm is considered quite accurate for a broad range of scientific and engineering applications, and as such the method is heavily used by many scholars and researchers. In summary, Runge-Kutta is a common method of solving ordinary differential equations by numerical integration. The principle is to use a trial step at the midpoint of an interval to cancel out lower-order error terms. Suppose that hn is the value of the variable at time tn. The Runge-Kutta formula takes hn and tn and calculates an approximation for hn+1 at a slightly later time tn+1, using a weighted average of approximated values of f(t, h) at several times within the interval (tn, tn+1): hn+1 = hn + (1/6) [k1 + 2k2 + 2k3 + k4], where k1, k2, k3, and k4 are four gradient terms. A fuzzy logic controller (FLC) rule base can be developed based on the above derivations and equations. Further, a graphical representation of the water level over a time period can be obtained. References: Nguyen, Hung T.; Prasad, Nadipuram R.; Walker, Carol L.; and Walker, Elbert A. (2003). A First Course in Fuzzy and Neural Control. Boca Raton, Florida: Chapman & Hall/CRC. Yager, R. R., and Zadeh, L. A. (1991). An Introduction to Fuzzy Logic Applications in Intelligent Systems. New York: Kluwer Academic Publishers.
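As a concrete illustration of the closed-loop equation in the abstract, the following sketch integrates dh/dt = -K [h(t) - hd] with the classical fourth-order Runge-Kutta method. The numerical values of K, hd, the initial level, and the step size are invented for illustration and are not taken from the paper.

```python
# Minimal RK4 sketch for the reservoir-level ODE dh/dt = -K * (h(t) - hd).
# K, hd, h0, dt, and steps are illustrative values, not the paper's.

def rk4_step(f, t, h, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, h)
    k2 = f(t + dt / 2, h + dt * k1 / 2)
    k3 = f(t + dt / 2, h + dt * k2 / 2)
    k4 = f(t + dt, h + dt * k3)
    return h + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(K=0.5, hd=10.0, h0=15.0, dt=0.1, steps=200):
    """Integrate the closed-loop level equation; returns the level history."""
    f = lambda t, h: -K * (h - hd)
    levels = [h0]
    t, h = 0.0, h0
    for _ in range(steps):
        h = rk4_step(f, t, h, dt)
        t += dt
        levels.append(h)
    return levels

levels = simulate()
print(levels[-1])  # decays exponentially toward the setpoint hd
```

Because the equation is linear, the exact solution h(t) = hd + (h0 - hd) e^(-Kt) makes it easy to check that the RK4 result converges to the setpoint.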
Evaluating Teachers of Writing.
ERIC Educational Resources Information Center
Hult, Christine A., Ed.
Describing the various forms evaluation can take, this book delineates problems in evaluating writing faculty and sets the stage for reconsidering the entire process to produce a fair, equitable, and appropriate system. The book discusses evaluation through real-life examples: evaluation of writing faculty by literature faculty, student…
Geography: Key to World Understanding.
ERIC Educational Resources Information Center
Dando, William A.
1990-01-01
Delineates the nature of applied geography, asserting that geography links the natural and social sciences. Underscores geography's role in data analysis and problem solving on a global scale. Traces the discipline's history. Maps geography's status in higher education institutions. Discusses new technologies used by geographers. Summarizes career…
Evaluation of Innovative Approaches to Curve Delineation for Two-Lane Rural Roads
DOT National Transportation Integrated Search
2018-06-01
Run-off-road crashes are a major problem for rural roads. These roads tend to be unlit, and drivers may have difficulty seeing or correctly predicting the curvature of horizontal curves. This leads to vehicles entering horizontal curves at speeds tha...
ERIC Educational Resources Information Center
Lee, Essie E.
1978-01-01
Suicide among young people is increasing at phenomenal rates. This article examines the problem of adolescent suicide and suicide attempts in relation to cultural factors, sex differences, and probable causes. The importance of parents, teachers, and counselors in becoming alert to conflict and stress situations in youths is delineated. (Author)
Professional-Development Systems: The State of the States.
ERIC Educational Resources Information Center
Gannett, Ellen; Nee, Judy; Smith, Darci
2001-01-01
Describes states' efforts to implement a school-age credentialing system for child caregivers. Identifies basic problems hindering progress: readiness, infrastructure, and sustainability of infrastructure. Delineates implications for school-age care of significant initiatives in California, Florida, and New York. Suggests that there is no…
Modeling the missile-launch tube problem in DYSCO
NASA Technical Reports Server (NTRS)
Berman, Alex; Gustavson, Bruce A.
1989-01-01
DYSCO is a versatile, general-purpose dynamic analysis program which assembles equations and solves dynamics problems. The executive manages a library of technology modules which contain routines that compute the matrix coefficients of the second-order ordinary differential equations of the components. The executive performs the coupling of the equations of the components and manages the solution of the coupled equations. Any new component representation may be added to the library if, given the state vector, a FORTRAN program can be written to compute M, C, K, and F. The problem described demonstrates the generality of this statement.
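The component interface the abstract describes (a routine returning M, C, K, and F, with the executive coupling components that share degrees of freedom) can be sketched roughly as below. This is a hypothetical Python analogue, not DYSCO's actual FORTRAN interface; the two-component example and all numbers are invented.

```python
# Hypothetical sketch of a DYSCO-like component interface: each module
# returns its own M, C, K, F; an "executive" assembles global matrices
# by summing contributions over shared degrees of freedom (DOFs).

def spring_mass(m, c, k, f):
    """A 1-DOF component: M x'' + C x' + K x = F."""
    def coeffs(state):
        # A real module could depend on the state vector; this one is constant.
        return [[m]], [[c]], [[k]], [f]
    return coeffs

def assemble(components, dof_maps, n_dof):
    """Sum component matrices into global M, C, K, F via DOF maps."""
    M = [[0.0] * n_dof for _ in range(n_dof)]
    C = [[0.0] * n_dof for _ in range(n_dof)]
    K = [[0.0] * n_dof for _ in range(n_dof)]
    F = [0.0] * n_dof
    for comp, dofs in zip(components, dof_maps):
        Mc, Cc, Kc, Fc = comp(None)
        for i, gi in enumerate(dofs):
            F[gi] += Fc[i]
            for j, gj in enumerate(dofs):
                M[gi][gj] += Mc[i][j]
                C[gi][gj] += Cc[i][j]
                K[gi][gj] += Kc[i][j]
    return M, C, K, F

# Two components sharing global DOF 0: their coefficients simply add.
comps = [spring_mass(1.0, 0.1, 4.0, 0.0), spring_mass(2.0, 0.0, 1.0, 1.0)]
M, C, K, F = assemble(comps, [[0], [0]], 1)
print(M, C, K, F)
```

The coupled system M ẍ + C ẋ + K x = F could then be handed to any second-order ODE solver, mirroring the executive/solver split described above.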
Vella, Laura J; Cappai, Roberto
2012-07-01
Alzheimer's disease (AD) is a neurodegenerative disorder of the central nervous system. The proteolytic processing of the amyloid precursor protein (APP) into the β-amyloid (Aβ) peptide is a central event in AD. While the pathway that generates Aβ is well described, many questions remain concerning general APP metabolism and its metabolites. It is becoming clear that the amino-terminal region of APP can be processed to release small N-terminal fragments (NTFs). The purpose of this study was to investigate the occurrence and generation of APP NTFs in vivo and in cell culture (SH-SY5Y) in order to delineate the cellular pathways implicated in their generation. We were able to detect 17- to 28-kDa APP NTFs in human and mouse brain tissue that are distinct from N-APP fragments previously reported. We show that the 17- to 28-kDa APP NTFs were highly expressed in mice from the age of 2 wk to adulthood. SH-SY5Y studies indicate the generation of APP NTFs involves a novel APP processing pathway, regulated by protein kinase C, but independent of α-secretase or β-secretase 1 (BACE) activity. These results identify a novel, developmentally regulated APP processing pathway that may play an important role in the physiological function of APP.
NASA Astrophysics Data System (ADS)
Ruskey, Frank; Williams, Aaron
In the classic Josephus problem, elements 1, 2, ..., n are placed in order around a circle and a skip value k is chosen. The problem proceeds in n rounds, where each round consists of traveling around the circle from the current position and selecting the kth remaining element to be eliminated from the circle. After n rounds, every element is eliminated. Special attention is given to the last surviving element; denote it by j. We generalize this popular problem by introducing a uniform number of lives ℓ, so that elements are not eliminated until they have been selected for the ℓth time. We prove two main results: 1) When n and k are fixed, then j is constant for all values of ℓ larger than the nth Fibonacci number. In other words, the last surviving element stabilizes with respect to increasing the number of lives. 2) When n and j are fixed, then there exists a value of k that allows j to be the last survivor simultaneously for all values of ℓ. In other words, certain skip values ensure that a given position is the last survivor, regardless of the number of lives. For the first result we give an algorithm for determining j (and the entire sequence of selections) that uses O(n²) arithmetic operations.
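A direct simulation of the generalized process described above can serve as a reference implementation. This is a plain simulation sketch, not the paper's O(n²) algorithm; with ℓ = 1 it reduces to the classic Josephus problem.

```python
# Direct simulation of the Josephus problem with lives: every k-th
# remaining element is selected, and an element is removed only once
# it has been selected `lives` times. Returns the last survivor j.

def last_survivor(n, k, lives=1):
    circle = list(range(1, n + 1))   # surviving element labels
    hits = [0] * (n + 1)             # times each element has been selected
    pos = 0                          # current index in the circle
    while len(circle) > 1:
        pos = (pos + k - 1) % len(circle)   # advance to the k-th element
        e = circle[pos]
        hits[e] += 1
        if hits[e] == lives:                # out of lives: eliminate
            circle.pop(pos)
            pos %= len(circle)              # counting resumes at next element
        else:
            pos = (pos + 1) % len(circle)   # selected but survives; move past
    return circle[0]

# With one life this is the classic problem (n = 41, k = 3 leaves 31).
print(last_survivor(41, 3))
```

Running the simulation for increasing ℓ with n and k fixed gives an empirical view of the stabilization result stated above.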
Sensitivity kernels for viscoelastic loading based on adjoint methods
NASA Astrophysics Data System (ADS)
Al-Attar, David; Tromp, Jeroen
2014-01-01
Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the `adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional J defined in terms of the surface deformation fields, we show that its first-order perturbation can be written δJ = ∫_{M_S} K_η δ ln η dV + ∫_{t_0}^{t_1} ∫_{∂M} K_σ̇ δσ̇ dS dt, where δ ln η = δη/η denotes relative viscosity variations in solid regions M_S, dV is the volume element, δσ̇ is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface ∂M and for times [t_0, t_1], and dS is the surface element on ∂M.
The `viscosity kernel' K_η determines the linearized sensitivity of J to viscosity perturbations defined with respect to a laterally heterogeneous reference earth model, while the `rate-of-loading kernel' K_σ̇ determines the sensitivity to variations in the time derivative of the surface load. By restricting attention to spherically symmetric viscosity perturbations, we also obtain a `radial viscosity kernel' K̄_η such that the associated contribution to δJ can be written ∫_{I_S} K̄_η δ ln η dr, where I_S denotes the subset of radii lying in solid regions. In order to illustrate this theory, we describe its numerical implementation in the case of a spherically symmetric earth model using a 1-D spectral element method, and calculate sensitivity kernels for a range of realistic observables.
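The cost argument made above, that finite differences need one forward solve per model parameter while the adjoint method recovers the exact gradient from one forward and one adjoint solve, can be illustrated on a toy linear inverse problem far simpler than the GIA setting. All matrices and data below are invented.

```python
# Toy illustration (not the GIA problem itself): for a linear forward
# model d_pred = G m and misfit J(m) = 0.5 * ||G m - d||^2, the exact
# gradient is G^T (G m - d), costing one forward and one adjoint
# (transpose) application; finite differences need one extra forward
# model evaluation per parameter.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_T(A, y):  # adjoint (transpose) application
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def misfit(G, m, d):
    r = [p - di for p, di in zip(matvec(G, m), d)]
    return 0.5 * sum(ri * ri for ri in r)

G = [[2.0, 1.0], [0.0, 3.0], [1.0, -1.0]]   # invented forward operator
d = [1.0, 2.0, 0.5]                          # invented data
m = [0.3, -0.2]                              # invented model

# Adjoint gradient: one residual, one transpose multiply.
residual = [p - di for p, di in zip(matvec(G, m), d)]
grad_adjoint = matvec_T(G, residual)

# Finite-difference gradient: one extra forward evaluation per parameter.
eps = 1e-6
grad_fd = []
for j in range(len(m)):
    mp = list(m)
    mp[j] += eps
    grad_fd.append((misfit(G, mp, d) - misfit(G, m, d)) / eps)

print(grad_adjoint, grad_fd)  # the two gradients agree to O(eps)
```

The finite-difference loop grows with the number of parameters, while the adjoint computation does not; this is the same scaling contrast the abstract invokes for the viscoelastic loading problem.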
Scattering Matrix for the Interaction between Solar Acoustic Waves and Sunspots. I. Measurements
NASA Astrophysics Data System (ADS)
Yang, Ming-Hsu; Chou, Dean-Yi; Zhao, Hui
2017-01-01
Assessing the interaction between solar acoustic waves and sunspots is a scattering problem. The scattering matrix elements are the most commonly used measured quantities to describe scattering problems. We use the wavefunctions of scattered waves of NOAAs 11084 and 11092 measured in the previous study to compute the scattering matrix elements, with plane waves as the basis. The measured scattered wavefunction is from the incident wave of radial order n to the wave of another radial order n′, for n = 0–5. For a time-independent sunspot, there is no mode mixing between different frequencies. An incident mode is scattered into various modes with different wavenumbers but the same frequency. Working in the frequency domain, we have the individual incident plane-wave mode, which is scattered into various plane-wave modes with the same frequency. This allows us to compute the scattering matrix element between two plane-wave modes for each frequency. Each scattering matrix element is a complex number, representing the transition from the incident mode to another mode. The amplitudes of the diagonal elements are larger than those of the off-diagonal elements. The amplitude and phase of the off-diagonal elements are detectable only for n−1 ≤ n′ ≤ n+1 and −3Δk ≤ δk_x ≤ 3Δk, where δk_x is the change in the transverse component of the wavenumber and Δk = 0.035 rad Mm⁻¹.
1981-06-15
SACLANTCEN SR-50: A Résumé of Stochastic, Time-Varying, Linear System Theory with ... The surviving fragments note that the order in which systems are concatenated is unimportant; these results are exactly analogous to the results of time-invariant linear system theory. Reference: 1. Meier, L., A résumé of deterministic time-varying linear system theory with application to active sonar signal processing problems, SACLANTCEN.
1986-05-01
Discusses the program PROBE of Noetic Technologies, St. Louis; behavior in the neighborhood of the corners of the domain, places where the type of the boundary condition changes, etc., is studied. References: PROBE User's Manual, Noetic Technologies Corp., St. Louis, Missouri (1985); Szabó, B. A.: Implementation of a Finite Element Software System with h and p
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamba, O.S.; Badola, Richa; Baloda, Suman
The paper describes voltage breakdown phenomena and preventive measures in components of a 250 kW CW C-band klystron under development at CEERI Pilani. The klystron operates at a beam voltage of 50 kV and delivers 250 kW RF power at a frequency of 5 GHz. The klystron consists of several key components and regions that are subject to high electrical stress. The most important regions of electrical breakdown are the electron gun, the RF ceramic window, and the output cavity gap area. In the critical components, voltage breakdown was considered at the design stage through proper gap spacing and other techniques. All these problems are discussed, as well as solutions to alleviate them. The electron gun consists basically of a cathode, BFE, and anode. The cathode is operated at a voltage of 50 kV. In order to maintain the voltage standoff between cathode and anode, a high-voltage alumina seal and RF window have been designed, developed, and successfully used in the tube. (author)
NASA Astrophysics Data System (ADS)
Hillman, Jess I. T.; Lamarche, Geoffroy; Pallentin, Arne; Pecher, Ingo A.; Gorman, Andrew R.; Schneider von Deimling, Jens
2018-06-01
Using automated supervised segmentation of multibeam backscatter data to delineate seafloor substrates is a relatively novel technique. Low-frequency multibeam echosounders (MBES), such as the 12-kHz EM120, present particular difficulties since the signal can penetrate several metres into the seafloor, depending on substrate type. We present a case study illustrating how a non-targeted dataset may be used to derive information from multibeam backscatter data regarding distribution of substrate types. The results allow us to assess limitations associated with low frequency MBES where sub-bottom layering is present, and test the accuracy of automated supervised segmentation performed using SonarScope® software. This is done through comparison of predicted and observed substrate from backscatter facies-derived classes and substrate data, reinforced using quantitative statistical analysis based on a confusion matrix. We use sediment samples, video transects and sub-bottom profiles acquired on the Chatham Rise, east of New Zealand. Inferences on the substrate types are made using the Generic Seafloor Acoustic Backscatter (GSAB) model, and the extents of the backscatter classes are delineated by automated supervised segmentation. Correlating substrate data to backscatter classes revealed that backscatter amplitude may correspond to lithologies up to 4 m below the seafloor. Our results emphasise several issues related to substrate characterisation using backscatter classification, primarily because the GSAB model does not only relate to grain size and roughness properties of substrate, but also accounts for other parameters that influence backscatter. Better understanding these limitations allows us to derive first-order interpretations of sediment properties from automated supervised segmentation.
NASA Astrophysics Data System (ADS)
Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael
2012-02-01
Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib-cage and spine. The problem is addressed in a model-based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2 mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.
Self-energy of an impurity in an ideal Fermi gas to second order in the interaction strength
NASA Astrophysics Data System (ADS)
Trefzger, Christian; Castin, Yvan
2014-09-01
We study in three dimensions the problem of a spatially homogeneous zero-temperature ideal Fermi gas of spin-polarized particles of mass m perturbed by the presence of a single distinguishable impurity of mass M. The interaction between the impurity and the fermions involves only the partial s wave through the scattering length a and has negligible range b compared to the inverse Fermi wave number 1/k_F of the gas. Through the interactions with the Fermi gas the impurity gives birth to a quasiparticle, which will be here a Fermi polaron (or more precisely a monomeron). We consider the general case of an impurity moving with wave vector K ≠ 0: then the quasiparticle acquires a finite lifetime in its initial momentum channel because it can radiate particle-hole pairs in the Fermi sea. A description of the system using a variational approach, based on a finite number of particle-hole excitations of the Fermi sea, then becomes inappropriate around K = 0. We rely thus upon perturbation theory, where the small and negative parameter k_F a → 0⁻ excludes any branches other than the monomeronic one in the ground state (as, e.g., the dimeronic one), and allows us a systematic study of the system. We calculate the impurity self-energy Σ⁽²⁾(K, ω) up to second order included in a. Remarkably, we obtain an explicit analytical expression for Σ⁽²⁾(K, ω), allowing us to study its derivatives in the plane (K, ω). These present interesting singularities, which in general appear in the third-order derivatives ∂³Σ⁽²⁾(K, ω). In the special case of equal masses, M = m, singularities appear already in the physically more accessible second-order derivatives ∂²Σ⁽²⁾(K, ω); using a self-consistent heuristic approach based on Σ⁽²⁾ we then regularize the divergence of the second-order derivative ∂²_K ΔE(K) of the complex energy of the quasiparticle found in Trefzger and Castin [Europhys. Lett.
104, 50005 (2013), 10.1209/0295-5075/104/50005] at K = k_F, and we predict an interesting scaling law in the neighborhood of K = k_F. As a by-product of our theory we have access to all moments of the momentum of the particle-hole pair emitted by the impurity while damping its motion in the Fermi sea, at the level of Fermi's golden rule.
Limits of the copper decoration technique for delineation of the V–I boundary
NASA Astrophysics Data System (ADS)
Válek, L.; Stehlík, Š.; Orava, J.; Ďurík, M.; Šik, J.; Wágner, T.
2007-05-01
The copper decoration technique was used for detection of the vacancy–interstitial (V–I) boundary in a Czochralski silicon crystal. We used the technique for delineating defects in silicon previously reported by Mule'Stagno [Solid State Phenom. 82–84 (2002) 753] and enriched it with an upgraded application of copper to the silicon surface. The new procedure is based on the deposition of elementary copper on the silicon surface from a copper nitrate solution. The new method is more efficient than that of Mule'Stagno (2002) and also decreases the environmental burden. We compared five etchants in order to optimize the delineation of the V–I boundary. A defect region of the same diameter was detected by all the etchants used; the best sensitivity was obtained with Wright's etchant. The outer diameter of the defect region observed by the copper decoration technique coincides with the V–I boundary diameter measured by OISF testing and approximately coincides with the V–I boundary diameter measured by COP testing. We found that the copper decoration technique delineates oxygen precipitates in silicon, and we observed a dependence of V–I boundary detectability on the size of the oxygen precipitates.
Kalpathy-Cramer, Jayashree; Awan, Musaddiq; Bedrick, Steven; Rasch, Coen R N; Rosenthal, David I; Fuller, Clifton D
2014-02-01
Modern radiotherapy requires accurate region of interest (ROI) inputs for plan optimization and delivery. Target delineation, however, remains operator-dependent and potentially serves as a major source of treatment delivery error. In order to optimize this critical, yet observer-driven process, a flexible web-based platform for individual and cooperative target delineation analysis and instruction was developed in order to meet the following unmet needs: (1) an open-source/open-access platform for automated/semiautomated quantitative interobserver and intraobserver ROI analysis and comparison, (2) a real-time interface for radiation oncology trainee online self-education in ROI definition, and (3) a source for pilot data to develop and validate quality metrics for institutional and cooperative group quality assurance efforts. The resultant software, Target Contour Testing/Instructional Computer Software (TaCTICS), developed using Ruby on Rails, has since been implemented and proven flexible, feasible, and useful in several distinct analytical and research applications.
Remote sensing and GIS approach for water-well site selection, southwest Iran
Rangzan, K.; Charchi, A.; Abshirini, E.; Dinger, J.
2008-01-01
The Pabdeh-Lali Anticline of northern Khuzestan province is located in southwestern Iran and occupies 790 km². This structure is situated in the Zagros folded belt. As a result of well-developed karst systems in the anticlinal axis, the water supply potential is high and is drained by many peripheral springs. However, there is a scarcity of water for agriculture and population centers on the anticlinal flanks, which imposes a severe problem in terms of area development. This study combines remotely sensed (RS) data and a geographical information system (GIS) into a RSGIS technique to delineate new areas for groundwater development and specific sites for drilling productive water wells. Toward these goals, RS data were used to develop GIS layers for lithology, structural geology, topographic slope, elevation, and drainage density. Field measurements were made to create spring-location and groundwater-quality GIS layers. Subsequently, expert choice and relational methods were used in a GIS environment to conjunctively analyze all layers to delineate preferable regions and 43 individual sites in which to drill water wells. Results indicate that the most preferred areas are, in preferential order, within recent alluvial deposits, the Bakhtiyari Conglomerates, and the Aghajari Sandstone. The Asmari Limestone and other units have much lower potential for groundwater supplies. The potential usefulness of the RSGIS method was indicated when six out of nine producing wells recently drilled by the Khuzestan Water and Power Authority (which had no knowledge of this study) were located in areas preferentially selected by this technique.
Taylor, Cliff D.; Giles, Stuart A.
2015-01-01
Although mineral occurrence data and descriptive geological information are adequate to delineate areas favorable for sediment-hosted copper deposits, this review indicates that potential for this type of deposit in Mauritania is low.
Infrared instrument support for HyspIRI-TIR
NASA Astrophysics Data System (ADS)
Johnson, William R.; Hook, Simon J.; Foote, Marc; Eng, Bjorn T.; Jau, Bruno
2012-10-01
The Jet Propulsion Laboratory is currently developing an end-to-end instrument which will provide a proof-of-concept prototype vehicle for a high data rate, multi-channel, thermal instrument in support of the Hyperspectral Infrared Imager (HyspIRI)-Thermal Infrared (TIR) space mission. The HyspIRI mission was recommended by the National Research Council Decadal Survey (DS). The HyspIRI mission includes a visible shortwave infrared (SWIR) pushbroom spectrometer and a multispectral whiskbroom thermal infrared (TIR) imager. The prototype testbed instrument addressed in this effort will only support the TIR. Data from the HyspIRI mission will be used to address key science questions related to the Solid Earth and Carbon Cycle and Ecosystems focus areas of the NASA Science Mission Directorate. Current designs for the HyspIRI-TIR spaceborne imager utilize eight spectral bands delineated with filters. The system will have 60 m ground resolution, 200 mK NEDT, and 0.5 °C absolute temperature resolution with a 5-day repeat from LEO orbit. The prototype instrument will use mercury cadmium telluride (MCT) technology at the focal plane array in time delay integration mode. A custom read-out integrated circuit (ROIC) will provide the high-speed readout and hence the high data rates needed for the 5-day repeat. The current HyspIRI requirements dictate a ground knowledge measurement of 30 m, so the prototype instrument will tackle this problem with a newly developed interferometric metrology system. This will provide an absolute measurement of the scanning mirror to an order of magnitude better than conventional optical encoders. This will minimize the reliance on ground control points, hence minimizing post-processing (e.g. geo-rectification computations).
High speed, multi-channel, thermal instrument development in support of HyspIRI-TIR
NASA Astrophysics Data System (ADS)
Johnson, William R.; Hook, Simon J.; Foote, Marc; Eng, Bjorn T.; Jau, Bruno
2011-10-01
The Jet Propulsion Laboratory is currently developing an end-to-end instrument which will provide a proof-of-concept prototype vehicle for a high data rate, multi-channel, thermal instrument in support of the Hyperspectral Infrared Imager (HyspIRI)-Thermal Infrared (TIR) space mission. The HyspIRI mission was recommended by the National Research Council Decadal Survey (DS). The HyspIRI mission includes a visible shortwave infrared (SWIR) pushbroom spectrometer and a multispectral whiskbroom thermal infrared (TIR) imager. The prototype testbed instrument addressed in this effort will only support the TIR. Data from the HyspIRI mission will be used to address key science questions related to the Solid Earth and Carbon Cycle and Ecosystems focus areas of the NASA Science Mission Directorate. Current designs for the HyspIRI-TIR spaceborne imager utilize eight spectral bands delineated with filters. The system will have 60 m ground resolution, 200 mK NEDT, and 0.5 °C absolute temperature resolution with a 5-day repeat from LEO orbit. The prototype instrument will use mercury cadmium telluride (MCT) technology at the focal plane array in time delay integration mode. A custom read-out integrated circuit (ROIC) will provide the high-speed readout and hence the high data rates needed for the 5-day repeat. The current HyspIRI requirements dictate a ground knowledge measurement of 30 m, so the prototype instrument will tackle this problem with a newly developed interferometric metrology system. This will provide an absolute measurement of the scanning mirror to an order of magnitude better than conventional optical encoders. This will minimize the reliance on ground control points, hence minimizing post-processing (e.g. geo-rectification computations).
ERIC Educational Resources Information Center
McClung, Merle
2013-01-01
The economic purpose of getting a job, or getting into college in order to get a better job, has evolved into the de facto primary purpose of K-12 (and higher) education. Business model solutions are seen by businessmen as the answer to education problems. But the business models they advocate and help fund are not a good fit for education…
NASA Astrophysics Data System (ADS)
Ajami, H.; Sharma, A.
2016-12-01
A computationally efficient, semi-distributed hydrologic modeling framework is developed to simulate the water balance at catchment scale. The Soil Moisture and Runoff simulation Toolkit (SMART) is based upon the delineation of contiguous and topologically connected Hydrologic Response Units (HRUs). In SMART, HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and the simulation elements are distributed cross sections or equivalent cross sections (ECSs) delineated in first-order sub-basins. ECSs are formulated by aggregating topographic and physiographic properties of part or all of the first-order sub-basins to further reduce computational time in SMART. Previous investigations using SMART have shown that the temporal dynamics of soil moisture are well captured at the HRU level using the ECS delineation approach; however, the spatial variability of soil moisture within a given HRU is ignored. Here, we examined a number of disaggregation schemes for soil moisture distribution in each HRU. The disaggregation schemes are based either on topographic indices or on a covariance matrix obtained from distributed soil moisture simulations. To assess the performance of the disaggregation schemes, soil moisture simulations from an integrated land surface-groundwater model, ParFlow.CLM, in the Baldry sub-catchment, Australia, are used. ParFlow is a variably saturated sub-surface flow model that is coupled to the Common Land Model (CLM). Our results illustrate that the statistical disaggregation scheme performs better than the methods based on topographic data in approximating the soil moisture distribution at a 60 m scale. Moreover, the statistical disaggregation scheme maintains the temporal correlation of simulated daily soil moisture while preserving the mean sub-basin soil moisture. Future work is focused on assessing the performance of this scheme in catchments with various topographic and climate settings.
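One simple form of statistical disaggregation consistent with the description above can be sketched as follows: a pixel-scale pattern learned from a few fine-scale fields (standing in for the ParFlow.CLM simulations) is rescaled so that the simulated HRU mean is preserved. This is an illustrative assumption about the scheme, not the paper's exact formulation, and all numbers are invented.

```python
# Hedged sketch of statistical disaggregation: estimate a climatological
# pixel pattern from a few fine-scale training fields, then rescale it
# so that the pixel average reproduces a given HRU-mean soil moisture.

def pixel_pattern(training_fields):
    """Mean of each pixel's ratio to its field mean over training fields."""
    n_pix = len(training_fields[0])
    ratios = [0.0] * n_pix
    for field in training_fields:
        mean = sum(field) / len(field)
        for i, v in enumerate(field):
            ratios[i] += (v / mean) / len(training_fields)
    return ratios

def disaggregate(hru_mean, pattern):
    """Scale the pattern so the pixel average equals the HRU mean."""
    scale = hru_mean / (sum(pattern) / len(pattern))
    return [scale * p for p in pattern]

# Three invented fine-scale soil moisture fields over a 3-pixel HRU.
training = [[0.10, 0.20, 0.30], [0.12, 0.22, 0.35], [0.08, 0.18, 0.28]]
pattern = pixel_pattern(training)
pixels = disaggregate(0.25, pattern)
print(sum(pixels) / len(pixels))  # the HRU mean is preserved by construction
```

By construction the scheme preserves the mean, which mirrors the mean-preservation property the abstract reports for the statistical disaggregation approach.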
NASA Technical Reports Server (NTRS)
Mayo, L. H.
1975-01-01
The contextual approach is discussed, which undertakes to demonstrate that technology assessment assists in the identification of the full range of implications of taking a particular action and facilitates the consideration of alternative means by which the total affected social problem context might be changed by available project options. It is found that the social impacts of an application on participants, institutions, processes, and social interests, and the accompanying interactions, may not only induce modifications in the problem context delineated for examination with respect to the design, operations, regulation, and use of the posited application, but also affect related social problem contexts.
Fixing health care before it fixes us.
Kotlikoff, Laurence J
2009-02-01
The current American health care system is beyond repair. The problems of the health care system are delineated in this discussion. The current health care system needs to be replaced in its entirety with a new system that provides every American with first-rate, first-tier medicine and that doesn't drive our nation broke. The author describes a 10-point Medical Security System, which he proposes will address the problems of the current health care system.
Mind over matter? I: philosophical aspects of the mind-brain problem.
Schimmel, P
2001-08-01
To conceptualize the essence of the mind-body or mind-brain problem as one of metaphysics rather than science, and to propose a formulation of the problem in the context of current scientific knowledge and its limitations. The background and conceptual parameters of the mind-body problem are delineated, and the limitations of brain research in formulating a solution identified. The problem is reformulated and stated in terms of two propositions. These constitute a 'double aspect theory'. The problem appears to arise as a consequence of the conceptual limitations of the human mind, and hence remains essentially a metaphysical one. A 'double aspect theory' recognizes the essential unity of mind and brain, while remaining consistent with the dualism inherent in human experience.
Kim, Hahnsung; Park, Suhyung; Kim, Eung Yeop; Park, Jaeseok
2018-09-01
To develop a novel, retrospective multi-phase non-contrast-enhanced MRA (ROMANCE MRA) in a single acquisition for robust angiogram separation even in the presence of cardiac arrhythmia. In the proposed ROMANCE MRA, data were continuously acquired over all cardiac phases using retrospective, multi-phase flow-sensitive single-slab 3D fast spin echo (FSE) with variable refocusing flip angles, while an external pulse oximeter was in sync with pulse repetitions in FSE to record real-time information on cardiac cycles. Data were then sorted into k-bin space using the real-time cardiac information. Angiograms were reconstructed directly from k-bin space by solving a constrained optimization problem with both subtraction-induced sparsity and low rank priors. Peripheral MRA was performed in normal volunteers with/without caffeine consumption and a volunteer with cardiac arrhythmia using conventional fresh blood imaging (FBI) and the proposed ROMANCE MRA for comparison. The proposed ROMANCE MRA shows superior performance in accurately delineating both major and small vessel branches with robust background suppression if compared with conventional FBI. Even in the presence of irregular heartbeats, the proposed method exhibits clear depiction of angiograms over conventional methods within clinically reasonable imaging time. We successfully demonstrated the feasibility of the proposed ROMANCE MRA in generating robust angiograms with background suppression. © 2018 International Society for Magnetic Resonance in Medicine.
Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method.
Han, Dongfeng; Bayouth, John; Song, Qi; Taurani, Aakant; Sonka, Milan; Buatti, John; Wu, Xiaodong
2011-01-01
Tumor segmentation in PET and CT images is notoriously challenging due to the low spatial resolution in PET and low contrast in CT images. In this paper, we propose a general framework to use both PET and CT images simultaneously for tumor segmentation. Our method utilizes the strength of each imaging modality: the superior contrast of PET and the superior spatial resolution of CT. We formulate this problem as a Markov Random Field (MRF) based segmentation of the image pair with a regularization term that penalizes the segmentation difference between PET and CT. Our method simulates the clinical practice of delineating tumor simultaneously using both PET and CT, and is able to concurrently segment tumor from both modalities, achieving globally optimal solutions in low-order polynomial time by a single maximum flow computation. The method was evaluated on clinically relevant tumor segmentation problems. The results showed that our method can effectively make use of both PET and CT image information, yielding segmentation accuracy of 0.85 in Dice similarity coefficient and an average median Hausdorff distance (HD) of 6.4 mm, a 10% (resp., 16%) improvement compared to the graph cuts method using solely the PET (resp., CT) images.
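The co-segmentation idea above, two per-modality data terms coupled by a disagreement penalty, can be illustrated on a toy 1D problem. The sketch below minimizes the joint MRF energy by brute force rather than by the paper's single max-flow computation, and all costs are invented for illustration:

```python
from itertools import product

def cosegment(pet_cost, ct_cost, lam=2.0, smooth=1.0):
    """Brute-force minimizer of a tiny two-modality MRF energy:
    per-pixel data costs in each modality, plus a smoothness term
    within each labeling, plus lam * per-pixel disagreement between
    the PET and CT labelings (toy stand-in for max-flow)."""
    n = len(pet_cost)
    best = None
    for fp in product((0, 1), repeat=n):        # PET labeling
        for fc in product((0, 1), repeat=n):    # CT labeling
            e = sum(pet_cost[i][fp[i]] + ct_cost[i][fc[i]] for i in range(n))
            e += smooth * sum(fp[i] != fp[i + 1] for i in range(n - 1))
            e += smooth * sum(fc[i] != fc[i + 1] for i in range(n - 1))
            e += lam * sum(fp[i] != fc[i] for i in range(n))
            if best is None or e < best[0]:
                best = (e, fp, fc)
    return best

# 1D "images", label 1 = tumor: PET sees the lesion clearly, CT is ambiguous.
pet = [(0, 9), (0, 9), (9, 0), (9, 0), (0, 9), (0, 9)]  # (cost of 0, cost of 1)
ct  = [(0, 9), (4, 5), (5, 4), (9, 0), (0, 9), (0, 9)]
energy, f_pet, f_ct = cosegment(pet, ct)
print(f_pet, f_ct)   # both labelings agree: tumor at pixels 2-3
```

The coupling term pulls the noisy CT labeling toward the confident PET one, which is the mechanism the abstract describes; a real implementation solves the same energy exactly with one max-flow computation instead of enumeration.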
GROUND WATER SAMPLING FOR VERTICAL PROFILING OF CONTAMINANTS
Accurate delineation of plume boundaries and vertical contaminant distribution are necessary in order to adequately characterize waste sites and determine remedial strategies to be employed. However, it is important to consider the sampling objectives, sampling methods, and sampl...
Recent work and results on sparrow project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, Neal R
2010-12-23
This briefing describes recent work undertaken on the Sparrow Project and results of this work. It describes experiments comparing the use of Genie with 2 classes with 3 classes for the problem of ship delineation. It also describes some preliminary work in the area of the optimization of segmentation techniques.
Community Psychology, Planning, and Learning: A U.S. Perspective on Sustainable Development.
ERIC Educational Resources Information Center
Perkins, Douglas D.
An ecological framework for predicting citizen participation in grassroots community organizations and predicting community disorder problems (such as crime and fear) was developed and tested. The framework, which is called an ecological framework for sustainable community learning and development, delineates the relevant economic, political,…
Undergraduate Black Student Retention Revisited
ERIC Educational Resources Information Center
Garcia, Sandra A.; Seligsohn, Harriet C.
1978-01-01
It is contended that until now colleges and universities have been reacting defensively to the problem of affirmative action. They must now set realistic goals in recruitment and retention, commit financial and human resources to these goals, and set up contractual agreements that clearly delineate the rights and obligations of the student as well…
ERIC Educational Resources Information Center
Tackett, Jennifer L.; Kushner, Shauna C.; De Fruyt, Filip; Mervielde, Ivan
2013-01-01
The current investigation addressed several questions in the burgeoning area of child personality assessment. Specifically, the present study examined overlapping and nonoverlapping variance in two prominent measures of child personality assessment, followed by tests of convergent and divergent validity with child temperament and psychopathology.…
Sectarian Universities, Federal Funding, and the Question of Academic Freedom.
ERIC Educational Resources Information Center
Zagano, Phyllis
1990-01-01
Addresses the question of sectarianism and its relationship to academic freedom. Provides a case history of U.S. Roman Catholic education, examining the financial problems of Catholic universities denied GI Bill monies. Defines the parameters of the Catholic college. Delineates the relationship between the Vatican's control of Catholic…
Genetic Engineering of Plants. Agricultural Research Opportunities and Policy Concerns.
ERIC Educational Resources Information Center
Roberts, Leslie
Plant scientists and science policymakers from government, private companies, and universities met at a convocation on the genetic engineering of plants. During the convocation, researchers described some of the ways genetic engineering may be used to address agricultural problems. Policymakers delineated and debated changes in research funding…
The Retarded Adult in the Community.
ERIC Educational Resources Information Center
Katz, Elias
The discussion of a series of questions with case illustrations delineates the problems and possibilities of helping retarded adults become valuable, productive members of society. Among topics considered are the definition of retarded adults in the community, the need for concern, and community evaluation and needs of the retarded adult. Also…
Israeli Adolescents and Military Service: Encounters.
ERIC Educational Resources Information Center
Levy, Amihay; And Others
1987-01-01
Asserts that inadequate attention has been paid to the problems of the young soldier entering army life in Israel. Delineates some areas of friction and vulnerability between the worlds of the youth and the military. Describes the systematization of these encounters into groups, creating the "Binary Model," which helps in locating and…
Advances in Environmental Science and Technology, Volume Two.
ERIC Educational Resources Information Center
Pitts, James N., Jr., Ed.; Metcalf, Robert L., Ed.
The aim of this volume is to help delineate and solve the multitude of environmental problems our technology has created. Representing a diversity of notable approaches to crucial environmental issues, it features eight self-contained chapters by noted scientists. Topics range from broad considerations of air pollution and specific techniques for…
Outcomes of Children Adopted from Eastern Europe
ERIC Educational Resources Information Center
Miller, Laurie; Chan, Wilma; Tirella, Linda; Perrin, Ellen
2009-01-01
Behavioral problems are frequent among post-institutionalized Eastern European adoptees. However, risk factors related to outcomes have not been fully delineated. We evaluated 50 Eastern European adoptees, age 8-10 years, who had been with their adoptive families for more than five years. Cognitive and behavioral outcomes and parenting stress were evaluated in…
Weapons in Schools. NSSC Resource Paper.
ERIC Educational Resources Information Center
Butterfield, George E., Ed.; Turner, Brenda, Ed.
More than ever, our public school system must confront weapons in schools and become aware of steadily rising statistics on youth homicide and suicide. This report delineates the problem, discusses why children carry weapons to school, and outlines strategies for keeping weapons out of schools and for improving school safety. Although some…
Decision-Making Theory Applied to Architectural Programming: Some Research Implications.
ERIC Educational Resources Information Center
Green, Meg
The implications of delineating and determining the sequence of programming decisions are shown in the selection of building committee membership. The role relationships of client and architect are discussed in terms of decision-making function. Decision tables are described as aids in problem analysis. Other topics include information and…
GaAs Quantum Dot Thermometry Using Direct Transport and Charge Sensing
NASA Astrophysics Data System (ADS)
Maradan, D.; Casparis, L.; Liu, T.-M.; Biesinger, D. E. F.; Scheller, C. P.; Zumbühl, D. M.; Zimmerman, J. D.; Gossard, A. C.
2014-06-01
We present measurements of the electron temperature using gate-defined quantum dots formed in a GaAs 2D electron gas in both direct transport and charge sensing mode. Decent agreement with the refrigerator temperature was observed over a broad range of temperatures down to 10 mK. Upon cooling nuclear demagnetization stages integrated into the sample wires below 1 mK, the device electron temperature saturates, remaining close to 10 mK. The extreme sensitivity of the thermometer to its environment as well as electronic noise complicates temperature measurements but could potentially provide further insight into the device characteristics. We discuss thermal coupling mechanisms, address possible reasons for the temperature saturation and delineate the prospects of further reducing the device electron temperature.
A structure-activity analysis of the variation in oxime efficacy against nerve agents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, Donald M.; Koplovitz, Irwin; Worek, Franz
2008-09-01
A structure-activity analysis was used to evaluate the variation in oxime efficacy of 2-PAM, obidoxime, HI-6 and ICD585 against nerve agents. In vivo oxime protection and in vitro oxime reactivation were used as indicators of oxime efficacy against VX, sarin, VR and cyclosarin. Analysis of in vivo oxime protection was conducted with oxime protective ratios (PR) from guinea pigs receiving oxime and atropine therapy after sc administration of nerve agent. Analysis of in vitro reactivation was conducted with second-order rate constants (k_r2) for oxime reactivation of agent-inhibited acetylcholinesterase (AChE) from guinea pig erythrocytes. In vivo oxime PR and in vitro k_r2 decreased as the volume of the alkylmethylphosphonate moiety of nerve agents increased from VX to cyclosarin. This effect was greater with 2-PAM and obidoxime (> 14-fold decrease in PR) than with HI-6 and ICD585 (< 3.7-fold decrease in PR). The decrease in oxime PR and k_r2 as the volume of the agent moiety conjugated to AChE increased was consistent with a steric hindrance mechanism. Linear regression of log (PR-1) against log (k_r2 · [oxime dose]) produced two offset parallel regression lines that delineated a significant difference between the coupling of oxime reactivation and oxime protection for HI-6 and ICD585 compared to 2-PAM and obidoxime. HI-6 and ICD585 appeared to be 6.8-fold more effective than 2-PAM and obidoxime at coupling oxime reactivation to oxime protection, which suggested that the isonicotinamide group that is common to both of these oximes, but absent from 2-PAM and obidoxime, is important for oxime efficacy.
NASA Astrophysics Data System (ADS)
Beskow, Samuel; de Mello, Carlos Rogério; Vargas, Marcelle M.; Corrêa, Leonardo de L.; Caldeira, Tamara L.; Durães, Matheus F.; de Aguiar, Marilton S.
2016-10-01
Information on stream flows is essential for water resources management. The stream flow that is equaled or exceeded 90% of the time (Q90) is one of the most widely used low stream flow indicators in many countries, and it is determined from the frequency analysis of stream flows over a historical series. However, the stream flow gauging network is generally not spatially dense enough to meet the demands of technicians, so the most plausible alternative is the use of hydrological regionalization. The objective of this study was to couple the artificial intelligence (AI) techniques K-means, Partitioning Around Medoids (PAM), K-harmonic means (KHM), Fuzzy C-means (FCM) and Genetic K-means (GKA) with measures of low stream flow seasonality, to verify their potential to delineate hydrologically homogeneous regions for the regionalization of Q90. For the performance analysis of the proposed methodology, location attributes from 108 watersheds situated in southern Brazil, and attributes associated with their seasonality of low stream flows, were considered. It was concluded that: (i) AI techniques have the potential to delineate hydrologically homogeneous regions in the context of Q90 in the study region, especially the FCM method, based on fuzzy logic, and GKA, based on genetic algorithms; (ii) the attributes related to seasonality of low stream flows added important information that increased the accuracy of the grouping; and (iii) the adjusted mathematical models have excellent performance and can be used to estimate Q90 in locations lacking monitoring.
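As a concrete illustration of the clustering step, here is a minimal Lloyd's K-means in plain Python, one of the five techniques the study compares. The watershed attribute values are invented, and the initialization is deterministic for reproducibility (a sketch, not the authors' implementation):

```python
def kmeans(points, k, iters=50):
    """Minimal Lloyd's k-means; centers seeded with evenly spaced points."""
    centers = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's group.
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # Update step: move each center to its group's mean.
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

# Hypothetical (latitude, low-flow seasonality index) attributes per watershed:
basins = [(-30.1, 0.21), (-30.4, 0.25), (-30.2, 0.19),   # weak seasonality
          (-32.8, 0.71), (-32.5, 0.68), (-32.9, 0.74)]   # strong seasonality
centers, groups = kmeans(basins, k=2)
print([len(g) for g in groups])   # two candidate homogeneous regions: [3, 3]
```

In the study, the groups returned by such a procedure are the candidate hydrologically homogeneous regions within which a regional Q90 model is then fitted.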
NASA Astrophysics Data System (ADS)
Ahani Amineh, Zainab Banoo; Hashemian, Seyyed Jamal Al-Din; Magholi, Alireza
2017-08-01
The Hamoon-Jazmoorian plain is located in the southeast of Iran. Overexploitation of groundwater in this plain has led to water level decline and caused serious problems such as land subsidence, aquifer destruction and water quality degradation. The increasing population and agricultural development, along with drought and climate change, have further increased the pressure on water resources in this region over the last years. In order to overcome such a crisis, introduction of surface water into an aquifer at particular locations can be a suitable solution. A wide variety of methods have been developed to recharge groundwater, one of which is aquifer storage and recovery (ASR). One of the fundamental principles of building such systems is delineation of suitable areas based on scientific and natural facts in order to achieve the relevant objectives. To that end, Multi Criteria Decision Making (MCDM) in conjunction with Geographic Information Systems (GIS) was applied in this study. More specifically, nine main parameters, including depth of runoff as the considered source of water, morphology of the earth surface features such as geology, geomorphology, land use and land cover, drainage and aquifer characteristics, along with quality of water in the aquifer, were considered as the main layers in GIS. The runoff water available for artificial recharge in the basin was estimated through the Soil Conservation Service (SCS) curve number method. The weighted curve number for each watershed was derived through spatial intersection of the land use and hydrological soil group layers. Other thematic layers were extracted from satellite images, topographical maps, and other collateral data sources, then weighted according to their influence in the locating process. The Analytical Hierarchy Process (AHP) method was then used to calculate the weights of the individual parameters. The normalized weighted layers were then overlaid to build up the recharge potential map.
The results revealed that 34% of the total area is classified as suitable or very suitable for groundwater recharge.
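The AHP weighting step described above can be sketched as follows. The pairwise-comparison values are hypothetical, and the common column-normalization approximation to the principal eigenvector is used instead of a full eigendecomposition:

```python
def ahp_weights(pairwise):
    """Approximate AHP priority weights: normalize each column of the
    pairwise-comparison matrix, then average across each row (the usual
    approximation to the principal eigenvector)."""
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    return [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# Hypothetical 3-criterion comparison (runoff depth, geology, land use):
# runoff judged 3x as important as geology and 5x as important as land use.
A = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w = ahp_weights(A)
print([round(x, 3) for x in w])   # weights sum to 1; runoff dominates
```

Each criterion's weight then multiplies its normalized GIS layer before the overlay that produces the recharge potential map.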
State of the Science Meeting: Burn Care: Goals for Treatment and Research
2006-11-01
…nutrition/metabolism, wound management, and care of children and the elderly), rehabilitative care (the hand, psychological health, scar, community reintegration)… reconstruction, psychologic health, community reintegration, restoration of function… identification of burn research needs from the perspective of… the burn community to define the research priorities for burns. These priorities have been clearly delineated and will be published in the…
Saber y conocer: Un plan para su ensenaza (To know and to be acquainted with: A teaching plan).
ERIC Educational Resources Information Center
Lizardi-Rivera, Carmen M.
1995-01-01
Focuses on how to teach English-speaking students of Spanish the practical distinction between the verbs "saber" (to be cognizant of) and "conocer" (to be acquainted with). This article describes a solution proposed by K. Taylor for explaining the limits of the two verbs and examines similar proposals delineated in three other Spanish textbooks.…
NASA Astrophysics Data System (ADS)
Sun, Xiao-Dong; Ge, Zhong-Hui; Li, Zhen-Chun
2017-09-01
Although conventional reverse time migration can be successfully applied to structural imaging, it lacks the capability of enabling detailed delineation of a lithological reservoir due to irregular illumination. To obtain reliable reflectivity of the subsurface, it is necessary to solve the imaging problem using inversion. Least-squares reverse time migration (LSRTM) (also known as linearized reflectivity inversion) aims to obtain relatively high-resolution, amplitude-preserving imaging by including the inverse of the Hessian matrix. In practice, the conjugate gradient algorithm has proven to be an efficient iterative method for implementing LSRTM. The velocity gradient can be derived from a cross-correlation between observed data and simulated data, making LSRTM independent of the wavelet signature and thus more robust in practice. Tests on synthetic and marine data show that LSRTM has good potential for use in reservoir description and four-dimensional (4D) seismic imaging compared to traditional RTM and Fourier finite difference (FFD) migration. This paper investigates the first-order approximation of LSRTM, which is also known as the linear Born approximation. However, for more complex geological structures a higher-order approximation should be considered to improve imaging quality.
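The conjugate-gradient iteration mentioned in this abstract can be sketched for a tiny linearized problem. The dense matrix below is an invented stand-in for the Born modeling operator, and the scheme is CG applied to the normal equations (CGLS), not the authors' seismic implementation:

```python
def cgls(L, d, iters=20):
    """Conjugate gradient on the normal equations L^T L m = L^T d,
    the iterative least-squares scheme used in LSRTM-style inversion."""
    n = len(L[0])
    matv = lambda M, v: [sum(M[i][j] * v[j] for j in range(len(v)))
                         for i in range(len(M))]
    T = [[L[i][j] for i in range(len(L))] for j in range(n)]  # transpose
    m = [0.0] * n
    r = matv(T, d)                     # normal-equations residual at m = 0
    p, rs = r[:], sum(x * x for x in r)
    for _ in range(iters):
        q = matv(T, matv(L, p))        # apply L^T L to the search direction
        a = rs / sum(x * y for x, y in zip(p, q))
        m = [mi + a * pi for mi, pi in zip(m, p)]
        r = [ri - a * qi for ri, qi in zip(r, q)]
        rs_new = sum(x * x for x in r)
        if rs_new < 1e-20:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return m

# Toy 4x3 "modeling operator" and data from a known reflectivity model:
L = [[2.0, 0.3, 0.1], [0.3, 1.5, 0.2], [0.1, 0.2, 1.0], [0.5, 0.4, 0.3]]
m_true = [1.0, -0.5, 0.25]
d = [sum(L[i][j] * m_true[j] for j in range(3)) for i in range(4)]
m = cgls(L, d)
print([round(x, 6) for x in m])   # recovers approximately [1.0, -0.5, 0.25]
```

For three unknowns CG converges in at most three iterations in exact arithmetic, which is why it is an attractive workhorse for the much larger LSRTM systems.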
Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I
2017-08-15
Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying the surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provide an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising algorithms among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
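A core ingredient of ADMM with an ℓ1 sparsity term is the elementwise soft-thresholding proximal step, which is what drives small source amplitudes to exactly zero and thereby delineates the active region. The sketch below shows only that single step on invented amplitudes, not the SISSY algorithm:

```python
def soft_threshold(x, t):
    """Proximal operator of t*||x||_1: shrink each entry toward zero and
    zero out entries with magnitude <= t (the sparsity-enforcing update
    inside each ADMM iteration)."""
    return [0.0 if abs(v) <= t else (abs(v) - t) * (1 if v > 0 else -1)
            for v in x]

# Illustrative noisy source amplitudes: background values become exactly
# zero, leaving only the extended active sources.
s = [0.05, -0.02, 0.9, 1.1, 0.03, -0.8, 0.01]
print([round(v, 4) for v in soft_threshold(s, 0.1)])
# -> [0.0, 0.0, 0.8, 1.0, 0.0, -0.7, 0.0]
```

Because thresholding produces exact zeros rather than merely small values, the support of the solution directly marks the estimated active brain regions.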
Chuan, He; Dishan, Qiu; Jin, Liu
2012-01-01
The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs a preliminary matching between tasks and observation resources in order to reduce the search space of the original problem, while the solution to the subproblem determines the key nodes that each airship needs to fly through in sequence, yielding its cruising path. First, the task set is divided using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. This paper also describes the realization of these algorithms, with a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparative analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522
On two mathematical problems of canonical quantization. IV
NASA Astrophysics Data System (ADS)
Kirillov, A. I.
1992-11-01
A method for solving the problem of reconstructing a measure from its logarithmic derivative is presented. The method completes that of solving the stochastic differential equation via Dirichlet forms proposed by S. Albeverio and M. Röckner. As a result one obtains the mathematical apparatus for stochastic quantization. The apparatus is applied to prove the existence of the Feynman-Kac measure of the sine-Gordon and λφ^(2n)/(1 + K^2 φ^(2n)) models. A synthesis of both mathematical problems of canonical quantization is obtained in the form of a second-order martingale problem for vacuum noise. It is shown that in stochastic mechanics the martingale problem is an analog of Newton's second law and enables us to find Nelson's stochastic trajectories without determining the wave functions.
NASA Astrophysics Data System (ADS)
Havens, Timothy C.; Cummings, Ian; Botts, Jonathan; Summers, Jason E.
2017-05-01
The linear ordered statistic (LOS) is a parameterized ordered statistic (OS) that is a weighted average of a rank-ordered sample. LOS operators are useful generalizations of aggregation as they can represent any linear aggregation, from minimum to maximum, including conventional aggregations, such as mean and median. In the fuzzy logic field, these aggregations are called ordered weighted averages (OWAs). Here, we present a method for learning LOS operators from training data, viz., data for which you know the output of the desired LOS. We then extend the learning process with regularization, such that a lower complexity or sparse LOS can be learned. Hence, we discuss what 'lower complexity' means in this context and how to represent that in the optimization procedure. Finally, we apply our learning methods to the well-known constant-false-alarm-rate (CFAR) detection problem, specifically for the case of background levels modeled by long-tailed distributions, such as the K-distribution. These backgrounds arise in several pertinent imaging problems, including the modeling of clutter in synthetic aperture radar and sonar (SAR and SAS) and in wireless communications.
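The family of aggregations described above is easy to state in code: an LOS/OWA is a dot product between a weight vector and the descending-sorted sample, with max, min, median, and mean as special weight choices. The sample values below are arbitrary:

```python
def los(sample, weights):
    """Linear ordered statistic (OWA): weight the sample after sorting it
    in descending order, so weights[0] multiplies the maximum."""
    ordered = sorted(sample, reverse=True)
    return sum(w * x for w, x in zip(weights, ordered))

x = [7.0, 2.0, 9.0, 4.0, 1.0]
print(los(x, [1, 0, 0, 0, 0]))   # max    -> 9.0
print(los(x, [0, 0, 0, 0, 1]))   # min    -> 1.0
print(los(x, [0, 0, 1, 0, 0]))   # median -> 4.0
print(los(x, [0.2] * 5))         # mean   -> approximately 4.6
```

In the CFAR setting the abstract targets, the uniform weights correspond to a cell-averaging detector and a single-rank weight vector to an OS-CFAR, so learning the weight vector from training data interpolates between (and beyond) these classical choices.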
Boughton, David A.; Adams, P.B.; Anderson, E.; Fusaro, Craig; Keller, E.; Kelley, Elsie; Lentsch, Leo; Nielsen, J. L.; Perry, Katie; Regan, Helen; Swift, C.; Watson, Fred
2006-01-01
This report by the National Marine Fisheries Service applies a formal evaluation framework to the problem of delineating Oncorhynchus mykiss populations in the South-Central/Southern California Coast recovery domain, in support of recovery planning under the Endangered Species Act.
Motivation and Productivity as a Function of Corporate Climate.
ERIC Educational Resources Information Center
Hellweg, Susan A.
The current status of productivity and motivation research, particularly as they relate to communication studies and climate studies, is delineated in this paper, largely by a review of literature in these areas. In the section following the introduction, the problems of defining productivity and its relation to performance and communication are…
THE SCHOOL AND THE MIGRANT CHILD--A SURVEY INTERPRETED.
ERIC Educational Resources Information Center
National Committee on the Education of Migrant Children, Washington, DC.
A SURVEY CONDUCTED TO SECURE INFORMATION ON CONDITIONS AFFECTING MIGRANT CHILDREN IS PRESENTED. A FIVE-PART QUESTIONNAIRE DELINEATES THE NUMBER OF MIGRANT CHILDREN IN A GIVEN STATE, THEIR PARTICIPATION IN REGULAR AND SUMMER TERMS, AND NEEDS AND PROBLEMS CONNECTED WITH THEIR CLASSROOM ATTENDANCE. THE QUESTIONNAIRE HAS BEEN SENT TO DEPARTMENTS OF…
ERIC Educational Resources Information Center
Van Hulle, Carol A.; Schmidt, Nicole L.; Goldsmith, H. Hill
2012-01-01
Background: Although impaired sensory processing accompanies various clinical conditions, the question of its status as an independent disorder remains open. Our goal was to delineate the comorbidity (or lack thereof) between childhood psychopathology and sensory over-responsivity (SOR) in middle childhood using phenotypic and behavior-genetic…
Intruder or Resource? The Family's Influence in College Counseling Centers
ERIC Educational Resources Information Center
Haber, Russell; Merck, Rhea A.
2010-01-01
College can provide a transition from interdependence to differentiation in the family. With recent trends and legal cases that document increasing complexity and severity of mental health problems in college, it is important to consider the family as a partner in the therapeutic process. This article delineates a rationale, guidelines, and…
ERIC Educational Resources Information Center
Martorana, S. V., Ed.; And Others
This publication contains the text of the main presentations and the highlights of discussion groups from the Ninth Annual Pennsylvania Conference on Postsecondary Occupational Education. The conference theme was "Programming Postsecondary Occupational Education." Ewald Nyquist, the first speaker, delineated the problems faced by…
Curricula as Spaces of Interruption?
ERIC Educational Resources Information Center
Savin-Baden, Maggi
2011-01-01
This paper suggests that there has been a move away from teaching as a means of transmitting information, towards supporting learning as a student-generated activity. There has been much work relating to this in the arena of problem-based learning, which to date has been seen as a relatively stable approach to learning, delineated by particular…
ERIC Educational Resources Information Center
Gallant, Tricia Bertram; Drinan, Patrick
2008-01-01
The strategic choices facing higher education in confronting problems of academic misconduct need to be rethought. Using institutional theory, a model of academic integrity institutionalization is proposed that delineates four stages and a pendulum metaphor. A case study is provided to illustrate how the model can be used by postsecondary…
Through the Lens of Sensory Integration: A Different Way of Analyzing Challenging Behavior.
ERIC Educational Resources Information Center
Bakley, Sue
2001-01-01
Examines how sensory integration disorders contribute to behavioral difficulties in young children and how considering the neurological underpinnings to behavior problems can help to clarify their origins and lead to obtaining appropriate and effective help. Lists signs of sensory integration disorders. Delineates techniques to use when a child…
Higher Education Studies in Japan
ERIC Educational Resources Information Center
Kaneko, Motohisa
2010-01-01
The rapid development of higher education in the postwar period has given rise to various problems, and higher education studies in Japan have developed in response to them. What have been the major issues, and how did academic research respond to them, in postwar Japan? This article delineates an outline of higher education studies in general,…
Digital Assessment: A Picture Is Worth 1,000 Surveys
ERIC Educational Resources Information Center
Jackson, Michael W.; Rodgers, Jacci L.
2012-01-01
The role of accountability is becoming increasingly complex. Regional, state, programmatic, and national accreditors, as well as the constituents, demand to know why a problem exists, what the underlying causes are, and how schools are going to fix it. From a proactive standpoint, institutions want to delineate, and in some cases are required to…
30 CFR 282.20 - Obligations and responsibilities of lessees.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the approved Delineation, Testing, or Mining Plans; and other written or oral orders or instructions issued by the Director when performing exploration, testing, development, and production activities..., testing, development, and production operations on the lease available to the Director for examination and...
32 CFR 218.2 - General procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.2 General procedures. The following procedures govern the... exposure. (c) Qualitatively assess the radiation environment in order to delineate contaminated areas. If...
Automatic CT simulation optimization for radiation therapy: A general strategy.
Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa
2014-03-01
In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4.
The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol values for achieving the optimal image quality index of 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index than 120-kVp protocols at the same dose level. The trace of target and organ dosimetry coverage and the γ passing rates of seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose. The experimental study indicated that the optimal CT simulation protocol and the corresponding radiation dose vary significantly with patient size, contouring accuracy requirements, and radiation treatment planning task.
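The selection rule described above (the minimum-dose protocol whose image quality index still meets the contouring threshold of 4.4) can be sketched as follows. The 140 kVp / 32.2 mGy pair for a 43 cm patient is taken from the reported results; the other candidate rows are invented for illustration, and the data layout is an assumption of this sketch, not the authors' implementation.

```python
def select_protocol(candidates, iqi_threshold=4.4):
    """Pick the lowest-dose protocol that still meets the image quality index.

    candidates: list of dicts with keys 'kvp', 'ctdi_vol_mGy', 'iqi'.
    Returns None when no candidate reaches the required image quality.
    """
    feasible = [c for c in candidates if c["iqi"] >= iqi_threshold]
    if not feasible:
        return None
    return min(feasible, key=lambda c: c["ctdi_vol_mGy"])

# Illustrative candidate protocols for a 43 cm patient; only the
# 140 kVp / 32.2 mGy row comes from the reported phantom results.
protocols_43cm = [
    {"kvp": 120, "ctdi_vol_mGy": 45.0, "iqi": 4.1},
    {"kvp": 140, "ctdi_vol_mGy": 32.2, "iqi": 4.4},
    {"kvp": 140, "ctdi_vol_mGy": 60.0, "iqi": 5.2},
]
best = select_protocol(protocols_43cm)
print(best["kvp"], best["ctdi_vol_mGy"])  # → 140 32.2
```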
Multivariate Non-Symmetric Stochastic Models for Spatial Dependence Models
NASA Astrophysics Data System (ADS)
Haslauer, C. P.; Bárdossy, A.
2017-12-01
A copula-based multivariate framework allows more flexibility in describing different kinds of dependence than models relying on the confining assumption of symmetric Gaussian dependence: different quantiles can be modelled with a different degree of dependence, and it is demonstrated how this is to be expected given process understanding. Maximum-likelihood-based multivariate parameter estimation yields stable and reliable results; not only are improved cross-validation-based measures of uncertainty obtained, but also a more realistic spatial structure of uncertainty compared to second-order models of dependence. As much information as is available is included in the parameter estimation: incorporating censored measurements (e.g., below the detection limit, or above the sensitive range of the measurement device) yields more realistic spatial models; the proportion of true zeros can be estimated jointly with, and distinguished from, censored measurements, which allows inferences about the age of a contaminant in the system; and secondary information (categorical and on the ratio scale) is used to improve estimation of the primary variable. These copula-based multivariate statistical techniques are demonstrated on hydraulic conductivity observations at the Borden site (Canada), the MADE site (USA), and a large regional groundwater quality data-set in south-west Germany. Fields of spatially distributed K were simulated with identical marginal distributions and identical second-order spatial moments, yet showed substantially differing solute transport characteristics when numerical tracer tests were performed. A statistical methodology is shown that allows the delineation of a boundary layer separating homogeneous parts of a spatial data-set. The effects of this boundary layer (macro structure) and of the spatial dependence of K (micro structure) on solute transport behaviour are shown.
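The quantile-dependent dependence that motivates the copula framework can be made concrete with a minimal rank-based diagnostic. This is an illustrative check, not the authors' estimator: it compares joint probability mass in the lower-left corner of the empirical copula with that in the upper-right corner, a difference a symmetric Gaussian dependence model forces to zero.

```python
import numpy as np

def tail_asymmetry(x, y, q=0.25):
    # Rank-transform both variables to pseudo-observations on (0, 1),
    # then compare joint mass where both are below quantile q with joint
    # mass where both are above 1 - q. Symmetric (Gaussian) dependence
    # implies the two masses are equal; a nonzero difference indicates
    # the kind of quantile-dependent dependence a copula model can fit.
    n = len(x)
    u = (np.argsort(np.argsort(x)) + 1) / (n + 1)
    v = (np.argsort(np.argsort(y)) + 1) / (n + 1)
    lower = np.mean((u < q) & (v < q))
    upper = np.mean((u > 1 - q) & (v > 1 - q))
    return lower - upper
```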
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective techniques with complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a problem with constraints and multiple objective functions into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure gives the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique is used within the optimizer to improve the overall computational efficiency of the procedure and make it suitable for design applications in an industrial setting.
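The K-S aggregation at the heart of this formulation can be sketched in a few lines. This is the standard Kreisselmeier-Steinhauser envelope, a smooth conservative bound on the maximum of the (weighted, scaled) objectives; the choice of rho and any scaling are assumptions of the sketch, not the paper's settings.

```python
import math

def ks_aggregate(objectives, rho=50.0):
    # Kreisselmeier-Steinhauser envelope: a smooth upper bound on
    # max(objectives). Factoring out the max keeps the exponentials
    # numerically stable; larger rho draws the envelope tighter.
    m = max(objectives)
    s = sum(math.exp(rho * (g - m)) for g in objectives)
    return m + math.log(s) / rho
```

Minimizing this single scalar with an unconstrained method such as BFGS then stands in for the constrained multiple-objective problem, which is the transformation the abstract describes.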
NASA Astrophysics Data System (ADS)
Endo, M.; Hori, T.; Koyama, K.; Yamaguchi, I.; Arai, K.; Kaiho, K.; Yanabu, S.
2008-02-01
Using a high-temperature superconductor, we constructed and tested a model superconducting fault current limiter (SFCL) that incorporates a vacuum interrupter with an electromagnetic repulsion mechanism. Aiming at a high-voltage-class SFCL, we produced an electromagnetic repulsion switch equipped with a 24 kV vacuum interrupter (VI). A problem is that the opening speed decreases, because a larger vacuum interrupter has heavier contacts; as a result, the current flowing in the superconductor may not be interruptible within a half cycle. To solve this problem, it is necessary to redesign the coil connected in parallel and to strengthen the electromagnetic repulsion force at the moment the vacuum interrupter opens. The coil design was therefore changed, and a current limiting test was conducted to examine whether the problem was solved. The test used 4 series- and 2 parallel-connected YBCO thin films, each 12 cm long, with a parallel resistance (0.1 Ω) connected across each film. As a result, we succeeded in interrupting the superconductor current within a half cycle, and the series- and parallel-connected YBCO thin films limited the current without failure.
Document retrieval on repetitive string collections.
Gagie, Travis; Hartikainen, Aleksi; Karhu, Kalle; Kärkkäinen, Juha; Navarro, Gonzalo; Puglisi, Simon J; Sirén, Jouni
2017-01-01
Most of the fastest-growing string collections today are repetitive, that is, most of the constituent documents are similar to many others. As these collections keep growing, a key approach to handling them is to exploit their repetitiveness, which can reduce their space usage by orders of magnitude. We study the problem of indexing repetitive string collections in order to perform efficient document retrieval operations on them. Document retrieval problems are routinely solved by search engines on large natural language collections, but the techniques are less developed on generic string collections. The case of repetitive string collections is even less understood, and there are very few existing solutions. We develop two novel ideas, interleaved LCPs and precomputed document lists, that yield highly compressed indexes solving the problem of document listing (find all the documents where a string appears), top-k document retrieval (find the k documents where a string appears most often), and document counting (count the number of documents where a string appears). We also show that a classical data structure supporting the latter query becomes highly compressible on repetitive data. Finally, we show how the tools we developed can be combined to solve ranked conjunctive and disjunctive multi-term queries under the simple [Formula: see text] model of relevance. We thoroughly evaluate the resulting techniques in various real-life repetitiveness scenarios, and recommend the best choices for each case.
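The three query types defined above have simple reference semantics, which a naive baseline fixes precisely. The paper's contribution is answering these queries in compressed space over repetitive collections; this sketch only pins down what each operation returns, using linear scans instead of any compressed index.

```python
from collections import Counter

def document_listing(docs, pattern):
    # All document ids where the pattern occurs at least once.
    return [i for i, d in enumerate(docs) if pattern in d]

def top_k_retrieval(docs, pattern, k):
    # The k documents where the pattern occurs most often.
    counts = Counter({i: d.count(pattern)
                      for i, d in enumerate(docs) if pattern in d})
    return [i for i, _ in counts.most_common(k)]

def document_counting(docs, pattern):
    # Number of documents containing the pattern (not total occurrences).
    return sum(pattern in d for d in docs)
```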
NASA Astrophysics Data System (ADS)
Gusev, Aleksandr I.
2000-01-01
Data on order-disorder phase transformations in strongly nonstoichiometric carbides and nitrides MXy (X=C, N) of Group IV and V transition metals at temperatures below 1300-1400 K are reviewed. The order-parameter functional method as applied to atomic and vacancy ordering in strongly nonstoichiometric MXy compounds and to phase equilibrium calculations for M-X systems is discussed. Phase diagram calculations for the Ti-C, Zr-C, Hf-C, V-C, Nb-C, Ta-C, Ti-N, and Ti-B-C systems (with the inclusion of the ordering of nonstoichiometric carbides and nitrides) and those for pseudobinary carbide M(1)C-M(2)C systems are presented. Heat capacity, electrical resistivity and magnetic susceptibility changes at reversible order-disorder phase transformations in nonstoichiometric carbides are considered.
Mapping advanced argillic alteration at Cuprite, Nevada, using imaging spectroscopy
Swayze, Gregg A.; Clark, Roger N.; Goetz, Alexander F.H.; Livo, K. Eric; Breit, George N.; Kruse, Fred A.; Sutley, Stephen J.; Snee, Lawrence W.; Lowers, Heather A.; Post, James L.; Stoffregen, Roger E.; Ashley, Roger P.
2014-01-01
Mineral maps based on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were used to study late Miocene advanced argillic alteration at Cuprite, Nevada. Distributions of Fe-bearing minerals, clays, micas, sulfates, and carbonates were mapped using the Tetracorder spectral-shape matching system. The Al content of white micas increases toward altered areas and near intrusive rocks. Alunite composition varies from pure K to intimate mixtures of Na-K endmembers with subpixel occurrences of huangite, the Ca analogue of alunite. Intimately mixed Na-K alunite marks areas of relatively lower alteration temperature, whereas co-occurring Na-alunite and dickite may delineate relict hydrothermal conduits. The presence of dickite, halloysite, and well-ordered kaolinite, but absence of disordered kaolinite, is consistent with acidic conditions during hydrothermal alteration. Partial lichen cover on opal spectrally mimics chalcedony, limiting its detection to lichen-free areas. Pods of buddingtonite are remnants of initial quartz-adularia-smectite alteration. Thus, spectral maps provide a synoptic view of the surface mineralogy, and define a previously unrecognized early steam-heated hydrothermal event. Faulting and episodes of hydrothermal alteration at Cuprite were intimately linked to upper plate movements above the Silver Peak-Lone Mountain detachment and growth, collapse, and resurgence of the nearby Stonewall Mountain volcanic complex between 8 and 5 Ma. Isotopic dating indicates that hydrothermal activity started at least by 7.61 Ma and ended by about 6.2 Ma. Spectral and stable isotope data suggest that Cuprite is a late Miocene low-sulfidation adularia-sericite type hot spring deposit overprinted by late-stage, steam-heated advanced argillic alteration formed along the margin of the Stonewall Mountain caldera.
Jang, K L; Vernon, P A; Livesley, W J
2000-06-01
This study seeks to estimate the extent to which a common genetic and environmental basis is shared between (i) traits delineating specific aspects of antisocial personality and alcohol misuse, and (ii) childhood family environments, traits delineating broad domains of personality pathology and alcohol misuse. Postal survey data were collected from monozygotic and dizygotic twin pairs. Twin pairs were recruited from Vancouver, British Columbia and London, Ontario, Canada using newspaper advertisements, media stories and twin clubs. Data obtained from 324 monozygotic and 335 dizygotic twin pairs were used to estimate the extent to which traits delineating specific antisocial personality traits and alcohol misuse shared a common genetic and environmental aetiology. Data from 81 monozygotic and 74 dizygotic twin pairs were used to estimate the degree to which traits delineating personality pathology, childhood family environment and alcohol misuse shared a common aetiology. Current alcohol misuse and personality pathology were measured using scales contained in the self-report Dimensional Assessment of Personality Pathology. Perceptions of childhood family environment were measured using the self-report Family Environment Scale. Multivariate genetic analyses showed that a subset of traits delineating components of antisocial personality (i.e. grandiosity, attention-seeking, failure to adopt social norms, interpersonal violence and juvenile antisocial behaviours) are influenced by genetic factors in common to alcohol misuse. Genetically based perceptions of childhood family environment had little relationship with alcohol misuse. Heritable personality factors that influence the perception of childhood family environment play only a small role in the liability to alcohol misuse. 
Instead, liability to alcohol misuse is related to genetic factors common to a specific subset of antisocial personality traits describing conduct problems, narcissistic and stimulus-seeking behaviour.
Ultrarelativistic bound states in the spherical well
DOE Office of Scientific and Technical Information (OSTI.GOV)
Żaba, Mariusz; Garbaczewski, Piotr
2016-07-15
We address an eigenvalue problem for the ultrarelativistic (Cauchy) operator (−Δ)^{1/2}, whose action is restricted to functions that vanish beyond the interior of a unit sphere in three spatial dimensions. We provide high-accuracy spectral data for the lowest eigenvalues and eigenfunctions of this infinite spherical well problem. Our focus is on the radial and orbital shapes of eigenfunctions. The spectrum consists of an ordered set of strictly positive eigenvalues which naturally splits into non-overlapping, orbitally labelled E_(k,l) series. For each orbital label l = 0, 1, 2, …, the label k = 1, 2, … enumerates the consecutive lth-series eigenvalues, each of which is (2l + 1)-fold degenerate. The l = 0 eigenvalue series E_(k,0) is identical with the set of even-labeled eigenvalues for the d = 1 Cauchy well: E_(k,0)(d = 3) = E_2k(d = 1). Likewise, the eigenfunctions ψ_(k,0)(d = 3) and ψ_2k(d = 1) show affinity. We have identified the generic functional form of the eigenfunctions of the spherical well, which are composed of a product of a solid harmonic and a suitable purely radial function. The method to evaluate (approximately) the latter follows a universal pattern which effectively allows one to skip all, sometimes involved, intermediate calculations (those were used while computing the eigenvalues for l ≤ 3).
1983-03-01
Report 1: Microgravimetric and Magnetic Surveys: Medford Cave Site, Florida. Report 2: Seismic Methodology, Medford Cave Site, Florida. Performing organization: U.S. Army Engineer Waterways Experiment Station, Geotechnical Laboratory, P.O. Box 631, Vicksburg, Miss. 39180 (CWIS Work Unit 31150).
The delineation and interpretation of the Earth's gravity field
NASA Technical Reports Server (NTRS)
Marsh, Bruce D.
1987-01-01
The geoid and topographic fields of the central Pacific were delineated and shown to correlate closely at intermediate wavelengths (500 to 2500 km). The associated admittance shows that anomalies having wavelengths less than about 1000 km are probably supported by the elastic strength of the lithosphere. Larger wavelength anomalies are due to dynamic effects in the sublithosphere. Direct modeling of small scale convection in the asthenosphere shows that the amplitudes of observed geoid and topographic anomalies can be independently matched, but that the observed admittance cannot. Only by imposing an initial regional variation in the thermal regime is it possible to match the admittance. It is proposed that this variation may be due to differences in the onset time of convection beneath lithosphere of different ages. That is, convection beneath thickening lithosphere is strongly dependent on the rate of thickening (V) relative to the rise time for convection. The critical Rayleigh number contains the length scale K/V, where K is the thermal diffusivity. Young, fast-growing lithosphere stabilizes the underlying asthenosphere unless it has an unusually low viscosity. Lithosphere of different age, separated by fracture zones, will go unstable at different times, producing a regional horizontal temperature gradient that may strongly influence convection. Laboratory and numerical experiments are proposed to study this form of convection and its influence on the geoid.
Invariant models in the inversion of gravity and magnetic fields and their derivatives
NASA Astrophysics Data System (ADS)
Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni
2014-11-01
In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of this power law is crucial. It has been suggested to determine it from the field decay due to a single source block; alternatively, it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field in the magnetic case, and to that of the 1st derivative of the gravity field in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of this invariance is difficult because of its non-linearity and of its variable form, due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter: we show that regularization can severely affect the depth to the source, because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
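The role of the depth-weighting exponent can be illustrated with a minimal weighted minimum-norm inversion. The power-law form w(z) = (z + z0)^(−β/2) and all numbers here are generic textbook choices, not the authors' setup; the sketch only shows where the exponent β enters the inversion.

```python
import numpy as np

def depth_weighting(depths, beta, z0=1.0):
    # Power-law depth weighting counteracting the natural decay of
    # potential-field kernels with source depth; beta is the exponent
    # whose choice (and invariance across field derivatives) is at issue.
    return (depths + z0) ** (-beta / 2.0)

def weighted_minimum_norm(G, d, depths, beta, z0=1.0):
    # Solve min ||W m||_2 subject to G m = d via the substitution
    # m = W^(-1) u, so deep cells are no longer penalized out of the model.
    W_inv = np.diag(1.0 / depth_weighting(depths, beta, z0))
    u = np.linalg.pinv(G @ W_inv) @ d   # minimum-norm solution in u
    return W_inv @ u
```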
Lucas-Neto, Lia; Reimão, Sofia; Oliveira, Edson; Rainha-Campos, Alexandre; Sousa, João; Nunes, Rita G; Gonçalves-Ferreira, António; Campos, Jorge G
2015-07-01
The human nucleus accumbens (Acc) has become a target for deep brain stimulation (DBS) in some neuropsychiatric disorders. Nonetheless, even with the most recent advances in neuroimaging, it remains difficult to accurately delineate the Acc and closely related subcortical structures with conventional MRI sequences. Our purpose is to perform an MRI study of the human Acc and to determine whether there are reliable anatomical landmarks that enable the precise location and identification of the nucleus and its core/shell division. For the Acc identification and delineation based on anatomical landmarks, T1WI, T1IR and STIR 3T-MR images were acquired in 10 healthy volunteers. Additionally, 32-direction DTI was obtained for Acc segmentation. Seed masks for the Acc were generated with FreeSurfer and probabilistic tractography was performed using FSL. The probability of connectivity between the seed voxels and distinct brain areas was determined and subjected to k-means clustering analysis, defining 2 different regions. With conventional T1WI, the Acc borders are better defined through its surrounding anatomical structures. The DTI color-coded vector maps and IR sequences add further detail to the Acc identification and delineation. Additionally, using probabilistic tractography it is possible to segment the Acc into core and shell divisions and establish its structural connectivity with different brain areas. Advanced MRI techniques allow in vivo delineation and segmentation of the human Acc and represent an additional guiding tool in the precise and safe target definition for DBS. © 2015 International Neuromodulation Society.
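The clustering step can be illustrated with a minimal k-means over per-voxel connectivity profiles. This is a pure-NumPy stand-in: the study derived the profiles from FSL probabilistic tractography, and the deterministic initialization below is an assumption of the sketch.

```python
import numpy as np

def kmeans(X, k=2, iters=50):
    # Minimal Lloyd's k-means. Each row of X is one seed voxel's profile
    # of connection probabilities to the target brain areas; k = 2 aims
    # at the core/shell division described above.
    idx = np.linspace(0, len(X) - 1, k).astype(int)  # deterministic init
    centers = X[idx].astype(float)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)                   # nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```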
Towards machine ecoregionalization of Earth's landmass using pattern segmentation method
NASA Astrophysics Data System (ADS)
Nowosad, Jakub; Stepinski, Tomasz F.
2018-07-01
We present and evaluate a quantitative method for delineation of ecophysiographic regions throughout the entire terrestrial landmass. The method uses a new pattern-based segmentation technique which attempts to emulate, in computer code, the qualitative, weight-of-evidence approach to delineation of ecoregions. An ecophysiographic region is characterized by homogeneous physiography defined by the cohesiveness of patterns of four variables: land cover, soils, landforms, and climatic patterns. Homogeneous physiography is a necessary but not sufficient condition for a region to be an ecoregion; thus machine delineation of ecophysiographic regions is the first, important step toward global ecoregionalization. In this paper, we focus on the first-order approximation of the proposed method - delineation on the basis of the patterns of land cover alone. We justify this approximation by the existence of significant spatial associations between the various physiographic variables. The resulting ecophysiographic regionalization (ECOR) is shown to be more physiographically homogeneous than existing global ecoregionalizations (Terrestrial Ecoregions of the World (TEW) and Bailey's Ecoregions of the Continents (BEC)). The presented quantitative method has the advantage of being transparent and objective. It can be verified, easily updated, modified and customized for specific applications. Each region in ECOR contains detailed, SQL-searchable information about the physiographic patterns within it, as well as a computer-generated label. To give a sense of how ECOR compares to TEW and, in the U.S., to EPA Level III ecoregions, we contrast these different delineations using two specific sites as examples. We conclude that ECOR yields a regionalization somewhat similar to the EPA Level III ecoregions, but for the entire world and by automatic means.
Iterative Otsu's method for OCT improved delineation in the aorta wall
NASA Astrophysics Data System (ADS)
Alonso, Daniel; Real, Eusebio; Val-Bernal, José F.; Revuelta, José M.; Pontón, Alejandro; Calvo Díez, Marta; Mayorga, Marta; López-Higuera, José M.; Conde, Olga M.
2015-07-01
Degradation of the human ascending thoracic aorta has been visualized with Optical Coherence Tomography (OCT). OCT images of the vessel wall exhibit structural degradation in the media layer of the artery, this disorder being the final trigger of the pathology. The degeneration in the vessel wall appears as low-reflectivity areas, owing to the different optical properties of acidic polysaccharides and mucopolysaccharides in contrast with the typically ordered structure of smooth muscle cells, elastin and collagen fibers. An OCT indicator of the extent of wall degradation can be generated by spatially quantifying the degraded areas, in a similar way to conventional histopathology. This proposed OCT marker could in the future offer a real-time clinical perception of vessel status to help cardiovascular surgeons in vessel repair interventions. However, the delineation of degraded areas on the OCT B-scan image is sometimes difficult due to the presence of speckle noise, variable signal-to-noise ratio (SNR) conditions in the measurement process, etc. Degraded areas can be delimited by basic thresholding techniques that exploit the evidence of disorder in B-scan images, but this delineation is not optimal in the aorta samples and requires complex additional processing stages. This work proposes an optimized delineation of degraded areas within the aorta wall, robust to noisy environments, based on the iterative application of Otsu's thresholding method. The results improve the delineation of wall anomalies compared with a single application of the algorithm. The approach could also be transferred to other clinical scenarios: carotid arteries, aorto-iliac or ilio-femoral sections, intracranial arteries, etc.
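The abstract does not spell out the iteration scheme, so the following is one plausible reading, offered as a sketch only: re-apply Otsu's threshold within the low-reflectivity class so that the final cut adapts to the dark, degraded regions of the B-scan.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    # Classic Otsu: pick the threshold maximizing between-class variance.
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)              # cumulative class-0 probability
    mu = np.cumsum(p * centers)    # cumulative class-0 mass * mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[np.argmax(sigma_b)]

def iterative_otsu(values, n_iter=3):
    # Hypothetical iteration: restrict to the sub-threshold (darker)
    # pixels and re-estimate, sharpening the low-reflectivity cut.
    t = otsu_threshold(values)
    for _ in range(n_iter - 1):
        low = values[values < t]
        if low.size < 2:
            break
        t = otsu_threshold(low)
    return t
```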
Liu, Huanjun; Huffman, Ted; Liu, Jiangui; Li, Zhe; Daneshfar, Bahram; Zhang, Xinle
2015-01-01
Understanding agricultural ecosystems and their complex interactions with the environment is important for improving agricultural sustainability and environmental protection. Developing the necessary understanding requires approaches that integrate multi-source geospatial data and interdisciplinary relationships at different spatial scales. In order to identify and delineate landscape units representing relatively homogenous biophysical properties and eco-environmental functions at different spatial scales, a hierarchical system of uniform management zones (UMZ) is proposed. The UMZ hierarchy consists of seven levels of units at different spatial scales, namely site-specific, field, local, regional, country, continent, and globe. Relatively few studies have focused on the identification of the two middle levels of units in the hierarchy, namely the local UMZ (LUMZ) and the regional UMZ (RUMZ), which prevents true eco-environmental studies from being carried out across the full range of scales. This study presents a methodology to delineate LUMZ and RUMZ spatial units using land cover, soil, and remote sensing data. A set of objective criteria were defined and applied to evaluate the within-zone homogeneity and between-zone separation of the delineated zones. The approach was applied in a farming and forestry region in southeastern Ontario, Canada, and the methodology was shown to be objective, flexible, and applicable with commonly available spatial data. The hierarchical delineation of UMZs can be used as a tool to organize the spatial structure of agricultural landscapes, to understand spatial relationships between cropping practices and natural resources, and to target areas for application of specific environmental process models and place-based policy interventions.
NASA Astrophysics Data System (ADS)
Schlögel, R.; Marchesini, I.; Alvioli, M.; Reichenbach, P.; Rossi, M.; Malet, J.-P.
2018-01-01
We perform landslide susceptibility zonation with slope units using three digital elevation models (DEMs) of varying spatial resolution of the Ubaye Valley (South French Alps). In so doing, we applied a recently developed algorithm automating slope unit delineation, given a number of parameters, in order to optimize simultaneously the partitioning of the terrain and the performance of a logistic regression susceptibility model. The method allowed us to obtain optimal slope units for each available DEM spatial resolution. For each resolution, we studied the susceptibility model performance by analyzing in detail the relevance of the conditioning variables. The analysis is based on landslide morphology data, considering either the whole landslide or only the source area outline as inputs. The procedure allowed us to select the most useful information, in terms of DEM spatial resolution, thematic variables and landslide inventory, in order to obtain the most reliable slope unit-based landslide susceptibility assessment.
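The susceptibility model named above is a standard logistic regression fitted per slope unit; a minimal sketch follows. The gradient-descent fit and the covariates are illustrative assumptions (the study's actual conditioning variables come from the DEMs and thematic data it describes).

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, epochs=5000):
    # X: one row per slope unit (e.g. mean slope, relief, lithology dummies);
    # y: 1 if the unit contains a mapped landslide (or source area), else 0.
    Xb = np.hstack([np.ones((len(X), 1)), X])   # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)       # gradient of the log-loss
    return w

def susceptibility(w, X):
    # Predicted landslide probability per slope unit.
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))
```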
1950-12-01
Contents (excerpt): Potentiometer Loading Compensation; K. Limiting an Integral; L. Deadspace and Backlash; M. Accuracy; IV. Plugboard Wiring. … The plugboard is the major modification made on the REAC and as a consequence will receive the major emphasis. This manual demands of the reader a … weeks depending on the problem complexity, while individual runs require on the order of a minute once the plugboard has been wired. However, altering …
Doctor, Daniel H.; Young, John A.
2013-01-01
LiDAR (Light Detection and Ranging) surveys of karst terrains provide high-resolution digital elevation models (DEMs) that are particularly useful for mapping sinkholes. In this study, we used automated processing tools within ArcGIS (v. 10.0) operating on a 1.0 m resolution LiDAR DEM in order to delineate sinkholes and closed depressions in the Boyce 7.5 minute quadrangle located in the northern Shenandoah Valley of Virginia. The results derived from the use of the automated tools were then compared with depressions manually delineated by a geologist. Manual delineation of closed depressions was conducted using a combination of 1.0 m DEM hillshade, slopeshade, aerial imagery, and Topographic Position Index (TPI) rasters. The most effective means of visualizing depressions in the GIS was using an overlay of the partially transparent TPI raster atop the slopeshade raster at 1.0 m resolution. Manually identified depressions were subsequently checked using aerial imagery to screen for false positives, and targeted ground-truthing was undertaken in the field. The automated tools that were utilized include the routines in ArcHydro Tools (v. 2.0) for prescreening, evaluating, and selecting sinks and depressions as well as thresholding, grouping, and assessing depressions from the TPI raster. Results showed that the automated delineation of sinks and depressions within the ArcHydro tools was highly dependent upon pre-conditioning of the DEM to produce "hydrologically correct" surface flow routes. Using stream vectors obtained from the National Hydrologic Dataset alone to condition the flow routing was not sufficient to produce a suitable drainage network, and numerous artificial depressions were generated where roads, railways, or other manmade structures acted as flow barriers in the elevation model. Additional conditioning of the DEM with drainage paths across these barriers was required prior to automated delineation of sinks and depressions.
In regions where the DEM had been properly conditioned, the tools for automated delineation performed reasonably well as compared to the manually delineated depressions, but generally overestimated the number of depressions thus necessitating manual filtering of the final results. Results from the TPI thresholding analysis were not dependent on DEM pre-conditioning, but the ability to extract meaningful depressions depended on careful assessment of analysis scale and TPI thresholding.
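The core idea behind depression extraction from a conditioned DEM is the fill-difference: raise every cell to its spill level, and call any cell raised by more than a cutoff part of a closed depression. A minimal, slow-but-transparent NumPy version (not the ArcHydro implementation) is:

```python
import numpy as np

def fill_depressions(dem, max_iter=10000):
    # Iterative relaxation: a cell's filled level is the max of its own
    # elevation and the min of its neighbors' filled levels. Grid-edge
    # cells are fixed at their elevations, so water can always drain
    # off the map; interior cells start at +inf and are lowered.
    filled = np.full(dem.shape, np.inf)
    filled[0, :], filled[-1, :] = dem[0, :], dem[-1, :]
    filled[:, 0], filled[:, -1] = dem[:, 0], dem[:, -1]
    for _ in range(max_iter):
        prev = filled.copy()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = np.roll(filled, (dr, dc), axis=(0, 1))
            cand = np.maximum(dem[1:-1, 1:-1], nb[1:-1, 1:-1])
            filled[1:-1, 1:-1] = np.minimum(filled[1:-1, 1:-1], cand)
        if np.array_equal(prev, filled):
            break
    return filled

def depression_mask(dem, min_depth=0.0):
    # Cells raised by the fill lie inside closed depressions (sinkholes).
    return (fill_depressions(dem) - dem) > min_depth
```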
NASA Astrophysics Data System (ADS)
Enzenhöfer, R.; Geiges, A.; Nowak, W.
2011-12-01
Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and perform rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate between uncertainty of contaminant location and actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation; only in post-processing do we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to best reduce the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins.
To allow an optimal choice among sampling strategies, we compare monitoring costs against delineation costs by accounting for ill-delineated fractions of protection zones. We demonstrate our concept on an illustrative, simplified 2D synthetic test case, using synthetic transmissivity and head measurements for conditioning. We assess the worth of optimally collected data in the context of protection zone delineation through the reduced area of the delineated zone at a user-specified risk acceptance level. Results indicate that, thanks to optimally collected data, risk-aware delineation can be performed at low to moderate additional cost compared to conventional delineation strategies.
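As a minimal illustration of the temporal-moment quantities this abstract relies on, the low-order moments of a breakthrough curve can be computed by numerical integration. Note that the study itself solves a reverse moment-transport equation rather than integrating curves; the function below is a generic sketch with illustrative names, not the authors' code.

```python
def temporal_moments(times, conc):
    """Temporal moments of a solute breakthrough curve via trapezoidal
    integration.  Returns (m0, mean_arrival, central_variance):
    m0 is the area under the curve (proportional to recovered mass),
    mean_arrival is the first normalized moment, and central_variance
    measures the spread of the arrival-time distribution."""
    def trapz(ys):
        return sum((ys[i] + ys[i + 1]) * (times[i + 1] - times[i]) / 2.0
                   for i in range(len(times) - 1))

    m0 = trapz(conc)                                   # zeroth moment
    m1 = trapz([t * c for t, c in zip(times, conc)])   # first raw moment
    mean = m1 / m0                                     # mean arrival time
    m2c = trapz([(t - mean) ** 2 * c
                 for t, c in zip(times, conc)]) / m0   # central variance
    return m0, mean, m2c
```

For a symmetric triangular pulse peaking at t = 2, the mean arrival time is 2, as expected.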
Sokolov, Alexander; Louhi-Kultanen, Marjatta
2018-06-07
The increasing volume and variety of pharmaceuticals found in natural water bodies has become a serious environmental problem. The implementation of cold plasma technology, specifically gas-phase pulsed corona discharge (PCD), for sulfamethizole abatement was studied in the present work. Sulfamethizole was observed to be easily oxidized by PCD; the flow rate and pH of the solution had no significant effect on the oxidation. Treatment at low pulse repetition frequency is preferable from the energy efficiency point of view but is more time-consuming. The maximum energy efficiency was around 120 g/kWh at half-life and around 50 g/kWh at the end of the treatment. Increasing the solution temperature from room temperature to 50 °C led to significant retardation of the process and a decrease in energy efficiency. The pseudo-first-order reaction rate constant (k_1) grows with increasing pulse repetition frequency and does not depend on pH. By contrast, decreasing the frequency reduces the second-order reaction rate constant (k_2). At an elevated temperature of 50 °C, the k_1 and k_2 values decrease 2 and 2.9 times at 50 pps and 500 pps, respectively. A lower temperature of 10 °C had no effect on oxidation efficiency compared with room temperature.
Unified Program Design: Organizing Existing Programming Models, Delivery Options, and Curriculum
ERIC Educational Resources Information Center
Rubenstein, Lisa DaVia; Ridgley, Lisa M.
2017-01-01
A persistent problem in the field of gifted education has been the lack of categorization and delineation of gifted programming options. To address this issue, we propose Unified Program Design as a structural framework for gifted program models. This framework defines gifted programs as the combination of delivery methods and curriculum models.…
Human relationships to fire prone ecosystems: Mapping values at risk on contested landscapes
Kari Gunderson; Steve Carver; Brett H. Davis
2011-01-01
A key problem in developing a better understanding of different responses to landscape level management actions, such as fuel treatments, is being able to confidently record and accurately spatially delineate the meanings stakeholders ascribe to the landscape. To more accurately understand these relationships with the Bitterroot National Forest, Montana, U.S.A., local...
An Inventory of Natural, Human, and Social Overhead Capital Resources in North-Central New Mexico.
ERIC Educational Resources Information Center
Carruthers, Garrey; Eastman, Clyde
Concerned with the north-central area of New Mexico (Rio Arriba, Taos, Colfax, Mora, Santa Fe, and San Miguel counties), this inventory describes the situation and delineation of the region, the natural resources (physical characteristics, land, land-ownership patterns, land-use patterns, land-title problems, water resources, and minerals); human…
ERIC Educational Resources Information Center
Davies, Patrick T.; Martin, Meredith J.; Cicchetti, Dante
2012-01-01
We examined the joint role of constructive and destructive interparental conflict in predicting children's emotional insecurity and psychological problems. In Study 1, 250 early adolescents (M = 12.6 years) and their primary caregivers completed assessments of family and child functioning. In Study 2, 201 mothers and their 2-year-old children…
Sociocultural Theory as an Approach to Aid EFL Learners
ERIC Educational Resources Information Center
Behroozizad, Sorayya; Nambiar, Radha M. K.; Amir, Zaini
2014-01-01
Learning English as a foreign language (EFL) has long been regarded a challenging task. Said challenge is clearly evident in the many studies attempting to delineate some of the major problems faced by EFL learners while trying to uncover both the sources and the solutions. This paper turns to the Vygotskian approach to language learning, in…
Art and Cognition: Integrating the Visual Arts in the Curriculum.
ERIC Educational Resources Information Center
Efland, Arthur D.
This book not only sheds light on the problems inhibiting art education, but also demonstrates how art contributes to the overall development of the mind. Delineating how the development of artistic interests and ability is an important aspect of cognition and learning, the book aims to show how art helps individuals construct cultural meaning, a…
1982-02-01
1968, 1969 and 1972 Conferences. Certain items on the list delineate problems needing research (reattachment zones, inviscid/boundary-layer interactions, ... the viscous energy equation), each in unaveraged form. As Peter Bradshaw has put it, God gave us one good model. Why should there be another model that is...
High Resolution, Low Altitude Aeromagnetic and Electromagnetic Survey of Mt Rainier
Rystrom, V.L.; Finn, C.; Deszcz-Pan, Maryla
2000-01-01
In October 1996, the USGS conducted a high-resolution airborne magnetic and electromagnetic survey in order to discern through-going sections of exposed altered rocks and those obscured beneath snow, vegetation and surficial unaltered rocks. Hydrothermally altered rocks weaken volcanic edifices, creating the potential for catastrophic sector collapses and the ensuing formation of destructive volcanic debris flows. These data, once compiled and interpreted, will be used to examine the geophysical properties of the Mt. Rainier volcano and to assist the USGS in its Volcanic Hazards Program and at its Cascades Volcano Observatory. Aeromagnetic and electromagnetic data provide a means of seeing through surficial layers and have long been tools for delineating structures within volcanoes. However, previously acquired geophysical data were not useful for small-scale geologic mapping. In this report, we present the new aeromagnetic and electromagnetic data, compare results from previously obtained low-resolution aeromagnetic data with the new data collected at low altitude along closely spaced flightlines, and provide information on potential problems with using high-resolution data.
Winnowing sequences from a database search.
Berman, P; Zhang, Z; Wolf, Y I; Koonin, E V; Miller, W
2000-01-01
In database searches for sequence similarity, matches to a distinct sequence region (e.g., protein domain) are frequently obscured by numerous matches to another region of the same sequence. In order to cope with this problem, algorithms are developed to discard redundant matches. One model for this problem begins with a list of intervals, each with an associated score; each interval gives the range of positions in the query sequence that align to a database sequence, and the score is that of the alignment. If interval I is contained in interval J, and I's score is less than J's, then I is said to be dominated by J. The problem is then to identify each interval that is dominated by at least K other intervals, where K is a given level of "tolerable redundancy." An algorithm is developed to solve the problem in O(N log N) time and O(N*) space, where N is the number of intervals and N* is a precisely defined value that never exceeds N and is frequently much smaller. This criterion for discarding database hits has been implemented in the Blast program, as illustrated herein with examples. Several variations and extensions of this approach are also described.
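The domination criterion described above can be made concrete with a short sketch. This is a naive O(N^2) illustration of the definition only, not the paper's O(N log N) algorithm, and the `Hit` record type and function name are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class Hit:
    start: int    # first query position covered by the alignment
    end: int      # last query position covered (inclusive)
    score: float  # alignment score

def dominated(hits, k):
    """Return indices of hits dominated by at least k other hits.

    Hit i is dominated by hit j when i's interval is contained in j's
    interval and i's score is lower.  k is the 'tolerable redundancy'
    level from the abstract: hits dominated by >= k others are candidates
    for discarding.
    """
    out = []
    for i, a in enumerate(hits):
        count = sum(1 for j, b in enumerate(hits)
                    if i != j
                    and b.start <= a.start and a.end <= b.end  # containment
                    and a.score < b.score)                     # lower score
        if count >= k:
            out.append(i)
    return out
```

With k = 1 every contained, lower-scoring hit is flagged; raising k keeps hits that are only mildly redundant.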
Design controls for large order systems
NASA Technical Reports Server (NTRS)
Doane, George B., III
1991-01-01
The output of this task will be a program plan which will delineate how MSFC will support and implement its portion of the Inter-Center Computational Controls Program Plan. Another output will be the results of looking at various multibody/multidegree of freedom computer programs in various environments.
Schimek-Jasch, Tanja; Troost, Esther G C; Rücker, Gerta; Prokic, Vesna; Avlar, Melanie; Duncker-Rohr, Viola; Mix, Michael; Doll, Christian; Grosu, Anca-Ligia; Nestle, Ursula
2015-06-01
Interobserver variability in the definition of target volumes (TVs) is a well-known confounding factor in (multicentre) clinical studies employing radiotherapy. Therefore, detailed contouring guidelines are provided in the prospective randomised multicentre PET-Plan (NCT00697333) clinical trial protocol. This trial compares strictly FDG-PET-based TV delineation with conventional TV delineation in patients with locally advanced non-small cell lung cancer (NSCLC). Despite detailed contouring guidelines, their interpretation by different radiation oncologists can vary considerably, leading to undesirable discrepancies in TV delineation. Considering this, as part of the PET-Plan study quality assurance (QA), a contouring dummy run (DR) consisting of two phases was performed to analyse the interobserver variability before and after teaching. In the first phase of the DR (DR1), radiation oncologists from 14 study centres were asked to delineate TVs as defined by the study protocol (gross TV, GTV; and two clinical TVs, CTV-A and CTV-B) in a test patient. A teaching session was held at a study group meeting, including a discussion of the results focussing on discordances in comparison to the per-protocol solution. Subsequently, the second phase of the DR (DR2) was performed in order to evaluate the impact of teaching. Teaching after DR1 resulted in a reduction of absolute TVs in DR2, as well as better concordance of TVs. The overall kappa (κ) indices increased from 0.63 to 0.71 (GTV), from 0.60 to 0.65 (CTV-A) and from 0.59 to 0.63 (CTV-B), demonstrating improvements in overall interobserver agreement. Contouring DRs and study group meetings as part of QA in multicentre clinical trials help to identify misinterpretations of per-protocol TV delineation.
Teaching the correct interpretation of protocol contouring guidelines leads to a reduction in interobserver variability and to more consistent contouring, which should consequently improve the validity of the overall study results.
Bayesian linkage and segregation analysis: factoring the problem.
Matthysse, S
2000-01-01
Complex segregation analysis and linkage methods are mathematical techniques for the genetic dissection of complex diseases. They are used to delineate complex modes of familial transmission and to localize putative disease susceptibility loci to specific chromosomal locations. The computational problem of Bayesian linkage and segregation analysis is one of integration in high-dimensional spaces. In this paper, three available techniques for Bayesian linkage and segregation analysis are discussed: Markov Chain Monte Carlo (MCMC), importance sampling, and exact calculation. The contribution of each to the overall integration will be explicitly discussed.
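Of the three techniques mentioned, importance sampling is the easiest to sketch. The self-normalized estimator below is a generic illustration of the idea, not code from the paper; the function names and the Gaussian example are assumptions. Unnormalized log-densities suffice because the weights are normalized:

```python
import math
import random

def snis_mean(f, log_p, sample_q, log_q, n=20000, seed=1):
    """Self-normalized importance sampling estimate of E_p[f(X)].

    Draws n samples from the proposal q, weights each by p/q (computed
    from possibly unnormalized log-densities), and returns the weighted
    average of f.  Subtracting the max log-weight keeps exp() stable.
    """
    rng = random.Random(seed)
    xs = [sample_q(rng) for _ in range(n)]
    lw = [log_p(x) - log_q(x) for x in xs]   # log importance weights
    m = max(lw)
    ws = [math.exp(l - m) for l in lw]
    return sum(w * f(x) for w, x in zip(ws, xs)) / sum(ws)

# Example: estimate the mean of a N(1, 1) target using a N(0, 2) proposal.
est = snis_mean(lambda x: x,
                log_p=lambda x: -(x - 1.0) ** 2 / 2.0,     # target, unnormalized
                sample_q=lambda rng: rng.gauss(0.0, 2.0),  # proposal sampler
                log_q=lambda x: -x ** 2 / 8.0)             # proposal, unnormalized
```

The estimate converges to the target mean (here, 1) as n grows; the quality of the proposal governs the effective sample size.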
Thermomechanical Fatigue of Ductile Cast Iron and Its Life Prediction
NASA Astrophysics Data System (ADS)
Wu, Xijia; Quan, Guangchun; MacNeil, Ryan; Zhang, Zhong; Liu, Xiaoyang; Sloss, Clayton
2015-06-01
Thermomechanical fatigue (TMF) behaviors of ductile cast iron (DCI) were investigated under out-of-phase (OP), in-phase (IP), and constrained strain-control conditions with temperature hold in various temperature ranges: 573 K to 1073 K, 723 K to 1073 K, and 433 K to 873 K (300 °C to 800 °C, 450 °C to 800 °C, and 160 °C to 600 °C). The integrated creep-fatigue theory (ICFT) model was incorporated into the finite element method to simulate the hysteresis behavior and predict the TMF life of DCI under those test conditions. With the consideration of four deformation/damage mechanisms: (i) plasticity-induced fatigue, (ii) intergranular embrittlement, (iii) creep, and (iv) oxidation, as revealed from the previous study on low cycle fatigue of the material, the model delineates the contributions of these physical mechanisms in the asymmetrical hysteresis behavior and the damage accumulation process leading to final TMF failure. This study shows that the ICFT model can simulate the stress-strain response and life of DCI under complex TMF loading profiles (OP and IP, and constrained with temperature hold).
Crack Turning and Arrest Mechanisms for Integral Structure
NASA Technical Reports Server (NTRS)
Pettit, Richard; Ingraffea, Anthony
1999-01-01
In the course of several years of research efforts to predict crack turning and flapping in aircraft fuselage structures and other problems related to crack turning, the 2nd-order maximum tangential stress theory has been identified as the theory most capable of predicting the observed test results. This theory requires knowledge of a material-specific characteristic length, as well as computation of the stress intensity factors and the T-stress, the second-order term in the asymptotic stress field in the vicinity of the crack tip. A characteristic length, r(sub c), is proposed for ductile materials pertaining to the onset of plastic instability, as opposed to the void-spacing theories espoused by previous investigators. For the plane stress case, an approximate estimate of r(sub c) is obtained from the asymptotic field for strain-hardening materials given by Hutchinson, Rice and Rosengren (HRR). A previous study using high-order finite element methods to calculate T-stresses by contour integrals yielded extremely high accuracy for selected test specimen geometries, and a theoretical error estimation parameter was defined. In the present study, it is shown that a large portion of the error in finite element computations of both K and T is systematic, and can be corrected after the initial solution if the finite element implementation utilizes a similar crack-tip discretization scheme for all problems. This scheme is applied, for two-dimensional problems, to a p-version finite element code, showing that sufficiently accurate values of both K(sub I) and T can be obtained with fairly low-order elements if correction is used. T-stress correction coefficients are also developed for the singular crack-tip rosette utilized in the adaptive-mesh finite element code FRANC2D, and are shown to reduce the error in the computed T-stress significantly.
Stress intensity factor correction was not attempted for FRANC2D because it employs a highly accurate quarter-point scheme to obtain stress intensity factors.
NASA Astrophysics Data System (ADS)
Liu, Q.; Jing, L.; Li, Y.; Tang, Y.; Li, H.; Lin, Q.
2016-04-01
For forest management purposes, high-resolution LiDAR and optical remote sensing imagery are used for treetop detection, tree crown delineation, and classification. The purpose of this study is to develop a self-adjusted dominant-scale calculation method and a new crown horizontal cutting method for the tree canopy height model (CHM) to detect and delineate tree crowns from LiDAR, under the hypothesis that a treetop is a radiometric or altitudinal maximum and that tree crowns consist of multi-scale branches. The core of the method is an automatic feature-scale selection strategy on the CHM, together with a multi-scale morphological reconstruction-open crown decomposition (MRCD) that extracts multi-scale morphological features of the CHM by: cutting the CHM from treetop to ground; analysing and refining the dominant multiple scales with differential horizontal profiles to obtain treetops; and segmenting the LiDAR CHM using a watershed segmentation approach marked with the MRCD treetops. This method solves the problem of false detections on CHM side surfaces produced by the traditional morphological opening canopy segment (MOCS) method. The novel MRCD delineates more accurate and quantitative multi-scale features of the CHM, and enables more accurate detection and segmentation of treetops and crowns. The MRCD method can also be extended to tree crown extraction from high-resolution optical remote sensing imagery. In an experiment on an aerial LiDAR CHM of a forest with multi-scale tree crowns, the proposed method yielded high-quality tree crown maps.
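The baseline hypothesis above, that a treetop is a local altitudinal maximum of the CHM, can be sketched on a toy grid. This is the naive single-scale detector that multi-scale methods like MRCD improve upon, not the paper's algorithm; the function name and parameters are illustrative:

```python
def detect_treetops(chm, radius=1, min_height=2.0):
    """Naive local-maximum treetop detection on a canopy height model grid.

    A cell is reported as a treetop if it is the strict maximum of its
    (2*radius+1)^2 window and taller than min_height (which suppresses
    spurious maxima in low ground vegetation).
    """
    h, w = len(chm), len(chm[0])
    tops = []
    for y in range(h):
        for x in range(w):
            z = chm[y][x]
            if z < min_height:
                continue
            neighbours = [chm[y2][x2]
                          for y2 in range(max(0, y - radius), min(h, y + radius + 1))
                          for x2 in range(max(0, x - radius), min(w, x + radius + 1))
                          if (y2, x2) != (y, x)]
            if all(z > v for v in neighbours):
                tops.append((y, x))
    return tops
```

A single fixed window size is exactly the limitation the abstract targets: crowns of different scales need different neighbourhood radii.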
A new approach to treat discontinuities in multi-layered soils
NASA Astrophysics Data System (ADS)
Berardi, Marco; Difonzo, Fabio; Caputo, Maria; Vurro, Michele; Lopez, Luciano
2017-04-01
The infiltration of water into two (or more) layered soils can give rise to preferential flow paths at the interface between different soils. A deep understanding of this phenomenon is of great interest in modeling various environmental problems in geosciences and hydrology. Flow through layered soils arises naturally in agriculture, and layered soils are also engineered as cover liners for landfills. In particular, the treatment of the soil discontinuity is of great interest from the modeling and numerical points of view, and is still an open problem (see, for example, Matthews et al.; Zha et al., 2013; De Luca and Cepeda, 2016). Approximating the soils as different porous media, the governing equation for this phenomenon is Richards' equation, in the following form:

C_1(ψ) ∂ψ/∂t = ∂/∂z [ K_1(ψ) ( ∂ψ/∂z - 1 ) ],  if z < \overline{z},
C_2(ψ) ∂ψ/∂t = ∂/∂z [ K_2(ψ) ( ∂ψ/∂z - 1 ) ],  if z > \overline{z},

where \overline{z} is the spatial threshold that identifies the change in soil structure, and C_1, C_2, K_1, K_2 are the hydraulic functions that describe the upper and the lower soil, respectively. The ψ-based form is used in this work. We use Filippov's theory to deal with discontinuous differential systems, and we handle the numerical discretization appropriately so that the above system can be treated by means of this theory, letting the discontinuity depend on the state variable. The advantages of this technique are better insight into the solution behavior on the discontinuity surface, and no need to average the hydraulic conductivity field on the threshold itself, as in the existing literature.
Vegetation Sampling for Wetland Delineation: A Review and Synthesis of Methods and Sampling Issues
2010-07-01
different combination of characteristics. Wetlands may exhibit sharp boundaries between plant communities (ecotones), a gradual boundary (ecocline), or...
What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.
Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A
The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
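To make the abstract's central criticism concrete, here is a plain sketch of Lloyd's K-means iteration. Note that K must be fixed before the data are seen, which is precisely the assumption MAP-DP removes; this minimal sketch (with deterministic initialization, where real implementations use random or k-means++ seeding) is not the paper's code:

```python
def kmeans(points, k, iters=50):
    """Plain Lloyd's algorithm on tuples of coordinates.

    Alternates assignment (each point to its nearest center) and update
    (each center to the mean of its cluster).  Initialization here is
    deterministic (first k points) for reproducibility.
    """
    centers = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters
```

On two well-separated square clusters, the algorithm recovers both centroids; the point of the abstract is that nothing in this loop can tell you whether k = 2 was the right choice.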
What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm
Baig, Fahd; Little, Max A.
2016-01-01
PMID: 27669525
Semantic Segmentation of Forest Stands of Pure Species as a Global Optimization Problem
NASA Astrophysics Data System (ADS)
Dechesne, C.; Mallet, C.; Le Bris, A.; Gouet-Brunet, V.
2017-05-01
Forest stand delineation is a fundamental task for forest management purposes that is still mainly performed manually, through visual inspection of (very) high spatial resolution geospatial images. Stand detection has barely been addressed in the literature, which has mainly focused, in forested environments, on individual tree extraction and tree species classification. From a methodological point of view, stand detection can be considered a semantic segmentation problem. This offers two advantages. First, one can retrieve the dominant tree species per segment. Second, one can use existing low-level tree species label maps from the literature as a basis for high-level object extraction. Thus, the semantic segmentation issue becomes a regularization issue in a weakly structured environment and can be formulated in an energy-based framework. This paper aims to investigate which regularization strategies from the literature are best adapted to delineating and classifying forest stands of pure species. Both airborne lidar point clouds and multispectral very high spatial resolution images are integrated for that purpose. Local methods (such as filtering and probabilistic relaxation) are not suited to this problem, since they increase classification accuracy by less than 5%. Global methods, based on an energy model, are more effective, with accuracy gains of up to 15%. The segmentation results using such models have an accuracy ranging from 96% to 99%.
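A minimal example of the "local" regularization strategies the abstract finds insufficient is a majority-vote filter over a label map; it is shown here only to make the baseline concrete (this is a generic sketch, not the paper's implementation, and it does not represent the global energy models that performed best):

```python
from collections import Counter

def majority_filter(labels, passes=1):
    """3x3 majority-vote smoothing of a 2-D label map.

    Each cell is replaced by the most frequent label in its 3x3
    neighbourhood (clipped at the borders).  Repeated passes smooth
    more aggressively but can erode thin, legitimate structures, one
    reason purely local regularizers underperform global ones.
    """
    h, w = len(labels), len(labels[0])
    for _ in range(passes):
        out = [row[:] for row in labels]
        for y in range(h):
            for x in range(w):
                votes = Counter(labels[y2][x2]
                                for y2 in range(max(0, y - 1), min(h, y + 2))
                                for x2 in range(max(0, x - 1), min(w, x + 2)))
                out[y][x] = votes.most_common(1)[0][0]
        labels = out
    return labels
```

One pass removes isolated misclassified pixels, which is roughly the few-percent accuracy gain reported for local methods.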
Zhang, Zhiqing; Kuzmin, Nikolay V; Groot, Marie Louise; de Munck, Jan C
2017-06-01
The morphologies contained in 3D third harmonic generation (THG) images of human brain tissue can report on the pathological state of the tissue. However, the complexity of THG brain images makes it challenging to use modern image processing tools, especially those for image filtering, segmentation and validation, to extract this information. We developed a salient edge-enhancing model of anisotropic diffusion for image filtering, based on higher-order statistics. We split the intrinsic three-phase segmentation problem into two two-phase segmentation problems, each of which we solved with a dedicated model, an active contour weighted by prior extreme. We applied the proposed algorithms to THG images of structurally normal ex-vivo human brain tissue, revealing key tissue components (brain cells, microvessels and neuropil) and enabling statistical characterization of these components. Comprehensive comparison to manually delineated ground truth validated the proposed algorithms. Quantitative comparison to second harmonic generation/auto-fluorescence images, acquired simultaneously from the same tissue area, confirmed the correctness of the main THG features detected. The software and test datasets are available from the authors (z.zhang@vu.nl). Supplementary data are available at Bioinformatics online.
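The anisotropic diffusion idea can be sketched with the classic Perona-Malik scheme, where diffusion is damped across large intensity jumps so edges survive smoothing. This illustrates the general edge-preserving principle only; the paper's filter uses an edge-stopping term based on higher-order statistics, which is not reproduced here:

```python
import math

def perona_malik_step(img, kappa=10.0, dt=0.2):
    """One explicit time step of classic Perona-Malik anisotropic diffusion
    on a 2-D intensity grid.

    The edge-stopping function g() shrinks the flux across large
    differences, so strong edges diffuse far less than small-amplitude
    noise.  The 4-neighbour flux form conserves total intensity.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    g = lambda d: math.exp(-(d / kappa) ** 2)  # edge-stopping function
    for y in range(h):
        for x in range(w):
            flux = 0.0
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                y2, x2 = y + dy, x + dx
                if 0 <= y2 < h and 0 <= x2 < w:
                    d = img[y2][x2] - img[y][x]  # difference toward neighbour
                    flux += g(abs(d)) * d        # large jumps diffuse less
            out[y][x] = img[y][x] + dt * flux
    return out
```

In practice the step is iterated, with kappa tuned to the noise level so that tissue boundaries sit above the damping threshold.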
Methods for Data-based Delineation of Spatial Regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, John E.
In data analysis, it is often useful to delineate or segregate areas of interest from the general population of data in order to concentrate further analysis efforts on smaller areas. Three methods are presented here for automatically generating polygons around spatial data of interest. Each method addresses a distinct data type. These methods were developed for and implemented in the sample planning tool called Visual Sample Plan (VSP). Method A is used to delineate areas of elevated values in a rectangular grid of data (raster). The data used for this method are spatially related. Although VSP uses data from a kriging process for this method, it will work for any type of data that is spatially coherent and appears on a regular grid. Method B is used to surround areas of interest characterized by individual data points that are congregated within a certain distance of each other. Areas where data are "clumped" together spatially will be delineated. Method C is used to recreate the original boundary in a raster of data that separated data values from non-values. This is useful when a rectangular raster of data contains non-values (missing data) indicating that they were outside of some original boundary. If the original boundary is not delivered with the raster, this method will approximate the original boundary.
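The "clumping" idea behind Method B, grouping points whose chains of neighbours lie within a given distance, can be sketched with a union-find pass over all point pairs. This is a generic single-linkage illustration under assumed names, not VSP's implementation, and it stops short of drawing the surrounding polygons:

```python
def clump(points, dmax):
    """Group 2-D points into clumps: two points belong to the same clump
    if they are connected by a chain of neighbours, each pair of which is
    within distance dmax.

    Uses a union-find structure with path halving; O(N^2) pair checks,
    which is fine for the modest point counts of a sampling plan.
    """
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if dx * dx + dy * dy <= dmax * dmax:
                parent[find(i)] = find(j)  # merge the two clumps

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(points[i])
    return list(groups.values())
```

Each returned group could then be wrapped in a convex hull or buffered outline to produce the delineation polygon.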
Modeling the urban boundary layer
NASA Technical Reports Server (NTRS)
Bergstrom, R. W., Jr.
1976-01-01
A summary and evaluation is given of the Workshop on Modeling the Urban Boundary Layer, held in Las Vegas on May 5, 1975. Edited summaries from each of the session chairpersons are also given. The sessions were: (1) formulation and solution techniques, (2) K-theory versus higher order closure, (3) surface heat and moisture balance, (4) initialization and boundary problems, (5) nocturnal boundary layer, and (6) verification of models.
30 CFR 582.20 - Obligations and responsibilities of lessees.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 2 2013-07-01 2013-07-01 false Obligations and responsibilities of lessees. 582.20 Section 582.20 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR... the approved Delineation, Testing, or Mining Plans; and other written or oral orders or instructions...
30 CFR 582.20 - Obligations and responsibilities of lessees.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 2 2012-07-01 2012-07-01 false Obligations and responsibilities of lessees. 582.20 Section 582.20 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR... the approved Delineation, Testing, or Mining Plans; and other written or oral orders or instructions...
30 CFR 582.20 - Obligations and responsibilities of lessees.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 2 2014-07-01 2014-07-01 false Obligations and responsibilities of lessees. 582.20 Section 582.20 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR... the approved Delineation, Testing, or Mining Plans; and other written or oral orders or instructions...
Effective Factors in Interactions within Japanese EFL Classrooms
ERIC Educational Resources Information Center
Maftoon, Parviz; Ziafar, Meisam
2013-01-01
Classroom interactional patterns depend on some contextual, cultural and local factors in addition to the methodologies employed in the classroom. In order to delineate such factors, the focus of classroom interaction research needs to shift from the observables to the unobservables like teachers' and learners' psychological states and cultural…
School Psychology Services: Community-Based, First-Order Crisis Intervention during the Gulf War.
ERIC Educational Resources Information Center
Klingman, Avigdor
1992-01-01
Examines the community-based mental health preventive measures undertaken by the school psychology services in response to the missile attacks on Israel during the Gulf War. Attempts to report and delineate the major assumptions and components of some of the key interventions. (Author/NB)
Learning Styles and Continuing Medical Education.
ERIC Educational Resources Information Center
Van Voorhees, Curtis; And Others
1988-01-01
The Gregorc Style Delineator--Word Matrix was administered to 2,060 physicians in order to gain a better understanding of their participation in continuing medical education. The study showed that 63 percent preferred the concrete sequential learning style. Different style preferences may account for some of the apparent disparity between…
CONSTRUCTION OF EDUCATIONAL THEORY MODELS.
ERIC Educational Resources Information Center
MACCIA, ELIZABETH S.; AND OTHERS
THIS STUDY DELINEATED MODELS WHICH HAVE POTENTIAL USE IN GENERATING EDUCATIONAL THEORY. A THEORY MODELS METHOD WAS FORMULATED. BY SELECTING AND ORDERING CONCEPTS FROM OTHER DISCIPLINES, THE INVESTIGATORS FORMULATED SEVEN THEORY MODELS. THE FINAL STEP OF DEVISING EDUCATIONAL THEORY FROM THE THEORY MODELS WAS PERFORMED ONLY TO THE EXTENT REQUIRED TO…
37 CFR 202.10 - Pictorial, graphic, and sculptural works.
Code of Federal Regulations, 2010 CFR
2010-07-01
... sculptural works. 202.10 Section 202.10 Patents, Trademarks, and Copyrights COPYRIGHT OFFICE, LIBRARY OF... Pictorial, graphic, and sculptural works. (a) In order to be acceptable as a pictorial, graphic, or sculptural work, the work must embody some creative authorship in its delineation or form. The registrability...
Fuzzy Kernel k-Medoids algorithm for anomaly detection problems
NASA Astrophysics Data System (ADS)
Rustam, Z.; Talita, A. S.
2017-07-01
Intrusion Detection Systems (IDS) are an essential part of security systems to strengthen the security of information systems. An IDS can be used to detect abuse by intruders who try to get into the network system in order to access and exploit the available data sources. There are two approaches to IDS: Misuse Detection and Anomaly Detection (behavior-based intrusion detection). Fuzzy clustering-based methods have been widely used to solve Anomaly Detection problems. Besides using the fuzzy membership concept to assign objects to clusters, other approaches, such as combining fuzzy and possibilistic memberships or feature-weighted methods, are also used. We propose Fuzzy Kernel k-Medoids, which combines fuzzy and possibilistic membership, as a method for the anomaly detection problem; in numerical experiments it classifies the IDS benchmark KDDCup'99 data set into five different classes simultaneously, with the best performance (clustering accuracy of 90.28%) achieved using 30% of the data for training.
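The general idea of fuzzy k-medoids clustering in a kernel-induced feature space can be sketched as follows. This is an illustrative sketch only, not the paper's exact formulation (its possibilistic term is omitted); the RBF kernel, the farthest-point initialization, and all names are our assumptions.

```python
import numpy as np

def fuzzy_kernel_kmedoids(X, k, m=2.0, gamma=0.5, n_iter=50):
    """Illustrative fuzzy k-medoids in an RBF-kernel feature space.
    Kernel-induced squared distance: d^2(x, c) = K(x,x) - 2K(x,c) + K(c,c)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                     # RBF Gram matrix
    diag = K.diagonal()
    D2 = np.maximum(diag[:, None] - 2 * K + diag[None, :], 0.0)
    # deterministic farthest-point initialization of the k medoids
    medoids = [0]
    while len(medoids) < k:
        medoids.append(int(D2[:, medoids].min(axis=1).argmax()))
    medoids = np.array(medoids)
    for _ in range(n_iter):
        d2 = np.maximum(D2[:, medoids], 1e-12)
        u = d2 ** (-1.0 / (m - 1.0))            # fuzzy memberships, FCM-style
        u /= u.sum(axis=1, keepdims=True)
        # each medoid moves to the point minimizing its weighted distance sum
        new = np.array([((u[:, j] ** m) @ D2).argmin() for j in range(k)])
        if np.array_equal(new, medoids):
            break
        medoids = new
    labels = D2[:, medoids].argmin(axis=1)
    return medoids, u, labels
```

On two well-separated point clouds this recovers the two groups; the kernel distance matrix `D2` is computed once, so each iteration is a cheap matrix operation.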
Reduced Diversity of Life around Proxima Centauri and TRAPPIST-1
NASA Astrophysics Data System (ADS)
Lingam, Manasvi; Loeb, Abraham
2017-09-01
The recent discovery of potentially habitable exoplanets around Proxima Centauri and TRAPPIST-1 has attracted much attention due to their potential for hosting life. We delineate a simple model that accurately describes the evolution of biological diversity on Earth. Combining this model with constraints on atmospheric erosion and the maximal evolutionary timescale arising from the star’s lifetime, we arrive at two striking conclusions: (I) Earth-analogs orbiting low-mass M-dwarfs are unlikely to be inhabited, and (II) K-dwarfs and some G-type stars are potentially capable of hosting more complex biospheres than the Earth. Hence, future searches for biosignatures may have higher chances of success when targeting planets around K-dwarf stars.
NASA Astrophysics Data System (ADS)
Minguzzi, E.
2010-09-01
Every time function on spacetime gives a (continuous) total preordering of the spacetime events which respects the notion of causal precedence. The problem of the existence of a (semi-)time function on spacetime and the problem of recovering the causal structure starting from the set of time functions are studied. It is pointed out that these problems have an analog in the field of microeconomics known as utility theory. In a chronological spacetime the semi-time functions correspond to the utilities for the chronological relation, while in a K-causal (stably causal) spacetime the time functions correspond to the utilities for the K + relation (Seifert’s relation). By exploiting this analogy, we are able to import some mathematical results, most notably Peleg’s and Levin’s theorems, to the spacetime framework. As a consequence, we prove that a K-causal (i.e. stably causal) spacetime admits a time function and that the time or temporal functions can be used to recover the K + (or Seifert) relation which indeed turns out to be the intersection of the time or temporal orderings. This result tells us in which circumstances it is possible to recover the chronological or causal relation starting from the set of time or temporal functions allowed by the spacetime. Moreover, it is proved that a chronological spacetime in which the closure of the causal relation is transitive (for instance a reflective spacetime) admits a semi-time function. Along the way a new proof avoiding smoothing techniques is given that the existence of a time function implies stable causality, and a new short proof of the equivalence between K-causality and stable causality is given which takes advantage of Levin’s theorem and smoothing techniques.
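The correspondence described above can be written compactly. The notation below is an interpretive sketch of the abstract's statements, not the paper's exact formulation: a time function increases strictly along the K⁺ (Seifert) order, and the recovery result says K⁺ is the intersection of the total preorders the time functions induce.

```latex
% Time function: continuous and strictly increasing along causal precedence
t \colon M \to \mathbb{R}, \qquad
(p,q) \in K^{+},\; p \neq q \;\Longrightarrow\; t(p) < t(q).
% Recovery of the Seifert/K^{+} relation as the intersection of the
% total preorders induced by the family \mathcal{T} of time functions:
K^{+} \;=\; \bigcap_{t \in \mathcal{T}}
  \bigl\{\, (p,q) \in M \times M : t(p) \le t(q) \,\bigr\}.
```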
Hierarchal Genetic Stratigraphy: A Framework for Paleoceanography
NASA Astrophysics Data System (ADS)
Busch, R. M.; West, R. R.
1987-04-01
A detailed, genetic stratigraphic framework for paleoceanographic studies can be derived by describing, correlating, interpreting, and predicting stratigraphic sequences relative to a hierarchy of their constituent time-stratigraphic transgressive-regressive units ("T-R units"). T-R unit hierarchies are defined and correlated using lithostratigraphic and paleoecologic data, but correlations can be enhanced or "checked" (tested to confirm or deny) with objective biostratigraphic, magnetostratigraphic, or chemostratigraphic data. Such chronostratigraphies can then be bracketed by radiometric ages, so that average periodicities for T-R units can be calculated and a hierarchal geochronology derived. T-R units are inferred to be the net depositional result of eustatic cycles of sea level change and can be differentiated from autocyclic deepening-shallowing units because the latter are noncorrelative intrabasinally. Boundaries between T-R units are conformable or unconformable "genetic surfaces" of two types: transgressive surfaces and "climate change surfaces". The latter are useful for correlating minor transgressive phases through nonmarine intervals, thereby deriving information linking paleoclimatic and paleoceanographic processes. Permo-Carboniferous sequences can be analyzed relative to a hierarchy of six scales of genetic T-R units having periodicities of 225-300 m.y. (first order), 20-90 m.y. (second order), 7-13 m.y. (third-order), 0.6-3.6 m.y. (fourth order), 300-500 × 10³ years (fifth order), and 50-130 × 10³ years or less (sixth-order). Paleogeographic maps for the time of maximum transgression ("transgressive apex") of successive fifth-order T-R units (5-25 m thick) in the Glenshaw Formation (Upper Pennsylvanian, Northern Appalachian Basin) delineate delta lobes, embayments, islands, and linear seaways. 
Relative extent of marine inundation on the fifth-order maps was used to delineate fourth-order T-R units, and the fourth-order T-R units constitute the transgressive half of a third-order T-R unit. This third-, fourth-, and fifth-order hierarchy is correlated more than 1200 km (750 miles) to the Western Interior "Basin," and is confirmed with limited objective biostratigraphy.
An Assessment of Fine Particulate (PM2.5) Air Pollution in Jeddah, Saudi Arabia
NASA Astrophysics Data System (ADS)
Nayebare, S. R.; Khwaja, H. A.; Aburizaiza, O. S.; Siddique, A.; Zeb, J.; Hussain, M. M.; Khatib, F.; Blake, D. R.; Carpenter, D. O.
2017-12-01
We assessed the levels and chemical composition of PM2.5 in Jeddah and delineated its sources to estimate the anthropogenic influence. Sampling was done in four cycles from April 8, 2013 to February 18, 2014. PM2.5 samples were analyzed for black carbon (BC), trace elements (TEs) and water-soluble ionic species (IS). Sources were delineated by mass reconstruction, enrichment factor (EF) analysis, and positive matrix factorization (PMF). The 24-h PM2.5 levels showed seasonal variability, with the mean PM2.5 per cycle (cycle 1: 58.8±25.0, cycle 2: 36.2±12.3, cycle 3: 33.9±9.1, and cycle 4: 38.0±17.7 µg/m3) exceeding the WHO guideline (25.0 µg/m3). Overall, BC explained 3.61%, 5.92%, 7.15% and 6.51% of PM2.5 during cycles 1-4, respectively, but delta-C levels were below zero, which excluded biomass burning as a PM2.5 source. IS were mostly SO42-, NO3-, NH4+, Na+ and K+, characteristic of industrial and vehicular emissions. From mass reconstruction, BC, TEs and IS collectively explained 73.6-89.5% of PM2.5. EF analysis defined two broad categories of TEs: anthropogenic (Ni, V, Cu, Zn, Cl, Pb, S, Lu and Br) and earth-crust derived (Al, Si, Ti, Mg, K, Fe, Sr, Mn, Ca, Na and Cr). The anthropogenic TEs are mostly of industrial and vehicular origin. PMF broadly defined four major sources of PM2.5: fossil-fuel combustion (36.0%), soil (34.1%), sea spray (15.4%) and vehicular emissions (14.5%). The results show a major anthropogenic influence related to vehicular and industrial emissions, and further stress the need for more research to fully delineate PM2.5 sources in Jeddah.
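PMF, as used above, decomposes the samples-by-species concentration matrix into non-negative source contributions and source profiles, X ≈ G·F, with residuals weighted by measurement uncertainty. As a rough illustration of the factorization step only, here is an unweighted non-negative matrix factorization via the Lee-Seung multiplicative updates; this is a stand-in sketch, not EPA PMF's uncertainty-weighted least squares, and all names are ours.

```python
import numpy as np

def nmf(X, k, n_iter=2000, seed=0):
    """Unweighted non-negative factorization X ~= G @ F by multiplicative
    updates. G: (samples x factors) source contributions; F: (factors x
    species) source profiles. Real PMF additionally weights residuals by
    per-measurement uncertainties, which this sketch omits."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    G = rng.random((n, k)) + 0.1
    F = rng.random((k, p)) + 0.1
    eps = 1e-9                      # guards against division by zero
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F
```

The multiplicative form keeps both factors non-negative at every step, which is what lets the rows of F be read as chemical source profiles.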
Electrocardiogram ST-Segment Morphology Delineation Method Using Orthogonal Transformations
2016-01-01
Differentiation between ischaemic and non-ischaemic transient ST segment events of long term ambulatory electrocardiograms is a persisting weakness in present ischaemia detection systems. Traditional ST segment level measuring is not a sufficiently precise technique due to the single point of measurement and severe noise which is often present. We developed a robust noise resistant orthogonal-transformation based delineation method, which allows tracing the shape of transient ST segment morphology changes from the entire ST segment in terms of diagnostic and morphologic feature-vector time series, and also allows further analysis. For these purposes, we developed a new Legendre Polynomials based Transformation (LPT) of ST segment. Its basis functions have similar shapes to typical transient changes of ST segment morphology categories during myocardial ischaemia (level, slope and scooping), thus providing direct insight into the types of time domain morphology changes through the LPT feature-vector space. We also generated new Karhunen-Loève Transformation (KLT) ST segment basis functions using a robust covariance matrix constructed from the ST segment pattern vectors derived from the Long Term ST Database (LTST DB). As for the delineation of significant transient ischaemic and non-ischaemic ST segment episodes, we present a study on the representation of transient ST segment morphology categories, and an evaluation study on the classification power of the KLT- and LPT-based feature vectors to classify between ischaemic and non-ischaemic ST segment episodes of the LTST DB. Classification accuracy using the KLT and LPT feature vectors was 90% and 82%, respectively, when using the k-Nearest Neighbors (k = 3) classifier and 10-fold cross-validation. New sets of feature-vector time series for both transformations were derived for the records of the LTST DB which is freely available on the PhysioNet website and were contributed to the LTST DB.
The KLT and LPT present new possibilities for human-expert diagnostics, and for automated ischaemia detection. PMID:26863140
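The idea of representing ST-segment morphology by orthogonal-polynomial coefficients can be sketched with NumPy's Legendre routines. This illustrates only the projection step, not the authors' full LPT pipeline: fitting the segment over a normalized abscissa in [-1, 1] yields coefficients whose first three terms track ST level, slope, and scooping (curvature), matching the three morphology categories named above.

```python
import numpy as np
from numpy.polynomial import legendre

def st_legendre_features(st_samples, deg=2):
    """Least-squares projection of one ST-segment window onto Legendre
    polynomials P0..P_deg over a normalized abscissa in [-1, 1].
    coeffs[0] ~ ST level, coeffs[1] ~ ST slope, coeffs[2] ~ ST scooping."""
    x = np.linspace(-1.0, 1.0, len(st_samples))
    return legendre.legfit(x, np.asarray(st_samples, float), deg)

# synthetic window: 0.10 mV ST level with a linear upslope, no scooping
x = np.linspace(-1.0, 1.0, 64)
coeffs = st_legendre_features(0.10 + 0.05 * x)
```

Because the Legendre polynomials are orthogonal on [-1, 1], each coefficient captures one morphology component nearly independently of the others, which is what makes the feature-vector time series interpretable.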
Biochemical Imaging of Gliomas Using MR Spectroscopic Imaging for Radiotherapy Treatment Planning
NASA Astrophysics Data System (ADS)
Heikal, Amr Ahmed
This thesis discusses the main obstacles facing wide clinical implementation of magnetic resonance spectroscopic imaging (MRSI) as a tumor delineation tool for radiotherapy treatment planning, particularly for gliomas. These main obstacles are identified as 1. observer bias and poor interpretational reproducibility of the results of MRSI scans, and 2. the long scan times required to conduct MRSI scans. An examination of an existing user-independent MRSI tumor delineation technique known as the choline-to-NAA index (CNI) is conducted to assess its utility in providing a tool for reproducible interpretation of MRSI results. While working with spatial resolutions typically twice those on which the CNI model was originally designed, a region of statistical uncertainty was discovered between the tumor and normal tissue populations, and as such a modification to the CNI model was introduced to clearly identify that region. To address the issue of long scan times, a series of studies was conducted to adapt a scan acceleration technique, compressed sensing (CS), to work with MRSI and to quantify the effects of such a novel technique on the modulation transfer function (MTF), an important quantitative imaging metric. The studies included the development of the first phantom-based method of measuring the MTF for MRSI data, a study of the correlation between the k-space sampling patterns used for compressed sensing and the resulting MTFs, and the introduction of a technique circumventing some of the side-effects of compressed sensing by exploiting the conjugate symmetry property of k-space. The work in this thesis provides two essential steps towards wide clinical implementation of MRSI-based tumor delineation. The proposed modifications to the CNI method coupled with the application of CS to MRSI address the two main obstacles outlined. However, there continues to be room for improvement and questions that need to be answered by future research.
Schneiderman, M A; Sharma, A K; Mahanama, K R; Locke, D C
1988-01-01
Vitamin K1 (phylloquinone) is extracted from commercial soy protein-based and milk-based powdered infant formulas by using supercritical fluid extraction with CO2 at 8000 psi and 60 degrees C. Quantitative extraction requires only 15 min, and does not suffer from the problems associated with conventional solvent extraction of lipophilic materials from media such as formulas. Vitamin K1 is determined in the extracts by using reverse-phase liquid chromatography (LC) with reductive mode electrochemical detection at a silver electrode polarized at -1.1 V vs SCE. LC run time is 9 min. The minimum detectable quantity is 80 pg, and response is linear over at least 5 orders of magnitude. Recovery of vitamin K1 from a milk-based powdered formula was 95.6% with RSD of 7.4%, and from a soy protein-based product, 94.4% recovery with RSD of 6.5%.
On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems
Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...
2015-10-30
In some application cases, the solutions of combinatorial optimization problems on graphs must satisfy an additional vertex-size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete; thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial-time algorithm for DalkS when k is bounded by some constant c. We also present two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n−1)/(k−1), where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O(n^δ), for some δ < 1/3.
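Since the exactly-k variant is NP-complete, exhaustive search over vertex subsets is the straightforward (exponential-time) baseline. A tiny illustrative sketch on an undirected weighted graph follows; the function name and the dict-of-dicts adjacency format are our assumptions, not the paper's.

```python
import itertools

def min_st_cut_exactly_k(adj, s, t, k):
    """Minimum s-t cut whose s-side contains exactly k vertices, by brute
    force over vertex subsets. Exponential in |V| (consistent with the
    NP-completeness result above), so only sensible for tiny graphs.
    adj: {u: {v: weight}} with every undirected edge listed both ways."""
    others = [v for v in adj if v not in (s, t)]
    best_w, best_side = float("inf"), None
    for extra in itertools.combinations(others, k - 1):
        side = {s, *extra}                       # candidate s-side, |side| = k
        w = sum(wt for u in side
                for v, wt in adj[u].items() if v not in side)
        if w < best_w:
            best_w, best_side = w, side
    return best_w, best_side
```

On a 4-vertex example with light edges s-a and b-t (weight 1) and heavy edges s-b and a-t (weight 5), the best 2-vertex s-side is {s, b} with cut weight 2.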
Some Properties and Uses of Torsional Overlap Integrals
NASA Astrophysics Data System (ADS)
Mekhtiev, Mirza A.; Hougen, Jon T.
1998-01-01
The first diagonalization step in a rho-axis-method treatment of methyl-top internal rotation problems involves finding eigenvalues and eigenvectors of a torsional Hamiltonian, which depends on the rotational projection quantum number K as a parameter. Traditionally the torsional quantum number vt = 0, 1, 2, ... is assigned to eigenfunctions of given K in order of increasing energy. In this paper we propose an alternative labeling scheme, using the torsional quantum number vT, which is based on properties of the K-dependent torsional overlap integrals
Reijnders, Margot R F; Janowski, Robert; Alvi, Mohsan; Self, Jay E; van Essen, Ton J; Vreeburg, Maaike; Rouhl, Rob P W; Stevens, Servi J C; Stegmann, Alexander P A; Schieving, Jolanda; Pfundt, Rolph; van Dijk, Katinke; Smeets, Eric; Stumpel, Connie T R M; Bok, Levinus A; Cobben, Jan Maarten; Engelen, Marc; Mansour, Sahar; Whiteford, Margo; Chandler, Kate E; Douzgou, Sofia; Cooper, Nicola S; Tan, Ene-Choo; Foo, Roger; Lai, Angeline H M; Rankin, Julia; Green, Andrew; Lönnqvist, Tuula; Isohanni, Pirjo; Williams, Shelley; Ruhoy, Ilene; Carvalho, Karen S; Dowling, James J; Lev, Dorit L; Sterbova, Katalin; Lassuthova, Petra; Neupauerová, Jana; Waugh, Jeff L; Keros, Sotirios; Clayton-Smith, Jill; Smithson, Sarah F; Brunner, Han G; van Hoeckel, Ceciel; Anderson, Mel; Clowes, Virginia E; Siu, Victoria Mok; DDD study, The; Selber, Paulo; Leventer, Richard J; Nellaker, Christoffer; Niessing, Dierk; Hunt, David; Baralle, Diana
2018-01-01
Background: De novo mutations in PURA have recently been described to cause PURA syndrome, a neurodevelopmental disorder characterised by severe intellectual disability (ID), epilepsy, feeding difficulties and neonatal hypotonia. Objectives: To delineate the clinical spectrum of PURA syndrome and study genotype-phenotype correlations. Methods: Diagnostic or research-based exome or Sanger sequencing was performed in individuals with ID. We systematically collected clinical and mutation data on newly ascertained PURA syndrome individuals, evaluated data of previously reported individuals and performed a computational analysis of photographs. We classified mutations based on predicted effect using 3D in silico models of crystal structures of Drosophila-derived Pur-alpha homologues. Finally, we explored genotype-phenotype correlations by analysis of both recurrent mutations as well as mutation classes. Results: We report mutations in PURA (purine-rich element binding protein A) in 32 individuals, the largest cohort described so far. Evaluation of clinical data, including 22 previously published cases, revealed that all have moderate to severe ID and neonatal-onset symptoms, including hypotonia (96%), respiratory problems (57%), feeding difficulties (77%), exaggerated startle response (44%), hypersomnolence (66%) and hypothermia (35%). Epilepsy (54%) and gastrointestinal (69%), ophthalmological (51%) and endocrine problems (42%) were observed frequently. Computational analysis of facial photographs showed subtle facial dysmorphism. No strong genotype-phenotype correlation was identified by subgrouping mutations into functional classes. Conclusion: We delineate the clinical spectrum of PURA syndrome with the identification of 32 additional individuals. The identification of one individual through targeted Sanger sequencing points towards the clinical recognisability of the syndrome. 
Genotype-phenotype analysis showed no significant correlation between mutation classes and disease severity. PMID:29097605
NASA Technical Reports Server (NTRS)
Swedlow, J. L.
1976-01-01
An approach is described for singularity computations based on a numerical method for elastoplastic flow to delineate radial and angular distribution of field quantities and measure the intensity of the singularity. The method is applicable to problems in solid mechanics and lends itself to certain types of heat flow and fluid motion studies. Its use is not limited to linear, elastic, small strain, or two-dimensional situations.
Treatment of Pediculosis Capitis
Verma, Prashant; Namdeo, Chaitanya
2015-01-01
An endeavour to delineate the salient details of the treatment of head lice infestation has been made in the present article. Treatment modalities including over the counter permethrin and pyrethrin, and prescription medicines, including malathion, lindane, benzyl alcohol, spinosad are discussed. Salient features of alternative medicine and physical treatment modalities are outlined. The problem of resistance to treatment has also been taken cognizance of. PMID:26120148
Meeting the Mental Health Needs of Children and Youth: Using Evidence-Based Education Worldwide
ERIC Educational Resources Information Center
Bullock, Lyndal M.; Zolkoski, Staci M.; Estes, Mary Bailey
2015-01-01
In this paper, we review the factors that impact the mental health of children and youth, highlight the magnitude of the mental health problem based on data from selected countries, emphasise the influence that culture has on the development of children and youth, and delineate several strategies and programmes proven to be effective when working…
ERIC Educational Resources Information Center
Andrews, Philippa, Ed.
This conference brought together a wide range of staff interested in the teaching of educational management. The contributing lecturers were chosen to highlight problems of management rather than of teaching. Lord Morris of Grasmere delineated the market need while J. M. Fearn outlined the Scottish system from the perspective of central…
Using structure locations as a basis for mapping the wildland urban interface
Avi Bar-Massada; Susan I. Stewart; Roger B. Hammer; Miranda H. Mockrin; Volker C. Radeloff
2013-01-01
The wildland urban interface (WUI) delineates the areas where wildland fire hazard most directly impacts human communities and threatens lives and property, and where houses exert the strongest influence on the natural environment. Housing data are a major problem for WUI mapping. When housing data are zonal, the concept of a WUI neighborhood can be captured easily in...
Gaming disorder: Its delineation as an important condition for diagnosis, management, and prevention
Saunders, John B.; Hao, Wei; Long, Jiang; King, Daniel L.; Mann, Karl; Fauth-Bühler, Mira; Rumpf, Hans-Jürgen; Bowden-Jones, Henrietta; Rahimi-Movaghar, Afarin; Chung, Thomas; Chan, Elda; Bahar, Norharlina; Achab, Sophia; Lee, Hae Kook; Potenza, Marc; Petry, Nancy; Spritzer, Daniel; Ambekar, Atul; Derevensky, Jeffrey; Griffiths, Mark D.; Pontes, Halley M.; Kuss, Daria; Higuchi, Susumu; Mihara, Satoko; Assangangkornchai, Sawitri; Sharma, Manoj; Kashef, Ahmad El; Ip, Patrick; Farrell, Michael; Scafato, Emanuele; Carragher, Natacha; Poznyak, Vladimir
2017-01-01
Online gaming has greatly increased in popularity in recent years, and with this has come a multiplicity of problems due to excessive involvement in gaming. Gaming disorder, both online and offline, has been defined for the first time in the draft of 11th revision of the International Classification of Diseases (ICD-11). National surveys have shown prevalence rates of gaming disorder/addiction of 10%–15% among young people in several Asian countries and of 1%–10% in their counterparts in some Western countries. Several diseases related to excessive gaming are now recognized, and clinics are being established to respond to individual, family, and community concerns, but many cases remain hidden. Gaming disorder shares many features with addictions due to psychoactive substances and with gambling disorder, and functional neuroimaging shows that similar areas of the brain are activated. Governments and health agencies worldwide are seeking for the effects of online gaming to be addressed, and for preventive approaches to be developed. Central to this effort is a need to delineate the nature of the problem, which is the purpose of the definitions in the draft of ICD-11. PMID:28816494
Pilot Fatigue and Circadian Desynchronosis
NASA Technical Reports Server (NTRS)
1981-01-01
Pilot fatigue and circadian desynchronosis, their significance to air transport safety, and research approaches were examined. There is a need for better data on sleep, activity, and other pertinent factors from pilots flying a variety of demanding schedules. Simulation studies of flight crew performance should be utilized to determine the degree of fatigue induced by demanding schedules, to delineate more precisely the factors responsible for performance decrements in flight, and to test solutions proposed to resolve problems induced by fatigue and desynchronosis. It was concluded that there is a safety problem of uncertain magnitude due to transmeridian flying and a potential problem due to fatigue associated with various factors found in air transport operations.
Johnson, Oliver K.; Kurniawan, Christian
2018-02-03
Properties closures delineate the theoretical objective space for materials design problems, allowing designers to make informed trade-offs between competing constraints and target properties. In this paper, we present a new algorithm called hierarchical simplex sampling (HSS) that approximates properties closures more efficiently and faithfully than traditional optimization based approaches. By construction, HSS generates samples of microstructure statistics that span the corresponding microstructure hull. As a result, we also find that HSS can be coupled with synthetic polycrystal generation software to generate diverse sets of microstructures for subsequent mesoscale simulations. Finally, by more broadly sampling the space of possible microstructures, it is anticipated that such diverse microstructure sets will expand our understanding of the influence of microstructure on macroscale effective properties and inform the construction of higher-fidelity mesoscale structure-property models.
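The abstract does not spell out the HSS algorithm itself, but its basic primitive, sampling microstructure statistics on a probability simplex (volume fractions are non-negative and sum to one), can be sketched with a Dirichlet draw. The property mapping below is a hypothetical linear rule-of-mixtures with made-up numbers, used only to show how sampled statistics trace out a cloud inside a properties closure.

```python
import numpy as np

def sample_simplex(n_points, dim, seed=0):
    """Uniform samples w from the (dim-1)-simplex: w >= 0, sum(w) = 1.
    Dirichlet(1, ..., 1) is exactly the uniform law on the simplex."""
    rng = np.random.default_rng(seed)
    return rng.dirichlet(np.ones(dim), size=n_points)

# hypothetical example: 3 microstructure "states", 2 effective properties
# assumed linear in the state fractions (a first-order rule of mixtures)
state_props = np.array([[200.0, 10.0],   # illustrative property pairs
                        [120.0, 35.0],
                        [ 80.0, 60.0]])
W = sample_simplex(500, 3)
closure_cloud = W @ state_props          # points inside the closure
```

Every sampled point is a convex combination of the state properties, so the cloud stays inside the convex hull of the three rows, which is what a first-order closure looks like under these assumptions.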
Does STES-Oriented Science Education Promote 10th-Grade Students' Decision-Making Capability?
NASA Astrophysics Data System (ADS)
Levy Nahum, Tami; Ben-Chaim, David; Azaiza, Ibtesam; Herskovitz, Orit; Zoller, Uri
2010-07-01
Today's society is continuously coping with sustainability-related complex issues in the Science-Technology-Environment-Society (STES) interfaces. In those contexts, the need and relevance of the development of students' higher-order cognitive skills (HOCS) such as question-asking, critical-thinking, problem-solving and decision-making capabilities within science teaching have been argued by several science educators for decades. Three main objectives guided this study: (1) to establish "base lines" for HOCS capabilities of 10th grade students (n = 264) in the Israeli educational system; (2) to delineate, within this population, two different groups with respect to their decision-making capability, science-oriented (n = 142) and non-science (n = 122) students, Groups A and B, respectively; and (3) to assess the pre-post development/change of students' decision-making capabilities via STES-oriented HOCS-promoting curricular modules entitled Science, Technology and Environment in Modern Society (STEMS). A specially developed and validated decision-making questionnaire was used for obtaining a research-based response to the guiding research questions. Our findings suggest that long-term, persistent application of purposeful, decision-making-promoting teaching strategies is needed in order to positively affect high-school students' decision-making ability. The need for science teachers' involvement in the development of their students' HOCS capabilities is thus apparent.
Thermal State-of-Charge in Solar Heat Receivers
NASA Technical Reports Server (NTRS)
Hall, Carsie A., Jr.; Glakpe, Emmanuel K.; Cannon, Joseph N.; Kerslake, Thomas W.
1998-01-01
A theoretical framework is developed to determine the so-called thermal state-of-charge (SOC) in solar heat receivers employing encapsulated phase change materials (PCMs) that undergo cyclic melting and freezing. The present problem is relevant to space solar dynamic power systems that would typically operate in low Earth orbit (LEO). The solar heat receiver is integrated into a closed-cycle Brayton engine that produces electric power during sunlight and eclipse periods of the orbit cycle. The concepts of available power and virtual source temperature, both on a finite-time basis, are used as the basis for determining the SOC. Analytic expressions for the available power crossing the aperture plane of the receiver, available power stored in the receiver, and available power delivered to the working fluid are derived, all of which are related to the SOC through measurable parameters. Lower and upper bounds on the SOC are proposed in order to delineate absolute limiting cases for a range of input parameters (orbital, geometric, etc.). SOC characterization is also performed in the subcooled, two-phase, and superheat regimes. Finally, a previously-developed physical and numerical model of the solar heat receiver component of NASA Lewis Research Center's Ground Test Demonstration (GTD) system is used in order to predict the SOC as a function of measurable parameters.
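The notion of an availability-based state-of-charge can be illustrated with a lumped PCM model. This is our own simplification, not the paper's receiver model: the stored availability (exergy) combines a standard sensible-heat exergy term with a latent term discounted by the Carnot factor at the melt temperature, and the SOC normalizes it by its fully charged value. All parameter names and numbers are illustrative assumptions.

```python
import math

def stored_availability(mass, cp, T, latent, T_melt, melt_frac, T0):
    """Exergy (availability) stored in an encapsulated PCM, in joules.
    Sensible: m*cp*(T - T0 - T0*ln(T/T0)); latent: m*x*L*(1 - T0/T_melt).
    Temperatures in kelvin; a lumped illustrative model only."""
    sensible = mass * cp * (T - T0 - T0 * math.log(T / T0))
    latent_part = mass * melt_frac * latent * (1.0 - T0 / T_melt)
    return sensible + latent_part

def thermal_soc(mass, cp, T, latent, T_melt, melt_frac, T0, T_full):
    """State-of-charge: stored availability normalized by the fully
    charged (all PCM melted, at temperature T_full) availability."""
    now = stored_availability(mass, cp, T, latent, T_melt, melt_frac, T0)
    full = stored_availability(mass, cp, T_full, latent, T_melt, 1.0, T0)
    return now / full
```

By construction the SOC is 0 at the dead state (T = T0, nothing melted) and 1 at full charge, with partially melted, partially heated states falling strictly in between.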
Optical observational programs at the Indian Institute of Astrophysics
NASA Astrophysics Data System (ADS)
Singh, Jagdev; Ravindra, B.
The Indian Institute of Astrophysics has been making optical observations of the Sun for more than a century, using the same instruments throughout so that long-term variations of solar magnetic fields can be studied: images of the Sun in the continuum to study the photosphere, and in the Ca-K and H-alpha lines to study the chromosphere. Digitizers have been developed, using uniform light sources, imaging optics free of vignetting over the required FOV, and large-format 4K×4K CCD cameras, to digitize these data for scientific studies. At the Solar Tower Telescope we have performed very high resolution spectroscopic observations around the Ca-K line to investigate its variations and delineate the contribution of various features to solar-cycle variations. Solar coronal studies have been carried out during total solar eclipses and with a coronagraph to study coronal heating. Here we discuss the systematic temporal variations observed in the green and red emission-line profiles using high-spectral- and temporal-resolution observations during the 2006, 2009 and 2010 total solar eclipses. The TWIN telescope, a new facility, has been fabricated and installed at the Kodaikanal Observatory to continue synoptic observations of the Sun, and a space-based coronagraph is being designed and fabricated in collaboration with various ISRO laboratories (LEOS, ISAC and SAC) and USO. In this article we summarize the results of the optical observational programs carried out at the Kodaikanal Observatory and during eclipse expeditions in which the authors have played a leading role. This review does not cover all observational programs carried out at the Kodaikanal Observatory.
Ganguly, Sreerupa; Mukherjee, Amarshi; Mazumdar, Budhaditya; Ghosh, Amar N.; Banerjee, Kalyan K.
2014-01-01
Vibrio cholerae cytolysin/hemolysin (VCC) is an amphipathic 65-kDa β-pore-forming toxin with a C-terminal β-prism lectin domain. Because deletion or point mutation of the lectin domain seriously compromises hemolytic activity, it is thought that carbohydrate-dependent interactions play a critical role in membrane targeting of VCC. To delineate the contributions of the cytolysin and lectin domains in pore formation, we used wild-type VCC, 50-kDa VCC (VCC50) without the lectin domain, and mutant VCC D617A with no carbohydrate-binding activity. VCC and its two variants moved to the erythrocyte stroma with apparent association constants on the order of 10⁷ M⁻¹. However, loss of the lectin domain severely reduced the efficiency of self-association of the VCC monomer into the β-barrel heptamer in the synthetic lipid bilayer, from ∼83 to 27%. Notably, inactivation of the carbohydrate-binding activity by the D617A mutation only marginally reduced oligomerization, to ∼77%. Oligomerization of VCC50 was temperature-insensitive; by contrast, VCC self-assembly increased with increasing temperature, suggesting that the process is driven by entropy and opposed by enthalpy. Asialofetuin, the β1-galactosyl-terminated glycoprotein inhibitor of VCC-induced hemolysis, promoted oligomerization of 65-kDa VCC to a species that resembled the membrane-inserted heptamer in stoichiometry and morphology but had reduced global amphipathicity. In conclusion, we propose (i) that the β-prism lectin domain facilitated toxin assembly by producing entropy during relocation in the heptamer and (ii) that glycoconjugates inhibited VCC by promoting its assembly into a water-soluble, less amphipathic oligomer variant with reduced ability to penetrate the bilayer. PMID: 24356964
The removal efficiency of heavy metal ions (cadmium(II) – Cd(II), cobalt(II) – Co(II), nickel(II) – Ni(II), and copper(II) – Cu(II)) by potassium ferrate(VI) (K2FeO4, Fe(VI)), was studied as a function of added amount of Fe(VI) (or Fe) and varying pH. At pH = 6.6, the effective r...
Yeast Genetics for Delineating Bax/Bcl Pathway of Cell Death Regulation.
1998-07-01
differences in the copy number of the episomal plasmid from which... The cytosol also became electron dense ("cytosolic condensation"), similar to... Cell Death & Differ. 3, 229-236. (1993). The C. elegans cell death gene ced-3 encodes a protein similar to... White, K., Tahaoglu, E., and Steller, H. (1996)... components may be used in different functional contexts. Similar modules might exist in metazoan apoptotic pathways. Even though yeast does not contain
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Lerch, Bradley A.; Sellers, Cory
2013-01-01
In this paper, time- and/or rate-dependent deformation regions are experimentally mapped out as a function of temperature. It is clearly demonstrated that the concept of a threshold stress (a stress that delineates reversible from irreversible behavior) is valid and necessary at elevated temperatures and corresponds to the classical yield stress at lower temperatures. Also, the infinitely slow modulus, Es, i.e., the elastic modulus of the material if it were loaded at an infinitely slow strain rate, and the dynamic modulus, Ed, which represents the modulus of the material if it were loaded at an infinitely fast rate, are used to delineate rate-dependent from rate-independent regions. As demonstrated, at elevated temperatures there is a significant difference between the two modulus values, indicating both significant time dependence and rate dependence. In the case of the nickel-based superalloy ME3, this behavior is also shown to be grain-size specific. Consequently, at higher temperatures viscoelastic behavior exists below k (i.e., the threshold stress), and at stresses above k the behavior is viscoplastic. Finally, a multi-mechanism, stress-partitioned viscoelastic model, capable of being consistently coupled to a viscoplastic model, is characterized over the full temperature range investigated for Ti-6-4 and ME3.
Flight control systems properties and problems, volume 1
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Johnston, D. E.
1975-01-01
This volume contains a delineation of fundamental and mechanization-specific flight control characteristics and problems gleaned from many sources and spanning a period of over two decades. It is organized to first present and discuss some fundamental, generic problems of closed-loop flight control systems involving numerator characteristics (quadratic dipoles, non-minimum-phase roots, and intentionally introduced zeros). Next, the principal elements of the largely mechanical primary flight control system are reviewed, with particular emphasis on the influence of nonlinearities. The characteristics and problems of augmentation (damping, stability, and feel) system mechanizations are then dealt with. The particular idiosyncrasies of automatic control actuation and command augmentation schemes are stressed, because they constitute the major interfaces with the primary flight control system and an often highly variable vehicle response.
Launch Vehicle Design Process: Characterization, Technical Integration, and Lessons Learned
NASA Technical Reports Server (NTRS)
Blair, J. C.; Ryan, R. S.; Schutzenhofer, L. A.; Humphries, W. R.
2001-01-01
Engineering design is a challenging activity for any product. Since launch vehicles are highly complex and interconnected and have extreme energy densities, their design represents a challenge of the highest order. The purpose of this document is to delineate and clarify the design process associated with the launch vehicle for space flight transportation. The goal is to define and characterize a baseline for the space transportation design process. This baseline can be used as a basis for improving effectiveness and efficiency of the design process. The baseline characterization is achieved via compartmentalization and technical integration of subsystems, design functions, and discipline functions. First, a global design process overview is provided in order to show responsibility, interactions, and connectivity of overall aspects of the design process. Then design essentials are delineated in order to emphasize necessary features of the design process that are sometimes overlooked. Finally the design process characterization is presented. This is accomplished by considering project technical framework, technical integration, process description (technical integration model, subsystem tree, design/discipline planes, decision gates, and tasks), and the design sequence. Also included in the document are a snapshot relating to process improvements, illustrations of the process, a survey of recommendations from experienced practitioners in aerospace, lessons learned, references, and a bibliography.
Differentiating Instruction: As Easy as One, Two, Three
ERIC Educational Resources Information Center
Shepherd, Carol; Acosta-Tello, Enid
2015-01-01
Using the "Three Phase Lesson" model, teachers identify prior knowledge that the student must possess in order to be successful in learning the new concepts. Teachers then delineate specific components inherent in the concepts that need to be mastered and identify tasks which will enable the student to practice these new concepts.
The Notion of Administrative Transparency among Academic Leaderships at Jordanian Universities
ERIC Educational Resources Information Center
Jaradat, Mohammed Hasan
2013-01-01
The study aims at identifying the notion of transparency among academic leaderships at Jordanian universities. To this effect, the interview-based approach was used in order to delineate the concept of transparency. Eighty individual academic leaderships were interviewed across various schools in Jordan. Upon collection of data and information,…
Stop Disease: Diapering Procedures = Alto a las Enfermedades: Procedimientos para Cambiar Panales.
ERIC Educational Resources Information Center
California Child Care Health Program, Oakland.
In order to prevent the occurrence and spread of disease in California child care programs, this set of laminated procedure pages, in English and Spanish versions, details infant and child care procedures for safe diapering. The document delineates important rules about diapering, gives directions for making a disinfecting solution, and provides…
DOT National Transportation Integrated Search
2012-05-01
Pavement markings play an important role in providing visual guidance to motorists. They are used to delineate the intended travel path and guide drivers regarding their location on the road. In order to function properly, pavement markings must be v...
Gilbert, Stéphane; Loranger, Anne; Omary, M. Bishr
2016-01-01
Keratins are epithelial cell intermediate filament (IF) proteins that are expressed as pairs in a cell-differentiation-regulated manner. Hepatocytes express the keratin 8 and 18 pair (denoted K8/K18) of IFs, and a loss of K8 or K18, as in K8-null mice, leads to degradation of the keratin partner. We have previously reported that a K8/K18 loss in hepatocytes leads to altered cell surface lipid raft distribution and more efficient Fas receptor (FasR, also known as TNFRSF6)-mediated apoptosis. We demonstrate here that the absence of K8 or transgenic expression of the K8 G62C mutant in mouse hepatocytes reduces lipid raft size. Mechanistically, we find that the lipid raft size is dependent on acid sphingomyelinase (ASMase, also known as SMPD1) enzyme activity, which is reduced in the absence of K8/K18. Notably, the reduction of ASMase activity appears to be caused by a less efficient redistribution of surface membrane PKCδ toward lysosomes. Moreover, we delineate the lipid raft volume range that is required for an optimal FasR-mediated apoptosis. Hence, K8/K18-dependent PKCδ- and ASMase-mediated modulation of lipid raft size can explain the more prominent FasR-mediated signaling resulting from K8/K18 loss. The fine-tuning of ASMase-mediated regulation of lipid rafts might provide a therapeutic target for death-receptor-related liver diseases. PMID: 27422101
NASA Astrophysics Data System (ADS)
Biggerstaff, Michael I.; Zounes, Zackery; Addison Alford, A.; Carrie, Gordon D.; Pilkey, John T.; Uman, Martin A.; Jordan, Douglas M.
2017-08-01
A series of vertical cross sections taken through a small mesoscale convective system observed over Florida by the dual-polarimetric SMART radar were combined with VHF radiation source locations from a lightning mapping array (LMA) to examine the lightning channel propagation paths relative to the radar-observed ice alignment signatures associated with regions of negative specific differential phase (
Cha, Jihoon; Kim, Hyung-Jin; Kim, Sung Tae; Kim, Yi Kyung; Kim, Ha Youn; Park, Gyeong Min
2017-11-01
Background: Metallic dental prostheses may degrade image quality on head and neck computed tomography (CT). However, there is little information available on the use of dual-energy CT (DECT) and metal artifact reduction software (MARS) in the head and neck regions to reduce metallic dental artifacts. Purpose: To assess the usefulness of DECT with virtual monochromatic imaging and MARS to reduce metallic dental artifacts. Material and Methods: DECT was performed using fast kilovoltage (kV)-switching between 80 kV and 140 kV in 20 patients with metallic dental prostheses. CT data were reconstructed with and without MARS, and with synthesized monochromatic energy in the range of 40-140 kiloelectron volts (keV). For quantitative analysis, the artifact index of the tongue, buccal, and parotid areas was calculated for each scan. For qualitative analysis, two radiologists evaluated 70-keV and 100-keV images with and without MARS for the tongue, buccal, and parotid areas and the metallic denture. The locations and characteristics of the MARS-related artifacts, if any, were also recorded. Results: DECT with MARS markedly reduced metallic dental artifacts and improved image quality in the buccal area (P < 0.001) and the tongue (P < 0.001), but not in the parotid area. The margin and internal architecture of the metallic dentures were more clearly delineated with MARS (P < 0.001) and in the higher-energy images than in the lower-energy images (P = 0.042). MARS-related artifacts most commonly occurred in the deep center of the neck. Conclusion: DECT with MARS can reduce metallic dental artifacts and improve delineation of the metallic prosthesis and periprosthetic region.
NASA Astrophysics Data System (ADS)
Pasternack, G. B.; Hopkins, C.
2017-12-01
A river channel and its associated riparian corridor exhibit a pattern of nested, geomorphically imprinted, lateral inundation zones (IZs). Each zone plays a key role in fluvial geomorphic processes and ecological functions. Within each zone, distinct landforms (aka geomorphic or morphological units, MUs) reside at the 0.1-10 channel width scale. These features are basic units linking river corridor morphology with local ecosystem services. Objective, automated delineation of nested inundation zones and morphological units remains a significant scientific challenge. This study describes and demonstrates new, objective methods for solving this problem, using the 35-km alluvial lower Yuba River as a testbed. A detrended, high-resolution digital elevation model constructed from near-census topographic and bathymetric data was produced and used in a hypsograph analysis, a commonly used method in oceanographic studies capable of identifying slope breaks at IZ transitions. Geomorphic interpretation mindful of the river's setting was required to properly describe each IZ identified by the hypsograph analysis. Then, a 2D hydrodynamic model was used to determine what flow yields the wetted area that most closely matches each IZ domain. The model also provided meter-scale rasters of depth and velocity useful for MU mapping. Even though MUs are discharge-independent landforms, they can be revealed by analyzing their overlying hydraulics at low flows. Baseflow depth and velocity rasters are used along with a hydraulic landform classification system to quantitatively delineate in-channel bed MU types. In-channel bar and off-channel flood and valley MUs are delineated using a combination of hydraulic and geomorphic indicators, such as depth and velocity rasters for different discharges, topographic contours, NAIP imagery, and a raster of vegetation. 
The ability to objectively delineate inundation zones and morphological units in tandem allows for better informed river management and restoration strategies as well as scientific studies about abiotic-biotic linkages.
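The hypsograph step described above (cumulative area as a function of detrended elevation, with slope breaks marking inundation-zone transitions) can be sketched as follows. The bin width, the break-detection factor, and the synthetic input are illustrative assumptions, not values from the Yuba River study.

```python
# Sketch of a hypsograph analysis for inundation-zone (IZ) detection
# on a detrended DEM supplied as a flat list of elevations (m).
# Bin width and slope-break factor are illustrative choices.

def hypsograph(elevations, bin_width=0.5):
    """Return (levels, fractions): for each elevation level, the
    fraction of DEM cells at or below that level."""
    n = len(elevations)
    lo, hi = min(elevations), max(elevations)
    levels, fractions = [], []
    z = lo
    while z <= hi + 1e-9:
        fractions.append(sum(1 for e in elevations if e <= z) / n)
        levels.append(z)
        z += bin_width
    return levels, fractions

def slope_breaks(levels, fractions, factor=2.0):
    """Flag levels where the hypsograph slope (area gained per unit
    elevation) jumps by more than `factor` relative to the previous
    interval -- a candidate IZ transition."""
    breaks, prev = [], None
    for i in range(1, len(levels)):
        slope = (fractions[i] - fractions[i - 1]) / (levels[i] - levels[i - 1])
        if prev is not None and prev > 0 and slope > factor * prev:
            breaks.append(levels[i])
        prev = slope
    return breaks
```

On a synthetic valley with a narrow channel below a broad floodplain, the detected break sits at the floodplain elevation, which is the kind of transition that is then matched against modeled wetted areas to assign a discharge to each zone.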
Mineralogical and geochemical anomalous data of the K-T boundary samples
NASA Technical Reports Server (NTRS)
Miura, Y.; Shibya, G.; Imai, M.; Takaoka, N.; Saito, S.
1988-01-01
The Cretaceous-Tertiary (K-T) boundary problem has previously been discussed mainly from geological research, chiefly through fossil changes. Although geochemical bulk data on the Ir anomaly suggest an extraterrestrial origin for the K-T boundary, detailed mineralogical and geochemical study of the exact formation process, together with noble gas contents, has begun only recently. A K-T boundary sample was collected at the Kawaruppu River, Hokkaido, for comparison with the typical K-T boundary samples from Gubbio, Italy; Stevns Klint, Denmark; and El Kef, Tunisia. Experimental data on the silicas and calcites in these K-T boundary samples were obtained from X-ray unit-cell dimensions (i.e., density), ESR signals, and total linear absorption coefficients, as well as He and Ne contents. K-T boundary samples are usually complex mixtures reflecting terrestrial activity after the K-T boundary event. The mineralogical and geochemical anomalous data indicate a special terrestrial atmosphere at the K-T boundary formation, probably induced by an asteroid impact and followed by many and various terrestrial processes (especially the strong role of sea-water mixing, compared with terrestrial highland impacts and impact craters on other Earth-type planetary bodies).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noid, G; Tai, A; Liu, Y
Purpose: It is desirable to increase CT soft-tissue contrast to improve delineation of tumor target and/or surrounding organs at risk (OAR) in RT planning and delivery guidance. The purpose of this work is to investigate the use of monoenergetic decompositions obtained from dual energy (DE) CT to improve soft-tissue contrast. Methods: CT data were acquired for 5 prostate and 5 pancreas patients and a phantom with a CT Scanner (Definition AS Open, Siemens) using both sequential DE protocols and standard protocols. For the DE protocols, the scanner rapidly performs two acquisitions at 80 kVp and 140 kVp. The CT numbers of soft tissue inserts in the phantom (CTED/Gammex) were measured across the spectrum of available monoenergetic decompositions (40 to 140 keV) and compared to the standard protocol (120 kVp, 0.6 pitch, 18 mGy CTDIvol). Contrast, defined as the difference in the average CT number between target and OAR, was measured for all subjects and compared between the DE and standard protocols. Results: Monoenergetic decompositions of the phantom demonstrate an enhancement of soft-tissue contrast as the energy is decreased. For instance, relative to the 120 kVp scans the Liver ED insert increased in CT number by 25 HU while the adipose ED insert decreased by 50 HU. The lowest energy decompositions featured the highest contrast between target and OAR. For every patient, the contrast increased by decomposing at 40 keV. The average increase in contrast relative to a 120 kVp scan for prostate patients at 40 keV was 25.05±17.28 HU while for pancreas patients it was 19.21±17.39 HU. Conclusion: Low energy monoenergetic decompositions from dual-energy CT substantially increase soft-tissue contrast. At the lowest achievable monoenergetic decompositions the maximum soft-tissue contrast is achieved and the delineation of target and OAR is improved. Thus, it is beneficial to use DECT in radiation oncology. Supported by Siemens.
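The contrast metric defined in this abstract (the difference in average CT number between target and OAR, evaluated per monoenergetic decomposition) can be sketched as follows; the HU values and the dictionary layout are illustrative placeholders, not measured data from the study.

```python
# Sketch of the soft-tissue contrast comparison across monoenergetic
# decompositions. Contrast is |mean HU(target ROI) - mean HU(OAR ROI)|,
# as defined in the abstract; the numbers used below are invented.

def mean_hu(roi_values):
    """Mean CT number (HU) over an ROI given as a list of voxel values."""
    return sum(roi_values) / len(roi_values)

def contrast(target_roi, oar_roi):
    """Contrast = difference in average CT number between target and OAR."""
    return abs(mean_hu(target_roi) - mean_hu(oar_roi))

def best_energy(decompositions):
    """Given {keV: (target_roi, oar_roi)}, return the energy that
    maximizes soft-tissue contrast (per the abstract, typically the
    lowest available keV)."""
    return max(decompositions, key=lambda kev: contrast(*decompositions[kev]))
```

A quick call such as `best_energy({40: ([60, 62], [20, 22]), 140: ([40], [35])})` would pick the 40 keV decomposition, mirroring the abstract's finding that the lowest-energy decompositions feature the highest contrast.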
Forest Stand Segmentation Using Airborne LIDAR Data and Very High Resolution Multispectral Imagery
NASA Astrophysics Data System (ADS)
Dechesne, Clément; Mallet, Clément; Le Bris, Arnaud; Gouet, Valérie; Hervieu, Alexandre
2016-06-01
Forest stands are the basic units for forest inventory and mapping. Stands are large forested areas (e.g., ≥ 2 ha) of homogeneous tree species composition. The accurate delineation of forest stands is usually performed by visual analysis of human operators on very high resolution (VHR) optical images. This task is highly time-consuming and should be automated for scalability purposes. In this paper, a method based on the fusion of airborne laser scanning data (lidar) and very high resolution multispectral imagery for automatic forest stand delineation and forest land-cover database update is proposed. The multispectral images give access to the tree species whereas 3D lidar point clouds provide geometric information on the trees. Therefore, multi-modal features are computed, both at pixel and object levels. The objects are individual trees extracted from the lidar data. A supervised classification is performed at the object level on the computed features in order to coarsely discriminate the existing tree species in the area of interest. The analysis at tree level is particularly relevant since it significantly improves the tree species classification. A probability map is generated from the tree species classification and combined with the pixel-based feature map in an energy-minimization framework. The proposed energy is then minimized using a standard graph-cut method (namely QPBO with α-expansion) in order to produce a segmentation map with a controlled level of detail. Comparison with an existing forest land-cover database shows that our method provides satisfactory results both in terms of stand labelling and delineation (matching rates between 94% and 99%).
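The energy minimized by graph cuts in this abstract can be sketched in miniature: a unary data term from the species probability map plus a Potts smoothness term between neighboring pixels. A greedy ICM sweep stands in here for the QPBO/α-expansion minimizer (a deliberate simplification; a real implementation would use a dedicated graph-cut library), and the probabilities are invented for illustration.

```python
# Minimal sketch of a Potts-style segmentation energy: unary cost
# -log P(label) from a per-pixel species probability map, plus a
# constant penalty `lam` for each pair of 4-neighbors with different
# labels. Greedy ICM is a stand-in for QPBO/alpha-expansion.

import math

def energy(labels, probs, lam):
    """labels: 2D list of label strings; probs: 2D list of dicts label->prob."""
    h, w = len(labels), len(labels[0])
    e = 0.0
    for i in range(h):
        for j in range(w):
            p = max(probs[i][j].get(labels[i][j], 1e-9), 1e-9)
            e += -math.log(p)                          # unary term
            if i + 1 < h and labels[i][j] != labels[i + 1][j]:
                e += lam                               # Potts penalty (down)
            if j + 1 < w and labels[i][j] != labels[i][j + 1]:
                e += lam                               # Potts penalty (right)
    return e

def icm(probs, classes, lam=1.0, sweeps=5):
    """Start from the per-pixel argmax and greedily lower the energy."""
    h, w = len(probs), len(probs[0])
    labels = [[max(classes, key=lambda c: probs[i][j].get(c, 0.0))
               for j in range(w)] for i in range(h)]
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                best = labels[i][j]
                best_e = energy(labels, probs, lam)
                for c in classes:
                    labels[i][j] = c
                    e = energy(labels, probs, lam)
                    if e < best_e:
                        best, best_e = c, e
                labels[i][j] = best
    return labels
```

With a strong enough smoothness weight, an isolated weakly classified pixel is flipped to match its neighbors, which is exactly how the energy framework trades classification confidence against a controlled level of detail.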
Detection of Head and Neck Cancer in Surgical Specimens Using Quantitative Hyperspectral Imaging.
Lu, Guolan; Little, James V; Wang, Xu; Zhang, Hongzheng; Patel, Mihir R; Griffith, Christopher C; El-Deiry, Mark W; Chen, Amy Y; Fei, Baowei
2017-09-15
Purpose: This study investigates the feasibility of using hyperspectral imaging (HSI) to detect and delineate cancers in fresh surgical specimens of patients with head and neck cancers. Experimental Design: A clinical study was conducted to collect and image fresh surgical specimens from patients (N = 36) with head and neck cancers undergoing surgical resection. A set of machine-learning tools was developed to quantify hyperspectral images of the resected tissue in order to detect and delineate cancerous regions, which were validated by histopathologic diagnosis. More than two million reflectance spectral signatures were obtained by HSI and analyzed using machine-learning methods. The detection results of HSI were compared with autofluorescence imaging and fluorescence imaging of two vital dyes on the same specimens. Results: Quantitative HSI differentiated cancerous tissue from normal tissue in ex vivo surgical specimens with a sensitivity and specificity of 91% and 91%, respectively, which was more accurate than autofluorescence imaging (P < 0.05) or fluorescence imaging of 2-NBDG (P < 0.05) and proflavine (P < 0.05). The proposed quantification tools also generated cancer probability maps with the tumor border demarcated, which could provide real-time guidance for surgeons regarding optimal tumor resection. Conclusions: This study highlights the feasibility of using quantitative HSI as a diagnostic tool to delineate cancer boundaries in surgical specimens; the approach could be translated into clinical application with the hope of improving clinical outcomes in the future. Clin Cancer Res; 23(18); 5426-36. ©2017 American Association for Cancer Research.
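The sensitivity/specificity figures reported above are computed against histopathology ground truth; the evaluation itself can be sketched as below. The boolean per-signature labels are illustrative (True = cancer), not data from the study.

```python
# Sketch of the sensitivity/specificity evaluation of a per-signature
# (per-pixel) cancer classifier against histopathologic ground truth.
# Inputs are parallel lists of booleans; True means "cancer".

def sensitivity_specificity(predicted, truth):
    """Return (sensitivity, specificity) from paired predictions
    and ground-truth labels."""
    tp = sum(1 for p, t in zip(predicted, truth) if p and t)
    tn = sum(1 for p, t in zip(predicted, truth) if not p and not t)
    fp = sum(1 for p, t in zip(predicted, truth) if p and not t)
    fn = sum(1 for p, t in zip(predicted, truth) if not p and t)
    sens = tp / (tp + fn) if tp + fn else 0.0   # true-positive rate
    spec = tn / (tn + fp) if tn + fp else 0.0   # true-negative rate
    return sens, spec
```

Thresholding the cancer probability map at different cutoffs and recomputing this pair is also how one would trace the operating point the abstract reports (91%/91%).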
Kurimoto, Kazuki; Yabuta, Yukihiro; Hayashi, Katsuhiko; Ohta, Hiroshi; Kiyonari, Hiroshi; Mitani, Tadahiro; Moritoki, Yoshinobu; Kohri, Kenjiro; Kimura, Hiroshi; Yamamoto, Takuya; Katou, Yuki; Shirahige, Katsuhiko; Saitou, Mitinori
2015-05-07
Germ cell specification is accompanied by epigenetic remodeling, the scale and specificity of which are unclear. Here, we quantitatively delineate chromatin dynamics during induction of mouse embryonic stem cells (ESCs) to epiblast-like cells (EpiLCs) and from there into primordial germ cell-like cells (PGCLCs), revealing large-scale reorganization of chromatin signatures including H3K27me3 and H3K9me2 patterns. EpiLCs contain abundant bivalent gene promoters characterized by low H3K27me3, indicating a state primed for differentiation. PGCLCs initially lose H3K4me3 from many bivalent genes but subsequently regain this mark with concomitant upregulation of H3K27me3, particularly at developmental regulatory genes. PGCLCs progressively lose H3K9me2, including at lamina-associated perinuclear heterochromatin, resulting in changes in nuclear architecture. T recruits H3K27ac to activate BLIMP1 and early mesodermal programs during PGCLC specification, which is followed by BLIMP1-mediated repression of a broad range of targets, possibly through recruitment and spreading of H3K27me3. These findings provide a foundation for reconstructing regulatory networks of the germline epigenome. Copyright © 2015 Elsevier Inc. All rights reserved.
Efficient FPT Algorithms for (Strict) Compatibility of Unrooted Phylogenetic Trees.
Baste, Julien; Paul, Christophe; Sau, Ignasi; Scornavacca, Celine
2017-04-01
In phylogenetics, a central problem is to infer the evolutionary relationships between a set of species X; these relationships are often depicted via a phylogenetic tree-a tree having its leaves labeled bijectively by elements of X and without degree-2 nodes-called the "species tree." One common approach for reconstructing a species tree consists in first constructing several phylogenetic trees from primary data (e.g., DNA sequences originating from some species in X), and then constructing a single phylogenetic tree maximizing the "concordance" with the input trees. The obtained tree is our estimation of the species tree and, when the input trees are defined on overlapping-but not identical-sets of labels, is called "supertree." In this paper, we focus on two problems that are central when combining phylogenetic trees into a supertree: the compatibility and the strict compatibility problems for unrooted phylogenetic trees. These problems are strongly related, respectively, to the notions of "containing as a minor" and "containing as a topological minor" in the graph community. Both problems are known to be fixed parameter tractable in the number of input trees k, by using their expressibility in monadic second-order logic and a reduction to graphs of bounded treewidth. Motivated by the fact that the dependency on k of these algorithms is prohibitively large, we give the first explicit dynamic programming algorithms for solving these problems, both running in time [Formula: see text], where n is the total size of the input.
The classical dynamic symmetry for the U(1) -Kepler problems
NASA Astrophysics Data System (ADS)
Bouarroudj, Sofiane; Meng, Guowu
2018-01-01
For the Jordan algebra of hermitian matrices of order n ≥ 2, we let X be its submanifold consisting of rank-one semi-positive definite elements. The composition of the cotangent bundle map π_X: T*X → X with the canonical map X → CP^{n-1} (i.e., the map that sends a given hermitian matrix to its column space) pulls back the Kähler form of the Fubini-Study metric on CP^{n-1} to a real closed differential two-form ω_K on T*X. Let ω_X be the canonical symplectic form on T*X and μ a real number. A standard fact says that ω_μ := ω_X + 2μ ω_K turns T*X into a symplectic manifold, hence a Poisson manifold with Poisson bracket {,}_μ. In this article we exhibit a Poisson realization of the simple real Lie algebra su(n,n) on the Poisson manifold (T*X, {,}_μ), i.e., a Lie algebra homomorphism from su(n,n) to (C^∞(T*X, R), {,}_μ). Consequently one obtains the Laplace-Runge-Lenz vector for the classical U(1)-Kepler problem of level n and magnetic charge μ. Since the McIntosh-Cisneros-Zwanziger-Kepler problems (MICZ-Kepler problems) are the U(1)-Kepler problems of level 2, the work presented here is a direct generalization of the work by A. Barut and G. Bornzin (1971) on the classical dynamic symmetry for the MICZ-Kepler problems.
Smith, Bruce D.; Thamke, Joanna N.; Cain, Michael J.; Tyrrell, Christa; Hill, Patricia L.
2006-01-01
This report is a data release for a helicopter electromagnetic and magnetic survey that was conducted during August 2004 in a 275-square-kilometer area that includes the East Poplar oil field on the Fort Peck Indian Reservation. The electromagnetic equipment consisted of six different coil-pair orientations that measured resistivity at separate frequencies from about 400 hertz to about 140,000 hertz. The electromagnetic resistivity data were converted to six electrical conductivity grids, each representing different approximate depths of investigation. The range of subsurface investigation is comparable to the depth of shallow aquifers. Areas of high conductivity in shallow aquifers in the East Poplar oil field area are being delineated by the U.S. Geological Survey, in cooperation with the Fort Peck Assiniboine and Sioux Tribes, in order to map areas of saline-water plumes. Ground electromagnetic methods were first used during the early 1990s to delineate more than 31 square kilometers of high conductivity saline-water plumes in a portion of the East Poplar oil field area. In the 10 years since the first delineation, the quality of water from some wells completed in the shallow aquifers in the East Poplar oil field changed markedly. The extent of saline-water plumes in 2004 likely differs from that delineated in the early 1990s. The geophysical and hydrologic information from U.S. Geological Survey studies is being used by resource managers to develop ground-water resource plans for the area.
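The conversion step described above (electromagnetic resistivity data converted to conductivity grids, with high-conductivity areas flagged as candidate saline-water plumes) can be sketched as follows; the 50 mS/m threshold and the small grid are illustrative assumptions, not the survey's actual criteria or data.

```python
# Sketch of converting apparent-resistivity grids (ohm-m) to
# conductivity grids (mS/m) and delineating candidate saline-plume
# cells by a threshold. Threshold and values are illustrative only.

def to_conductivity_msm(resistivity_ohm_m):
    """sigma (mS/m) = 1000 / rho (ohm-m), applied cell-by-cell."""
    return [[1000.0 / r for r in row] for row in resistivity_ohm_m]

def plume_mask(conductivity_msm, threshold=50.0):
    """True where apparent conductivity exceeds the threshold,
    marking candidate saline-water plume cells."""
    return [[c > threshold for c in row] for row in conductivity_msm]
```

Applying this to each of the six frequency grids (each sensing a different approximate depth) would give a crude depth-resolved outline of the plume extent, which is the kind of delineation the ground and airborne surveys compare.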
Stephen M. Ogle; Grant Domke; Werner A. Kurz; Marcelo T. Rocha; Ted Huffman; Amy Swan; James E. Smith; Christopher Woodall; Thelma Krug
2018-01-01
Land use and management activities have a substantial impact on carbon stocks and associated greenhouse gas emissions and removals. However, it is challenging to discriminate between anthropogenic and non-anthropogenic sources and sinks from land. To address this problem, the Intergovernmental Panel on Climate Change developed a managed land proxy to determine which...
ERIC Educational Resources Information Center
Sedere, Upali M.
2008-01-01
School-based general education ought to be a future-oriented endeavor. However, over the years, because the parental and grandparental generations set education policies for the younger generation, education has always been more past-oriented than future-oriented. This trend did not cause much of a problem when change over time was moderate. As…
An abstract specification language for Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, R. W.
1985-01-01
Markov models can be used to compute the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high-level language is described in a nonformal manner and illustrated by example.
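The tedium described above, enumerating every state and transition by hand, is exactly what a model definition language automates. As a toy illustration (not the report's actual language or syntax), states and transitions for a simple k-of-n redundant system can be generated programmatically:

```python
# Illustrative generator for a simple Markov reliability model:
# n identical components with failure rate `lam`; the system fails
# once fewer than k components remain working. States are counts of
# working components; transitions are (from, to, rate) triples.
# This is a sketch, not the abstract specification language itself.

def build_model(n, k, lam):
    """Enumerate states n, n-1, ..., k-1 (k-1 is the absorbing failed
    state) and the failure transitions between them."""
    states = list(range(n, k - 2, -1))
    transitions = []
    for w in states:
        if w >= k:                                # still operational
            transitions.append((w, w - 1, w * lam))  # any of w parts can fail
    return states, transitions
```

Even this trivial triplex example (`build_model(3, 2, lam)`) shows the point: the generator, not the analyst, is responsible for getting every state and rate right, which is where hand enumeration of complex systems goes wrong.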
An abstract language for specifying Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1986-01-01
Markov models can be used to compute the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high-level language is described in a nonformal manner and illustrated by example.
ERIC Educational Resources Information Center
1964
IN AN EFFORT TO STATE THE PROBLEMS OF CONTINUING EDUCATION FOR THE MINISTRY, DESCRIBE ITS AIMS, DELINEATE AN ADEQUATE PROGRAM, DEFINE THE ROLES OF SPONSORS, EVALUATE THE CONCEPTS EMERGING FROM OTHER FIELDS OF CONTINUING EDUCATION, AND ADVISE ON COORDINATION OF PROGRAMS, CONSULTATION SPEAKERS DISCUSSED THE TASK OF THE MINISTER IN THE CHANGING…
Mariner Mars 1971 attitude control subsystem flight performance
NASA Technical Reports Server (NTRS)
Schumacher, L.
1973-01-01
The flight performance of the Mariner 71 attitude control subsystem is discussed. Each phase of the mission is delineated, and the attitude control subsystem is evaluated within the observed operational environment. Performance anomalies are introduced and discussed within the context of general performance. Problems such as the sun sensor interface incompatibility, gas valve leaks, and scan platform dynamic coupling effects are given analytical consideration.
Hofheinz, Frank; Hoff, Jörg van den; Steffen, Ingo G; Lougovski, Alexandr; Ego, Kilian; Amthauer, Holger; Apostolova, Ivayla
2016-12-01
We have demonstrated recently that the tumor-to-blood standard uptake ratio (SUR) is superior to tumor standardized uptake value (SUV) as a surrogate of the metabolic uptake rate K m of fluorodeoxyglucose (FDG), overcoming several of the known shortcomings of the SUV approach: excellent linear correlation of SUR and K m from Patlak analysis was found using dynamic imaging of liver metastases. However, due to the perfectly standardized uptake period used for SUR determination and the comparatively short uptake period, these results are not automatically valid and applicable for clinical whole-body examinations in which the uptake periods (T) are distinctly longer and can vary considerably. Therefore, the aim of this work was to investigate the correlation between SUR derived from clinical static whole-body scans and K m-surrogate derived from dual time point (DTP) measurements. DTP (18)F-FDG PET/CT was performed in 90 consecutive patients with histologically proven non-small cell lung cancer (NSCLC). In the PET images, the primary tumor was delineated with an adaptive threshold method. For determination of the blood SUV, an aorta region of interest (ROI) was delineated manually in the attenuation CT and transferred to the PET image. Blood SUV was computed as the mean value of the aorta ROI. SUR values were computed as ratio of tumor SUV and blood SUV. SUR values from the early time point of each DTP measurement were scan time corrected to 75 min postinjection (SURtc). As surrogate of K m, we used the SUR(T) slope, K slope, derived from DTP measurements since it is proportional to the latter under the given circumstances. The correlation of SUV and SURtc with K slope was investigated. The prognostic value of SUV, SURtc, and K slope for overall survival (OS) and progression-free survival (PFS) was investigated with univariate Cox regression in a homogeneous subgroup (N=31) treated with primary chemoradiation. 
Correlation analysis revealed, for both SUV and SUR_tc, a clear linear correlation with K_slope (P<0.001). The correlation of SUR vs. K_slope was considerably stronger than that of SUV vs. K_slope (R^2=0.92 and R^2=0.69, respectively, P<0.001). Univariate Cox regression revealed SUR_tc and K_slope as significant prognostic factors for PFS (hazard ratio (HR)=3.4, P=0.017 and HR=4.3, P=0.020, respectively). For SUV, no significant effect was found. None of the investigated parameters was prognostic for OS. Scan-time-corrected SUR is a significantly better surrogate of tumor FDG metabolism in clinical whole-body PET than SUV. The very high linear correlation of SUR and the DTP-derived K_slope (which is proportional to the actual K_m) implies that, for histologically proven malignant lesions, FDG-DTP does not provide added value compared with the SUR approach in NSCLC.
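The SUR computation described above (tumor SUV divided by blood-pool SUV, rescaled to a common 75-min uptake time) can be sketched as follows. The linear time-rescaling shown is an illustrative assumption for this sketch, not necessarily the authors' exact correction model:

```python
# Sketch of the tumor-to-blood standard uptake ratio (SUR) with a
# scan-time correction to 75 min post-injection. The linear rescaling
# of uptake time used here is an assumed, simplified correction model.

def sur(tumor_suv, blood_suv):
    """Tumor-to-blood standard uptake ratio."""
    return tumor_suv / blood_suv

def sur_time_corrected(tumor_suv, blood_suv, t_minutes, t_ref=75.0):
    """Rescale SUR to a common reference uptake time (assumed linear model)."""
    return sur(tumor_suv, blood_suv) * t_ref / t_minutes
```

For example, a tumor SUV of 8.0 with a blood SUV of 2.0 measured at 60 min gives SUR = 4.0 and a time-corrected SUR of 5.0 under this model.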
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhury, Najeeb; Toth, Robert; Chappelow, Jonathan
2012-04-15
Purpose: Prostate gland segmentation is a critical step in prostate radiotherapy planning, where dose plans are typically formulated on CT. Pretreatment MRI is now beginning to be acquired at several medical centers. Delineation of the prostate on MRI is acknowledged as being significantly simpler to perform, compared to delineation on CT. In this work, the authors present a novel framework for building a linked statistical shape model (LSSM), a statistical shape model (SSM) that links the shape variation of a structure of interest (SOI) across multiple imaging modalities. This framework is particularly relevant in scenarios where accurate boundary delineations of the SOI on one of the modalities may not be readily available, or may be difficult to obtain, for training a SSM. In this work the authors apply the LSSM in the context of multimodal prostate segmentation for radiotherapy planning, where the prostate is concurrently segmented on MRI and CT. Methods: The framework comprises a number of logically connected steps. The first step utilizes multimodal registration of MRI and CT to map 2D boundary delineations of the prostate from MRI onto corresponding CT images, for a set of training studies. Hence, the scheme obviates the need for expert delineations of the gland on CT for explicitly constructing a SSM for prostate segmentation on CT. The delineations of the prostate gland on MRI and CT allow for 3D reconstruction of the prostate shape, which facilitates the building of the LSSM. In order to perform concurrent prostate MRI and CT segmentation using the LSSM, the authors employ a region-based level set approach in which the evolving prostate boundary is deformed to simultaneously fit the MRI and CT images, in which voxels are classified as either part of the prostate or outside the prostate. The classification is facilitated by using a combination of MRI-CT probabilistic spatial atlases and a random forest classifier, driven by gradient and Haar features.
Results: The authors acquire a total of 20 MRI-CT patient studies and use the leave-one-out strategy to train and evaluate four different LSSMs. First, a fusion-based LSSM (fLSSM) is built using expert ground truth delineations of the prostate on MRI alone, where the ground truth for the gland on CT is obtained via coregistration of the corresponding MRI and CT slices. The authors compare the fLSSM against another LSSM (xLSSM), where expert delineations of the gland on both MRI and CT are employed in the model building; the xLSSM represents the idealized LSSM. The authors also compare the fLSSM against an exclusive CT-based SSM (ctSSM), built from expert delineations of the gland on CT alone. In addition, two LSSMs trained using trainee delineations (tLSSM) on CT are compared with the fLSSM. The results indicate that the xLSSM, tLSSMs, and the fLSSM perform equivalently, all of them out-performing the ctSSM. Conclusions: The fLSSM provides an accurate alternative to SSMs that require careful expert delineations of the SOI that may be difficult or laborious to obtain. Additionally, the fLSSM has the added benefit of providing concurrent segmentations of the SOI on multiple imaging modalities.
Improving Student Achievement in Math and Science
NASA Technical Reports Server (NTRS)
Sullivan, Nancy G.; Hamsa, Irene Schulz; Heath, Panagiota; Perry, Robert; White, Stacy J.
1998-01-01
As the new millennium approaches, a long anticipated reckoning for the education system of the United States is forthcoming. Years of school reform initiatives have not yielded the anticipated results. A particularly perplexing problem involves the lack of significant improvement of student achievement in math and science. Three "Partnership" projects represent collaborative efforts between Xavier University (XU) of Louisiana, Southern University of New Orleans (SUNO), Mississippi Valley State University (MVSU), and the National Aeronautics and Space Administration (NASA), Stennis Space Center (SSC), to enhance student achievement in math and science. These "Partnerships" are focused on students and teachers in federally designated rural and urban empowerment zones and enterprise communities. The major goals of the "Partnerships" include: (1) The identification and dissemination of key indices of success that account for high performance in math and science; (2) The education of pre-service and in-service secondary teachers in knowledge, skills, and competencies that enhance the instruction of high school math and science; (3) The development of faculty to enhance the quality of math and science courses in institutions of higher education; and (4) The incorporation of technology-based instruction in institutions of higher education. These goals will be achieved by the accomplishment of the following objectives: (1) Delineate significant "best practices"
that are responsible for enhancing student outcomes in math and science; (2) Recruit and retain pre-service teachers with undergraduate degrees in Biology, Math, Chemistry, or Physics in a graduate program, culminating with a Master of Arts in Curriculum and Instruction; (3) Provide faculty workshops and opportunities for travel to professional meetings for dissemination of NASA resources information; (4) Implement methodologies and assessment procedures utilizing performance-based applications of higher order thinking via the incorporation of Global Learning Observations To Benefit the Environment (GLOBE), Mission to Planet Earth and the use of Geographic Imaging Systems into the K-12th grade curriculum.
The high voltage homopolar generator
NASA Astrophysics Data System (ADS)
Price, J. H.; Gully, J. H.; Driga, M. D.
1986-11-01
System and component design features of a proposed high voltage homopolar generator (HVHPG) are described. The system is to have an open circuit voltage of 500 V, a peak output current of 500 kA, 3.25 MJ of stored inertial energy, and an average magnetic flux density of 5 T. Stator assembly components are discussed, including the stator, mount structure, hydrostatic bearings, main and motoring brushgears, and rotor. Planned operational procedures such as motoring the rotor to full speed and operation with a superconducting field coil are delineated.
Skylab 3, Astronaut Jack R. Lousma on EVA
1973-08-06
SL3-122-2612 (6 Aug. 1973) --- Astronaut Alan L. Bean, Skylab 3 commander, participates in the final Skylab 3 extravehicular activity (EVA), during which a variety of tasks were performed. Here, Bean is near the Apollo Telescope Mount (ATM) during final film changeout for the giant telescope facility. Astronaut Owen K. Garriott, who took the picture, is reflected in Bean's helmet visor. The reflected Earth disk in Bean's visor is so clear that the Red Sea and Nile River area can be delineated. Photo credit: NASA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Xuehang; Chen, Xingyuan; Ye, Ming
2015-07-01
This study develops a new framework of facies-based data assimilation for characterizing the spatial distribution of hydrofacies and estimating their associated hydraulic properties. This framework couples ensemble data assimilation with a transition probability-based geostatistical model via a parameterization based on a level set function. The nature of ensemble data assimilation makes the framework efficient and flexible enough to be integrated with various types of observation data. The transition probability-based geostatistical model keeps the updated hydrofacies distributions under geological constraints. The framework is illustrated by a two-dimensional synthetic study that estimates the hydrofacies spatial distribution and the permeability of each hydrofacies from transient head data. Our results show that the proposed framework can characterize hydrofacies distribution and associated permeability with adequate accuracy even with limited direct measurements of hydrofacies. Our study provides a promising starting point for hydrofacies delineation in complex real problems.
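The ensemble data assimilation step at the core of such a framework can be illustrated with a minimal stochastic ensemble Kalman update. This is a generic sketch: the paper's transition-probability geostatistical model and level-set parameterization are omitted, and all names and dimensions here are illustrative assumptions:

```python
import numpy as np

# Minimal stochastic ensemble Kalman update (illustrative sketch only).
# ensemble: (n_state, n_ens) state samples; obs: (n_obs,) observations;
# obs_op H: (n_obs, n_state) linear observation operator.

def enkf_update(ensemble, obs, obs_op, obs_err_var, rng):
    n_obs, n_ens = obs.size, ensemble.shape[1]
    H = obs_op
    Hx = H @ ensemble                                   # predicted observations
    A = ensemble - ensemble.mean(axis=1, keepdims=True)  # state anomalies
    HA = Hx - Hx.mean(axis=1, keepdims=True)             # observation anomalies
    P_xy = A @ HA.T / (n_ens - 1)                        # state-obs covariance
    P_yy = HA @ HA.T / (n_ens - 1) + obs_err_var * np.eye(n_obs)
    K = P_xy @ np.linalg.solve(P_yy, np.eye(n_obs))      # Kalman gain
    # Perturb observations (stochastic EnKF) and shift each member.
    perturbed = obs[:, None] + rng.normal(0.0, np.sqrt(obs_err_var), (n_obs, n_ens))
    return ensemble + K @ (perturbed - Hx)
```

After the update, the ensemble mean of an observed state component is pulled toward the observation, with the spread controlled by the observation error variance.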
Single-axis gyroscopic motion with uncertain angular velocity about spin axis
NASA Technical Reports Server (NTRS)
Singh, S. N.
1977-01-01
A differential game approach is presented for studying the response of a gyro by treating the controlled angular velocity about the input axis as the evader, and the bounded but uncertain angular velocity about the spin axis as the pursuer. When the uncertain angular velocity about the spin axis desires to force the gyro to saturation a differential game problem with two terminal surfaces results, whereas when the evader desires to attain the equilibrium state the usual game with single terminal manifold arises. A barrier, delineating the capture zone (CZ) in which the gyro can attain saturation and the escape zone (EZ) in which the evader avoids saturation is obtained. The CZ is further delineated into two subregions such that the states in each subregion can be forced on a definite target manifold. The application of the game theoretic approach to Control Moment Gyro is briefly discussed.
Frequency Management Engineering Principles--Spectrum Measurements (Reference Order 6050.23).
1982-08-01
[Table-of-contents fragment] Interference examples: (a) dielectric heater; (b) high-power FM; (c) radar; (d) ARSR; ... localizer; (i) dielectric heaters; (j) high-power TV/FM; (k) power line noise; (l) incidental radiating devices; (m) super-regenerative... A further fragment notes that systems employing broadband power amplifiers or traveling wave tubes, and random spectrum analyzer instabilities, create drift problems; the "cleanest" spectrums...
2007-08-01
[Fragment] The problem of a functionally graded plane with a circular inclusion under a uniform antiplane eigenstrain is investigated, where the shear modulus varies... The strain and stress fields inside the circular inclusion under uniform antiplane eigenstrains are intrinsically nonuniform. This phenomenon differs... (Citation fragments: a paper on antiplane eigenstrain, ASME Journal of Applied Mechanics, in press; Wang, X., Pan, E., Roy, A. K., 2007.)
Source Camera Identification and Blind Tamper Detections for Images
2007-04-24
[Fragment] ...measures and image quality measures in the camera identification problem were studied in conjunction with a KNN classifier to identify the feature sets... shots varying from nature scenes to close-ups of people. We experimented with the KNN classifier (K=5) as well as the SVM algorithm... (Citation fragments: Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), France, May 2006, vol. 5, pp. 401-404; H. Farid and S. Lyu, "Higher-order wavelet statistics...")
Impact of 4D image quality on the accuracy of target definition.
Nielsen, Tine Bjørn; Hansen, Christian Rønn; Westberg, Jonas; Hansen, Olfred; Brink, Carsten
2016-03-01
Delineation accuracy of target shape and position depends on the image quality. This study investigates whether the image quality of standard 4D systems has an influence comparable to the overall delineation uncertainty. A moving lung target was imaged using a dynamic thorax phantom on three different 4D computed tomography (CT) systems and a 4D cone beam CT (CBCT) system using pre-defined clinical scanning protocols. Peak-to-peak motion and target volume were registered using rigid registration and automatic delineation, respectively. A spatial distribution of the imaging uncertainty was calculated as the distance deviation between the imaged target and the true target shape. The measured motions were smaller than the actual motions, and the volume of the imaged target differed between respiration phases. Imaging uncertainties of >0.4 cm were measured in the motion direction, which shows that there was a large distortion of the imaged target shape. Imaging uncertainties of standard 4D systems are of similar size to typical GTV-CTV expansions (0.5-1 cm) and contribute considerably to the target definition uncertainty. Optimising and validating 4D systems is recommended in order to obtain the best possible imaged target shape.
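The distance-deviation measure described above can be sketched for 2D point sets as the distance from each imaged-surface point to the nearest point of the true shape. The names and the nearest-point metric are assumptions for illustration, not the study's exact implementation:

```python
import numpy as np

# Illustrative sketch: imaging uncertainty as the distance from each
# point on the imaged target outline to the closest point of the true
# target outline (2D point clouds; brute-force nearest-point search).

def distance_deviation(imaged_pts, true_pts):
    """Per-point distance from imaged outline to nearest true-outline point."""
    d = np.linalg.norm(imaged_pts[:, None, :] - true_pts[None, :, :], axis=2)
    return d.min(axis=1)
```

A deviation map like this, evaluated over the whole surface, gives the spatial distribution of imaging uncertainty described in the abstract.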
NASA Astrophysics Data System (ADS)
Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Filgueiras-Rama, David; Pizarro, Gonzalo; Ibañez, Borja; Berenfeld, Omer; Boyers, Pamela; Gold, Jeffrey
2012-12-01
This paper presents an automated method to segment left ventricle (LV) tissues from functional and delayed-enhancement (DE) cardiac magnetic resonance imaging (MRI) scans using a sequential multi-step approach. First, a region of interest (ROI) is computed to create a subvolume around the LV using morphological operations and image arithmetic. From the subvolume, the myocardial contours are automatically delineated using difference of Gaussians (DoG) filters and GSV snakes. These contours are used as a mask to identify pathological tissues, such as fibrosis or scar, within the DE-MRI. The presented automated technique is able to accurately delineate the myocardium and identify the pathological tissue in patient sets. The results were validated by two expert cardiologists, and in one set the automated results are quantitatively and qualitatively compared with expert manual delineation. Furthermore, the method is patient-specific, performed on an entire patient MRI series. Thus, in addition to providing a quick analysis of individual MRI scans, the fully automated segmentation method is used for effectively tagging regions in order to reconstruct computerized patient-specific 3D cardiac models. These models can then be used in electrophysiological studies and surgical strategy planning.
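The difference-of-Gaussians (DoG) filtering step used for contour delineation can be sketched with plain NumPy. The kernel truncation radius and edge-padding choice are assumptions for this sketch, not the paper's exact settings:

```python
import numpy as np

# Sketch of a difference-of-Gaussians (DoG) band-pass filter of the kind
# used to highlight myocardial contours. Separable Gaussian blur with
# edge padding; truncation at 3*sigma is an assumed convention.

def gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur of a 2D array (edge-padded)."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    out = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='edge'), k, 'valid'), 1, img)
    out = np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='edge'), k, 'valid'), 0, out)
    return out

def difference_of_gaussians(img, sigma_lo, sigma_hi):
    """Band-pass response: fine-scale blur minus coarse-scale blur."""
    return gaussian_blur(img, sigma_lo) - gaussian_blur(img, sigma_hi)
```

On a constant image the DoG response is zero everywhere; edges and thin structures (such as contours) produce strong positive/negative responses.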
Unitary easy quantum groups: Geometric aspects
NASA Astrophysics Data System (ADS)
Banica, Teodor
2018-03-01
We discuss the classification problem for the unitary easy quantum groups, under strong axioms, of noncommutative geometric nature. Our main results concern the intermediate easy quantum groups O_N ⊂ G ⊂ U_N^+. To any such quantum group we associate its Schur-Weyl twist Ḡ, two noncommutative spheres S, S̄, a noncommutative torus T, and a quantum reflection group K. Studying (S, S̄, T, K, G, Ḡ) then leads to some natural axioms, which can be used in order to investigate G itself. We prove that the main examples are covered by our formalism, and we conjecture that in what concerns the case U_N ⊂ G ⊂ U_N^+, our axioms should restrict the list of known examples.
The computational complexity of elliptic curve integer sub-decomposition (ISD) method
NASA Astrophysics Data System (ADS)
Ajeena, Ruma Kareem K.; Kamarulhaili, Hailiza
2014-07-01
The idea of the GLV method of Gallant, Lambert and Vanstone (Crypto 2001) is considered a foundation stone for building a new procedure to compute the elliptic curve scalar multiplication. This procedure, integer sub-decomposition (ISD), computes any multiple kP of an elliptic curve point P of large prime order n using two low-degree endomorphisms ψ1 and ψ2 of the elliptic curve E over the prime field Fp. The sub-decomposition of the values k1 and k2, not bounded by ±C√n, gives new integers k11, k12, k21 and k22 which are bounded by ±C√n and can be computed by solving the closest vector problem in a lattice. The percentage of successful computations of the scalar multiplication increases under the ISD method, which improves computational efficiency in comparison with the general method for computing scalar multiplication on elliptic curves over prime fields. This paper presents the mechanism of the ISD method and sheds light mainly on the computational complexity of the ISD approach, determined by computing the cost of its operations. These operations include elliptic curve operations and finite field operations.
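For context, the generic double-and-add scalar multiplication that GLV/ISD-style decompositions accelerate can be sketched as follows. The curve parameters below are a toy example for illustration; the ISD decomposition itself (splitting k via endomorphisms and the closest vector problem) is not implemented here:

```python
# Baseline double-and-add scalar multiplication on a toy short-Weierstrass
# curve y^2 = x^3 + a*x + b (mod p), with points as (x, y) tuples and None
# for the point at infinity. ISD/GLV methods speed up this baseline by
# decomposing the scalar k into short components.

def ec_add(P, Q, a, p):
    """Add two affine points on the curve (None = point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                       # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P, a, p):
    """Compute kP with double-and-add: O(log k) curve operations."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R
```

The decomposition-based methods replace one length-log(k) chain with two (or four, in ISD) chains of roughly half the length whose partial results are combined, which is where the efficiency gain comes from.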
ERIC Educational Resources Information Center
Cutter, William
1989-01-01
Views teaching as a form of translation. Discusses the prospects of precise teaching and sets forth some thoughts concerning an ideal model. Delineates discussion from the literary, mystical, and philosophical dimensions in order to elucidate the instructional tasks of religious education. Points out the paradoxes of teaching. (KO)
Investigating Occipito-Temporal Contributions to Reading with TMS
ERIC Educational Resources Information Center
Duncan, Keith J.; Pattamadilok, Chotiga; Devlin, Joseph T.
2010-01-01
The debate regarding the role of ventral occipito-temporal cortex (vOTC) in visual word recognition arises, in part, from difficulty delineating the functional contributions of vOTC as separate from other areas of the reading network. Here, we investigated the feasibility of using TMS to interfere with vOTC processing in order to explore its…
Mineral deposits in western Saudi Arabia; a preliminary report
Roberts, Ralph Jackson; Greenwood, William R.; Worl, Ronald G.; Dodge, F.C.W.; Kiilsgaard, Thor H.
1975-01-01
In order to effectively carry on a search for new mineral deposits, the belts should be mapped in detail, with emphasis on the delineation of stratigraphic and structural features that control metallization. In addition, geochemical and geophysical studies should be made of promising areas to outline exploration targets. These targets could then be systematically explored.
Use of Business-Naming Practices to Delineate Vernacular Regions: A Michigan Example
ERIC Educational Resources Information Center
Liesch, Matthew; Dunklee, Linda M.; Legg, Robert J.; Feig, Anthony D.; Krause, Austin Jena
2015-01-01
This article provides a history of efforts to map vernacular regions as context for offering readers a way of using business directories in order to construct a GIS-based map of vernacular regions. With Michigan as a case study, the article discusses regional-naming conventions, boundaries, and inclusions and omissions of areas from regional…
Watershed condition [Chapter 4
Daniel G. Neary; Jonathan W. Long; Malchus B. Baker
2012-01-01
Managers of the Prescott National Forest are obliged to evaluate the conditions of watersheds under their jurisdiction in order to guide informed decisions concerning grazing allotments, forest and woodland management, restoration treatments, and other management initiatives. Watershed condition has been delineated by contrasts between "good" and "poor" conditions (...
ERIC Educational Resources Information Center
Law, Dennis C. S.; Meyer, Jan H. F.
2011-01-01
The present study aims to analyse the complex relationships between the relevant constructs of students' demographic background, perceptions, learning patterns and (proxy measures of) learning outcomes in order to delineate the possible direct, indirect, or spurious effects among them. The analytical methodology is substantively framed against the…
Contingency contracting with school problems
Cantrell, Robert P.; Cantrell, Mary Lynn; Huddleston, Clifton M.; Wooldridge, Ralph L.
1969-01-01
Contingency contracting procedures used in managing problems with school-age children involved analyzing teacher and/or parental reports of behavior problem situations, isolating the most probable contingencies then in effect, the range of reinforcers presently available, and the ways in which they were obtained. The authors prepared written contracts delineating remediative changes in reinforcement contingencies. These contracts specified ways in which the child could obtain existing individualized reinforcers contingent upon approximations to desired appropriate behaviors chosen as incompatible with the referral problem behaviors. Contract procedures were administered by the natural contingency managers, parents and/or teachers, who kept daily records of contracted behaviors and reinforcers. These records were sent to the authors and provided feedback on the progress of the case. Initial results of this procedure have been sufficiently encouraging to warrant recommending an experimental analysis of contingency contracting as a clinical method. PMID:16795222
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boucher, Laurel A.
Successful collaboration requires effective communication and collective problem solving. Regardless of the subject area --- environmental remediation, waste management, program planning and budgeting --- those involved must focus their efforts in an orderly and cooperative manner. A thinking tool is a technique used to get individuals to focus on specific components of the task at the same time and to eliminate the 'noise' that accompanies communications among individuals with different objectives and different styles of communicating. For example, one of these thinking tools is a technique which enables a working group to delineate its roles, responsibilities and communication protocols so that it can deliver the right information to the right people at the right time. Another enables a group to objectively and collectively evaluate and improve a policy, plan, or program. A third technique enables a group to clarify its purpose and direction while generating interest and buy-in. A fourth technique makes it possible for a group with polarized opinions to acknowledge their differences as well as what they have in common. A fifth technique enables a group to consider a subject of importance from all perspectives so as to produce a more comprehensive and sustainable solution. These thinking tools make effective communication and collective problem solving possible in radioactive waste management and remediation. They can be used by a wide spectrum of professionals including policy specialists, program administrators, program and project managers, and technical specialists. (author)
Problems with Excessive Residual Lower Leg Length in Pediatric Amputees
Osebold, William R; Lester, Edward L; Christenson, Donald M
2001-01-01
We studied six pediatric amputees with long below-knee residual limbs, in order to delineate their functional and prosthetic situations, specifically in relation to problems with fitting for dynamic-response prosthetic feet. Three patients had congenital pseudoarthrosis of the tibia secondary to neurofibromatosis, one had fibular hemimelia, one had a traumatic amputation, and one had amputation secondary to burns. Five patients had Syme's amputations, one had a Boyd amputation. Ages at amputation ranged from nine months to five years (average age 3 years 1 month). After amputation, the long residual below-knee limbs allowed fitting with only the lowest-profile prostheses, such as deflection plates. In three patients, the femoral dome to tibial plafond length was greater on the amputated side than on the normal side. To allow room for more dynamic-response (and larger) foot prostheses, two patients have undergone proximal and distal tibial-fibular epiphyseodeses (one at age 5 years 10 months, the other at 3 years 7 months) and one had a proximal tibial-fibular epiphyseodesis at age 7 years 10 months. (All three patients are still skeletally immature.) The families of two other patients are considering epiphyseodeses, and one patient is not a candidate (skeletally mature). Scanogram data indicate that at skeletal maturity the epiphyseodesed patients will have adequate length distal to their residual limbs to fit larger and more dynamic-response prosthetic feet. PMID:11813953
Gilbert, Stéphane; Loranger, Anne; Omary, M Bishr; Marceau, Normand
2016-09-01
Keratins are epithelial cell intermediate filament (IF) proteins that are expressed as pairs in a cell-differentiation-regulated manner. Hepatocytes express the keratin 8 and 18 pair (denoted K8/K18) of IFs, and a loss of K8 or K18, as in K8-null mice, leads to degradation of the keratin partner. We have previously reported that a K8/K18 loss in hepatocytes leads to altered cell surface lipid raft distribution and more efficient Fas receptor (FasR, also known as TNFRSF6)-mediated apoptosis. We demonstrate here that the absence of K8 or transgenic expression of the K8 G62C mutant in mouse hepatocytes reduces lipid raft size. Mechanistically, we find that the lipid raft size is dependent on acid sphingomyelinase (ASMase, also known as SMPD1) enzyme activity, which is reduced in absence of K8/K18. Notably, the reduction of ASMase activity appears to be caused by a less efficient redistribution of surface membrane PKCδ toward lysosomes. Moreover, we delineate the lipid raft volume range that is required for an optimal FasR-mediated apoptosis. Hence, K8/K18-dependent PKCδ- and ASMase-mediated modulation of lipid raft size can explain the more prominent FasR-mediated signaling resulting from K8/K18 loss. The fine-tuning of ASMase-mediated regulation of lipid rafts might provide a therapeutic target for death-receptor-related liver diseases. © 2016. Published by The Company of Biologists Ltd.
2007 Precision Strike PEO Summer Forum - Joint Perspectives on Precision Engagement
2007-07-11
[Fragment] "...Status," Colonel Richard Justice, USAF, Commander of the Miniature Munitions Systems Group (MMSG), Eglin Air Force Base; "Unmanned Systems (UAS) Roadmap..." Role in the Roadmap Implementation: Methods & Processes Working Group; issues delineated in the Implementation Plan form the basis for the JTEM methodology... (Acronym fragments: JMETC, Joint Mission Environment Test Capability; WG, Working Group; DOT&E; AT&L.) ... Background: JTEM Problem
Study of aerospace technology utilization in the civilian biomedical field
NASA Technical Reports Server (NTRS)
1976-01-01
The treatment of patients with acute pulmonary or cardiovascular diseases is used to demonstrate the benefits to be derived from a more extensive application of NASA technology in public health care. Significant and rather universal problems faced by the medical profession and supporting services are identified. The required technology and specifications for its development and evaluation are delineated. Institutional relationships and collaboration needed to accomplish technology transfer are developed.
ABM clinical protocol #4: Mastitis, revised March 2014.
Amir, Lisa H
2014-06-01
A central goal of The Academy of Breastfeeding Medicine is the development of clinical protocols for managing common medical problems that may impact breastfeeding success. These protocols serve only as guidelines for the care of breastfeeding mothers and infants and do not delineate an exclusive course of treatment or serve as standards of medical care. Variations in treatment may be appropriate according to the needs of an individual patient.
Environmental Toxicity and Poor Cognitive Outcomes in Children and Adults
Liu, Jianghong; Lewis, Gary
2014-01-01
Extensive literature has already documented the deleterious effects of heavy metal toxins on the human brain and nervous system. These toxins, however, represent only a fraction of the environmental hazards that may pose harm to cognitive ability in humans. Lead and mercury exposure, air pollution, and organic compounds all have the potential to damage brain functioning yet remain understudied. In order to provide comprehensive and effective public health and health care initiatives for prevention and treatment, we must first fully understand the potential risks, mechanisms of action, and outcomes surrounding exposure to these elements in the context of neurocognitive ability. This article provides a review of the negative effects on cognitive ability of these lesser-studied environmental toxins, with an emphasis on delineating effects observed in child versus adult populations. Possible differential effects across sociodemographic populations (e.g., urban versus rural residents; ethnic minorities) are discussed as important contributors to risk assessment and the development of prevention measures. The public health and clinical implications are significant and offer ample opportunities for clinicians and researchers to help combat this growing problem. PMID:24645424
Combining Symbolic Computation and Theorem Proving: Some Problems of Ramanujan
1994-01-01
[Fragment] CMU-CS-94-103, "Combining symbolic computation and theorem proving: some problems of Ramanujan," Edmund Clarke, Xudong Zhao. ... Research and Development Center, Aeronautical Systems Division (AFSC), U.S. Air Force, Wright-Patterson AFB, Ohio 45433-6543, under Contract F33615-90-C-... 3. List of problems: the list of challenge... (Garbled summation identities and a distribution/availability stamp omitted.)
Partial Discharge Ultrasound Detection Using the Sagnac Interferometer System
Li, Xiaomin; Gao, Yan; Zhang, Hongjuan; Wang, Dong; Jin, Baoquan
2018-01-01
Partial discharge detection is crucial for electrical cable safety evaluation. The ultrasonic signals frequently generated in the partial discharge process contain important characteristic information. However, traditional ultrasonic transducers are easily subject to strong electromagnetic interference in environments with high voltages and strong magnetic fields. In order to overcome this problem, an optical fiber Sagnac interferometer system is proposed for partial discharge ultrasound detection. Optical fiber sensing and time-frequency analysis of the ultrasonic signals excited by the piezoelectric ultrasonic transducer is realized for the first time. The effective frequency band of the Sagnac interferometer system was measured to be up to 175 kHz with the help of a purpose-built 10 kV partial discharge simulator device. Using the cumulative histogram method, the characteristic ultrasonic frequency band of the partial discharges was between 28.9 kHz and 57.6 kHz for this optical fiber partial discharge detection system. This new ultrasound sensor can be used as an ideal ultrasonic source for the intrinsically safe detection of partial discharges in an explosive environment. PMID:29734682
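One simple reading of a cumulative-histogram band estimate is the frequency interval that contains a central fraction of the total spectral energy. This sketch is an assumption about the general idea, not the paper's exact procedure; the 90% fraction and variable names are illustrative:

```python
import numpy as np

# Illustrative sketch: estimate a characteristic frequency band as the
# interval holding the central fraction of cumulative spectral energy.
# The central_fraction value is an assumed parameter, not from the paper.

def characteristic_band(freqs, power, central_fraction=0.9):
    """freqs: ascending frequency bins; power: spectral energy per bin."""
    cum = np.cumsum(power) / np.sum(power)
    tail = (1.0 - central_fraction) / 2.0
    lo = freqs[np.searchsorted(cum, tail)]
    hi = freqs[np.searchsorted(cum, 1.0 - tail)]
    return lo, hi
```

Applied to the measured partial-discharge spectrum, an estimate of this kind would yield a band such as the reported 28.9-57.6 kHz interval.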
Gartner, Daniel; Zhang, Yiye; Padman, Rema
2018-06-01
Order sets are a critical component in hospital information systems that are expected to substantially reduce physicians' physical and cognitive workload and improve patient safety. Order sets represent time-interval-clustered order items, such as medications prescribed at hospital admission, that are administered to patients during their hospital stay. In this paper, we develop a mathematical programming model and an exact and a heuristic solution procedure with the objective of minimizing physicians' cognitive workload associated with prescribing order sets. Furthermore, we provide structural insights into the problem which lead us to a valid lower bound on the order set size. In a case study using order data on asthma patients with moderate complexity from a major pediatric hospital, we compare the hospital's current solution with the exact and heuristic solutions on a variety of performance metrics. Our computational results confirm our lower bound and reveal that a time-interval decomposition approach substantially reduces computation times for the mathematical program, as does a K-means clustering-based decomposition approach which, however, does not guarantee optimality because it violates the lower bound. Comparing the mathematical program with the current order set configuration in the hospital indicates that cognitive workload can be reduced by about 20.2% when between 1 and 5 order sets are allowed. The comparison of the K-means-based decomposition with the hospital's current configuration reveals a cognitive workload reduction of about 19.5%, again allowing between 1 and 5 order sets. We finally provide a decision support system to help practitioners analyze the current order set configuration, the results of the mathematical program, and the heuristic approach.
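The K-means-based decomposition can be pictured with a toy sketch: represent each patient's orders as a 0/1 item vector, cluster the vectors, and threshold each centroid to get a candidate order set. The deterministic initialization, the 0.5 threshold, and the function name are assumptions for illustration; this is not the paper's exact procedure.

```python
import numpy as np

def kmeans_order_sets(X, k, iters=50):
    """Toy K-means decomposition: rows of X are 0/1 vectors of ordered
    items per patient; each centroid, thresholded at 0.5, becomes a
    candidate order set (indices of included items)."""
    # deterministic init: first k distinct item-vectors (assumes k exist)
    centers = np.unique(X, axis=0)[:k].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return [np.flatnonzero(c >= 0.5) for c in centers]
```

As the abstract notes, such a clustering heuristic can violate the valid lower bound on order set size and so does not guarantee optimality.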
NASA Astrophysics Data System (ADS)
Hasanov, Alemdar; Kawano, Alexandre
2016-05-01
Two types of inverse source problems of identifying asynchronously distributed spatial loads governed by the Euler-Bernoulli beam equation $\rho(x)w_{tt}+\mu(x)w_{t}+(EI(x)w_{xx})_{xx}-T_{r}w_{xx}=\sum_{m=1}^{M}g_{m}(t)f_{m}(x)$, $(x,t)\in\Omega_{T}:=(0,l)\times(0,T)$, with hinged-clamped ends ($w(0,t)=w_{xx}(0,t)=0$, $w(l,t)=w_{x}(l,t)=0$, $t\in(0,T)$), are studied. Here $g_{m}(t)$ are linearly independent functions describing an asynchronous temporal loading, and $f_{m}(x)$ are the spatial load distributions. In the first identification problem the values $\nu_{k}(t)$, $k=\overline{1,K}$, of the deflection $w(x,t)$ are assumed to be known, as measured output data, in a neighbourhood of the finite set of points $P:=\{x_{k}\in(0,l),\ k=\overline{1,K}\}\subset(0,l)$, corresponding to the internal points of a continuous beam, for all $t\in(0,T)$. In the second identification problem the values $\theta_{k}(t)$, $k=\overline{1,K}$, of the slope $w_{x}(x,t)$ are assumed to be known, as measured output data, in a neighbourhood of the same set of points $P$ for all $t\in(0,T)$. These inverse source problems are defined subsequently as ISP1 and ISP2. The general purpose of this study is to develop mathematical concepts and tools capable of providing effective numerical algorithms for the solution of the considered class of inverse problems. Note that both measured output data $\nu_{k}(t)$ and $\theta_{k}(t)$ contain random noise. In the first part of the study we prove that each measured output data $\nu_{k}(t)$ and $\theta_{k}(t)$, $k=\overline{1,K}$, can uniquely determine the unknown functions $f_{m}\in H^{-1}(0,l)$, $m=\overline{1,M}$.
In the second part of the study we introduce the input-output operators $\mathcal{K}_{d}:L^{2}(0,T)\mapsto L^{2}(0,T)$, $(\mathcal{K}_{d}f)(t):=w(x,t;f)$, $x\in P$, with $f(x):=(f_{1}(x),\ldots,f_{M}(x))$, and $\mathcal{K}_{s}:L^{2}(0,T)\mapsto L^{2}(0,T)$, $(\mathcal{K}_{s}f)(t):=w_{x}(x,t;f)$, $x\in P$, corresponding to the problems ISP1 and ISP2, and then reformulate these problems as the operator equations $\mathcal{K}_{d}f=\nu$ and $\mathcal{K}_{s}f=\theta$, where $\nu(t):=(\nu_{1}(t),\ldots,\nu_{K}(t))$ and $\theta(t):=(\theta_{1}(t),\ldots,\theta_{K}(t))$. Since both measured output data contain random noise, we use the most prominent regularisation method, Tikhonov regularisation, introducing the regularised cost functionals $J_{1}^{\alpha}(f):=\frac{1}{2}\|\mathcal{K}_{d}f-\nu\|_{L^{2}(0,T)}^{2}+\frac{1}{2}\alpha\|f\|_{L^{2}(0,T)}^{2}$ and $J_{2}^{\alpha}(f):=\frac{1}{2}\|\mathcal{K}_{s}f-\theta\|_{L^{2}(0,T)}^{2}+\frac{1}{2}\alpha\|f\|_{L^{2}(0,T)}^{2}$. Using a priori estimates for the weak solution of the direct problem and the Tikhonov regularisation method combined with the adjoint problem approach, we prove that the Fréchet gradients $J_{1}'(f)$ and $J_{2}'(f)$ of both cost functionals can explicitly be derived via the corresponding weak solutions of adjoint problems and the known temporal loads $g_{m}(t)$. Moreover, we show that these gradients are Lipschitz continuous, which allows the use of convergent gradient-type iteration algorithms. Two applications of the proposed theory are presented. It is shown that solvability results for inverse source problems related to the synchronous loading case, with a single interior measured datum, are special cases of the results obtained for asynchronously distributed spatial loads.
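The gradient structure underlying the Tikhonov approach above can be written schematically. The component-wise adjoint formula below is an assumption about the standard adjoint-problem approach, stated in the abstract's notation rather than quoted from the paper.

```latex
\[
J^{\alpha}(f) = \tfrac{1}{2}\,\lVert \mathcal{K} f - \nu \rVert_{L^2(0,T)}^{2}
             + \tfrac{1}{2}\,\alpha\,\lVert f \rVert^{2},
\qquad
{J^{\alpha}}'(f) = \mathcal{K}^{*}\bigl(\mathcal{K} f - \nu\bigr) + \alpha f .
\]
\[
\bigl({J^{\alpha}}'(f)\bigr)_{m}(x)
  = \int_{0}^{T} g_{m}(t)\,\phi(x,t)\,\mathrm{d}t + \alpha f_{m}(x),
\qquad m = 1,\dots,M,
\]
```

where $\phi$ denotes the weak solution of the adjoint beam equation driven by the output residual; Lipschitz continuity of this gradient is what licenses the convergent gradient-type iterations mentioned above.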
Multiple indicator cokriging with application to optimal sampling for environmental monitoring
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Dowd, Peter A.
2005-02-01
A probabilistic solution to the problem of spatial interpolation of a variable at an unsampled location consists of estimating the local cumulative distribution function (cdf) of the variable at that location from values measured at neighbouring locations. As this distribution is conditional on the data available at neighbouring locations, it incorporates the uncertainty of the value of the variable at the unsampled location. Geostatistics provides a non-parametric solution to such problems via the various forms of indicator kriging. In a least squares sense indicator cokriging is theoretically the best estimator, but in practice its use has been inhibited by problems such as an increased number of violations of order relations constraints when compared with simpler forms of indicator kriging. In this paper, we describe a methodology and an accompanying computer program for estimating a vector of indicators by simple indicator cokriging, i.e. simultaneous estimation of the cdf for K different thresholds, $\{F(u,z_k),\ k=1,\ldots,K\}$, by solving a unique cokriging system for each location at which an estimate is required. This approach produces a variance-covariance matrix of the estimated vector of indicators which is used to fit a model to the estimated local cdf by logistic regression. This model is used to correct any violations of order relations and automatically ensures that all order relations are satisfied, i.e. the estimated cumulative distribution function $\hat{F}(u,z_k)$ satisfies $\hat{F}(u,z_k)\in[0,1]$ for all $z_k$, and $\hat{F}(u,z_k)\le\hat{F}(u,z_{k'})$ for $z_k<z_{k'}$.
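The order-relation constraints themselves can be illustrated with a correction simpler than the paper's logistic-regression fit: clip the estimates to [0, 1], then take a running maximum across the ordered thresholds. This is a common baseline fix, shown here as an assumption-labeled sketch, not the authors' method.

```python
import numpy as np

def correct_order_relations(cdf_estimates):
    """Force indicator-kriging cdf estimates at increasing thresholds
    to be a valid cdf: values in [0, 1] and non-decreasing."""
    f = np.clip(np.asarray(cdf_estimates, dtype=float), 0.0, 1.0)
    return np.maximum.accumulate(f)
```

The logistic-regression model described in the abstract goes further: it uses the cokriging variance-covariance matrix to produce a smooth corrected cdf rather than this piecewise clamp.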
H2/O2 three-body rates at high temperatures
NASA Technical Reports Server (NTRS)
Marinelli, William J.; Kessler, William J.; Carleton, Karen L.
1991-01-01
Hydrogen atoms are produced in the presence of excess O2, and the first-order decay is studied as a function of temperature and pressure in order to obtain the rate coefficient for the three-body reaction between H atoms and O2. Attention is focused on the kinetic scheme employed as well as the reaction cell and the photolysis and probe laser systems. A two-photon laser-induced fluorescence technique is employed to detect H atoms without optical-thickness or O2-absorption problems. Results confirm measurements reported previously for the H + O2 + N2 reaction at 300 K and extend these measurements to higher temperatures. Preliminary data indicate non-Arrhenius behavior of this reaction rate coefficient as a function of temperature. Measurements for the H + O2 + Ar reaction at 300 K give a rate coefficient of (2.1 ± 0.1) × 10^-32 cm^6 molecule^-2 s^-1.
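Extracting a three-body coefficient from a pseudo-first-order decay can be sketched as follows: fit ln[H](t) to a line, then divide the decay constant by the (excess, hence constant) O2 and bath-gas densities. The function name and numbers are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def three_body_rate(t, h_signal, o2_density, m_density):
    """Pseudo-first-order analysis: slope of ln[H] vs t gives k',
    and k3 = k' / ([O2][M]) for the reaction H + O2 + M."""
    slope, _intercept = np.polyfit(t, np.log(h_signal), 1)
    k_first = -slope                       # pseudo-first-order rate, s^-1
    return k_first / (o2_density * m_density)   # cm^6 molecule^-2 s^-1
```

With densities in molecule/cm^3 this returns the three-body coefficient in the same units as the 2.1 × 10^-32 value quoted above.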
Boundary methods for mode estimation
NASA Astrophysics Data System (ADS)
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computation, to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for both the MOG and k-means techniques is the Akaike Information Criterion (AIC).
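AIC-driven model order selection of the kind used as the stopping criterion above can be sketched with k-means. The spherical-Gaussian AIC surrogate AIC(k) = n·d·ln(RSS/(n·d)) + 2·k·d and the deterministic initialization are assumptions chosen for a self-contained illustration, not the paper's formulation.

```python
import numpy as np

def _kmeans_rss(X, k, iters=100):
    # deterministic init: evenly spaced samples of the data sorted by norm
    order = np.argsort(np.linalg.norm(X, axis=1))
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centers = X[order[idx]].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return ((X - centers[labels]) ** 2).sum()

def aic_mode_estimate(X, k_max=5):
    """Pick the model order minimizing a simplified AIC over k-means
    fits with k = 1 .. k_max clusters."""
    n, d = X.shape
    aic = [n * d * np.log(_kmeans_rss(X, k) / (n * d)) + 2 * k * d
           for k in range(1, k_max + 1)]
    return int(np.argmin(aic)) + 1
```

On data with two well-separated clusters this selects two modes, mirroring the role AIC plays as the stopping criterion for the MOG and k-means baselines in the paper.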
Abdel-Hafiez, M.; Zhao, X.-M.; Kordyuk, A. A.; Fang, Y.-W.; Pan, B.; He, Z.; Duan, C.-G.; Zhao, J.; Chen, X.-J.
2016-01-01
In low-dimensional electron systems, charge density waves (CDW) and superconductivity are two of the most fundamental collective quantum phenomena. For all known quasi-two-dimensional superconductors, the origin and exact boundary of the electronic orderings and superconductivity remain open problems. Through transport and thermodynamic measurements, we report the field-temperature phase diagram of 2H-TaS2 single crystals. We show that the superconducting transition temperature (Tc) increases by one order of magnitude, from 0.98 K to 9.15 K at 8.7 GPa, where the superconducting transition becomes very sharp. Additionally, at 8.7 GPa we observe a suppression of the CDW ground state, with critically small Fermi surfaces. Below Tc, the lattice of magnetic flux lines melts from a solid-like state into a broad vortex-liquid phase region. Our measurements indicate an unconventional s-wave-like picture with two energy gaps, evidencing its multi-band nature.
Bathymetric Surveys of Lake Arthur and Raccoon Lake, Pennsylvania, June 2007
Hittle, Clinton D.; Ruby, A. Thomas
2008-01-01
In spring of 2007, bathymetric surveys of two Pennsylvania State Park lakes were performed to collect accurate data sets of lake-bed elevations and to develop methods and techniques to conduct similar surveys across the state. The lake-bed elevations and associated geographical position data can be merged with land-surface elevations acquired through Light Detection and Ranging (LIDAR) techniques. Lake Arthur in Butler County and Raccoon Lake in Beaver County were selected for this initial data-collection activity. In order to establish accurate water-surface elevations during the surveys, benchmarks referenced to NAVD 88 were established on land at each lake by use of differential global positioning system (DGPS) surveys. Bathymetric data were collected using a single beam, 210 kilohertz (kHz) echo sounder and were coupled with the DGPS position data utilizing a computer software package. Transects of depth data were acquired at predetermined intervals on each lake, and the shoreline was delineated using a laser range finder and compass module. Final X, Y, Z coordinates of the geographic positions and lake-bed elevations were referenced to NAD 83 and NAVD 88 and are available to create bathymetric maps of the lakes.
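The merge of echo-sounder depths with the DGPS positions and the benchmarked water-surface elevation reduces to a simple transformation: lake-bed elevation is water-surface elevation (NAVD 88) minus sounded depth. The helper below is an illustrative sketch, not the survey software.

```python
def lakebed_elevations(ws_elev_navd88, soundings):
    """Convert (x, y, depth) DGPS-tagged soundings to (x, y, z)
    lake-bed coordinates: z = water-surface elevation - depth."""
    return [(x, y, ws_elev_navd88 - d) for (x, y, d) in soundings]
```

The resulting X, Y, Z triples are what get referenced to NAD 83 / NAVD 88 and merged with LIDAR land-surface elevations for bathymetric mapping.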
Wang, P; Chen, S-H; Hung, W-C; Paul, C; Zhu, F; Guan, P-P; Huso, DL; Kontrogianni-Konstantopoulos, A; Konstantopoulos, K
2015-01-01
Interstitial fluid flow in and around the tumor tissue is a physiologically relevant mechanical signal that regulates intracellular signaling pathways throughout the tumor. Yet, the effects of interstitial flow and associated fluid shear stress on the tumor cell function have been largely overlooked. Using in vitro bioengineering models in conjunction with molecular cell biology tools, we found that fluid shear (2 dyn/cm2) markedly upregulates matrix metalloproteinase 12 (MMP-12) expression and its activity in human chondrosarcoma cells. MMP-12 expression is induced in human chondrocytes during malignant transformation. However, the signaling pathway regulating MMP-12 expression and its potential role in human chondrosarcoma cell invasion and metastasis have yet to be delineated. We discovered that fluid shear stress induces the synthesis of insulin growth factor-2 (IGF-2) and vascular endothelial growth factor (VEGF) B and D, which in turn transactivate MMP-12 via PI3-K, p38 and JNK signaling pathways. IGF-2-, VEGF-B- or VEGF-D-stimulated chondrosarcoma cells display markedly higher migratory and invasive potentials in vitro, which are blocked by inhibiting MMP-12, PI3-K, p38 or JNK activity. Moreover, recombinant human MMP-12 or MMP-12 overexpression can potentiate chondrosarcoma cell invasion in vitro and the lung colonization in vivo. By reconstructing and delineating the signaling pathway regulating MMP-12 activation, potential therapeutic strategies that interfere with chondrosarcoma cell invasion may be identified.
Year 2000 Readiness Kit: A Compilation of Y2K Resources for Schools, Colleges and Universities.
ERIC Educational Resources Information Center
Department of Education, Washington, DC.
This kit was developed to assist the postsecondary education community's efforts to resolve the Year 2000 (Y2K) computer problem. The kit includes a description of the Y2K problem, an assessment of the readiness of colleges and universities, a checklist for institutions, a Y2K communications strategy, articles on addressing the problem in academic…
Inoue, Yuuji; Yoneyama, Masami; Nakamura, Masanobu; Takemura, Atsushi
2018-06-01
The two-dimensional Cartesian turbo spin-echo (TSE) sequence is widely used in routine clinical studies, but it is sensitive to respiratory motion. We investigated the k-space orders in Cartesian TSE that can effectively reduce motion artifacts. The purpose of this study was to demonstrate the relationship between k-space order and degree of motion artifacts using a moving phantom. We compared the degree of motion artifacts between linear and asymmetric k-space orders. The actual spacing of ghost artifacts in the asymmetric order was doubled compared with that in the linear order in the free-breathing situation. The asymmetric order clearly showed less sensitivity to incomplete breath-hold at the latter half of the imaging period. Because of the actual number of partitions of the k-space and the temporal filling order, the asymmetric k-space order of Cartesian TSE was superior to the linear k-space order for reduction of ghosting motion artifacts.
An evolution in interdisciplinary competencies to prevent and manage patient violence.
Morton, Paula G
2002-01-01
Patient violence is a growing problem in healthcare institutions. Incidents of violence lead to injuries and increased operating costs. An innovative organizational approach to this problem is the inclusion of interdisciplinary competency-based staff education and practice as a key component of a comprehensive violence prevention program. Interdisciplinary competencies include a variety of behavioral responses aimed at prevention: environmental, interpersonal, and physical interventions and postvention techniques for aggression and violence. Methods to maintain, monitor, document, and improve staff performance and skills are delineated. Organizational investment in such interdisciplinary competency-based education and practice evolves over time. Results include fewer incidents and injuries and enhanced interdisciplinary cooperation.
lsjk—a C++ library for arbitrary-precision numeric evaluation of the generalized log-sine functions
NASA Astrophysics Data System (ADS)
Kalmykov, M. Yu.; Sheplyakov, A.
2005-10-01
Generalized log-sine functions $\mathrm{Ls}_j^{(k)}(\theta)$ appear in the higher-order ε-expansion of different Feynman diagrams. We present an algorithm for the numerical evaluation of these functions for real arguments. This algorithm is implemented as a C++ library with arbitrary-precision arithmetic for integer $0\le k\le 9$ and $j\ge 2$. Some new relations and representations of the generalized log-sine functions are given.
Program summary
Title of program: lsjk
Catalogue number: ADVS
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVS
Program obtained from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing terms: GNU General Public License
Computers: all
Operating systems: POSIX
Programming language: C++
Memory required to execute: depending on the complexity of the problem; at least 32 MB RAM recommended
No. of lines in distributed program, including testing data, etc.: 41 975
No. of bytes in distributed program, including testing data, etc.: 309 156
Distribution format: tar.gz
Other programs called: the CLN library for arbitrary-precision arithmetic is required, at version 1.1.5 or greater
External files needed: none
Nature of the physical problem: numerical evaluation of the generalized log-sine functions for real argument in the region $0<\theta<\pi$; these functions appear in Feynman integrals
Method of solution: series representation for real argument in the region $0<\theta<\pi$
Restrictions on the complexity of the problem: limited up to $\mathrm{Ls}_j^{(9)}(\theta)$, where $j$ is an arbitrary integer; thus all functions up to weight 12 in the region $0<\theta<\pi$ can be evaluated. The algorithm can be extended to higher values of $k$ ($k>9$) without modification
Typical running time: depends on the complexity of the problem; see text below.
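The series approach can be illustrated on the lowest generalized log-sine function: $\mathrm{Ls}_2(\theta)$ coincides with the Clausen function $\mathrm{Cl}_2(\theta)=\sum_{n\ge 1}\sin(n\theta)/n^2$. The plain-Python sketch below is an illustration, not the lsjk library itself, which uses arbitrary-precision CLN arithmetic.

```python
import math

def clausen2(theta, terms=200000):
    """Cl2(theta) = sum_{n>=1} sin(n*theta)/n^2, equal to the lowest
    generalized log-sine function Ls_2(theta); truncation error is
    bounded by roughly 1/terms."""
    return sum(math.sin(n * theta) / (n * n) for n in range(1, terms + 1))
```

For example, Cl2(π/2) is Catalan's constant, approximately 0.9159655942.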
Attanasi, E.D.; Root, David H.
1993-01-01
This circular presents a summary of the geographic location, amount, and results of petroleum exploration, including an atlas showing explored and delineated prospective areas through 1990. The data show that wildcat well drilling has continued through the last decade to expand the prospective area by about 40,000 to 50,000 square miles per year. However, the area delineated by 1970, which represents only about one-third of the prospective area delineated to date, contains about 80 percent of the oil discovered to date. This discovery distribution suggests that, from an overall perspective, the industry was successful in delineating the most productive areas early. The price increases of the 1970's and 1980's allowed the commercial exploration and development of fields in high-cost areas, such as the North Sea and the Campos Basin, Brazil. Data on natural-gas discoveries also indicate that gas will be supplying an increasing share of the worldwide energy market. The size distribution of petroleum provinces is highly skewed. The skewed distribution and the stability in province size orderings suggest that intense exploration in identified provinces will not change the distribution of oil within the study area. Although evidence of the field-growth phenomenon outside the United States and Canada is presented, the data are not yet reliable enough for projecting future growth. The field-growth phenomenon implies not only that recent discoveries are substantially understated, but that field growth could become the dominant source of additions to proved reserves in the future.
Carrington, Melinda J; Kok, Simone; Jansen, Kiki; Stewart, Simon
2013-08-01
A sustained epidemic of cardiovascular disease and related risk factors is a global phenomenon contributing significantly to premature deaths and costly morbidity. Preventative strategies across the full continuum of life, from a population to individual perspective, are not optimally applied. This paper describes a simple and adaptable 'traffic-light' system we have developed to systematically perform individual risk and need delineation in order to 'titrate' the intensity and frequency of healthcare intervention in a cost-effective manner. The GARDIAN (Green Amber Red Delineation of Risk and Need) system is an individual assessment of risk and need that modulates the frequency and intensity of future healthcare intervention. Individual assessment of risk and need for ongoing intervention and support is determined with reference to three domains: (1) clinical stability, (2) gold-standard management, and (3) a broader, holistic assessment of individual circumstance. This can be applied from a primary prevention, secondary prevention, or chronic disease management perspective. Our experience with applying and validating GARDIAN to titrate healthcare resources according to need has been extensive to date, with >5000 individuals profiled in a host of clinical settings. A series of clinical randomized trials will determine the impact of the GARDIAN system on important indices of healthcare utilization and health status. The GARDIAN model to delineating risk and need for varied intensity of management shows strong potential to cost effectively improve health outcomes for both individuals at risk of heart disease and those with established heart disease.
Using the nursing process to implement a Y2K computer application.
Hobbs, C F; Hardinge, T T
2000-01-01
Because of the coming year 2000, many hospitals assessed the need to upgrade their order entry systems. At Somerset Medical Center, a training team divided the transition into phases and used a modified version of the nursing process to implement the new program. The entire process required fewer than 6 months and was relatively problem-free. This successful transition was aided by the nursing process, the training team, and innovative educational techniques.
USSR and Eastern Europe Scientific Abstracts, Electronics and Electrical Engineering, Number 33.
1977-09-27
[OCR fragments from scanned abstracts; recoverable details follow.] One abstract reduces its problem to an infinite system of linear homogeneous algebraic equations leading to Mathieu functions of the k-th order, with a convergent solution. Another, treating the cylinder walls as infinitesimally thin ideal conductors, reduces the problem to a system of Fredholm linear algebraic equations of the second kind. A further item: "Expected Developments of Transistorized Low-Noise Microwave Amplifiers", Tallo, Anton; Prague, SDELOVACI TECHNIKA, in Czech, Vol 25, No 2, Feb 77, pp 47-49.
Calculation of Moment Matrix Elements for Bilinear Quadrilaterals and Higher-Order Basis Functions
2016-01-06
[OCR fragments from the report; recoverable details follow.] These methods are known as boundary integral equation (BIE) methods, and the present study falls into this category. The numerical solution of the BIE involves iterated integrals; the inner integral contains the product of the free-space Green's function for the Helmholtz equation and an appropriate basis function. Cited resources include the WIPL-D website (http://www.wipl-d.com/) and Y. Zhang and T. K. Sarkar, "Parallel Solution of Integral Equation-Based EM Problems in the Frequency Domain".
PseKNC: a flexible web server for generating pseudo K-tuple nucleotide composition.
Chen, Wei; Lei, Tian-Yu; Jin, Dian-Chuan; Lin, Hao; Chou, Kuo-Chen
2014-07-01
The pseudo oligonucleotide composition, or pseudo K-tuple nucleotide composition (PseKNC), can be used to represent a DNA or RNA sequence with a discrete model or vector yet still keep considerable sequence order information, particularly the global or long-range sequence order information, via the physicochemical properties of its constituent oligonucleotides. Therefore, the PseKNC approach may hold very high potential for enhancing the power in dealing with many problems in computational genomics and genome sequence analysis. However, dealing with different DNA or RNA problems may need different kinds of PseKNC. Here, we present a flexible and user-friendly web server for PseKNC (at http://lin.uestc.edu.cn/pseknc/default.aspx) by which users can easily generate many different modes of PseKNC according to their need by selecting various parameters and physicochemical properties. Furthermore, for the convenience of the vast majority of experimental scientists, a step-by-step guide is provided on how to use the current web server to generate their desired PseKNC without the need to follow the complicated mathematical equations, which are presented in this article just for the integrity of PseKNC formulation and its development. It is anticipated that the PseKNC web server will become a very useful tool in computational genomics and genome sequence analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
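The plain K-tuple composition underlying PseKNC can be sketched as a normalized k-mer frequency vector; the "pseudo" components, which add physicochemical correlation terms to preserve long-range sequence order, are omitted here. The function name is an illustration, not the web server's code.

```python
from collections import Counter
from itertools import product

def ktuple_composition(seq, k=2):
    """Normalized K-tuple (k-mer) frequency vector over a DNA sequence,
    indexed in a fixed order over all 4**k possible k-tuples."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    counts = Counter(kmers)
    keys = [''.join(p) for p in product('ACGT', repeat=k)]
    return {key: counts[key] / len(kmers) for key in keys}
```

PseKNC then appends lambda-tier correlation factors, weighted by physicochemical properties of the constituent oligonucleotides, to this base vector.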
Local and Cumulative Impervious Cover of Massachusetts Stream Basins
Brandt, Sara L.; Steeves, Peter A.
2009-01-01
Impervious surfaces such as paved roads, parking lots, and building roofs can affect the natural streamflow patterns and ecosystems of nearby streams. This dataset summarizes the percentage of impervious area for watersheds across Massachusetts by using a newly available statewide 1-m binary raster dataset of impervious surface for 2005. In order to accurately capture the wide spatial variability of impervious surface, it was necessary to delineate a new set of finely discretized basin boundaries for Massachusetts. This new set of basins was delineated at a scale finer than that of the existing 12-digit Hydrologic Unit Code basins (HUC-12s) of the national Watershed Boundary Dataset. The dataset consists of three GIS shapefiles. The Massachusetts nested subbasins and the hydrologic units data layers consist of topographically delineated boundaries and their associated percentage of impervious cover for all of Massachusetts except Cape Cod, the Islands, and the Plymouth-Carver region. The Massachusetts groundwater-contributing areas data layer consists of groundwater contributing-area boundaries for streams and coastal areas of Cape Cod and the Plymouth-Carver region. These boundaries were delineated by using groundwater-flow models previously published by the U.S. Geological Survey. Subbasin and hydrologic unit boundaries were delineated statewide with the exception of Cape Cod and the Plymouth-Carver Region. For the purpose of this study, a subbasin is defined as the entire drainage area upstream of an outlet point. Subbasins draining to multiple outlet points on the same stream are nested. That is, a large downstream subbasin polygon comprises all of the smaller upstream subbasin polygons. A hydrologic unit is the intervening drainage area between a given outlet point and the outlet point of the next upstream unit (Fig. 1). Hydrologic units divide subbasins into discrete, nonoverlapping areas. 
Each hydrologic unit corresponds to a subbasin delineated from the same outlet point; the hydrologic unit and the subbasin share the same unique identifier attribute. Because the same set of outlet points was used for the delineation of subbasins and hydrologic units, the linework for both data layers is identical; however, polygon attributes differ because for a given outlet point, the subbasin polygon area is the sum of all the upstream hydrologic units. Impervious surface summarized for a subbasin represents the percentage of impervious surface area of the entire upstream watershed, whereas the impervious surface for a hydrologic unit represents the percentage of impervious surface area for the intervening drainage area between two outlet points.
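The nesting relationship described above, where a subbasin's impervious percentage is the area-weighted aggregate of all upstream hydrologic units, can be sketched with a hypothetical data model: each unit stores its area and local impervious percentage plus a pointer to the next unit downstream.

```python
def subbasin_impervious(units, downstream):
    """Aggregate hydrologic units into nested subbasins.
    units: id -> (area, local impervious percent)
    downstream: id -> id of the next unit downstream (None at outlet)
    Returns id -> impervious percent of the entire upstream subbasin."""
    tot_area = {u: 0.0 for u in units}
    imp_area = {u: 0.0 for u in units}
    # each unit contributes its area to every subbasin outlet at or below it
    for u, (area, pct) in units.items():
        node = u
        while node is not None:
            tot_area[node] += area
            imp_area[node] += area * pct / 100.0
            node = downstream[node]
    return {u: 100.0 * imp_area[u] / tot_area[u] for u in units}
```

Because the same outlet points define both layers, the hydrologic-unit percentages are local while the subbasin percentages accumulate downstream, exactly as the dataset description states.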
Alternate forms of the associated Legendre functions for use in geomagnetic modeling.
Alldredge, L.R.; Benton, E.R.
1986-01-01
An inconvenience attending traditional use of associated Legendre functions in global modeling is that the functions are not separable with respect to their two indices (order and degree). In 1973 Merilees suggested a way to avoid the problem by showing that associated Legendre functions of order m and degree m+k can be expressed in terms of elementary functions. This note calls attention to some possible gains in time savings and accuracy in geomagnetic modeling based upon this form. For this purpose, expansions of associated Legendre polynomials in terms of sines and cosines of multiple angles are displayed up to degree and order 10. Examples are also given explaining how some surface spherical harmonics can be transformed into true Fourier series for selected polar great-circle paths.
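A concrete instance of such an expansion (a standard identity supplied here as an illustration, not taken from the note itself):

```latex
\[
P_2^2(\cos\theta) \;=\; 3\,(1-\cos^2\theta) \;=\; 3\sin^2\theta
\;=\; \tfrac{3}{2}\,\bigl(1 - \cos 2\theta\bigr),
\]
```

so a degree-2, order-2 associated Legendre function evaluated along a polar great circle is a finite Fourier series in $\theta$, which is what permits true Fourier-series representations on such paths.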
Temperature oscillation suppression of GM cryocooler
NASA Astrophysics Data System (ADS)
Okidono, K.; Oota, T.; Kurihara, H.; Sumida, T.; Nishioka, T.; Kato, H.; Matsumura, M.; Sasaki, O.
2012-12-01
GM cryocooler is a convenient refrigerator for achieving low temperatures of about 4 K, but it is not suitable for precise measurements because of its large temperature oscillation, typically about 0.3 K. To resolve this problem, we have developed an adapter (He-pot) with as simple a structure as possible. From thermodynamic considerations, both the heat capacity and the thermal conductance should be large in order to reduce the temperature oscillation without compromising cooling power. The optimal structure of the He-pot is a copper cylinder filled with high-pressure He gas at room temperature. This can reduce the temperature oscillation to less than 10 mK below a certain temperature TH without compromising cooling power; TH is 3.8 K and 4.5 K for He-gas filling pressures of 90 and 60 atm, respectively. By using this He-pot, the GM cryocooler can be applied to precise physical-property measurements and THz detection.
Developing index maps of water-harvest potential in Africa
Senay, G.B.; Verdin, J.P.
2004-01-01
The food security problem in Africa is tied to the small farmer, whose subsistence farming relies heavily on rain-fed agriculture. A dry spell lasting two to three weeks can cause a significant yield reduction. A small-scale irrigation scheme from small-capacity ponds can alleviate this problem. This solution would require a water harvest mechanism at a farm level. In this study, we looked at the feasibility of implementing such a water harvest mechanism in drought-prone parts of Africa. A water balance study was conducted at different watershed levels. Runoff (watershed yield) was estimated using the SCS curve number technique and satellite-derived rainfall estimates (RFE). Watersheds were delineated from the Africa-wide HYDRO-1K digital elevation model (DEM) data set in a GIS environment. Annual runoff volumes that can potentially be stored in a pond during storm events were estimated as the product of the watershed area and runoff excess estimated from the SCS Curve Number method. Estimates were made for seepage and net evaporation losses. A series of water harvest index maps were developed based on a combination of factors that took into account the availability of runoff, evaporation losses, population density, and the required watershed size needed to fill a small storage reservoir that can be used to alleviate water stress during a crop growing season. This study presents Africa-wide water-harvest index maps that could be used for conducting feasibility studies at a regional scale in assessing the relative differences in runoff potential between regions for the possibility of using ponds as a water management tool. © 2004 American Society of Agricultural Engineers.
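The SCS Curve Number runoff estimate has a standard closed form, sketched here in English units with the conventional initial abstraction Ia = 0.2S. This is the textbook formula, not the study's full water-balance code, which also handles satellite rainfall inputs, seepage, and evaporation losses.

```python
def scs_runoff(p_in, cn):
    """SCS Curve Number direct runoff depth (inches) for storm
    rainfall p_in (inches): S = 1000/CN - 10,
    Q = (P - 0.2 S)^2 / (P + 0.8 S) when P > 0.2 S, else 0."""
    s = 1000.0 / cn - 10.0          # potential maximum retention, in
    ia = 0.2 * s                    # initial abstraction, in
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in - ia + s)
```

Multiplying this depth by the watershed area yields the storm runoff volume that could be captured in a pond, as in the feasibility maps above.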
Parametric mapping of [18F]fluoromisonidazole positron emission tomography using basis functions.
Hong, Young T; Beech, John S; Smith, Rob; Baron, Jean-Claude; Fryer, Tim D
2011-02-01
In this study, we present a basis function method (BAFPIC) for voxelwise calculation of kinetic parameters (K1, k2, k3, Ki) and blood volume using an irreversible two-tissue compartment model. BAFPIC was applied to rat ischaemic stroke micro-positron emission tomography data acquired with the hypoxia tracer [18F]fluoromisonidazole because irreversible two-tissue compartmental modelling provided good fits to data from both hypoxic and normoxic tissue. Simulated data show that BAFPIC produces kinetic parameters with significantly lower variability and bias than nonlinear least squares (NLLS) modelling in hypoxic tissue; the advantage of BAFPIC over NLLS is less pronounced in normoxic tissue. Ki determined from BAFPIC has lower variability than that from Patlak-Gjedde graphical analysis (PGA) by up to 40%, and lower bias except in normoxic tissue at mid-to-high noise levels. Consistent with the simulation results, BAFPIC parametric maps of real data suffer less noise-induced variability than do NLLS and PGA maps. Delineation of hypoxia on BAFPIC k3 maps is aided by low variability in normoxic tissue, which matches that in Ki maps. BAFPIC produces Ki values that correlate well with those from PGA (r2 = 0.93 to 0.97; slope 0.99 to 1.05; absolute intercept < 0.00002 mL/g per min). BAFPIC is a computationally efficient method of determining parametric maps with low bias and variance.
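For reference, the Patlak-Gjedde graphical analysis (PGA) that BAFPIC is compared against can be sketched as a simple linear fit (a generic sketch, not the authors' implementation; the choice of which late frames to fit is an assumption):

```python
import numpy as np

def patlak_ki(t, ct, cp):
    """Patlak-Gjedde graphical analysis for an irreversible tracer:
    at late times, Ct(t)/Cp(t) ~ Ki * (int_0^t Cp dt)/Cp(t) + V0.
    Returns (Ki, V0) from a least-squares line through the late points.
    t: frame times; ct: tissue activity; cp: plasma input function."""
    # cumulative integral of the input function (trapezoidal rule)
    cumint = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x = cumint / cp                    # "Patlak time"
    y = ct / cp
    late = slice(len(t) // 2, None)    # fit the late, linear portion (assumed cutoff)
    ki, v0 = np.polyfit(x[late], y[late], 1)
    return ki, v0
```

The slope Ki is the net influx rate constant that the paper's Ki maps report; BAFPIC estimates the same quantity with lower variability by fitting the full compartment model via basis functions.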
An Implicit Enumeration Algorithm with Binary-Valued Constraints.
1986-03-01
problems is the National Basketball Association (NBA) scheduling problem developed by Bean (1980), as discussed in detail in the Appendix. [Remaining excerpt garbled in extraction; recoverable fragments: Appendix, §A.1 Formulation, "The NBA Scheduling Problem"; §6.2.3: the last set of test problems involves the NBA scheduling problem, with a detailed description given there.]
Murphy, A B
2004-01-01
A number of assessments of electron temperatures in atmospheric-pressure arc plasmas using Thomson scattering of laser light have recently been published. However, in this method, the electron temperature is perturbed due to strong heating of the electrons by the incident laser beam. This heating was taken into account by measuring the electron temperature as a function of the laser pulse energy, and linearly extrapolating the results to zero pulse energy to obtain an unperturbed electron temperature. In the present paper, calculations show that the laser heating process has a highly nonlinear dependence on laser power, and that the usual linear extrapolation leads to an overestimate of the electron temperature, typically by 5000 K. The nonlinearity occurs due to the strong dependence on electron temperature of the absorption of laser energy and of the collisional and radiative cooling of the heated electrons. There are further problems in deriving accurate electron temperatures from laser scattering due to necessary averages that have to be made over the duration of the laser pulse and over the finite volume from which laser light is scattered. These problems are particularly acute in measurements in which the laser beam is defocused in order to minimize laser heating; this can lead to the derivation of electron temperatures that are significantly greater than those existing anywhere in the scattering volume. It was concluded from the earlier Thomson scattering measurements that there were significant deviations from equilibrium between the electron and heavy-particle temperatures at the center of arc plasmas of industrial interest. The present calculations indicate that such deviations are only of the order of 1000 K in 20 000 K, so that the usual approximation that arc plasmas are approximately in local thermodynamic equilibrium still applies.
Hidden magnetism and quantum criticality in the heavy fermion superconductor CeRhIn5.
Park, Tuson; Ronning, F; Yuan, H Q; Salamon, M B; Movshovich, R; Sarrao, J L; Thompson, J D
2006-03-02
With only a few exceptions that are well understood, conventional superconductivity does not coexist with long-range magnetic order (for example, ref. 1). Unconventional superconductivity, on the other hand, develops near a phase boundary separating magnetically ordered and magnetically disordered phases. A maximum in the superconducting transition temperature T(c) develops where this boundary extrapolates to zero Kelvin, suggesting that fluctuations associated with this magnetic quantum-critical point are essential for unconventional superconductivity. Invariably, though, unconventional superconductivity masks the magnetic phase boundary when T < T(c), preventing proof of a magnetic quantum-critical point. Here we report specific-heat measurements of the pressure-tuned unconventional superconductor CeRhIn5 in which we find a line of quantum-phase transitions induced inside the superconducting state by an applied magnetic field. This quantum-critical line separates a phase of coexisting antiferromagnetism and superconductivity from a purely unconventional superconducting phase, and terminates at a quantum tetracritical point where the magnetic field completely suppresses superconductivity. The T --> 0 K magnetic field-pressure phase diagram of CeRhIn5 is well described with a theoretical model developed to explain field-induced magnetism in the high-T(c) copper oxides, but in which a clear delineation of quantum-phase boundaries has not been possible. These experiments establish a common relationship among hidden magnetism, quantum criticality and unconventional superconductivity in copper oxides and heavy-electron systems such as CeRhIn5.
Penalized unsupervised learning with outliers
Witten, Daniela M.
2013-01-01
We consider the problem of performing unsupervised learning in the presence of outliers – that is, observations that do not come from the same distribution as the rest of the data. It is known that in this setting, standard approaches for unsupervised learning can yield unsatisfactory results. For instance, in the presence of severe outliers, K-means clustering will often assign each outlier to its own cluster, or alternatively may yield distorted clusters in order to accommodate the outliers. In this paper, we take a new approach to extending existing unsupervised learning techniques to accommodate outliers. Our approach is an extension of a recent proposal for outlier detection in the regression setting. We allow each observation to take on an “error” term, and we penalize the errors using a group lasso penalty in order to encourage most of the observations’ errors to exactly equal zero. We show that this approach can be used in order to develop extensions of K-means clustering and principal components analysis that result in accurate outlier detection, as well as improved performance in the presence of outliers. These methods are illustrated in a simulation study and on two gene expression data sets, and connections with M-estimation are explored. PMID:23875057
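The error-term idea described above can be sketched for K-means (a toy illustration of the penalized objective sum_i ||x_i - mu_{c_i} - e_i||^2 + lam * sum_i ||e_i||_2, assumed here for the sketch; this is not the paper's exact algorithm, and the initialization and stopping rule are simplifications):

```python
import numpy as np

def outlier_kmeans(X, k, lam, n_iter=50, centers=None, seed=0):
    """K-means with per-observation error terms e_i penalized by a group
    lasso.  Alternates (a) cluster assignment and mean updates on X - E
    with (b) group soft-thresholding of the residuals to update E."""
    rng = np.random.default_rng(seed)
    if centers is None:
        centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    else:
        centers = np.asarray(centers, dtype=float).copy()
    E = np.zeros_like(X, dtype=float)
    for _ in range(n_iter):
        Z = X - E                                      # "cleaned" data
        d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Z[labels == j].mean(0)
        R = X - centers[labels]                        # residuals
        norms = np.linalg.norm(R, axis=1, keepdims=True)
        # group soft-threshold: e_i = max(0, 1 - (lam/2)/||r_i||) * r_i
        scale = np.maximum(0.0, 1.0 - (lam / 2.0) / np.maximum(norms, 1e-12))
        E = scale * R
    return labels, centers, E

# Observations with ||e_i|| > 0 after convergence are flagged as outliers.
```

Most observations get e_i = 0 exactly (they lie within the threshold of their cluster mean), while gross outliers absorb their distance into e_i instead of distorting the cluster means.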
NASA Astrophysics Data System (ADS)
Camporese, M.; Cassiani, G.; Deiana, R.; Salandin, P.
2011-12-01
In recent years geophysical methods have become increasingly popular for hydrological applications. Time-lapse electrical resistivity tomography (ERT) represents a potentially powerful tool for subsurface solute transport characterization since a full picture of the spatiotemporal evolution of the process can be obtained. However, the quantitative interpretation of tracer tests is difficult because of the uncertainty related to the geoelectrical inversion, the constitutive models linking geophysical and hydrological quantities, and the a priori unknown heterogeneous properties of natural formations. Here an approach based on the Lagrangian formulation of transport and the ensemble Kalman filter (EnKF) data assimilation technique is applied to assess the spatial distribution of hydraulic conductivity K by incorporating time-lapse cross-hole ERT data. Electrical data consist of three-dimensional cross-hole ERT images generated for a synthetic tracer test in a heterogeneous aquifer. Under the assumption that the solute spreads as a passive tracer, for high Peclet numbers the spatial moments of the evolving plume are dominated by the spatial distribution of the hydraulic conductivity. The assimilation of the electrical conductivity 4D images allows updating of the hydrological state as well as the spatial distribution of K. Thus, delineation of the tracer plume and estimation of the local aquifer heterogeneity can be achieved at the same time by means of this interpretation of time-lapse electrical images from tracer tests. We assess the impact on the performance of the hydrological inversion of (i) the uncertainty inherently affecting ERT inversions in terms of tracer concentration and (ii) the choice of the prior statistics of K. Our findings show that realistic ERT images can be integrated into a hydrological model even within an uncoupled inverse modeling framework. 
The reconstruction of the hydraulic conductivity spatial distribution is satisfactory in the portion of the domain directly covered by the passage of the tracer. Aside from the issues commonly affecting inverse models, the proposed approach is subject to the problem of filter inbreeding, and the retrieval performance is sensitive to the choice of the prior geostatistical parameters of K.
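The ensemble Kalman filter update at the core of the approach can be sketched generically as follows (a standard stochastic EnKF analysis step with a linear observation operator; this is an assumption-laden sketch, not the authors' coupled hydrogeophysical implementation):

```python
import numpy as np

def enkf_update(ensemble, H, y_obs, obs_err_std, rng):
    """One stochastic EnKF analysis step.
    ensemble: (n_ens, n_state) state realizations (e.g. log-K values
    stacked with concentrations); H: (n_obs, n_state) linear observation
    operator; y_obs: (n_obs,) observed (e.g. ERT-derived) values."""
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(0)                 # state anomalies
    Y = X @ H.T                                     # predicted-observation anomalies
    cov_xy = X.T @ Y / (n_ens - 1)
    cov_yy = Y.T @ Y / (n_ens - 1)
    R = (obs_err_std ** 2) * np.eye(len(y_obs))
    K = cov_xy @ np.linalg.inv(cov_yy + R)          # Kalman gain
    # perturbed observations (stochastic EnKF variant)
    perturbed = y_obs + rng.normal(0.0, obs_err_std, size=(n_ens, len(y_obs)))
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T
```

Assimilating each time-lapse ERT image with such an update corrects both the simulated tracer state and the K field, which is how the plume delineation and the heterogeneity estimate are obtained simultaneously.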
Alkan, Ozlem; Kizilkiliç, Osman; Yildirim, Tülin; Alibek, Sedat
2009-06-01
We compared the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER, BLADE) MR technique with the spin echo (SE) technique for evaluation of artifacts and for detection and delineation of brain lesions. Contrast-enhanced T1-weighted fluid-attenuated inversion recovery imaging with the BLADE technique (CE T1W-FLAIR BLADE) and contrast-enhanced T1-weighted SE imaging (CE T1W-SE) were performed in 50 patients with intracranial enhancing lesions. These techniques were compared by two neuroradiologists for qualitative variables (artifacts, lesion detectability, lesion delineation from adjacent structures, and preferred imaging technique) and for quantitative variables, i.e., lesion-to-background and lesion-to-cerebrospinal fluid (CSF) contrast-to-noise ratios (CNRs). Reader agreement was assessed by kappa statistics. All lesions depicted with CE T1W-SE were also detected with the CE T1W-FLAIR BLADE technique. Delineation of lesions was better on CE T1W-FLAIR BLADE in the majority of patients. Flow-related artifacts were considerably reduced with CE T1W-FLAIR BLADE. A star-like artifact at the level of the fourth ventricle was noted on CE T1W-FLAIR BLADE but not on CE T1W-SE. The lesion-to-background CNR and lesion-to-CSF CNR did not show a statistically significant difference between the two techniques. CE T1W-FLAIR BLADE images were preferred by the observers over the CE T1W-SE images, with good interobserver agreement (k = 0.70). The CE T1W-FLAIR BLADE technique is superior to CE T1W-SE for delineation of lesions and reduction of flow-related artifacts, especially within the posterior fossa, and is preferred by readers. CE T1W-FLAIR BLADE may be an alternative imaging approach, especially for posterior fossa lesions.
NASA Astrophysics Data System (ADS)
Borovkov, Alexei I.; Avdeev, Ilya V.; Artemyev, A.
1999-05-01
In the present work, stress, vibration and buckling finite element analyses of laminated beams are performed. A review of the equivalent single-layer (ESL) laminate theories is given. Finite element algorithms and procedures, integrated into the original FEA program system and based on the classical laminated plate theory (CLPT), the first-order shear deformation theory (FSDT), the third-order theory of Reddy (TSDT-R) and the third-order theory of Kant (TSDT-K), with the Lanczos method used for solving the eigenproblem, are developed. Several numerical tests and examples of bending, free vibration and buckling of multilayered and sandwich beams with various material properties, geometries and boundary conditions are solved. A new, effective higher-order hierarchical element for the accurate calculation of transverse shear stress is proposed. A comparative analysis of the results obtained with the considered models against solutions of 2D problems of heterogeneous anisotropic elasticity is carried out.
NASA Astrophysics Data System (ADS)
Busto, S.; Ferrín, J. L.; Toro, E. F.; Vázquez-Cendón, M. E.
2018-01-01
In this paper the projection hybrid FV/FE method presented in [1] is extended to account for species transport equations. Furthermore, turbulent regimes are also considered by means of the k-ε model. For the transport-diffusion stage, new schemes of high order of accuracy are developed. The CVC Kolgan-type scheme and the ADER methodology are extended to 3D; the latter is modified in order to profit from the dual mesh employed by the projection algorithm, and the derivatives involved in the diffusion term are discretized using a Galerkin approach. The accuracy and stability analysis of the new method is carried out for the advection-diffusion-reaction equation. Within the projection stage the pressure correction is computed by a piecewise linear finite element method. Numerical results are presented, aimed at verifying the formal order of accuracy of the scheme and at assessing the performance of the method on several realistic test problems.
From Healthcare to Warfare and Reverse: How Should We Regulate Dual-Use Neurotechnology?
Ienca, Marcello; Jotterand, Fabrice; Elger, Bernice S
2018-01-17
Recent advances in military-funded neurotechnology and novel opportunities for misusing neurodevices show that the problem of dual use is inherent to neuroscience. This paper discusses how the neuroscience community should respond to these dilemmas and delineates a neuroscience-specific biosecurity framework. This neurosecurity framework involves calibrated regulation, (neuro)ethical guidelines, and awareness-raising activities within the scientific community. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
King, J. C.
1975-01-01
The general orbit-coverage problem in a simplified physical model is investigated by application of numerical approaches derived from basic number theory. A system of basic and general properties is defined by which idealized periodic coverage patterns may be characterized, classified, and delineated. The principal common features of these coverage patterns are their longitudinal quantization, determined by the revolution number R, and their overall symmetry.
Hyper-gravitational effects on metabolism and thermoregulation
NASA Technical Reports Server (NTRS)
Oyama, J.
1984-01-01
Hypergravitational effects on metabolism and thermoregulation in animals were studied. The two major problem areas investigated are: initial and short-term exposure effects, and chronic, long-term exposure effects involving developmental and adaptational changes. Investigations focused on: (1) quantifying changes in thermoregulation with graded G-intensities in rats; (2) further delineating the effects of exposure duration on gluconeogenesis, gluconeogenic hormones and substrates, and glucose homeostasis; and (3) reproduction and neonatal survival rates under different G-intensities.
ABM Clinical Protocol #21: Guidelines for Breastfeeding and the Drug-Dependent Woman
2009-01-01
A central goal of The Academy of Breastfeeding Medicine is the development of clinical protocols for managing common medical problems that may impact breastfeeding success. These protocols serve only as guidelines for the care of breastfeeding mothers and infants and do not delineate an exclusive course of treatment or serve as standards of medical care. Variations in treatment may be appropriate according to the needs of an individual patient. PMID:19835481
ABM Clinical Protocol #19: Breastfeeding Promotion in the Prenatal Setting, Revision 2015
Rosen-Carole, Casey; Hartman, Scott
2015-01-01
A central goal of the Academy of Breastfeeding Medicine is the development of clinical protocols for managing common medical problems that may impact breastfeeding success. These protocols serve only as guidelines for the care of breastfeeding mothers and infants and do not delineate an exclusive course of treatment or serve as standards of medical care. Variations in treatment may be appropriate according to the needs of an individual patient. PMID:26651541
The Shock and Vibration Bulletin. Part 1. Summaries of Presented Papers
1973-10-01
Mechanical Engineering Department, University of the Negev, Israel. It was recently observed [1] that during metal deformation a transient e.m.f. is ... turn radiate noise. In many cases, significant contributions can be made toward solution of the overall problem by using properly optimized damping ... delineate the role of these resonant sources of secondary sound radiation in a USAF MAC HH-53 helicopter as regards internal cabin noise level ...
Slot angle detecting method for fiber fixed chip
NASA Astrophysics Data System (ADS)
Zhang, Jiaquan; Wang, Jiliang; Zhou, Chaochao
2018-04-01
The slot angle of a fiber-fixing chip has a significant impact on the performance of photoelectric devices. To solve this practical engineering problem, this paper puts forward a detection method based on image processing. Because the images have very low contrast and are hard to segment, an image-segmentation method based on edge character is proposed. The slope k2 of the fixed-chip edge line and the slope k1 of the fiber-fixing slot line are then obtained and used to calculate the slot angle. Finally, the repeatability and accuracy of the system are tested; the results show that the method is fast and robust, and satisfies the practical demands of fiber-fixing chip slot-angle detection.
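Given the two fitted slopes k1 and k2, the slot angle follows from the standard angle-between-lines formula (a minimal sketch; the example slopes are hypothetical):

```python
import math

def slot_angle_deg(k1, k2):
    """Angle between two lines with slopes k1 (slot line) and k2 (chip
    edge line): theta = atan(|k1 - k2| / |1 + k1*k2|), in degrees."""
    return math.degrees(math.atan(abs(k1 - k2) / abs(1.0 + k1 * k2)))

# e.g. slopes 1.0 and 0.0 give a 45-degree slot angle
angle = slot_angle_deg(1.0, 0.0)
```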
Orbital thermal analysis of lattice structured spacecraft using color video display techniques
NASA Technical Reports Server (NTRS)
Wright, R. L.; Deryder, D. D.; Palmer, M. T.
1983-01-01
A color video display technique is demonstrated as a tool for rapid determination of thermal problems during the preliminary design of complex space systems. A thermal analysis is presented for the lattice-structured Earth Observation Satellite (EOS) spacecraft at 32 points in a baseline non Sun-synchronous (60 deg inclination) orbit. Large temperature variations (on the order of 150 K) were observed on the majority of the members. A gradual decrease in temperature was observed as the spacecraft traversed the Earth's shadow, followed by a sudden rise in temperature (100 K) as the spacecraft exited the shadow. Heating rate and temperature histories of selected members and color graphic displays of temperatures on the spacecraft are presented.
Delineation, characterization, and classification of topographic eminences
NASA Astrophysics Data System (ADS)
Sinha, Gaurav
Topographic eminences are defined as upwardly rising, convex shaped topographic landforms that are noticeably distinct in their immediate surroundings. As opposed to everyday objects, the properties of a topographic eminence depend not only on how it is conceptualized but also on its spatial extent and its relative location in the landscape. In this thesis, a system for automated detection, delineation and characterization of topographic eminences based on an analysis of digital elevation models is proposed. Research has shown that conceptualization of eminences (and other landforms) is linked to the cultural and linguistic backgrounds of people. However, the perception of stimuli from our physical environment is not subject to cultural or linguistic bias. Hence, perceptually salient morphological and spatial properties of the natural landscape can form the basis for generically applicable detection and delineation of topographic eminences. Six principles of cognitive eminence modeling are introduced to develop the philosophical foundation of this research regarding eminence delineation and characterization. The first step in delineating eminences is to automatically detect their presence within digital elevation models. This is achieved by the use of quantitative geomorphometric parameters (e.g., elevation, slope and curvature) and qualitative geomorphometric features (e.g., peaks, passes, pits, ridgelines, and valley lines). The process of eminence delineation follows that of eminence detection. It is posited that eminences may be perceived either as monolithic terrain objects, or as composites of morphological parts (e.g., top, bottom, slope). Individual eminences may also simultaneously be conceived as comprising larger, higher order eminence complexes (e.g., mountain ranges). Multiple algorithms are presented for the delineation of simple and complex eminences, and the morphological parts of eminences.
The proposed eminence detection and delineation methods are amenable to intuitive parameterization such that they can easily capture the multitude of eminence conceptualizations that people develop due to differences in terrain type and cultural and linguistic backgrounds. Eminence delineation is an important step in object based modeling of the natural landscape. However, mere 'geocoding' of eminences is not sufficient for modeling how people intuitively perceive and reason about eminences. Therefore, a comprehensive eminence parameterization system for characterizing the perceptual properties of eminences is also proposed in this thesis. Over 40 parameters are suggested for measuring the commonly perceived properties of eminences: size, shape, topology, proximity, and visibility. The proposed parameters describe different aspects of naive eminence perception. Quantitative analysis of eminence parameters using cluster analysis confirms not only that eminences can be parameterized as individual terrain objects, but also that eminence (dis)similarities can be exploited to develop intuitive eminence classification systems. Eminence parameters are also shown to be essential for exploring the relationships between extracted eminences and natural language terms (e.g., hill, mount, mountain, peak) used commonly to refer to different types of eminences. The results from this research confirm that object based modeling of the landscape is not only useful for terrain information system design, but is also essential for understanding how people commonly conceptualize their observations of and interactions with the natural landscape.
Use of Maximum Intensity Projections (MIPs) for target outlining in 4DCT radiotherapy planning.
Muirhead, Rebecca; McNee, Stuart G; Featherstone, Carrie; Moore, Karen; Muscat, Sarah
2008-12-01
Four-dimensional computed tomography (4DCT) is currently being introduced to radiotherapy centers worldwide for use in radical radiotherapy planning for non-small cell lung cancer (NSCLC). A significant drawback is the time required to delineate 10 individual CT scans for each patient, so every department will ask whether the single Maximum Intensity Projection (MIP) scan can be used as an alternative. Although the problems regarding the use of the MIP in node-positive disease have been discussed in the literature, a comprehensive study assessing its use has not been published. We compared an internal target volume (ITV) created using the MIP to an ITV created from the composite volume of 10 clinical target volumes (CTVs) delineated on the 10 phases of the 4DCT. 4DCT data were collected from 14 patients with NSCLC. In each patient, the ITV was delineated on the MIP image (ITV_MIP), and a composite ITV was created from the 10 CTVs delineated on each of the 10 scans in the dataset. The structures were compared by assessment of volumes of overlap and exclusion. A median of 19.0% (range, 5.5-35.4%) of the volume of ITV_10phase was not enclosed by the ITV_MIP, demonstrating that use of the MIP could result in under-treatment of disease. In contrast, only a very small amount of the ITV_MIP was not enclosed by the ITV_10phase (median 2.3%; range, 0.4-9.8%), indicating that the ITV_10phase covers almost all of the tumor tissue identified on the MIP. Although there were only two Stage I patients, both demonstrated very similar ITV_10phase and ITV_MIP volumes. These findings suggest that Stage I NSCLC tumors could be outlined on the MIP alone; in Stage II and III tumors the ITV_10phase would be more reliable. To prevent under-treatment of disease, the MIP image should only be used for delineation in Stage I tumors.
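The overlap/exclusion comparison used above can be computed directly from boolean voxel masks of the two volumes (a sketch; the mask representation on a common grid is an assumption about how the contours are rasterized):

```python
import numpy as np

def overlap_stats(itv_10phase, itv_mip):
    """Compare two delineated volumes given as boolean voxel masks on
    the same grid.  Returns the percentage of ITV_10phase lying outside
    ITV_MIP, and vice versa, mirroring the overlap/exclusion analysis."""
    pct_10_outside_mip = 100.0 * (itv_10phase & ~itv_mip).sum() / itv_10phase.sum()
    pct_mip_outside_10 = 100.0 * (itv_mip & ~itv_10phase).sum() / itv_mip.sum()
    return pct_10_outside_mip, pct_mip_outside_10
```

A large first percentage (as in the paper's median 19.0%) flags potential under-treatment if the MIP alone were used for delineation.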
1987-01-01
[Garbled extraction of a plant species table: scientific names, common names (saline goosefoot, water purslane, perennial water primrose, king lupine, fleshy porterella, common purslane), and wetland indicator status codes (e.g., FAC).]
Astronaut Jack Lousma participates in EVA to deploy twin pole solar shield
1973-08-06
SL3-122-2611 (22 Sept. 1973) --- Astronaut Alan L. Bean, Skylab 3 commander, participates in the final extravehicular activity (EVA) for that mission, during which a variety of tasks were performed. Here, Bean is near the Apollo Telescope Mount (ATM) during final film change out for the giant telescope facility. Astronaut Owen K. Garriott, who took the picture, is reflected in Bean's helmet visor. The reflected Earth disk in Bean's visor is so clear that the Red Sea and Nile River area can be delineated. Photo credit: NASA
1982-08-01
[Table, partially garbled in extraction: percent cover of emergent species (Typha angustifolia, Lemna spp., Phragmites communis, Scolochloa festucacea) at distances of roughly 22.0-35.5 m along a transect.] The emergent zone is dominated by Typha, Scolochloa, Phragmites, Lemna, and Lycopus, while the zone above the wetland border is dominated by prairie grasses (Panicum, ...). [Second table: maximum percent cover for species in Transect 01 at Buffalo Slough, by zone (emergent aquatic, low-to-mid transition, prairie); Typha angustifolia and Lemna spp. occur in the emergent aquatic zone.]
2006-07-01
Jeffrey, S. S., Botstein, D., Brown, P. O., et al. Genome-wide analysis of DNA copy-number changes using cDNA microarrays. Nat. Genet., 23: 41-46, 1999. 3. Duggan, D. J., Bittner, M., Chen, Y., Meltzer, P., Trent, J. M. Expression profiling using cDNA microarrays. Nat. Genet., 21: 10-14, 1999. 4. Oh, J. M., ... 1999. 5. Golub, T. R., Slonim, D. K., Tamayo, P., Huard, C., Gaasenbeek, M., Mesirov, J. P., Coller, H., Loh, M. L., Downing, J. R., Caligiuri, M. A., ...
1984-10-13
41G-121-139 (5-13 Oct. 1984) --- The Strait of Dover and London, seldom seen in space photography, can be delineated in this medium format camera's scene showing parts of England and France from onboard the Earth-orbiting space shuttle Challenger. Parts of the Thames River can also be traced in the frame. The 41-G crew consisted of astronauts Robert L. Crippen, commander; Jon A. McBride, pilot; and Mission Specialists Kathryn D. Sullivan, Sally K. Ride, and David D. Leestma; along with Canadian astronaut Marc Garneau; and Paul D. Scully-Power, both payload specialists. Photo credit: NASA
Pan, Bingying; Wang, Yang; Zhang, Lijuan; Li, Shiyan
2014-04-07
Single crystals of the metal organic complex (C5H12N)CuBr3 (C5H12N = piperidinium, pipH for short) have been synthesized, and the structure was determined by single-crystal X-ray diffraction. (pipH)CuBr3 crystallizes in the monoclinic group C2/c. Edge-sharing CuBr5 units link to form zigzag chains along the c axis, and the neighboring spin-1/2 Cu(II) ions are bridged by bibromide ions. Magnetic susceptibility data down to 1.8 K are well fitted by the Bonner-Fisher formula for the antiferromagnetic spin-1/2 chain, giving the intrachain magnetic coupling constant J ≈ -17 K. At zero field, (pipH)CuBr3 shows three-dimensional (3D) order below TN = 1.68 K. From mean-field theory, the interchain coupling constant J' = -0.91 K is obtained, and the ordered magnetic moment m0 is about 0.23 μB. This value of m0 makes (pipH)CuBr3 a rare compound suitable for studying the 1D-3D dimensional crossover problem in magnetism, since both 3D order and one-dimensional (1D) quantum fluctuations are prominent. In addition, specific heat measurements reveal two successive magnetic transitions with lowering temperature when an external field μ0H ≥ 3 T is applied along the a' axis. The μ0H-T phase diagram of (pipH)CuBr3 is roughly constructed.
Modulation Transfer Function of Infrared Focal Plane Arrays
NASA Technical Reports Server (NTRS)
Gunapala, S. D.; Rafol, S. B.; Ting, D. Z.; Soibel, A.; Hill, C. J.; Khoshakhlagh, A.; Liu, J. K.; Mumolo, J. M.; Hoglund, L.; Luong, E. M.
2015-01-01
Modulation transfer function (MTF) quantifies the ability of an imaging system to faithfully image a given object, i.e., to resolve or transfer spatial frequencies. In this presentation we will discuss detailed MTF measurements of 1024x1024-pixel mid-wavelength and long-wavelength quantum well infrared photodetector FPAs, and of 320x256-pixel long-wavelength InAs/GaSb superlattice infrared focal plane arrays (FPAs). A long-wavelength Complementary Barrier Infrared Detector (CBIRD) based on InAs/GaSb superlattice material is hybridized to a recently designed and fabricated 320x256-pixel format ROIC. The n-type CBIRD was characterized in terms of performance and thermal stability. The experimentally measured NEΔT of the 8.8-micron-cutoff n-CBIRD FPA was 18.6 mK with a 300 K background and f/2 cold stop at a 78 K FPA operating temperature. The horizontal and vertical MTFs of this fully pixel-delineated CBIRD FPA at the Nyquist frequency are 49% and 52%, respectively.
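For context, the measured ~50% values at Nyquist can be compared against the detector-aperture MTF of an ideal square pixel (a textbook sketch; the 100% fill factor and the pixel pitch in the example are assumptions, not the FPAs' actual parameters):

```python
import math

def pixel_mtf(f, pitch):
    """Detector-aperture MTF of an ideal square pixel with 100% fill
    factor: |sinc(pi * f * pitch)|.  At the Nyquist frequency
    f = 1/(2*pitch) this equals 2/pi ~ 0.637, an upper bound for the
    pixel contribution to the system MTF."""
    x = math.pi * f * pitch
    return 1.0 if x == 0 else abs(math.sin(x) / x)

pitch = 30e-4                       # assumed 30-micron pitch, in cm
nyquist = 1.0 / (2.0 * pitch)       # cycles/cm
mtf_nyq = pixel_mtf(nyquist, pitch) # = 2/pi, about 0.637
```

Optical blur, diffusion crosstalk, and readout effects multiply further MTF factors below this bound, which is why measured FPA values of 49-52% at Nyquist sit under 63.7%.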
Evaluating Henry's law constant of N-nitrosodimethylamine (NDMA).
Haruta, Shinsuke; Jiao, Wentao; Chen, Weiping; Chang, Andrew C; Gan, Jay
2011-01-01
N-Nitrosodimethylamine (NDMA), a potential carcinogen, may contaminate groundwater when reclaimed wastewater is used for irrigation and groundwater recharge. The Henry's law constant is a critical parameter for assessing the fate and transport of reclaimed-wastewater-borne NDMA in the soil profile. We conducted a laboratory experiment in which the change of NDMA concentration in water exposed to the atmosphere was measured with respect to time and, based on the data, obtained the dimensionless Henry's law constant (KH') of NDMA as 1.0 x 10^-4. This KH' suggests that NDMA has a relatively high potential to volatilize in fields where NDMA-containing wastewater is used for irrigation, and volatilization loss may be a significant pathway of NDMA transport. The experiment was based on the two-boundary-layer approach to mass transfer at the atmosphere-water interface. It is an expedient method to delineate KH' for volatile or semi-volatile compounds present in water at low concentrations.
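The two-boundary-layer (two-film) relation underlying the experiment can be inverted for KH' as follows (a sketch only; the film coefficients and the measured overall transfer velocity in the example are illustrative assumptions, not the paper's values):

```python
def dimensionless_kh(K_OL, k_l, k_g):
    """Two-film model of air-water exchange: the overall water-side
    resistance is the sum of the liquid- and gas-film resistances,
    1/K_OL = 1/k_l + 1/(KH' * k_g).  Given a measured overall transfer
    velocity K_OL and film coefficients k_l (liquid) and k_g (gas),
    solve for the dimensionless Henry constant KH'."""
    return 1.0 / (k_g * (1.0 / K_OL - 1.0 / k_l))

# Illustrative magnitudes for a quiescent surface: k_l = 1e-5 m/s,
# k_g = 1e-2 m/s, measured K_OL = 5e-7 m/s.
kh = dimensionless_kh(5e-7, 1e-5, 1e-2)
```

For compounds with KH' this small, the gas-film resistance dominates, so the fitted KH' is sensitive mainly to the gas-side coefficient.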
Low frequency acoustic and electromagnetic scattering
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Maccamy, R. C.
1986-01-01
This paper deals with two classes of problems arising from acoustic and electromagnetic scattering in the low-frequency regime. The first class is the Helmholtz equation with Dirichlet boundary conditions on an arbitrary two-dimensional body, while the second is an interior-exterior interface problem with the Helmholtz equation in the exterior. Low-frequency analysis shows that there are two intermediate problems which solve the above problems accurately to O(k^2 log k), where k is the frequency. These solutions differ greatly from the zero-frequency approximations. For the Dirichlet problem, numerical examples are shown to verify the theoretical estimates.
Scott, Gary K; Atsriku, Christian; Kaminker, Patrick; Held, Jason; Gibson, Brad; Baldwin, Michael A; Benz, Christopher C
2005-09-01
The vitamin K analog menadione (K3), capable of both redox cycling and arylating nucleophilic substrates by Michael addition, has been extensively studied as a model stress-inducing quinone in both cell culture and animal model systems. Exposure of keratin 8 (k-8) expressing human breast cancer cells (MCF7, T47D, SKBr3) to K3 (50-100 microM) induced rapid, sustained, and site-specific k-8 serine phosphorylation (pSer73) dependent on signaling by a single mitogen activated protein kinase (MAPK) pathway, MEK1/2. Normal nuclear morphology and k-8 immunofluorescence coupled with the lack of DNA laddering or other features of apoptosis indicated that K3-induced cytotoxicity, evident within 4 h of treatment and delayed but not prevented by MEK1/2 inhibition, was due to a form of stress-activated cell death known as oncosis. Independent of MAPK signaling was the progressive appearance of K3-induced cellular fluorescence, principally nuclear in origin and suggested by in vitro fluorimetry to have been caused by K3 thiol arylation. Imaging by UV transillumination of protein gels containing nuclear extracts from K3-treated cells revealed a prominent 17-kDa band shown to be histone H3 by immunoblotting and mass spectrometry (MS). K3 arylation of histones in vitro followed by electrospray ionization-tandem MS analyses identified the unique Cys110 residue within H3, exposed only in the open chromatin of transcriptionally active genes, as a K3 arylation target. These findings delineate new pathways associated with K3-induced stress and suggest a potentially novel role for H3 Cys110 as a nuclear stress sensor.
Mild cognitive impairment in early life and mental health problems in adulthood.
Chen, Chuan-Yu; Lawlor, John P; Duggan, Anne K; Hardy, Janet B; Eaton, William W
2006-10-01
We assessed the extent to which borderline mental retardation and mental retardation at preschool ages are related to emotional and behavioral problems in young adulthood. We also explored early risk factors for having mental health problems as a young adult that might be related to preschool differences in cognitive ability. We used data from a cohort of births studied in the Johns Hopkins Collaborative Perinatal Study and followed up in the Pathways to Adulthood Study. Preschool cognitive functioning was assessed at 4 years of age. Individual characteristics, psychosocial factors, and mental health problems were prospectively evaluated from birth through young adulthood. Children with subaverage cognitive abilities were more likely to develop mental health problems than their counterparts with IQs above 80. Inadequate family interactions were shown to increase the risk of emotional or behavioral problems 2- to 4-fold among children with borderline mental retardation. Subaverage cognitive functioning in early life increases the later risk of mental health problems. Future research may help to delineate possible impediments faced at different developmental stages and guide changes in supportive services to better address the needs of children with borderline mental retardation.
NASA Astrophysics Data System (ADS)
Pásztor, László; Bakacsi, Zsófia; Laborczi, Annamária; Takács, Katalin; Szatmári, Gábor; Tóth, Tibor; Szabó, József
2016-04-01
One of the main objectives of the EU's Common Agricultural Policy is to encourage maintaining agricultural production in Areas Facing Natural Constraints (ANC) in order to sustain agricultural production and the use of natural resources in a way that secures both stable production and income for farmers and protects the environment. ANC assignment has both ecological and significant economic aspects. Recently it has been suggested that the delimitation of ANCs be carried out using common biophysical diagnostic criteria on low soil productivity and poor climate conditions all over Europe. The criterion system was elaborated and has been repeatedly upgraded by JRC. The operational implementation is under member state competence. This process requires the application of available soil databases and proper thematic and spatial inference methods. In our paper we present the inferences applied for the latest identification and delineation of areas with low soil productivity in Hungary according to the JRC biophysical criteria related to soil: limited soil drainage; texture and stoniness (coarse texture, heavy clay, vertic properties); shallow rooting depth; and chemical properties (salinity, sodicity, low pH). The compilation of target-specific maps was based on the available legacy and recently collected data. In the present work three different data sources were used. The most relevant available data were queried from the datasets for each mapped criterion, either for direct application or for the compilation of a suitable, synthetic (non-measured) parameter. In some cases the values of the target variable originated from only one database, in other cases from several. The reference dataset used in the mapping process was set up after substantial statistical analysis and filtering. It consisted of the values of the target variable attributed to the finally selected georeferenced locations. For spatial inference, regression kriging was applied.
Accuracy assessment was carried out by leave-one-out cross-validation (LOOCV). In some cases the DSM product directly provided the delineation result by simple querying; in other cases further interpretation of the map was necessary. As a result of our work, not only was the spatial fulfilment of the European biophysical criteria assessed and provided to decision makers, but unique digital soil map products were elaborated regionalizing specific soil features which had never been mapped before, even nationally, at 1 ha spatial resolution. Acknowledgement: Our work was supported by the "European Fund for Agricultural and Rural Development: Europe investing in rural areas" with the support of the European Union and Hungary and by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
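Leave-one-out cross-validation itself is straightforward: each observation is predicted from all the others and the errors are aggregated. A minimal sketch, with an inverse-distance-weighting predictor standing in for regression kriging and invented sample data:

```python
import numpy as np

def idw_predict(x_train, y_train, x0, power=2.0, eps=1e-12):
    """Inverse-distance-weighted prediction at location x0 (a simple
    stand-in for a geostatistical predictor such as regression kriging)."""
    d = np.linalg.norm(x_train - x0, axis=1)
    w = 1.0 / (d ** power + eps)
    return np.sum(w * y_train) / np.sum(w)

def loocv_rmse(x, y, predict):
    """Leave-one-out cross-validation: predict each point from all others."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        errs.append(predict(x[mask], y[mask], x[i]) - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(40, 2))          # hypothetical sample locations
y = x[:, 0] + 0.1 * rng.standard_normal(40)   # smooth spatial trend + noise
print(round(loocv_rmse(x, y, idw_predict), 3))
```

The same loop applies unchanged to any predictor, so the kriging model can be dropped in without altering the validation logic.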
Eash, David A.; Barnes, Kimberlee K.; O'Shea, Padraic S.; Gelder, Brian K.
2018-02-14
Basin-characteristic measurements related to stream length, stream slope, stream density, and stream order have been identified as significant variables for estimation of flood, flow-duration, and low-flow discharges in Iowa. The placement of channel initiation points, however, has always been a matter of individual interpretation, leading to differences in stream definitions between analysts. This study investigated five different methods of defining stream initiation using 3-meter light detection and ranging (lidar) digital elevation model (DEM) data for 17 streamgages with drainage areas less than 50 square miles within the Des Moines Lobe landform region in north-central Iowa. Each DEM was hydrologically enforced, and the five stream initiation methods were used to define channel initiation points and the downstream flow paths. The five methods were tested side-by-side for three watershed delineations: (1) the total drainage-area delineation, (2) an effective drainage-area delineation of basins based on a 2-percent annual exceedance probability (AEP) 12-hour rainfall, and (3) an effective drainage-area delineation based on a 20-percent AEP 12-hour rainfall. Generalized least-squares regression analysis was used to develop a set of equations for sites in the Des Moines Lobe landform region for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEPs. A total of 17 streamgages were included in the development of the regression equations.
In addition, geographic information system software was used to measure 58 selected basin characteristics for each streamgage. Results of the regression analyses of the 15 lidar datasets indicate that the datasets producing regional regression equations (RREs) with the best overall predictive accuracy are the National Hydrographic Dataset, Iowa Department of Natural Resources, and profile curvature of 0.5 stream initiation methods combined with the 20-percent AEP 12-hour rainfall watershed delineation method. These RREs have a mean average standard error of prediction (SEP) for the 4-, 2-, and 1-percent AEP discharges of 53.9 percent and a mean SEP for all eight AEPs of 55.5 percent. Compared to the RREs developed in this study using basin characteristics from the U.S. Geological Survey StreamStats application, the lidar basin characteristics provide better overall predictive accuracy.
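Regional regression equations of this kind typically fit log-transformed discharge against log-transformed basin characteristics. A minimal sketch with entirely invented data, using ordinary least squares in place of the study's generalized least squares, and a USGS-style conversion of the log10 residual variance to a percent standard error (the 5.302 ≈ (ln 10)² factor is the convention used in USGS flood-frequency reports):

```python
import numpy as np

# Hypothetical peak discharges (cfs) and drainage areas (mi^2)
area = np.array([2.1, 5.5, 9.8, 14.0, 22.5, 31.0, 44.7])
q = 120.0 * area ** 0.75 * np.exp(0.1 * np.random.default_rng(1).standard_normal(7))

# Fit log10(Q) = b0 + b1*log10(A) by ordinary least squares
X = np.column_stack([np.ones(len(area)), np.log10(area)])
y = np.log10(q)
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
s2 = np.sum((y - X @ beta) ** 2) / (len(y) - 2)      # residual variance, log10 units
sep_pct = 100.0 * np.sqrt(np.exp(5.302 * s2) - 1.0)  # percent standard error
print(round(beta[1], 2), round(sep_pct, 1))
```

Real RREs include several basin characteristics as predictors and weight the observations by record length and cross-correlation, which is what the generalized least-squares step adds.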
NASA Astrophysics Data System (ADS)
Guterch, A.; Grad, M.; Keller, G. R.
2005-12-01
Beginning in 1997, Central Europe between the Baltic and Adriatic Seas has been covered by an unprecedented network of seismic refraction experiments: POLONAISE'97, CELEBRATION 2000, ALP 2002, and SUDETES 2003. These experiments were only possible thanks to a massive international consortium consisting of more than 30 institutions from 16 countries in Europe and North America. The majority of recording instruments were provided by the IRIS/PASSCAL Instrument Center and the University of Texas at El Paso (USA), and several other countries also provided instrumentation. The total length of seismic profiles in all experiments is about 20,000 km. The main results of these experiments are: 1) delineation of the deep structure of the southwestern margin of the East European Craton (southern Baltica) and its relationship to younger terranes; 2) delineation of the major terranes and crustal blocks in the Trans-European Suture Zone; 3) determination of the structural framework of the Pannonian basin; 4) elucidation of the deep structure and evolution of the Western Carpathian Mountains and Eastern Alps; 5) determination of the structural relationships between the structural elements of the Bohemian massif and adjacent features; 6) construction of 3-D models of the lithospheric structure; and 7) evaluation and development of geodynamic models for the tectonic evolution of the region. Experiment Working Groups Members: K. Aric, M. Behm, E. Brueckl, W. Chwatal, H. Grassl, S. Hock, V. Hoeck, F. Kohlbeck, E.-M. Rumpfhuber, Ch. Schmid, R. Schmoller, C. Tomek, Ch. Ullrich, F. Weber (Austria), A.A. Belinsky (Belarus), I. Asudeh, R. Clowes, Z. Hajnal (Canada), F. Sumanova (Croatia), M. Broz, P. Hrubcova, M. Korn, O. Karousova, J. Malek, A. Spicak (Czech Republic), S.L. Jensen, P. Joergensen, H. Thybo (Denmark), K. Komminaho, U. Luosto, T. Tiira, J. Yliniemi (Finland), F. Bleibinhaus, R. Brinkmann, B. Forkmann, H. Gebrande, H. Geissler, A. Hemmann, G. Jentzsch, D. Kracke, A. Schulze, K. Schuster (Germany), T. Bodoky, T.
Fancik, E. Hegedas, K. Posgay, E. Takacs (Hungary), J. Jacyna, L. Korabliova, G. Motuza, V. Nasedkin (Lithuania), W. Czuba, E. Gaczynski, M. Grad, A. Guterch, T. Janik, M. Majdanski, M. Malinowski, P. Sroda, M. Wilde-Piorko, (Poland), S.L. Kostiuchenko, A.F. Morozov (Russia), J. Vozar (Slovakia), A. Gosar (Slovenia), O. Selvi (Turkey), S. Acevedo, M. Averill, M. Fort, R. Greschke, S.Harder, G. Kaip, G.R. Keller, K.C. Miller, C.M. Snelson (USA)
Highly accurate adaptive finite element schemes for nonlinear hyperbolic problems
NASA Astrophysics Data System (ADS)
Oden, J. T.
1992-08-01
This document is a final report of research activities supported under General Contract DAAL03-89-K-0120 between the Army Research Office and the University of Texas at Austin from July 1, 1989 through June 30, 1992. The project supported several Ph.D. students over the contract period, two of which are scheduled to complete dissertations during the 1992-93 academic year. Research results produced during the course of this effort led to 6 journal articles, 5 research reports, 4 conference papers and presentations, 1 book chapter, and two dissertations (nearing completion). It is felt that several significant advances were made during the course of this project that should have an impact on the field of numerical analysis of wave phenomena. These include the development of high-order, adaptive, hp-finite element methods for elastodynamic calculations and high-order schemes for linear and nonlinear hyperbolic systems. Also, a theory of multi-stage Taylor-Galerkin schemes was developed and implemented in the analysis of several wave propagation problems, and was configured within a general hp-adaptive strategy for these types of problems. Further details on research results and on areas requiring additional study are given in the Appendix.
Looping Genomes: Diagnostic Change and the Genetic Makeup of the Autism Population.
Navon, Daniel; Eyal, Gil
2016-03-01
This article builds on Hacking's framework of "dynamic nominalism" to show how knowledge about biological etiology can interact with the "kinds of people" delineated by diagnostic categories in ways that "loop" or modify both over time. The authors use historical materials to show how "geneticization" played a crucial role in binding together autism as a biosocial community and how evidence from genetics research later made an important contribution to the diagnostic expansion of autism. In the second part of the article, the authors draw on quantitative and qualitative analyses of autism rates over time in several rare conditions that are delineated strictly according to genomic mutations in order to demonstrate that these changes in diagnostic practice helped to both increase autism's prevalence and create its enormous genetic heterogeneity. Thus, a looping process that began with geneticization and involved the social effects of genetics research itself transformed the autism population and its genetic makeup.
SO2 damage to forests recorded by ERTS-1. [Ontario, Canada
NASA Technical Reports Server (NTRS)
Murtha, P. A.
1974-01-01
Sulfur dioxide fumes have been affecting the forests around Wawa, Ontario, which have been under surveillance for a number of years and were recently covered by ultra-small-scale (1:160,000) air photography for damage-assessment purposes. Image interpretation supported by electronic color enhancement was used to delineate on ERTS imagery three damage zones (total-kill, heavy-kill and medium-damage zones). The zones delineated on ERTS imagery are similar to the results of aerial sketch-mapping and air photo interpretation. Band 5 provided the greatest detail for assessing the damage to the forests, followed in successive order by bands 4, 6 and 7. Comparison with ERTS images obtained in the winter showed that even though the total-kill could be separated from heavy-kill damage zones, total-kill could not be consistently separated from clear-cut logging, burned areas, frozen lakes and bogs.
Concolino, D; Roversi, G; Muzzi, G L; Sestito, S; Colombo, E A; Volpi, L; Larizza, L; Strisciuglio, P
2010-10-01
We report on three sibs who have autosomal recessive Clericuzio-type poikiloderma with neutropenia (PN) syndrome. Recently, this consanguineous family was reported and shown to be informative in identifying C16orf57 as the causative gene for this syndrome. Here we present the clinical data in detail. PN is a distinct and recognizable entity belonging to the group of poikiloderma syndromes, among which Rothmund-Thomson is perhaps the best described and understood. PN is characterized by cutaneous poikiloderma, hyperkeratotic nails, generalized hyperkeratosis on palms and soles, neutropenia, short stature, and recurrent pulmonary infections. In order to delineate the phenotype of this rare genodermatosis, the clinical presentation and the molecular investigations in our patients are reported and compared with those from the literature. Copyright © 2010 Wiley-Liss, Inc.
Maltese, Matthew R.; Chen, Irene G.; Arbogast, Kristy B.
2005-01-01
Previous work identified a similar risk of injury for children seated on the struck side and center rear in side impact crashes in passenger cars. In order to further explain this finding, we investigated the effect of sharing the rear row with other occupants on injury risk and delineated differences in injury patterns among the seat positions. These analyses, conducted using data from a large child-specific crash surveillance system, included children 4–15 years old who were rear seated, seat belt restrained, in a passenger car, and in a side impact crash. Injury risk was compared among the rear seat positions, stratified by the presence of other occupants in the rear row. Occupants are at an increased risk of injury if they sit alone in their row as compared to sitting with other occupants. Patterns of injuries distinct to each seat position were delineated. PMID:16179151
Call for research: detecting early vulnerability for psychiatric hospitalization.
Prince, Jonathan D
2013-01-01
This study delineated the extent to which a broad set of risk factors in youth, a period well suited to primary prevention strategies, influences the likelihood and timing of first lifetime psychiatric hospitalizations. Logistic regression was used to delineate early risk factors for psychiatric hospitalization among Americans in a nationally representative survey (NCS-R, Part II, 2001-2003: N = 5,692). Results suggest that inpatient stay is more common and happens at earlier ages among Americans who report growing up with versus without: (1) depressed parents or caregivers, (2) family members who victimized them, or (3) one of three child mental illnesses (conduct, oppositional defiant, or separation anxiety disorder). In order to prevent inpatient stay, findings call for longitudinal research on early vulnerability for psychiatric hospitalization among families with: (1) depressed parents of children or adolescents, (2) violence against children, and (3) children that have externalizing or separation anxiety disorders.
Mapping sensory circuits by anterograde trans-synaptic transfer of recombinant rabies virus
Zampieri, Niccolò; Jessell, Thomas M.; Murray, Andrew J.
2014-01-01
Primary sensory neurons convey information from the external world to relay circuits within the central nervous system (CNS), but the identity and organization of the neurons that process incoming sensory information remain sketchy. Within the CNS, viral tracing techniques that rely on retrograde trans-synaptic transfer provide a powerful tool for delineating circuit organization. Viral tracing of the circuits engaged by primary sensory neurons has, however, been hampered by the absence of a genetically tractable anterograde transfer system. In this study we demonstrate that rabies virus can infect sensory neurons in the somatosensory system, is subject to anterograde trans-synaptic transfer from primary sensory to spinal target neurons, and can delineate output connectivity with third-order neurons. Anterograde trans-synaptic transfer is a feature shared by other classes of primary sensory neurons, permitting the identification and potentially the manipulation of neural circuits processing sensory feedback within the mammalian CNS. PMID:24486087
Complex network problems in physics, computer science and biology
NASA Astrophysics Data System (ADS)
Cojocaru, Radu Ionut
There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to obtain predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases, and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that some problems are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses.
We extend this model to the study of a specific case of a spin glass on the Bethe lattice at zero temperature and then apply this formalism to the K-SAT problem defined in Chapter 1. The phase transitions that physicists study often correspond to a change in the computational complexity of the corresponding computer science problem. Chapter 3 presents phase transitions specific to the problems discussed in Chapter 1, along with known results for the K-SAT problem. We discuss the replica method and experimental evidence of replica symmetry breaking. The physics approach to hard problems is based on replica methods, which are difficult to understand. In Chapter 4 we develop novel methods for studying hard problems using techniques similar to the message passing methods discussed in Chapter 2. Although we concentrated on the symmetric case, cavity methods show promise for generalizing our methods to the asymmetric case. As John Hopfield has highlighted, several key features of biological systems are not shared by physical systems. Although living entities follow the laws of physics and chemistry, the fact that organisms adapt and reproduce introduces an essential ingredient that is missing in the physical sciences. Many algorithms have been developed to extract information from networks. In Chapter 5 we apply polynomial-time algorithms such as minimum spanning tree to study and construct gene regulatory networks from experimental data. As future work we propose the use of algorithms such as min-cut/max-flow and Dijkstra's for understanding key properties of these networks.
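A minimum spanning tree over a pairwise-distance matrix is a common way to extract a backbone from a dense similarity network. A minimal Kruskal's-algorithm sketch over an invented gene-distance graph (the gene labels and weights are hypothetical):

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: edges are (weight, u, v); returns the MST edge list."""
    parent = list(range(n))

    def find(a):
        # Find the root of a's component, with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    mst = []
    for w, u, v in sorted(edges):       # consider edges in order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # keep the edge only if it joins two components
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Hypothetical pairwise distances between 4 genes (e.g. 1 - |correlation|)
edges = [(0.1, 0, 1), (0.4, 0, 2), (0.3, 1, 2), (0.9, 1, 3), (0.2, 2, 3)]
mst = kruskal_mst(4, edges)
print(round(sum(w for w, _, _ in mst), 2))  # total MST weight: 0.6
```

The resulting tree keeps, for each gene, its strongest links while guaranteeing global connectivity, which is why it is a useful first pass at a regulatory-network topology.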
Model Order Reduction Algorithm for Estimating the Absorption Spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Beeumen, Roel; Williams-Young, David B.; Kasper, Joseph M.
The ab initio description of the spectral interior of the absorption spectrum poses both a theoretical and computational challenge for modern electronic structure theory. Due to the often spectrally dense character of this domain in the quantum propagator's eigenspectrum for medium-to-large sized systems, traditional approaches based on the partial diagonalization of the propagator often encounter oscillatory and stagnating convergence. Electronic structure methods which solve the molecular response problem through the solution of spectrally shifted linear systems, such as the complex polarization propagator, offer an alternative approach which is agnostic to the underlying spectral density or domain location. This generality comes at a seemingly high computational cost associated with solving a large linear system for each spectral shift in some discretization of the spectral domain of interest. In this work, we present a novel, adaptive solution to this high computational overhead based on model order reduction techniques via interpolation. Model order reduction reduces the computational complexity of mathematical models and is ubiquitous in the simulation of dynamical systems and control theory. The efficiency and effectiveness of the proposed algorithm in the ab initio prediction of X-ray absorption spectra is demonstrated using a test set of challenging water clusters which are spectrally dense in the neighborhood of the oxygen K-edge. On the basis of a single, user-defined tolerance we automatically determine the order of the reduced models and approximate the absorption spectrum up to the given tolerance. We also illustrate that, for the systems studied, the automatically determined model order increases logarithmically with the problem dimension, compared to a linear increase of the number of eigenvalues within the energy window. 
Furthermore, we observed that the computational cost of the proposed algorithm scales only quadratically with respect to the problem dimension.
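The idea can be illustrated on a toy Hermitian system: solve the shifted linear system exactly at a handful of interpolation shifts, orthonormalize those solutions into a small subspace, and evaluate the projected (reduced) model at every other frequency. This sketch uses a random matrix in place of the paper's electronic-structure operators; by construction, the Galerkin-projected model reproduces the full solution exactly at the interpolation shifts.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
H = rng.standard_normal((n, n))
A = (H + H.T) / 2                    # toy Hermitian stand-in for the propagator
b = rng.standard_normal(n)
gamma = 0.05                         # Lorentzian broadening

def absorption_full(omega):
    """sigma(omega) ~ Im <b, (A - (omega + i*gamma) I)^(-1) b>, full solve."""
    x = np.linalg.solve(A - (omega + 1j * gamma) * np.eye(n), b.astype(complex))
    return float(np.imag(b @ x))

# Model order reduction: solve the full problem only at a few shifts,
# orthonormalize those solutions, and project A and b onto the subspace.
shifts = np.linspace(-2.0, 2.0, 8)
V = np.column_stack([np.linalg.solve(A - (s + 1j * gamma) * np.eye(n),
                                     b.astype(complex)) for s in shifts])
Q, _ = np.linalg.qr(V)               # orthonormal basis of the 8-dim subspace
A_r = Q.conj().T @ A @ Q
b_r = Q.conj().T @ b

def absorption_reduced(omega):
    """Same quantity from the 8x8 reduced model (cheap at every frequency)."""
    x_r = np.linalg.solve(A_r - (omega + 1j * gamma) * np.eye(len(b_r)), b_r)
    return float(np.imag(b_r.conj() @ x_r))

w0 = shifts[3]
print(abs(absorption_full(w0) - absorption_reduced(w0)))  # negligible at an interpolation shift
```

The adaptive part of the published algorithm lies in choosing the shifts and the model order automatically until the interpolated spectrum meets the user-defined tolerance; this sketch fixes both for brevity.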
Tan, Sze-Yin; Unwin, Patrick R; Macpherson, Julie V; Zhang, Jie; Bond, Alan M
2017-03-07
Quantitative studies of electron transfer processes at electrode/electrolyte interfaces, originally developed for homogeneous liquid mercury or metallic electrodes, are difficult to adapt to the spatially heterogeneous nanostructured electrode materials that are now commonly used in modern electrochemistry. In this study, the impact of surface heterogeneity on Fourier-transformed alternating current voltammetry (FTACV) has been investigated theoretically under the simplest possible conditions, where no overlap of diffusion layers occurs and where numerical simulations based on a 1D diffusion model are sufficient to describe the mass transport problem. Experimental data that meet these requirements can be obtained with the aqueous [Ru(NH₃)₆]³⁺/²⁺ redox process at a dual-electrode system comprised of electrically coupled but well-separated glassy carbon (GC) and boron-doped diamond (BDD) electrodes. Simulated and experimental FTACV data obtained with this electrode configuration, where distinctly different heterogeneous charge transfer rate constants (k⁰ values) apply at the individual GC and BDD electrode surfaces, are in excellent agreement. Principally because of the far greater dependence of the AC current magnitude on k⁰, it is straightforward with the FTACV method to resolve electrochemical heterogeneities that are ∼1-2 orders of magnitude apart, as applies in the [Ru(NH₃)₆]³⁺/²⁺ dual-electrode experiments, without prior knowledge of the individual kinetic parameters (k⁰₁ and k⁰₂) or the electrode size ratio (θ₁:θ₂). In direct current voltammetry, a difference in k⁰ of more than 3 orders of magnitude is required to make this distinction.
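The core signal-processing step in FTACV is band-selection of harmonics of the applied AC frequency in the Fourier domain. A minimal sketch with a synthetic current (hypothetical amplitudes; a real FTACV analysis also windows each band and inverse-transforms it to resolve the harmonic envelopes):

```python
import numpy as np

fs, f0, T = 1000.0, 9.0, 4.0           # sample rate (Hz), AC frequency (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
# Hypothetical current: slow DC drift + fundamental + a weaker 2nd harmonic
i_t = 0.2 * t + 1.0 * np.sin(2 * np.pi * f0 * t) + 0.15 * np.sin(2 * np.pi * 2 * f0 * t)

def harmonic_amplitude(signal, fs, f_center):
    """Pick the FFT bin nearest f_center and return the component amplitude."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spec = np.fft.rfft(signal)
    k = np.argmin(np.abs(freqs - f_center))
    return 2 * np.abs(spec[k]) / len(signal)

print(round(harmonic_amplitude(i_t, fs, f0), 2),
      round(harmonic_amplitude(i_t, fs, 2 * f0), 2))  # ~1.0 and ~0.15
```

It is the higher harmonics, nearly free of background charging current, that carry the strong k⁰ sensitivity the abstract describes.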
ERIC Educational Resources Information Center
Malott, Curry; Ford, Derek R.
2015-01-01
Part two: This article is the second part of a project concerned with developing a Marxist critical pedagogy that moves beyond a critique of capital and toward a communist future. The article performs an educational reading of Marx's Critique of the Gotha Programme in order to delineate what a Marxist critical pedagogy of becoming communist might…
KaDonna C. Randolph
2015-01-01
Within the days and weeks following a catastrophic weather event, governmental forestry agencies often implement aerial reconnaissance missions to delineate damage zones. These initial rapid assessments are sometimes followed by on-the-ground surveys in order to verify the rapid assessments and more precisely quantify damage. When aerial or on-the-ground surveys are...
Computer-based synthetic data to assess the tree delineation algorithm from airborne LiDAR survey
Lei Wang; Andrew G. Birt; Charles W. Lafon; David M. Cairns; Robert N. Coulson; Maria D. Tchakerian; Weimin Xi; Sorin C. Popescu; James M. Guldin
2013-01-01
Small Footprint LiDAR (Light Detection And Ranging) has been proposed as an effective tool for measuring detailed biophysical characteristics of forests over broad spatial scales. However, by itself LiDAR yields only a sample of the true 3D structure of a forest. In order to extract useful forestry relevant information, this data must be interpreted using mathematical...
Expanding our understanding of students' use of graphs for learning physics
NASA Astrophysics Data System (ADS)
Laverty, James T.
It is generally agreed that the ability to visualize functional dependencies or physical relationships as graphs is an important step in modeling and learning. However, several studies in Physics Education Research (PER) have shown that many students in fact do not master this form of representation and even hold misconceptions about the meaning of graphs that impede learning physics concepts. Working with graphs in classroom settings has been shown to improve student abilities with graphs, particularly when the students can interact with them. We introduce a novel problem type in an online homework system which requires students to construct the graphs themselves in free form and requires no hand-grading by instructors. A study of pre/post-test data using the Test of Understanding Graphs in Kinematics (TUG-K) over several semesters indicates that students learn significantly more from these graph construction problems than from the usual graph interpretation problems, and that graph interpretation alone may not have any significant effect. The interpretation of graphs, as well as representation translation between textual, mathematical, and graphical representations of physics scenarios, is frequently listed among the higher-order thinking skills we wish to convey in an undergraduate course. But to what degree do we succeed? Do students indeed employ higher-order thinking skills when working through graphing exercises? We investigate students working through a variety of graph problems and, using a think-aloud protocol, aim to reconstruct the cognitive processes that the students go through. We find that to a certain degree these problems become commoditized and do not trigger the desired higher-order thinking processes; simply translating "textbook-like" problems into the graphical realm will not achieve any additional educational goals. 
Whether the students have to interpret or construct a graph makes very little difference in the methods used by the students. We will also look at the results of using graph problems in an online learning environment. We will show evidence that construction problems lead to a higher degree of difficulty and degree of discrimination than other graph problems and discuss the influence the course has on these variables.
Fabelo, Himar; Ortega, Samuel; Ravi, Daniele; Kiran, B Ravi; Sosa, Coralia; Bulters, Diederik; Callicó, Gustavo M; Bulstrode, Harry; Szolna, Adam; Piñeiro, Juan F; Kabwama, Silvester; Madroñal, Daniel; Lazcano, Raquel; J-O'Shanahan, Aruma; Bisshopp, Sara; Hernández, María; Báez, Abelardo; Yang, Guang-Zhong; Stanciulescu, Bogdan; Salvador, Rubén; Juárez, Eduardo; Sarmiento, Roberto
2018-01-01
Surgery for brain cancer is a major problem in neurosurgery. The diffuse infiltration of these tumors into the surrounding normal brain makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, identification of the tumor boundaries during surgery is challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents the development of a novel classification method that takes into account the spatial and spectral characteristics of the hyperspectral images to help neurosurgeons accurately determine the tumor boundaries during resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. To approach an efficient solution, the algorithm proposed in this study consists of a hybrid framework that combines both supervised and unsupervised machine learning methods. First, a supervised pixel-wise classification using a Support Vector Machine classifier is performed. The generated classification map is spatially homogenized using a one-band representation of the HS cube, employing the Fixed Reference t-Stochastic Neighbors Embedding dimensionality reduction algorithm, and performing a K-Nearest Neighbors filtering. The information generated by the supervised stage is combined with a segmentation map obtained via unsupervised clustering employing a Hierarchical K-Means algorithm. The fusion is performed using a majority voting approach that associates each cluster with a certain class. To evaluate the proposed approach, five hyperspectral images of the surface of the brain affected by glioblastoma tumor, acquired in vivo from five different patients, were used. The final classification maps obtained have been analyzed and validated by specialists. 
These preliminary results are promising, obtaining an accurate delineation of the tumor area.
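The cluster-to-class fusion step described above can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' published code; the array names `class_map` and `cluster_map` are hypothetical. Each unsupervised cluster is assigned the class that wins a majority vote among its pixels in the supervised map:

```python
import numpy as np

def majority_vote_fusion(class_map, cluster_map):
    """Assign each unsupervised cluster the most frequent class label
    found in the supervised pixel-wise classification map."""
    fused = np.empty_like(class_map)
    for c in np.unique(cluster_map):
        mask = cluster_map == c
        labels, counts = np.unique(class_map[mask], return_counts=True)
        fused[mask] = labels[np.argmax(counts)]
    return fused

# toy example: two clusters, two classes
class_map = np.array([[0, 0, 1], [1, 1, 1]])
cluster_map = np.array([[0, 0, 0], [1, 1, 1]])
print(majority_vote_fusion(class_map, cluster_map))  # cluster 0 -> class 0, cluster 1 -> class 1
```

The vote spatially regularizes the pixel-wise SVM output: isolated misclassified pixels inside a coherent cluster are overruled by the cluster majority.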
Imaging ultrasonic dispersive guided wave energy in long bones using the linear Radon transform.
Tran, Tho N H T; Nguyen, Kim-Cuong T; Sacchi, Mauricio D; Le, Lawrence H
2014-11-01
Multichannel analysis of dispersive ultrasonic energy requires a reliable mapping of the data from the time-distance (t-x) domain to the frequency-wavenumber (f-k) or frequency-phase velocity (f-c) domain. The mapping is usually performed with the classic 2-D Fourier transform (FT) with a subsequent substitution and interpolation via c = 2πf/k. The extracted dispersion trajectories of the guided modes lack the resolution in the transformed plane to discriminate wave modes. The resolving power associated with the FT is closely linked to the aperture of the recorded data. Here, we present a linear Radon transform (RT) to image the dispersive energies of the recorded ultrasound wave fields. The RT is posed as an inverse problem, which allows implementation of a regularization strategy to enhance the focusing power. We choose a Cauchy regularization for the high-resolution RT. Three forms of the Radon transform (adjoint, damped least-squares, and high-resolution) are described and compared with respect to robustness using simulated and cervine bone data. The RT also depends on the data aperture, but not as severely as does the FT. With the RT, the resolution of the dispersion panel could be improved by up to around 300% over that of the FT. Among the Radon solutions, the high-resolution RT delineated the guided wave energy with much better imaging resolution (at least 110%) than the other two forms. The Radon operator can also accommodate unevenly spaced records. The results of the study suggest that the high-resolution RT is a valuable imaging tool to extract dispersive guided wave energies under limited aperture. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
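The damped least-squares form of the linear Radon transform described above can be sketched frequency by frequency. This is an illustrative sketch under assumed conditions, not the authors' implementation: it parameterizes the panel by slowness p = 1/c rather than phase velocity, and the toy geometry, frequency, and damping value are arbitrary:

```python
import numpy as np

def damped_ls_radon(D, x, p, freqs, mu=1e-3):
    """Damped least-squares linear Radon transform, solved frequency by
    frequency: maps f-x data D[i, :] to an f-p (slowness) panel."""
    M = np.zeros((len(freqs), len(p)), dtype=complex)
    for i, f in enumerate(freqs):
        L = np.exp(-2j * np.pi * f * np.outer(x, p))   # forward Radon operator
        A = L.conj().T @ L + mu * np.eye(len(p))       # damped normal equations
        M[i] = np.linalg.solve(A, L.conj().T @ D[i])
    return M

# toy check: a single plane-wave event at slowness 0.25
x = np.arange(16.0)                      # receiver offsets
p = np.linspace(0.0, 0.5, 11)            # trial slownesses
f = np.array([1.0])                      # one frequency
D = np.exp(-2j * np.pi * f[0] * 0.25 * x)[None, :]
panel = damped_ls_radon(D, x, p, f)
print(p[np.argmax(np.abs(panel[0]))])    # -> 0.25
```

Replacing the quadratic damping term with a Cauchy penalty (solved iteratively via reweighting) gives the high-resolution variant the abstract refers to.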
Choi, Gloria B; Dong, Hong-Wei; Murphy, Andrew J; Valenzuela, David M; Yancopoulos, George D; Swanson, Larry W; Anderson, David J
2005-05-19
In mammals, innate reproductive and defensive behaviors are mediated by anatomically segregated connections between the amygdala and hypothalamus. This anatomic segregation poses the problem of how the brain integrates activity in these circuits when faced with conflicting stimuli eliciting such mutually exclusive behaviors. Using genetically encoded and conventional axonal tracers, we have found that the transcription factor Lhx6 delineates the reproductive branch of this pathway. Other Lhx proteins mark neurons in amygdalar nuclei implicated in defense. We have traced parallel projections from the posterior medial amygdala, activated by reproductive or defensive olfactory stimuli, respectively, to a point of convergence in the ventromedial hypothalamus. The opposite neurotransmitter phenotypes of these convergent projections suggest a "gate control" mechanism for the inhibition of reproductive behaviors by threatening stimuli. Our data therefore identify a potential neural substrate for integrating the influences of conflicting behavioral cues and a transcription factor family that may contribute to the development of this substrate.
The effects of the physical and chemical properties of soils on the spectral reflectance of soils
NASA Technical Reports Server (NTRS)
Montgomery, O. L.; Baumgardner, M. F.
1974-01-01
The effects of organic matter, free iron oxides, texture, moisture content, and cation exchange capacity on the spectral reflectance of soils were investigated along with techniques for differentiating soil orders by computer analysis of multispectral data. By collecting soil samples of benchmark soils from the different climatic regions within the United States and using the extended wavelength field spectroradiometer to obtain reflectance values and curves for each sample, average curves were constructed for each soil order. Results indicate that multispectral analysis may be a valuable tool for delineating and quantifying differences between soils.
Hydrogen Ordering in Hexagonal Intermetallic AB5 Type Compounds
NASA Astrophysics Data System (ADS)
Sikora, W.; Kuna, A.
2008-04-01
Intermetallic compounds of the AB5 type (A = rare-earth atom, B = transition metal) are known to reversibly store large amounts of hydrogen and are discussed as such in this work. It has been shown that the alloy cycling stability can be significantly improved by employing the so-called non-stoichiometric compounds AB5+x, which is why an analysis of the structural changes is of interest. A tendency toward ordering of hydrogen atoms is one of the most intriguing problems for the unsaturated hydrides. The symmetry analysis method, in the framework of the theory of space groups and their representations, gives the opportunity to find all possible transformations of the parent structure. In this work the symmetry analysis method was applied to the AB5+x structure type (P6/mmm parent symmetry space group). All possible ordering types and accompanying atom displacements were investigated in positions 1a, 2c, 3g (fully occupied in stoichiometric compounds AB5), in positions 2e, 6l (where atom B can appear in non-stoichiometric compounds), and also in 4h, 6m, 6k, 12n, 12o, which can be partly occupied by hydrogen as a result of hydride formation. An analysis was carried out of all possible structures of lower symmetry following from P6/mmm for the wave vector k = (0, 0, 0). The way of obtaining the structure described by the P63mc space group, with a doubled cell along the z-axis (wave vector k = (0, 0, 0.5)), as suggested in the work of Latroche et al., is also discussed by means of symmetry analysis. The analysis was performed with the computer program MODY, which calculates the so-called basis vectors of the irreducible representations of a given symmetry group; these can be used to calculate possible ordering modes.
Fat segmentation on chest CT images via fuzzy models
NASA Astrophysics Data System (ADS)
Tong, Yubing; Udupa, Jayaram K.; Wu, Caiyun; Pednekar, Gargi; Subramanian, Janani Rajan; Lederer, David J.; Christie, Jason; Torigian, Drew A.
2016-03-01
Quantification of fat throughout the body is vital for the study of many diseases. In the thorax, it is important for lung transplant candidates since obesity and being underweight are contraindications to lung transplantation, given their associations with increased mortality. Common approaches for thoracic fat segmentation are all interactive in nature, requiring significant manual effort to draw the interfaces between fat and muscle, with low efficiency and questionable repeatability. The goal of this paper is to explore a practical way to segment the subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) components of chest fat based on a recently developed body-wide automatic anatomy recognition (AAR) methodology. The AAR approach involves 3 main steps: building a fuzzy anatomy model of the body region involving all its major representative objects, recognizing objects in any given test image, and delineating the objects. We made several modifications to these steps to develop an effective solution for delineating the SAT/VAT components of fat. Two new objects representing the interfaces of the SAT and VAT regions with other tissues, SatIn and VatIn, are defined, rather than using the SAT and VAT components directly as objects for constructing the models. A hierarchical arrangement of these new and other reference objects is built to facilitate their recognition in hierarchical order. Subsequently, accurate delineations of the SAT/VAT components are derived from these objects. Unenhanced CT images from 40 lung transplant candidates were utilized in experimentally evaluating this new strategy. Mean object location error achieved was about 2 voxels, and delineation errors in terms of false positive and false negative volume fractions were, respectively, 0.07 and 0.1 for SAT and 0.04 and 0.2 for VAT.
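The false positive and false negative volume fractions used as delineation errors above can be computed directly from binary masks. A minimal sketch, assuming the common convention of normalizing both counts by the reference-object volume (conventions vary across papers):

```python
import numpy as np

def delineation_errors(seg, ref):
    """False positive and false negative volume fractions of a binary
    segmentation `seg` against a reference delineation `ref`."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    ref_vol = ref.sum()
    fpvf = (seg & ~ref).sum() / ref_vol   # labeled fat where reference has none
    fnvf = (~seg & ref).sum() / ref_vol   # reference fat the segmentation missed
    return float(fpvf), float(fnvf)

seg = np.array([0, 1, 1, 1, 0])
ref = np.array([0, 0, 1, 1, 1])
print(delineation_errors(seg, ref))  # -> (0.3333333333333333, 0.3333333333333333)
```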
NASA Astrophysics Data System (ADS)
Guha, Arindam; Singh, Vivek Kr.; Parveen, Reshma; Kumar, K. Vinod; Jeyaseelan, A. T.; Dhanamjaya Rao, E. N.
2013-04-01
Bauxite deposits of Jharkhand in India result from the lateritization process and are therefore often associated with laterites. In the present study, an ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) image is processed to delineate bauxite-rich pockets within the laterites. In this regard, spectral signatures of lateritic bauxite samples are analyzed in the laboratory with reference to the spectral features of gibbsite (the main mineral constituent of bauxite) and goethite (the main mineral constituent of laterite) in the VNIR-SWIR (visible-near infrared and shortwave infrared) electromagnetic domain. The analysis of the spectral signatures of the lateritic bauxite samples helps in understanding the differences between the spectral features of bauxites and laterites. Based on these differences, ASTER-based relative band depth and simple ratio images are derived for spatial mapping of the bauxites developed within the lateritic province. In order to integrate the complementary information of the different index images, an index-based principal component (IPC) image is derived to incorporate the correlative information of these indices and delineate bauxite-rich pockets. The occurrences of bauxite-rich pockets derived from the density-sliced IPC image are further delimited by topographic controls, as it has been observed that the major bauxite occurrences of the area are controlled by slope and altitude. In addition to the above, the IPC image is draped over a digital elevation model (DEM) to illustrate how bauxite-rich pockets are distributed with reference to the topographic variability of the terrain. Bauxite-rich pockets delineated in the IPC image are also validated against the known mine occurrences and the existing geological map of the bauxite. The delineation is also conceptually validated based on the spectral similarity of the bauxite pixels delineated in the IPC image with the ASTER-convolved laboratory spectra of bauxite samples.
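The two building blocks above — a relative band depth index and a principal component of stacked index images — can be sketched as follows. This is an illustrative NumPy sketch under the common definition of relative band depth (shoulder reflectances over the absorption-band reflectance); the function names, band values, and the choice of which ASTER bands to use are assumptions, not the authors' specification:

```python
import numpy as np

def relative_band_depth(shoulder1, absorption, shoulder2):
    """Relative band depth: mean of the two shoulder reflectances over
    the absorption-band reflectance; larger values = deeper absorption."""
    return (shoulder1 + shoulder2) / (2.0 * absorption)

def index_principal_component(*index_images):
    """First principal component of a stack of co-registered index
    images, combining their correlated information into one band."""
    X = np.stack([im.ravel() for im in index_images], axis=1)
    X = X - X.mean(axis=0)                       # center each index
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[0]).reshape(index_images[0].shape)

rbd = relative_band_depth(0.42, 0.30, 0.38)      # ~1.33 for this toy pixel
a = np.array([[1.0, 2.0], [3.0, 4.0]])           # toy index images
b = 2.0 * a + 0.1
pc1 = index_principal_component(a, b)
print(pc1.shape)  # -> (2, 2)
```

Density slicing the resulting IPC band then isolates the high-index (bauxite-rich) pixels.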
Channel migration of the White River in the eastern Uinta Basin, Utah and Colorado
Jurado, Antonio; Fields, Fred K.
1978-01-01
The White River is the largest stream in the southeastern part of the Uinta Basin in Utah and Colorado. This map shows the changes that have occurred in the location of the main channel of the river from 1936 to 1974. The map indicates that certain reaches of the river are subject to different rates of channel migration. Also shown is the boundary of the flood plain, which is mapped at the point of abrupt break in slope. This map documents the position of the river channel prior to any withdrawals of water or alteration of the flow characteristics of the White River that may occur in order to meet water requirements principally associated with the proposed oil-shale industry or other development in the area. The channel locations were determined from aerial photographs taken at four different time periods for the following Federal agencies: in 1936, the U.S. Soil Conservation Service; 1953, the U.S. Corps of Engineers; 1965, the U.S. Geological Survey; and in 1974, the U.S. Bureau of Land Management. The 1936 delineation, which is actually based upon photographs that were taken in 1936 and 1937, was made by projection of the original photographs on a base map that was prepared from 1:24,000-scale topographic maps. The 1953, 1965, and 1974 delineations were produced from stereographic models. The 1965 delineation was compiled from photographs that were taken during 1962-65. The delineation is labeled as 1965 for simplicity, however, because the photographs for 1965 cover about 60 percent of the study reach of the river, and because no changes were discernible in those areas of repetitive photographic coverage.
Demountable damped cavity for HOM-damping in ILC superconducting accelerating cavities
NASA Astrophysics Data System (ADS)
Konomi, T.; Yasuda, F.; Furuta, F.; Saito, K.
2014-01-01
We have designed a new higher-order-mode (HOM) damper called a demountable damped cavity (DDC) as part of the R&D efforts for the superconducting cavity of the International Linear Collider (ILC). The DDC has two design concepts. The first is an axially symmetrical layout to obtain high damping efficiency. The DDC has a coaxial structure along the beam axis to realize strong coupling with HOMs. HOMs are damped by an RF absorber at the end of the coaxial waveguide, and the accelerating mode is reflected by a choke filter mounted at the entrance of the coaxial waveguide. The second design concept is a demountable structure to facilitate cleaning, in order to suppress the Q-slope problem in a high field. A single-cell cavity with the DDC was fabricated to test four performance parameters. The first was frequency matching between the accelerating cavity and the choke filter. Since the bandwidth of the resonance frequency in a superconducting cavity is very narrow, there is a possibility that the accelerating field will leak to the RF absorber because of thermal shrinkage. The design bandwidth of the choke filter is 25 kHz. It was demonstrated that frequency matching adjusted at room temperature could be successfully maintained at 2 K. The second parameter was the performance of the demountable structure. At the joint, the magnetic field is 1/6 of the maximum field in the accelerating cavity. Ultimately, the accelerating field reached 19 MV/m and Q0 was 1.5×10^10 with a knife-edge shape. The third parameter was field emission and multipacting. Although the choke structure has numerous parallel surfaces that are susceptible to the multipacting problem, it was found that neither field emission nor multipacting presented problems in both experiment and simulation. The final parameter was the Q values of the HOM. The RF absorber adopted in the system is a Ni-Zn ferrite type.
The RF absorber shape was designed based on the measurement data of permittivity and permeability at 77 K. The Q values of the HOM in the DDC are 10-100 times lower than those of a TESLA-type HOM coupler.
Design considerations for composite fuselage structure of commercial transport aircraft
NASA Technical Reports Server (NTRS)
Davis, G. W.; Sakata, I. F.
1981-01-01
The structural, manufacturing, and service and environmental considerations that could impact the design of composite fuselage structure for commercial transport aircraft application were explored. The severity of these considerations was assessed and the principal design drivers delineated. Technical issues and potential problem areas which must be resolved before sufficient confidence is established to commit to composite materials were defined. The key issues considered are: definition of composite fuselage design specifications, damage tolerance, and crashworthiness.
Application of the Modular Command and Control Structure (MCES) to Marine Corps SINCGARS Allocation
1991-07-01
The first goal is to delineate the difference between the system being analyzed and its environment. To bound the C3 system, the analyst should...hardware and software entities and structures, is related to the forces it controls and the environmental stimuli to which it responds, including the enemy...MCES represents the environmental factors that require explicit assumptions in the problem. This ring may be seen as including the major scenario
The reactive bed plasma system for contamination control
NASA Technical Reports Server (NTRS)
Birmingham, Joseph G.; Moore, Robert R.; Perry, Tony R.
1990-01-01
The contamination control capabilities of the Reactive Bed Plasma (RBP) system are described by delineating the results of toxic chemical composition studies, aerosol filtration work, and other testing. The RBP system has demonstrated its capabilities to decompose toxic materials and process hazardous aerosols. The post-treatment requirements for the reaction products have possible solutions. Although additional work is required to meet NASA requirements, the RBP may be able to meet contamination control problems aboard the Space Station.
2015-01-01
A central goal of The Academy of Breastfeeding Medicine is the development of clinical protocols for managing common medical problems that may impact breastfeeding success. These protocols serve only as guidelines for the care of breastfeeding mothers and infants and do not delineate an exclusive course of treatment or serve as standards of medical care. Variations in treatment may be appropriate according to the needs of an individual patient. PMID:25836677
ERIC Educational Resources Information Center
Texas Education Agency, Austin.
This report addresses the issues delineated by the 70th Texas Legislature in directing the Central Education Agency to study the problem of youth suicide. "Causes and Factors Contributing to Youth Suicide" presents data on national suicide rates for 15- to 19-year-olds and deaths by suicide in Texas for children between the ages of 5 and 19.…
Current Approaches in Implementing Citizen Science in the Classroom.
Shah, Harsh R; Martinez, Luis R
2016-03-01
Citizen science involves a partnership between inexperienced volunteers and trained scientists engaging in research. In addition to its obvious benefit of accelerating data collection, citizen science has an unexplored role in the classroom, from K-12 schools to higher education. With recent studies showing a weakening in scientific competency of American students, incorporating citizen science initiatives in the curriculum provides a means to address deficiencies in a fragmented educational system. The integration of traditional and innovative pedagogical methods to reform our educational system is therefore imperative in order to provide practical experiences in scientific inquiry, critical thinking, and problem solving for school-age individuals. Citizen science can be used to emphasize the recognition and use of systematic approaches to solve problems affecting the community.
Larsen, Brian Roland; Assentoft, Mette; Cotrina, Maria L.; Hua, Susan Z.; Nedergaard, Maiken; Kaila, Kai; Voipio, Juha; MacAulay, Nanna
2015-01-01
Bursts of network activity in the brain are associated with a transient increase in extracellular K+ concentration. The excess K+ is removed from the extracellular space by mechanisms proposed to involve Kir4.1-mediated spatial buffering, the Na+/K+/2Cl− cotransporter (NKCC1), and/or Na+/K+-ATPase activity. Their individual contribution to [K+]o management has been a matter of extended controversy. The present study aimed, by several complementary approaches, to delineate the transport characteristics of Kir4.1, NKCC1, and Na+/K+-ATPase and to resolve their involvement in clearance of extracellular K+ transients. Primary cultures of rat astrocytes displayed robust NKCC1 activity with [K+]o increases above basal levels. Increased [K+]o produced NKCC1-mediated swelling of cultured astrocytes, and NKCC1 could thereby potentially act as a mechanism of K+ clearance while concomitantly mediating the associated shrinkage of the extracellular space. In rat hippocampal slices, inhibition of NKCC1 failed to affect the rate of K+ removal from the extracellular space, while Kir4.1 enacted its spatial buffering only during a local [K+]o increase. In contrast, inhibition of the different isoforms of Na+/K+-ATPase reduced post-stimulus clearance of K+ transients. The glia-specific α2/β2 subunit composition of Na+/K+-ATPase, when expressed in Xenopus oocytes, displayed a K+ affinity and voltage-sensitivity that would render this astrocyte-specific subunit composition specifically geared for controlling [K+]o during neuronal activity. In rat hippocampal slices, simultaneous measurements of the extracellular space volume revealed that neither Kir4.1, NKCC1, nor Na+/K+-ATPase accounted for the stimulus-induced shrinkage of the extracellular space. Thus, NKCC1 plays no role in activity-induced extracellular K+ recovery in native hippocampal tissue, while Kir4.1 and Na+/K+-ATPase serve temporally distinct roles. PMID:24482245
Problem of time: facets and Machian strategy.
Anderson, Edward
2014-10-01
The problem of time is that the notions of "time" in ordinary quantum theory and in general relativity are mutually incompatible. This causes difficulties in trying to put these two theories together to form a theory of quantum gravity. The problem of time has eight facets in canonical approaches. I clarify that all but one of these facets already occur at the classical level, and reconceptualize and re-name some of these facets as follows. The frozen formalism problem becomes temporal relationalism, and the thin sandwich problem becomes configurational relationalism via the notion of best matching. The problem of observables becomes the problem of beables, and the functional evolution problem becomes the constraint closure problem. I also outline how each of the global and multiple-choice problems of time has its own plurality of facets. This article additionally contains a local resolution to the problem of time at the conceptual level which is actually realizable for the relational triangle and minisuperspace models. This resolution is, moreover, Machian, and has three levels: classical, semiclassical, and a combined semiclassical-histories-timeless records scheme. I end by delineating the current frontiers of this program toward resolution of the problem of time in the cases of full general relativity and of slightly inhomogeneous cosmology. © 2014 New York Academy of Sciences.
Principal Component Geostatistical Approach for large-dimensional inverse problems
Kitanidis, P K; Lee, J
2014-01-01
The quasi-linear geostatistical approach is a method for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, in its textbook implementation, the approach involves iterations, to reach an optimum, and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, is high. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce the computational cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of implementing the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
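The key idea of avoiding the full n-by-m Jacobian can be illustrated with a Jacobian-vector product formed from forward-model runs alone. This is an illustrative finite-difference sketch, not the paper's exact scheme (which works with products against structured matrices); the toy forward model `h` and step size are assumptions:

```python
import numpy as np

def jacvec(forward, m, v, eps=1e-6):
    """Matrix-free Jacobian-vector product J @ v using one extra
    forward-model run, instead of assembling the n-by-m Jacobian."""
    scale = eps / max(np.linalg.norm(v), 1e-300)
    return (forward(m + scale * v) - forward(m)) / scale

# toy forward model: h(m) = (m0^2, m0*m1), so J = [[2*m0, 0], [m1, m0]]
h = lambda m: np.array([m[0] ** 2, m[0] * m[1]])
m = np.array([2.0, 3.0])
v = np.array([1.0, 0.0])
print(jacvec(h, m, v))  # approx J @ v = [4.0, 3.0]
```

Each Gauss-Newton iteration then needs only K such directed forward runs rather than m full sensitivity computations, which is where the roughly linear scaling in m comes from.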
NASA Astrophysics Data System (ADS)
Jia, Zhongxiao; Yang, Yanfei
2018-05-01
In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: min ‖Lx‖ subject to ‖Ax − b‖ = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
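The TRSVD building block — an oversampled randomized SVD truncated back to rank k — can be sketched in a few lines of NumPy. This is a minimal sketch in the spirit of the standard RSVD recipe, not the paper's algorithm; the seed, oversampling value, and exactly-low-rank test matrix are illustrative:

```python
import numpy as np

def trsvd(A, k, q=10):
    """Rank-k truncated randomized SVD: build a rank-(k+q) RSVD with
    oversampling q, then keep only the leading k singular triplets."""
    rng = np.random.default_rng(0)
    Y = A @ rng.standard_normal((A.shape[1], k + q))  # sketch the range of A
    Q, _ = np.linalg.qr(Y)                            # orthonormal range basis
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U)[:, :k], s[:k], Vt[:k]

# exactly rank-3 test matrix: the rank-3 TRSVD reconstructs it (near) exactly
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
U, s, Vt = trsvd(A, 3)
print(np.linalg.norm(A - U @ np.diag(s) @ Vt))  # close to 0
```

In the MTRSVD setting such a factorization stands in for the exact truncated SVD of A inside the general-form regularization problem.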
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturza, Mihai; Allred, Jared M.; Malliakas, Christos D.
Effecting and controlling ferromagnetic-like properties in semiconductors has proven to be a complex problem, especially when approaching room temperature. Here, we demonstrate the important role of defects in the magnetic properties of semiconductors by reporting the structures and properties of the iron chalcogenides (BaF)2Fe2-xQ3 (Q = S, Se), which exhibit anomalous magnetic properties that are correlated with defects in the Fe sublattice. The compounds form in both long-range ordered and disordered polytypes of a new structure typified by the alternate stacking of fluorite (BaF)2(2+) and (Fe2-xQ3)(2-) layers. The latter layers exhibit an ordered array of strong Fe-Fe dimers in edge-sharing tetrahedra. Given the strong Fe-Fe interaction, it is expected that the Fe-Fe dimer is antiferromagnetically coupled, yet crystals exhibit a weak ferromagnetic moment that orders at relatively high temperature: below 280-315 K and 240-275 K for the sulfide and selenide analogues, respectively. This transition temperature positively correlates with the concentration of defects in the Fe sublattice, as determined by single-crystal X-ray diffraction. Our results indicate that internal defects in the Fe2-xQ3 layers play an important role in dictating the magnetic properties of the newly discovered (BaF)2Fe2-xQ3 (Q = S, Se), which can yield switchable ferromagnetically ordered moments at or above room temperature.
δ-exceedance records and random adaptive walks
NASA Astrophysics Data System (ADS)
Park, Su-Chan; Krug, Joachim
2016-08-01
We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} − δ_{k-1}, with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first-order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
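The modified record condition is easy to simulate directly. A minimal sketch for the constant-handicap case with Exp(1) variables (the distribution studied in the earlier work cited above); the sample size and seed are arbitrary choices:

```python
import numpy as np

def record_fraction(delta, n=100_000, seed=0):
    """Fraction of delta-exceedance records among n iid Exp(1) entries:
    Y_k is a record if Y_k > (value of the last record) - delta."""
    rng = np.random.default_rng(seed)
    y = rng.exponential(size=n)
    last, records = -np.inf, 0
    for v in y:
        if v > last - delta:       # handicap relaxes the record condition
            last, records = v, records + 1
    return records / n

# delta = 0 recovers classic records (fraction ~ ln(n)/n); a positive
# handicap yields a much larger record fraction
frac = record_fraction(0.5)
```

Estimating this fraction over a range of δ is one way to locate the transition between the normal and stationary phases numerically.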
Dek, Mohd Sabri Pak; Padmanabhan, Priya; Sherif, Sherif; Subramanian, Jayasankar; Paliyath, Gopinadhan
2017-07-15
Phosphatidylinositol 3-kinase (PI3K) is a key enzyme that phosphorylates phosphatidylinositol at the 3'-hydroxyl position of the inositol head group, initiating the generation of several phosphorylated phosphatidylinositols, collectively referred to as phosphoinositides. The function of PI3K in plant senescence and the ethylene signal transduction process was studied by expression of Solanum lycopersicum PI3K in transgenic Nicotiana tabacum, and delineating its effect on flower senescence. Detached flowers of transgenic tobacco plants with overexpressed Sl-PI3K (OX) displayed accelerated senescence and reduced longevity when compared to the flowers of wild type plants. Flowers from PI3K-overexpressing plants showed enhanced ethylene production and upregulated expression of 1-aminocyclopropane-1-carboxylic acid oxidase 1 (ACO1). Real-time polymerase chain reaction (PCR) analysis showed that PI3K was expressed at a higher level in OX flowers than in the control. Seedlings of OX lines also demonstrated a triple response phenotype with a characteristic exaggerated apical hook, shorter hypocotyls and increased sensitivity to 1-aminocyclopropane-1-carboxylate compared with control wild type seedlings. In floral tissue from OX lines, Solanum lycopersicum phosphatidylinositol 3-kinase green fluorescent protein (PI3K-GFP) chimera protein was localized primarily in stomata, potentially in the cytoplasm and membrane adjacent to stomatal pores in the guard cells. Immunoblot analysis of PI3K expression in OX lines demonstrated an increased protein level compared to the control. Results of the present study suggest that PI3K plays a crucial role in senescence by enhancing ethylene biosynthesis and signaling.
The Significance of Land Cover Delineation on Soil Erosion Assessment.
Efthimiou, Nikolaos; Psomiadis, Emmanouil
2018-04-25
The study aims to evaluate the significance of land cover delineation on soil erosion assessment. To that end, RUSLE (Revised Universal Soil Loss Equation) was implemented at the Upper Acheloos River catchment, Western Central Greece, annually and multi-annually for the period 1965-92. The model estimates soil erosion as the linear product of six factors (R, K, LS, C, and P) representing the catchment's climatic, pedological, topographic, land cover, and anthropogenic characteristics, respectively. The C factor was estimated using six alternative land use delineations of different resolution, namely the CORINE Land Cover (CLC) project (2000, 2012 versions) (1:100,000), a land use map produced by the Greek National Agricultural Research Foundation (NAGREF) (1:20,000), a land use map produced by the Greek Payment and Control Agency for Guidance and Guarantee Community Aid (PCAGGCA) (1:5,000), and the Landsat 8 16-day Normalized Difference Vegetation Index (NDVI) dataset (30 m/pixel) (two approximations) based on remote sensing data (satellite image acquired on 07/09/2016) (1:40,000). Since all other factors remain unchanged per RUSLE application, the differences among the yielded results are attributed to variations in the C factor (thus the land cover pattern). Validation was performed by considering the convergence between simulated (modeled) and observed sediment yield, the latter estimated from field measurements conducted by the Greek PPC (Public Power Corporation). The model performed best at both time scales using the Landsat 8 (Eq. 13) dataset, characterized by a detailed resolution and a satisfactory categorization, allowing the identification of the areas most susceptible to erosion.
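RUSLE combines its factors as a simple product, A = R·K·L·S·C·P, which is why a change in the land-cover factor C propagates linearly into the soil-loss estimate. A minimal sketch (the factor values are hypothetical, not those of the Acheloos catchment):

```python
def rusle_soil_loss(R, K, L, S, C, P):
    """Annual soil loss A as the product of the RUSLE factors:
    rainfall erosivity R, soil erodibility K, slope length L, slope
    steepness S, cover management C, and support practice P."""
    return R * K * L * S * C * P

# Hypothetical values: halving C (e.g. denser vegetation cover in a
# finer land-cover delineation) halves the estimated soil loss.
a1 = rusle_soil_loss(R=700, K=0.3, L=1.2, S=1.5, C=0.10, P=1.0)
a2 = rusle_soil_loss(R=700, K=0.3, L=1.2, S=1.5, C=0.05, P=1.0)
print(a1, a2)  # a2 is exactly half of a1
```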
NASA Astrophysics Data System (ADS)
Mehrpooya, Mehdi; Dehghani, Hossein; Ali Moosavian, S. M.
2016-02-01
A combined system containing a solid oxide fuel cell-gas turbine power plant, a Rankine steam cycle and an ammonia-water absorption refrigeration system is introduced and analyzed. In this process, power, heat and cooling are produced. Energy and exergy analyses, along with economic factors, are used to determine the optimum operating point of the system. The developed electrochemical model of the fuel cell is validated with experimental results. The thermodynamic package and main parameters of the absorption refrigeration system are validated. The power output of the system is 500 kW. An optimization problem is defined in order to find the optimal operating point. Decision variables are current density, temperature of the exhaust gases from the boiler, steam turbine pressure (high and medium), generator temperature and consumed cooling water. Results indicate that the electrical efficiency of the combined system is 62.4% (LHV). Produced refrigeration (at -10 °C) and heat recovery are 101 kW and 22.1 kW, respectively. The investment cost for the combined system (without the absorption cycle) is about 2917 kW-1.
K-Partite RNA Secondary Structures
NASA Astrophysics Data System (ADS)
Jiang, Minghui; Tejada, Pedro J.; Lasisi, Ramoni O.; Cheng, Shanhong; Fechser, D. Scott
RNA secondary structure prediction is a fundamental problem in structural bioinformatics. The prediction problem is difficult because RNA secondary structures may contain pseudoknots formed by crossing base pairs. We introduce k-partite secondary structures as a simple classification of RNA secondary structures with pseudoknots. An RNA secondary structure is k-partite if it is the union of k pseudoknot-free sub-structures. Most known RNA secondary structures are either bipartite or tripartite. We show that there exists a constant number k such that any secondary structure can be modified into a k-partite secondary structure with approximately the same free energy. This offers a partial explanation of the prevalence of k-partite secondary structures with small k. We give a complete characterization of the computational complexities of recognizing k-partite secondary structures for all k ≥ 2, and show that this recognition problem is essentially the same as the k-colorability problem on circle graphs. We present two simple heuristics, iterated peeling and first-fit packing, for finding k-partite RNA secondary structures. For maximizing the number of base pair stackings, our iterated peeling heuristic achieves a constant approximation ratio of at most k for 2 ≤ k ≤ 5, and at most 6/(1 - (1 - 6/k)^k) ≤ 6/(1 - e^(-6)) < 6.01491 for k ≥ 6. Experiments on sequences from PseudoBase show that our first-fit packing heuristic outperforms the leading method HotKnots in predicting RNA secondary structures with pseudoknots. Source code, data set, and experimental results are available at
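The first-fit packing heuristic named above can be read as greedy coloring of the crossing (circle) graph of base pairs: each pair goes into the first pseudoknot-free sub-structure it does not cross. A minimal sketch under that interpretation (the base-pair data are hypothetical, not from the paper's data set):

```python
def crosses(p, q):
    """Two base pairs (i, j) and (k, l), with i < j and k < l, cross iff
    exactly one endpoint of one pair lies strictly inside the other."""
    (i, j), (k, l) = sorted([p, q])
    return i < k < j < l

def first_fit_partition(pairs):
    """Greedily place each base pair into the first sub-structure where
    it crosses nothing; each resulting part is pseudoknot-free."""
    parts = []
    for p in pairs:
        for part in parts:
            if not any(crosses(p, q) for q in part):
                part.append(p)
                break
        else:
            parts.append([p])
    return parts

# Hypothetical pseudoknotted structure: (1,10) and (5,15) cross, so a
# bipartite (k = 2) decomposition is required.
print(first_fit_partition([(1, 10), (2, 9), (5, 15), (6, 14)]))
# → [[(1, 10), (2, 9)], [(5, 15), (6, 14)]]
```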
Li, Shuai; Li, Yangming; Wang, Zheng
2013-03-01
This paper presents a class of recurrent neural networks to solve quadratic programming problems. Unlike most existing recurrent neural networks for solving quadratic programming problems, the proposed neural network model converges in finite time, and the activation function is not required to be a hard-limiting function to achieve finite convergence time. The stability, finite-time convergence property and optimality of the proposed neural network for solving the original quadratic programming problem are proven in theory. Extensive simulations are performed to evaluate the performance of the neural network with different parameters. In addition, the proposed neural network is applied to solving the k-winner-take-all (k-WTA) problem. Both theoretical analysis and numerical simulations validate the effectiveness of our method for solving the k-WTA problem.
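The k-winner-take-all operation the network is applied to is simple to state: output 1 for the k largest of n inputs and 0 for the rest. A direct, non-neural reference implementation, useful as a check against a dynamical solver, might look like:

```python
def k_wta(x, k):
    """k-winner-take-all: 1 for the k largest entries of x, else 0.
    Ties are broken by index order (stable sort)."""
    winners = sorted(range(len(x)), key=lambda i: x[i], reverse=True)[:k]
    out = [0] * len(x)
    for i in winners:
        out[i] = 1
    return out

print(k_wta([0.3, 1.2, -0.5, 0.9], k=2))  # → [0, 1, 0, 1]
```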
Information fusion in regularized inversion of tomographic pumping tests
Bohling, Geoffrey C.
2008-01-01
In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning, and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to the analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
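The regularization scheme described, penalizing deviation of the estimated parameters from a prior model, is a Tikhonov-style least-squares problem. A minimal sketch (with a hypothetical forward operator G and data d, not the chapter's flow model) solves min_m ||d - Gm||² + λ²||m - m0||² by stacking the penalty rows into the system:

```python
import numpy as np

def regularized_inverse(G, d, m0, lam):
    """Solve min_m ||d - G m||^2 + lam^2 ||m - m0||^2 by augmenting
    the least-squares system with the regularization rows."""
    n = G.shape[1]
    A = np.vstack([G, lam * np.eye(n)])
    b = np.concatenate([d, lam * m0])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# Hypothetical ill-conditioned problem: a larger lam pulls the estimate
# toward the prior m0, trading data fit against model norm.
G = np.array([[1.0, 1.0], [1.0, 1.0001]])
d = np.array([2.0, 2.0001])
m0 = np.zeros(2)
print(regularized_inverse(G, d, m0, lam=0.1))
```

Sweeping `lam` and plotting data misfit against model norm gives the tradeoff curve the chapter describes exploring systematically.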
NASA Astrophysics Data System (ADS)
Starko, Darij; Craig, Walter
2018-04-01
Variations in redshift measurements of Type Ia supernovae and intensity observations from large sky surveys are an indicator of a component of acceleration in the rate of expansion of space-time. A key factor in the measurements is the intensity-distance relation for Maxwell's equations in Friedmann-Robertson-Walker (FRW) space-times. In view of future measurements of the decay of other fields on astronomical time and spatial scales, we determine the asymptotic behavior of the intensity-distance relationship for the solution of the wave equation in space-times with an FRW metric. This builds on previous work on initial value problems for the wave equation in FRW space-time [Abbasi, B. and Craig, W., Proc. R. Soc. London, Ser. A 470, 20140361 (2014)]. In this paper, we focus on the precise intensity decay rates of the special cases for curvature k = 0 and k = -1, as well as giving a general derivation of the wave solution for -∞ < k < 0. We choose a Cauchy surface {(t, x) : t = t0 > 0}, where t0 represents the time of an initial emission source, relative to the Big Bang singularity at t = 0. The initial data [g(x), h(x)] are assumed to be compactly supported, supp(g, h) ⊆ B_R(0), and terms in the expression for the fundamental solution of the wave equation with the slowest decay rate are retained. The intensities calculated for coordinate time {t : t > 0} contain correction terms proportional to the ratio of t0 and the time difference ρ = t - t0. For the case of general curvature k, these expressions for the intensity reduce by scaling to the same form as for k = -1, from which we deduce the general formula. We note that for typical astronomical events such as Type Ia supernovae, the first-order correction term for all curvatures -∞ < k < 0 is on the order of 10^-4 smaller than the zeroth-order term. These correction terms are small but may be significant in applications to alternative observations of cosmological space-time expansion rates.
Quantitative analysis of the correlations in the Boltzmann-Grad limit for hard spheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pulvirenti, M.
2014-12-09
In this contribution I consider the problem of the validity of the Boltzmann equation for a system of hard spheres in the Boltzmann-Grad limit. I briefly review the results available nowadays, with particular emphasis on the celebrated Lanford validity theorem. Finally I present some recent results, obtained in collaboration with S. Simonella, concerning a quantitative analysis of the propagation of chaos. More precisely, we introduce a quantity (the correlation error) measuring how far a j-particle rescaled correlation function at time t (sufficiently small) is from full statistical independence. Roughly speaking, a correlation error of order k measures (in the context of the BBGKY hierarchy) the event in which k tagged particles form a recolliding group.
NASA Astrophysics Data System (ADS)
Kasiviswanathan, Shiva Prasad; Pan, Feng
In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove a set of k matrix columns that minimizes, in the residual matrix, the sum of the row values, where the value of a row is defined to be the largest entry in that row. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to minimize the probability that an adversary can successfully smuggle weapons. After introducing the matrix interdiction problem, we study the computational complexity of this problem. We show that the matrix interdiction problem is NP-hard and that there exists a constant γ such that it is even NP-hard to approximate this problem within an n^γ additive factor. We also present an algorithm for this problem that achieves an (n - k) multiplicative approximation ratio.
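For small instances the objective is easy to state directly: remove k columns so that the sum of the remaining row maxima is minimized. A brute-force reference sketch (hypothetical data; exponential in k, for illustration only, not the paper's approximation algorithm):

```python
from itertools import combinations

def matrix_interdiction(M, k):
    """Brute force: try every set of k columns to remove, minimizing
    the sum of row maxima over the residual matrix."""
    ncols = len(M[0])
    best = None
    for removed in combinations(range(ncols), k):
        keep = [c for c in range(ncols) if c not in removed]
        value = sum(max(row[c] for c in keep) for row in M)
        if best is None or value < best[0]:
            best = (value, removed)
    return best  # (objective value, removed column indices)

M = [[3, 1, 4],
     [2, 7, 1],
     [5, 2, 6]]
# Removing column 1 kills the large entry 7, giving row maxima
# 4 + 2 + 6 = 12, the minimum over all single-column removals.
print(matrix_interdiction(M, k=1))  # → (12, (1,))
```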
NASA Astrophysics Data System (ADS)
Swathi Lakshmi, A.; Saran, S.; Srivastav, S. K.; Krishna Murthy, Y. V. N.
2014-11-01
India is prone to several natural disasters such as floods, droughts, cyclones, landslides and earthquakes on account of its geoclimatic conditions, but the most frequent and prominent disasters are floods and droughts. To reduce the impact of floods and droughts in India, interlinking of rivers is one of the best solutions to transfer surplus flood waters to deficit/drought-prone areas. Geospatial modelling provides a holistic approach to generating probable interlinking routes of rivers based on existing geoinformatics tools and technologies. In the present study, SRTM DEM and AWiFS datasets coupled with land-use/land-cover, geomorphology, soil and interpolated rainfall surface maps have been used to identify the potential routes in the geospatial domain for interlinking the Vamsadhara and Nagavali River Systems in Srikakulam district, Andhra Pradesh. The first-order derivatives are derived from the DEM, and road, railway and drainage networks have been delineated using the satellite data. The inundation map has been prepared using the AWiFS-derived Normalized Difference Water Index (NDWI). The drought-prone areas were delineated on the satellite image as per the records declared by the Revenue Department, Srikakulam. Majority Rule Based (MRB) aggregation is performed to optimize the resolution of the obtained data in order to retain the spatial variability of the classes. Analytical Hierarchy Process (AHP) based Multi-Criteria Decision Making (MCDM) is implemented to obtain the prioritization of parameters such as geomorphology, soil, DEM, slope, and land use/land cover. A likelihood grid has been generated and all the thematic layers are overlaid to identify the potential grids for routing optimization. To give a better routing map, an impedance map has been generated and several other constraints, such as the extra cost of canal construction in some areas, are considered.
The developed routing map is published as OGC WMS services using open-source GeoServer, and the proposed routing service can be visualized over the Bhuvan portal (http://www.bhuvan.nrsc.gov.in/). Thus the obtained routing map of proposed canals focuses on transferring surplus waters to drought-prone areas to solve the problem of water scarcity, to properly utilize flood waters for irrigation purposes, and to help in recharging groundwater. A similar methodology can be adopted for interlinking other river systems.
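The AHP step described above derives weights for criteria (geomorphology, soil, DEM, slope, land use) from a pairwise-comparison matrix; a common approximation to the priority vector uses normalized column sums. A minimal sketch with a hypothetical 3-criteria comparison matrix (the judgments are illustrative, not the study's):

```python
def ahp_weights(P):
    """Approximate AHP priority vector: normalize each column of the
    pairwise-comparison matrix P, then average across each row."""
    n = len(P)
    col_sums = [sum(P[i][j] for i in range(n)) for j in range(n)]
    return [sum(P[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# Hypothetical Saaty-scale judgments: criterion A is 3x as important
# as B and 5x as important as C; B is 2x as important as C.
P = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
w = ahp_weights(P)
print([round(x, 3) for x in w])  # weights sum to 1, ordered A > B > C
```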
Stockton, S.L.; Balch, Alfred H.
1978-01-01
The Salt Valley anticline, in the Paradox Basin of southeastern Utah, is under investigation for use as a location for storage of solid nuclear waste. Delineation of thin, nonsalt interbeds within the upper reaches of the salt body is extremely important because the nature and character of any such fluid- or gas-saturated horizons would be critical to the mode of emplacement of wastes into the structure. Analysis of 50 km of conventional seismic-reflection data, in the vicinity of the anticline, indicates that mapping of thin beds at shallow depths may well be possible using a specially designed adaptation of state-of-the-art seismic oil-exploration procedures. Computer ray-trace modeling of thin beds in salt reveals that the frequency and spatial resolution required to map the details of interbeds at shallow depths (less than 750 m) may be on the order of 500 Hz, with surface-spread lengths of less than 350 m. Consideration should be given to the burial of sources and receivers in order to attenuate surface noise and to record the desired high frequencies. Correlation of the seismic-reflection data with available well data and surface geology reveals the complex, structurally initiated diapir, whose upward flow was maintained by rapid contemporaneous deposition of continental clastic sediments on its flanks. Severe collapse faulting near the crests of these structures has distorted the seismic response. Evidence exists, however, that intrasalt thin beds of anhydrite, dolomite, and black shale are mappable on seismic record sections either as short, discontinuous reflected events or as amplitude anomalies that result from focusing of the reflected seismic energy by the thin beds; computer modeling of the folded interbeds confirms both of these as possible causes of seismic response from within the salt diapir. Prediction of the seismic signatures of the interbeds can be made from computer-model studies. 
Petroleum seismic-reflection data are unsatisfactory for mapping the thin beds because of the lack of sufficient resolution to provide direct evidence of the presence of the thin beds. However, indirect evidence, present in these data as discontinuous seismic events, suggests that two geophysical techniques designed for this specific problem would allow direct detection of the interbeds in salt. These techniques are vertical seismic profiling and shallow, short-offset, high-frequency, seismic-reflection recording.
Synthesis and thermoluminescence characteristics of γ-irradiated K3Ca2(SO4)3F:Eu or Ce fluoride
NASA Astrophysics Data System (ADS)
Poddar, Anuradha; Gedam, S. C.; Dhoble, S. J.
2015-05-01
New halophosphor K3Ca2(SO4)3F activated by Eu and Ce has been synthesized by a co-precipitation method and characterized by its thermoluminescence. The formation of traps in rare-earth-doped K3Ca2(SO4)3F and the effects of γ-radiation dose on the glow curve are discussed. The glow curve of K3Ca2(SO4)3F:Ce shows a prominent single peak at 150°C, whereas K3Ca2(SO4)3F:Eu and K3Ca2(SO4)3F:Ce,Eu show peaks at 142°C and 192°C, respectively. A single glow peak indicates that only one set of traps is activated within the particular temperature range. The phosphors were also studied for fading, reusability and trapping parameters. There was just 2% fading over a period of 10 days, indicating no serious fading problem. Trapping parameters such as order of kinetics (b), activation energy (E) and frequency factor (S) were calculated using Chen's half-width method. The observations presented in this paper indicate these materials are suitable as lamp phosphors as well as solid-state dosimeters.
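Chen's half-width method estimates the activation energy from the glow-peak shape: the half-intensity temperatures T1 and T2 on either side of the peak maximum Tm. A hedged sketch using the standard Chen peak-shape constants for the full width ω = T2 - T1, with the geometrical factor μg = δ/ω accounting for the order of kinetics (the temperatures below are hypothetical, not the measured glow-curve data of this paper):

```python
K_BOLTZ = 8.617e-5  # Boltzmann constant, eV/K

def chen_activation_energy(T1, Tm, T2):
    """Chen's half-width method: activation energy E (eV) from the
    low/high half-intensity temperatures T1, T2 and peak maximum Tm
    (kelvin), using the omega = T2 - T1 peak-shape formula
    E = c_w * k * Tm^2 / omega - b_w * 2 * k * Tm."""
    delta, omega = T2 - Tm, T2 - T1
    mu_g = delta / omega                  # geometrical (symmetry) factor
    c_w = 2.52 + 10.2 * (mu_g - 0.42)     # Chen constants for omega
    b_w = 1.0
    return c_w * K_BOLTZ * Tm**2 / omega - b_w * 2 * K_BOLTZ * Tm

# Hypothetical glow peak near 150 C (423 K):
print(chen_activation_energy(T1=400.0, Tm=423.0, T2=440.0))
```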
Exploiting Elementary Landscapes for TSP, Vehicle Routing and Scheduling
2015-09-03
Traveling Salesman Problem (TSP) and Graph Coloring are elementary. Problems such as MAX-kSAT are a superposition of k elementary landscapes. This...search space. Problems such as the Traveling Salesman Problem (TSP), Graph Coloring, the Frequency Assignment Problem, as well as Min-Cut and Max-Cut...echoing our earlier results on the Traveling Salesman Problem. Using two locally optimal solutions as "parent" solutions, we have developed a
NASA Technical Reports Server (NTRS)
Cisowski, S. M.; Fuller, M.
1986-01-01
A method for determining a planetary body's magnetic field environment over time is proposed. This relative paleointensity method is based on the normalization of natural remanence to saturation remanence magnetization as measured after each sample is exposed to a strong magnetic field. It is shown that this method is well suited to delineating order-of-magnitude changes in magnetizing fields.
Development of a Computational Assay for the Estrogen Receptor
2006-07-01
University Ashley Deline, Senior Thesis in Chemistry, "Molecular Dynamics Simulations of a Glycoform and its Constituent Parts Related to Rheumatoid Arthritis"...involves running a long molecular dynamics (MD) simulation of the uncoupled receptor in order to sample the protein's unique conformations. The second...Receptor binding domain. * Performed several long molecular dynamics simulations (800 ps - 3 ns) on the ligand-ER system using ligands with known
Aznar, Marianne C; Girinsky, Theodore; Berthelsen, Anne Kiil; Aleman, Berthe; Beijert, Max; Hutchings, Martin; Lievens, Yolande; Meijnders, Paul; Meidahl Petersen, Peter; Schut, Deborah; Maraldo, Maja V; van der Maazen, Richard; Specht, Lena
2017-04-01
In early-stage classical Hodgkin lymphoma (HL) the target volume nowadays consists of the volume of the originally involved nodes. Delineation of this volume on a post-chemotherapy CT scan is challenging. We report on the interobserver variability in target volume definition and its impact on resulting treatment plans. Two representative cases were selected (1: male, stage IB, localization: left axilla; 2: female, stage IIB, localizations: mediastinum and bilateral neck). Eight experienced observers individually defined the clinical target volume (CTV) using involved-node radiotherapy (INRT) as defined by the EORTC-GELA guidelines for the H10 trial. A consensus contour was generated and the standard deviation computed. We investigated the overlap between observer and consensus contour [Sørensen-Dice coefficient (DSC)] and the magnitude of gross deviations between the surfaces of the observer and consensus contours (Hausdorff distance). 3D-conformal radiotherapy (3D-CRT) and intensity-modulated radiotherapy (IMRT) plans were calculated for each contour in order to investigate the impact of interobserver variability on each treatment modality. Similar target coverage was enforced for all plans. The median CTV was 120 cm3 (IQR: 95-173 cm3) for Case 1, and 255 cm3 (IQR: 183-293 cm3) for Case 2. DSC values were generally high (>0.7), and Hausdorff distances were about 30 mm. The SDs between all observer contours, providing an estimate of the systematic error associated with delineation uncertainty, ranged from 1.9 to 3.8 mm (median: 3.2 mm). Variations in mean dose resulting from different observer contours were small and were not higher in IMRT plans than in 3D-CRT plans. We observed considerable differences in target volume delineation, but the systematic delineation uncertainty of around 3 mm is comparable to that reported for other tumour sites. This report is a first step towards calculating an evidence-based planning target volume margin for INRT in HL.
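The Sørensen-Dice coefficient used to score the overlap between an observer contour and the consensus is twice the intersection volume over the sum of the two volumes, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on voxel sets (the masks are hypothetical, not patient data):

```python
def dice_coefficient(a, b):
    """Sorensen-Dice coefficient between two voxel sets:
    DSC = 2|A & B| / (|A| + |B|); 1.0 means perfect overlap."""
    if not a and not b:
        return 1.0  # two empty contours overlap perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

# Two hypothetical 2D masks sharing 3 of their 4 voxels each:
observer = {(1, 1), (1, 2), (2, 1), (2, 2)}
consensus = {(1, 2), (2, 1), (2, 2), (3, 2)}
print(dice_coefficient(observer, consensus))  # → 0.75
```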
Accuracy assessment of airborne LIDAR data and automated extraction of features
NASA Astrophysics Data System (ADS)
Cetin, Ali Fuat
Airborne LIDAR technology is becoming more widely used since it provides fast and dense irregularly spaced 3D point clouds. The coordinates produced as a result of calibration of the system are used for surface modeling and information extraction. In this research a new idea of LIDAR-detectable targets is introduced. In the second part of this research, a new technique to delineate the edge of road pavements automatically using only LIDAR is presented. The accuracy of LIDAR data should be determined before exploitation for any information extraction to support a Geographic Information System (GIS) database. Until recently there was no definitive research providing a methodology for common and practical assessment of both horizontal and vertical accuracy of LIDAR data for end users. The idea used in this research was to use targets of such a size and design that the position of each target can be determined using the least squares image matching technique. The technique used in this research can provide end users and data providers an easy way to evaluate the quality of the product, especially when there are accessible hard surfaces on which to install the targets. The results of the technique are determined to be in a reasonable range when the point spacing of the data is sufficient. To delineate the edge of pavements, trees and buildings are removed from the point cloud, and the road surfaces are segmented from the remaining terrain data. This is accomplished using the homogeneous nature of road surfaces in intensity and height. Few studies address delineating the edge of road pavement once the road surfaces have been extracted. In this research, template matching techniques are used with criteria computed from Gray Level Co-occurrence Matrix (GLCM) properties in order to locate seed pixels in the image. The seed pixels are then used for placement of the matched templates along the road.
The accuracy of the delineated edge of pavement is determined by comparing the coordinates of reference points collected via photogrammetry with the coordinates of the nearest points along the delineated edge.
NASA Astrophysics Data System (ADS)
Lee, Joohwi; Kim, Sun Hyung; Styner, Martin
2016-03-01
The delineation of rodent brain structures is challenging due to low-contrast multiple cortical and subcortical organs that closely interface with each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple organs at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model to a subject image, followed by a label fusion process. Our results show that the proposed method achieves greater accuracy than comparable segmentation methods, including the widely used ANTs registration tool.
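The label fusion step that multi-atlas segmentation relies on can, in its simplest form, be per-voxel majority voting across the propagated atlas labels (more elaborate weighted schemes exist; this is only the baseline). A minimal sketch with hypothetical label maps:

```python
from collections import Counter

def majority_vote_fusion(label_maps):
    """Fuse per-voxel labels from several registered atlases by
    majority vote; ties go to the smallest label value."""
    fused = []
    for voxel_labels in zip(*label_maps):
        counts = Counter(voxel_labels)
        # Highest count wins; on ties, the smaller label wins.
        label, _ = max(counts.items(), key=lambda kv: (kv[1], -kv[0]))
        fused.append(label)
    return fused

# Three hypothetical atlases labeling the same 5 voxels:
atlas_a = [0, 1, 1, 2, 2]
atlas_b = [0, 1, 2, 2, 0]
atlas_c = [1, 1, 1, 2, 2]
print(majority_vote_fusion([atlas_a, atlas_b, atlas_c]))  # → [0, 1, 1, 2, 2]
```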
NASA Astrophysics Data System (ADS)
Botter Martins, Samuel; Vallin Spina, Thiago; Yasuda, Clarissa; Falcão, Alexandre X.
2017-02-01
Statistical atlases have played an important role in automated medical image segmentation. However, a challenge has been to make the atlas more adaptable to possible errors in deformable registration of anomalous images, given that the body structures of interest for segmentation might present significant differences in shape and texture. Recently, deformable registration errors have been accounted for by a method that locally translates the statistical atlas over the test image, after registration, and evaluates candidate objects from a delineation algorithm in order to choose the best one as the final segmentation. In this paper, we improve its delineation algorithm and extend the model to a multi-object statistical atlas, built from control images and adaptable to anomalous images, by incorporating a texture classifier. In order to provide a first proof of concept, we instantiate the new method for segmenting, object-by-object and all objects simultaneously, the left and right brain hemispheres and the cerebellum, without the brainstem, and evaluate it on MR-T1 images of epilepsy patients before and after brain surgery, which removed portions of the temporal lobe. The results show an efficiency gain and statistically significantly higher accuracy, measured by the mean Average Symmetric Surface Distance, with respect to the original approach.
Implications of recent measurements of hadronic charmless B decays
NASA Astrophysics Data System (ADS)
Cheng, Hai-Yang; Yang, Kwei-Chou
2000-09-01
The implications of recent CLEO measurements of hadronic charmless B decays are discussed. (i) Employing the Bauer-Stech-Wirbel (BSW) model for form factors as a benchmark, the B → π+π- data indicate that the form factor F0^{Bπ}(0) is smaller than that predicted by the BSW model, whereas the data of B → ωπ, K*η imply that the form factors A0^{Bω}(0), A0^{BK*}(0) are greater than the BSW model values. (ii) The tree-dominated modes B → π+π-, ρ0π±, ωπ± imply that the effective number of colors Nc^eff(LL) for (V-A)(V-A) operators is preferred to be smaller, while the current limit on B → φK shows that Nc^eff(LR) > 3. The data of B → Kη' and K*η clearly indicate that Nc^eff(LR) >> Nc^eff(LL). (iii) In order to understand the observed suppression of π+π- and nonsuppression of Kπ modes, both being governed by the form factor F0^{Bπ}, the unitarity angle γ is preferred to be greater than 90°. By contrast, the new measurement of B± → ρ0π± no longer strongly favors cos γ < 0. (iv) The observed pattern K-π+ ~ K̄0π- ~ (3/2)K-π0 is consistent with the theoretical expectation: the constructive interference between electroweak and QCD penguin diagrams in the K-π0 mode explains why B(B- → K-π0) > (1/2)B(B̄0 → K-π+). (v) The observation Nc^eff(LL) < 3
Fu, Chenglai; Xu, Jing; Li, Ruo-Jing; Crawford, Joshua A.; Khan, A. Basit; Ma, Ting Martin; Cha, Jiyoung Y.; Snowman, Adele M.; Pletnikov, Mikhail V.
2015-01-01
The inositol hexakisphosphate kinases (IP6Ks) are the principal enzymes that generate inositol pyrophosphates. There are three IP6Ks (IP6K1, 2, and 3). Functions of IP6K1 and IP6K2 have been substantially delineated, but little is known of IP6K3's role in normal physiology, especially in the brain. To elucidate functions of IP6K3, we generated mice with targeted deletion of IP6K3. We demonstrate that IP6K3 is highly concentrated in the brain in cerebellar Purkinje cells. IP6K3 physiologically binds to the cytoskeletal proteins adducin and spectrin, whose mutual interactions are perturbed in IP6K3-null mutants. Consequently, IP6K3 knock-out cerebella manifest abnormalities in Purkinje cell structure and synapse number, and the mutant mice display deficits in motor learning and coordination. Thus, IP6K3 is a major determinant of cytoskeletal disposition and function of cerebellar Purkinje cells. SIGNIFICANCE STATEMENT We identified and cloned a family of three inositol hexakisphosphate kinases (IP6Ks) that generate the inositol pyrophosphates, most notably 5-diphosphoinositol pentakisphosphate (IP7). Of these, IP6K3 has been least characterized. In the present study we generated IP6K3 knock-out mice and show that IP6K3 is highly expressed in cerebellar Purkinje cells. IP6K3-deleted mice display defects of motor learning and coordination. IP6K3-null mice manifest aberrations of Purkinje cells with a diminished number of synapses. IP6K3 interacts with the cytoskeletal proteins spectrin and adducin whose altered disposition in IP6K3 knock-out mice may mediate phenotypic features of the mutant mice. These findings afford molecular/cytoskeletal mechanisms by which the inositol polyphosphate system impacts brain function. PMID:26245967
Evans, Amy; Taylor, Julie Scott
2014-01-01
A central goal of The Academy of Breastfeeding Medicine is the development of clinical protocols for managing common medical problems that may impact breastfeeding success. These protocols serve only as guidelines for the care of breastfeeding mothers and infants and do not delineate an exclusive course of treatment or serve as standards of medical care. Variations in treatment may be appropriate according to the needs of an individual patient. PMID:24456024