ERIC Educational Resources Information Center
Maclennan, Ian
1977-01-01
Suggests that there exists a "finite" number of elementary concepts and distinguishable modes of thinking, that all human beings tend to acquire the same set of elements of thinking and the same strategies with which to understand and control their physical environment, and that the method of analysis used here is a standard scientific method.…
Jakarta socio-cultural ecology: a sustainable architecture concept in urban neighbourhood
NASA Astrophysics Data System (ADS)
Wijaksono, Sigit; Sasmoko; Indrianti, Y.; Widhoyoko, SA
2017-12-01
As a densely populated metropolitan city undergoing rapid residential development, Jakarta should implement a concept of Jakarta socio-cultural ecology architecture as the basis of settlement development. The concept of Jakarta socio-cultural ecology architecture is characterized by residential development that reflects and expresses the indigenous culture, by settlements built by linking the social and economic activities of the people of Jakarta, and by settlements built by maintaining buildings with existing heritage value. The objectives of this research are (1) to find a construct relevant to housing conditions in Jakarta, here called Jakarta socio-cultural ecology, and (2) to examine the current condition of Jakarta socio-cultural ecology settlements. This research uses the Neuroresearch method, a mixed-method approach combining qualitative (exploratory) research with quantitative (explanatory and confirmatory) research. The population and unit of analysis are all settlements in Jakarta. The sampling technique is probability sampling, specifically multistage sampling. The results show that Jakarta residential complexes nowadays tend toward socio-cultural ecology: they somewhat reflect and express the indigenous culture, and their buildings have become fully linked with the social and economic activities of Jakarta society, but they only occasionally maintain buildings with existing heritage values. This study also found that indigenous culture is a significant determinant of the formation of the condition of Jakarta socio-cultural ecology.
Cholewicki, Jacek; van Dieën, Jaap; Lee, Angela S.; Reeves, N. Peter
2011-01-01
The problem with normalizing EMG data from patients with painful symptoms (e.g. low back pain) is that such patients may be unwilling or unable to perform maximum exertions. Furthermore, the normalization to a reference signal, obtained from a maximal or sub-maximal task, tends to mask differences that might exist as a result of pathology. Therefore, we presented a novel method (GAIN method) for normalizing trunk EMG data that overcomes both problems. The GAIN method does not require maximal exertions (MVC) and tends to preserve distinct features in the muscle recruitment patterns for various tasks. Ten healthy subjects performed various isometric trunk exertions, while EMG data from 10 muscles were recorded and later normalized using the GAIN and MVC methods. The MVC method resulted in smaller variation between subjects when tasks were executed at the three relative force levels (10%, 20%, and 30% MVC), while the GAIN method resulted in smaller variation between subjects when the tasks were executed at the three absolute force levels (50 N, 100 N, and 145 N). This outcome implies that the MVC method provides a relative measure of muscle effort, while the GAIN-normalized EMG data gives an estimate of the absolute muscle force. Therefore, the GAIN-normalized EMG data tends to preserve the EMG differences between subjects in the way they recruit their muscles to execute various tasks, while the MVC-normalized data will tend to suppress such differences. The appropriate choice of the EMG normalization method will depend on the specific question that an experimenter is attempting to answer. PMID:21665489
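The contrast between the two normalization schemes can be sketched in code. The exact GAIN procedure of this paper is not given in the abstract; below, the "gain" is illustrated as a least-squares scale factor mapping raw EMG amplitude to measured force, which is an assumption made for illustration only.

```python
# Hypothetical sketch contrasting two EMG normalization schemes.
# MVC normalization expresses EMG relative to a maximal-exertion reference;
# gain normalization scales raw EMG into estimated absolute force units.

def normalize_mvc(emg, emg_at_mvc):
    """Express each EMG sample as a fraction of the maximal-exertion EMG."""
    return [e / emg_at_mvc for e in emg]

def estimate_gain(emg, force):
    """Least-squares gain g minimizing sum((g*emg - force)^2)."""
    num = sum(e * f for e, f in zip(emg, force))
    den = sum(e * e for e in emg)
    return num / den

def normalize_gain(emg, gain):
    """Scale raw EMG into (estimated) absolute force units."""
    return [gain * e for e in emg]

# Toy calibration trials: EMG amplitudes recorded at known forces (invented).
emg_samples = [0.2, 0.4, 0.6]       # arbitrary EMG units
forces      = [50.0, 100.0, 150.0]  # N

g = estimate_gain(emg_samples, forces)
abs_force = normalize_gain(emg_samples, g)    # per-sample force estimates, N
rel_effort = normalize_mvc(emg_samples, 0.8)  # fractions of an MVC reference
```

The sketch shows why the two methods answer different questions: `rel_effort` is meaningful only relative to the subject's own maximum, while `abs_force` is comparable across subjects at the same absolute load.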
The Beck Depression Inventory, Second Edition (BDI-II): A Cross-Sample Structural Analysis
ERIC Educational Resources Information Center
Strunk, Kamden K.; Lane, Forrest C.
2017-01-01
A common concern about the Beck Depression Inventory, Second edition (BDI-II) among researchers in the area of depression has long been the single-factor scoring scheme. Methods exist for making cross-sample comparisons of latent structure but tend to rely on estimation methods that can be imprecise and unnecessarily complex. This study presents a…
Comparing and improving reconstruction methods for proxies based on compositional data
NASA Astrophysics Data System (ADS)
Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.
2017-12-01
Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods. Existing methods tend to relate the compositional data and the reconstruction target in very simple ways. Additionally, the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows for the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500-year-long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their resulting means and uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data.
Elicitation Support Requirements of Multi-Expertise Teams
ERIC Educational Resources Information Center
Bitter-Rijpkema, Marlies; Martens, Rob; Jochems, Wim
2005-01-01
Tools to support knowledge elicitation are used more and more in situations where employees or students collaborate using the computer. Studies indicate that differences exist between experts and novices regarding their methods of work and reasoning. However, the commonly preferred approach tends to deal with team members as a single system with…
Rank and Sparsity in Language Processing
ERIC Educational Resources Information Center
Hutchinson, Brian
2013-01-01
Language modeling is one of many problems in language processing that have to grapple with naturally high ambient dimensions. Even in large datasets, the number of unseen sequences is overwhelmingly larger than the number of observed ones, posing clear challenges for estimation. Although existing methods for building smooth language models tend to…
A General Purpose Feature Extractor for Light Detection and Ranging Data
2010-11-17
Abstract: Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit...detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image...datasets, and the 3D MIT DARPA Urban Challenge dataset. Keywords: SLAM; LIDARs; feature detection; uncertainty estimates; descriptors
Educational Data Mining Applications and Tasks: A Survey of the Last 10 Years
ERIC Educational Resources Information Center
Bakhshinategh, Behdad; Zaiane, Osmar R.; ElAtia, Samira; Ipperciel, Donald
2018-01-01
Educational Data Mining (EDM) is the field of using data mining techniques in educational environments. There exist various methods and applications in EDM which can follow both applied research objectives such as improving and enhancing learning quality, as well as pure research objectives, which tend to improve our understanding of the learning…
Uncertainty loops in travel-time tomography from nonlinear wave physics.
Galetti, Erica; Curtis, Andrew; Meles, Giovanni Angelo; Baptie, Brian
2015-04-10
Estimating image uncertainty is fundamental to guiding the interpretation of geoscientific tomographic maps. We reveal novel uncertainty topologies (loops) which indicate that while the speeds of both low- and high-velocity anomalies may be well constrained, their locations tend to remain uncertain. The effect is widespread: loops dominate around a third of United Kingdom Love wave tomographic uncertainties, changing the nature of interpretation of the observed anomalies. Loops exist due to 2nd and higher order aspects of wave physics; hence, although such structures must exist in many tomographic studies in the physical sciences and medicine, they are unobservable using standard linearized methods. Higher order methods might fruitfully be adopted.
Indurkhya, Sagar; Beal, Jacob
2010-01-06
ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage proportional only to the number of reactions and species, rather than the larger storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.
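The LOLCAT optimizations themselves are not reproducible from the abstract, but the underlying algorithm they accelerate can be. Below is a minimal direct-method Gillespie step for a single toy reaction, with invented rate constant and species counts, showing the propensity bookkeeping that LOLCAT Method factors and reuses:

```python
# Minimal direct-method Gillespie SSA for the single reaction A + B -> C.
# Rate constant and initial counts are made up for illustration.
import random

def gillespie_step(counts, rate, rng):
    """One SSA step for A + B -> C.
    Returns the sampled waiting time, or None if the propensity is zero."""
    a = rate * counts["A"] * counts["B"]  # propensity of A + B -> C
    if a == 0.0:
        return None                       # no reaction can fire
    tau = rng.expovariate(a)              # exponential waiting time
    counts["A"] -= 1                      # apply the state change
    counts["B"] -= 1
    counts["C"] += 1
    return tau

rng = random.Random(0)
state = {"A": 100, "B": 50, "C": 0}
t = 0.0
while True:
    tau = gillespie_step(state, 0.01, rng)
    if tau is None:
        break
    t += tau
```

In a full model with many reactions, each firing forces propensity updates for every dependent reaction; that update step is where factored propensities and the bipartite reaction-species graph pay off.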
Hole filling with oriented sticks in ultrasound volume reconstruction
Vaughan, Thomas; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor
2015-01-01
Abstract. Volumes reconstructed from tracked planar ultrasound images often contain regions where no information was recorded. Existing interpolation methods introduce image artifacts and tend to be slow in filling large missing regions. Our goal was to develop a computationally efficient method that fills missing regions while adequately preserving image features. We use directional sticks to interpolate between pairs of known opposing voxels in nearby images. We tested our method on 30 volumetric ultrasound scans acquired from human subjects, and compared its performance to that of other published hole-filling methods. Reconstruction accuracy, fidelity, and time were improved compared with other methods. PMID:26839907
ERIC Educational Resources Information Center
Berkowitz, J. H.
1920-01-01
Competent authorities seem to agree as to the causes of eye strain in school children other than congenital defects. Standard works on diseases of the eye are practically unanimous in declaring that myopia results from the protracted and unhygienic use of the eyes in near work. Most of the factors tending to cause eye strain exist in the schools.…
NASA Astrophysics Data System (ADS)
Jia, Xiaodong; Zhao, Ming; Di, Yuan; Jin, Chao; Lee, Jay
2017-01-01
Minimum Entropy Deconvolution (MED) filter, which is a non-parametric approach for impulsive signature detection, has been widely studied recently. Although the merits of the MED filter are manifold, this method tends to over-highlight the dominant peaks and its performance becomes less stable when strong noise exists. In order to better understand the behavior of the MED filter, this study first investigated the mathematical fundamentals of the MED filter and then explained why the MED filter tends to over-highlight the dominant peaks. In order to pursue finer solutions for weak impulsive signature enhancement, the Convolutional Sparse Filter (CSF) is originally proposed in this work and the derivation of the CSF is presented in detail. The superiority of the proposed CSF over the MED filter is validated by both simulated data and experimental data. The results demonstrate that CSF is an effective method for impulsive signature enhancement that could be applied in rotating machines for incipient fault detection.
Identification of influential users by neighbors in online social networks
NASA Astrophysics Data System (ADS)
Sheikhahmadi, Amir; Nematbakhsh, Mohammad Ali; Zareie, Ahmad
2017-11-01
Identification and ranking of influential users in social networks for the sake of news spreading and advertising has recently become an attractive field of research. Given the large number of users in social networks and the various relations that exist among them, providing an effective method to identify influential users is essential. In most existing methods, users located in structurally advantageous positions of the network are regarded as influential. These methods usually pay no attention to the interactions among users and treat relations as binary in nature. This paper therefore proposes a new method to identify influential users in a social network by considering the interactions among users. Since users tend to act within communities, the network is initially divided into communities. The amount of interaction among users is then used to weight the relations within the network. Afterward, by determining the role of each user's neighbors, a two-level method is proposed for both measuring users' influence and ranking them. Simulation and experimental results on Twitter data show that the users selected by the proposed method are, compared to those selected by existing methods, distributed at more appropriate distances from one another. Moreover, the proposed method outperforms the others in terms of both the spreading speed and the influence capacity of the users it selects.
Research study on high energy radiation effect and environment solar cell degradation methods
NASA Technical Reports Server (NTRS)
Horne, W. E.; Wilkinson, M. C.
1974-01-01
The most detailed and comprehensively verified analytical model was used to evaluate the effects of simplifying assumptions on the accuracy of predictions made by the external damage coefficient method. It was found that the most serious discrepancies were present in heavily damaged cells, particularly proton damaged cells, in which a gradient in damage across the cell existed. In general, it was found that the current damage coefficient method tends to underestimate damage at high fluences. An exception to this rule was thick cover-slipped cells experiencing heavy degradation due to omnidirectional electrons. In such cases, the damage coefficient method overestimates the damage. Comparisons of degradation predictions made by the two methods and measured flight data confirmed the above findings.
Beef customer satisfaction: cooking method and degree of doneness effects on the top sirloin steak.
Savell, J W; Lorenzen, C L; Neely, T R; Miller, R K; Tatum, J D; Wise, J W; Taylor, J F; Buyck, M J; Reagan, J O
1999-03-01
The objective of this research was to evaluate the consumer-controlled factors of cooking method and degree of doneness on Top Choice, Low Choice, High Select, and Low Select top sirloin steaks. The in-home product test was conducted in Chicago, Houston, Philadelphia, and San Francisco. Consumers (n = 2,212) evaluated each top sirloin steak for overall like (OLIKE), tenderness (TEND), juiciness (JUIC), flavor desirability (DFLAV), and flavor intensity (IFLAV) using 23-point hedonic scales. Top sirloin steaks, regardless of city, were consistently cooked to well done or higher degrees of doneness. Dry-heat methods such as outdoor grilling, broiling, and indoor grilling were the most frequent cooking methods used. Four significant interactions existed for OLIKE: USDA quality grade x cooking method (P = .02), city x cooking method (P = .0001), city x degree of doneness (P = .01), and cooking method x degree of doneness (P = .009). Greater differences were found between cooking methods within USDA quality grade than between USDA quality grades within cooking method. Consumers in Houston rated steaks cooked by outdoor grilling higher than those from the other cities, and steaks cooked by indoor grilling were rated the highest among all cooking methods by consumers in Chicago. In Chicago, steaks cooked to more advanced degrees of doneness tended to receive higher ratings, but few differences between degrees of doneness in the other three cities were detected. For outdoor grilling, broiling, and pan-frying, the trend was for OLIKE ratings to decline as degree of doneness increased. The lowest customer satisfaction ratings tended to be given to top sirloin steaks cooked to more advanced degrees of doneness, and consumers most frequently cooked steaks to at least the well done stage. Consumer information programs or the development of postmortem techniques that would ensure acceptable palatability of top sirloin steaks may need to be developed.
An algorithm for spatial hierarchy clustering
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Velasco, F. R. D.
1981-01-01
A method for utilizing both spectral and spatial redundancy in compacting and preclassifying images is presented. In multispectral satellite images, a high correlation exists between neighboring image points which tend to occupy dense and restricted regions of the feature space. The image is divided into windows of the same size where the clustering is made. The classes obtained in several neighboring windows are clustered, and then again successively clustered until only one region corresponding to the whole image is obtained. By employing this algorithm only a few points are considered in each clustering, thus reducing computational effort. The method is illustrated as applied to LANDSAT images.
Interaction among actors in retail market competition in malang city
NASA Astrophysics Data System (ADS)
Kurniawan, B.; Ma'ruf, M. F.
2018-01-01
In several countries, traditional markets lose in competition with supermarkets. Supermarkets have several advantages over traditional markets: they can provide consumers with lower prices, offer more varieties of products and higher-quality products than traditional retailers, and provide a more comfortable place for shopping. In Malang City, the existence of traditional traders is threatened. In a competitive retail market, traditional traders get little protection from the Government of Malang Municipality. Massive demonstrations conducted by traditional traders along with other elements of society have been unable to stem the rapid growth of modern retail. This paper focuses on the interactions among Malang Municipality actors in the local retail market competition, and on how those interactions produce an imbalanced retail market. The authors use a descriptive-analytic method with a qualitative approach. As a result, the interactions among the legislature, executive, bureaucracy, and mass media tend to support the growth of modern retail rather than the traditional one, producing an imbalanced retail market competition.
East-West Cultural Differences in Context-Sensitivity are Evident in Early Childhood
ERIC Educational Resources Information Center
Imada, Toshie; Carlson, Stephanie M.; Itakura, Shoji
2013-01-01
Accumulating evidence suggests that North Americans tend to focus on central objects whereas East Asians tend to pay more attention to contextual information in a visual scene. Although it is generally believed that such culturally divergent attention tendencies develop through socialization, existing evidence largely depends on adult samples.…
L2 Perception of Spanish Palatal Variants across Different Tasks
ERIC Educational Resources Information Center
Shea, Christine; Renaud, Jeffrey
2014-01-01
While considerable dialectal variation exists, almost all varieties of Spanish exhibit some sort of alternation in terms of the palatal obstruent segments. Typically, the palatal affricate [ɟʝ] tends to occur in word onset following a pause and in specific linear phonotactic environments. The palatal fricative [ʝ] tends to occur in syllable onset…
Modeling integrated biomass gasification business concepts
Peter J. Ince; Ted Bilek; Mark A. Dietenberger
2011-01-01
Biomass gasification is an approach to producing energy and/or biofuels that could be integrated into existing forest product production facilities, particularly at pulp mills. Existing process heat and power loads tend to favor integration at existing pulp mills. This paper describes a generic modeling system for evaluating integrated biomass gasification business...
Inflation data clustering of some cities in Indonesia
NASA Astrophysics Data System (ADS)
Setiawan, Adi; Susanto, Bambang; Mahatma, Tundjung
2017-06-01
In this paper, we present how to cluster inflation data of cities in Indonesia using the k-means and fuzzy c-means clustering methods. The data are limited to monthly inflation data from the 15 cities across Indonesia with the highest weights of contribution, supplemented with 5 cities used in the calculation of inflation in Indonesia. When two clusters are used, with k = 2 for the k-means method and c = 2, w = 1.25 for the fuzzy c-means method, Ambon, Manado, and Jayapura tend to form one cluster (high inflation) while the other cities tend to fall into the other cluster (low inflation). However, with c = 2, w = 1.5, Surabaya, Medan, Makasar, Samarinda, Manado, Ambon, and Jayapura tend to form one cluster (high inflation) while the other cities fall into the other cluster (low inflation). Furthermore, with three clusters, k = 3 for the k-means method and c = 3, w = 1.25 for the fuzzy c-means method, Ambon tends to form the first cluster (high inflation), Manado and Jayapura the second cluster (moderate inflation), and the other cities the third cluster (low inflation). With c = 3, w = 1.5, Ambon, Manado, and Jayapura tend to fall into the first cluster (high inflation); Surabaya, Bandung, Medan, Makasar, Banyuwangi, Denpasar, Samarinda, and Mataram into the second cluster (moderate inflation); and the other cities into the third cluster (low inflation). A similar interpretation can be made of the results of applying 5 clusters.
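The kind of partition described above can be sketched with a minimal one-dimensional k-means. The inflation figures below are invented for illustration and are not the paper's data:

```python
# Toy 1-D k-means (Lloyd's algorithm) over per-city mean inflation rates.
# City rates are hypothetical, not the paper's dataset.

def kmeans_1d(values, centers, iters=20):
    """Alternate assignment and center-update steps; returns (centers, labels)."""
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each value to its nearest center.
        labels = [min(range(len(centers)),
                      key=lambda j: (v - centers[j]) ** 2) for v in values]
        # Move each center to the mean of its members.
        for j in range(len(centers)):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

# Mean monthly inflation (%) per city -- hypothetical values.
cities = ["Jayapura", "Ambon", "Manado", "Surabaya", "Bandung", "Medan"]
rates  = [0.9,        0.85,    0.8,      0.4,        0.35,      0.3]

centers, labels = kmeans_1d(rates, centers=[0.3, 0.9])
high = [c for c, l in zip(cities, labels) if l == 1]  # high-inflation cluster
```

Fuzzy c-means differs only in replacing the hard assignment with membership weights controlled by the fuzzifier w, which is why varying w can move borderline cities between clusters as reported above.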
A flexible, open, decentralized system for digital pathology networks.
Schuler, Robert; Smith, David E; Kumaraguruparan, Gowri; Chervenak, Ann; Lewis, Anne D; Hyde, Dallas M; Kesselman, Carl
2012-01-01
High-resolution digital imaging is enabling digital archiving and sharing of digitized microscopy slides and new methods for digital pathology. Collaborative research centers, outsourced medical services, and multi-site organizations stand to benefit from sharing pathology data in a digital pathology network. Yet significant technological challenges remain due to the large size and volume of digitized whole slide images. While information systems do exist for managing local pathology laboratories, they tend to be oriented toward narrow clinical use cases or offer closed ecosystems around proprietary formats. Few solutions exist for networking digital pathology operations. Here we present a system architecture and implementation of a digital pathology network and share results from a production system that federates major research centers.
Strengthening method of concrete structure
NASA Astrophysics Data System (ADS)
Inge, Wewin; Audrey; Nugroho, Sofie; Njo, Helen
2018-03-01
Building extension is not widely favored in Indonesia, and few people know the advantages of the method, because architects and engineers tend to lack the relevant knowledge and experience. The aim of this paper is to explain a method of strengthening a concrete building structure that can be used to cut potential costs and save time. The strengthening method explained in this paper is steel jacketing, illustrated with a case study of its use in the extension of a restaurant located in Medan, Indonesia. In this study, engineers calculated that the tensile strength of the existing RC columns and beams was not sufficient to carry the loads applied by the building extension. Therefore, the steel jacketing method was applied to improve column and beam strength and ductility. The result of the case study shows that this is one of the best methods for building extensions applied in Indonesia.
Raybould, Alan; Macdonald, Phil
2018-01-01
We describe two contrasting methods of comparative environmental risk assessment for genetically modified (GM) crops. Both are science-based, in the sense that they use science to help make decisions, but they differ in the relationship between science and policy. Policy-led comparative risk assessment begins by defining what would be regarded as unacceptable changes when the use of a particular GM crop replaces an accepted use of another crop. Hypotheses that these changes will not occur are tested using existing or new data, and corroboration or falsification of the hypotheses is used to inform decision-making. Science-led comparative risk assessment, on the other hand, tends to test null hypotheses of no difference between a GM crop and a comparator. The variables that are compared may have little or no relevance to any previously stated policy objective and hence decision-making tends to be ad hoc in response to possibly spurious statistical significance. We argue that policy-led comparative risk assessment is the far more effective method. With this in mind, we caution that phenotypic profiling of GM crops, particularly with omics methods, is potentially detrimental to risk assessment. PMID:29755975
The Impact of Incentives on Exercise Behavior: A Systematic Review of Randomized Controlled Trials
Strohacker, Kelley; Galarraga, Omar; Williams, David M.
2015-01-01
Background The effectiveness of reinforcing exercise behavior with material incentives is unclear. Purpose Conduct a systematic review of existing research on material incentives for exercise, organized by incentive strategy. Methods Ten studies conducted between January 1965 and June 2013 assessed the impact of incentivizing exercise compared to a non-incentivized control. Results There was significant heterogeneity between studies regarding reinforcement procedures and outcomes. Incentives tended to improve behavior during the intervention while findings were mixed regarding sustained behavior after incentives were removed. Conclusions The most effective incentive procedure is unclear given the limitations of existing research. The effectiveness of various incentive procedures in promoting initial behavior change and habit formation, as well as the use of sustainable incentive procedures should be explored in future research. PMID:24307474
Compensated Box-Jenkins transfer function for short term load forecast
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breipohl, A.; Yu, Z.; Lee, F.N.
In the past years, the Box-Jenkins ARIMA method and the Box-Jenkins transfer function method (BJTF) have been among the most commonly used methods for short term electrical load forecasting. But when there exists a sudden change in the temperature, both methods tend to exhibit larger errors in the forecast. This paper demonstrates that the load forecasting errors resulting from either the BJ ARIMA model or the BJTF model are not simply white noise, but rather well-patterned noise, and the patterns in the noise can be used to improve the forecasts. Thus a compensated Box-Jenkins transfer function method (CBJTF) is proposed to improve the accuracy of the load prediction. Some case studies have been made which result in about a 14-33% reduction of the root mean square (RMS) errors of the forecasts, depending on the compensation time period as well as the compensation method used.
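The CBJTF formulation is not given in the abstract, but the general idea of exploiting patterned residuals can be sketched. Below, the base forecaster is assumed to leave a constant-offset residual pattern, and the compensation is simply subtracting the mean recent residual; both the data and this compensation rule are illustrative stand-ins, not the paper's method:

```python
# Illustrative residual compensation: if a base forecaster leaves
# well-patterned (here constant-offset) errors, subtracting the mean of
# recent residuals from new forecasts reduces RMS error.

def rms(errors):
    """Root mean square of a list of errors."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

def compensate(forecasts, recent_residuals):
    """Shift forecasts by the mean residual (forecast - actual) observed so far."""
    bias = sum(recent_residuals) / len(recent_residuals)
    return [f - bias for f in forecasts]

actual    = [100.0, 102.0, 101.0, 103.0]   # toy load observations
base_fc   = [105.0, 106.0, 107.0, 108.0]   # biased base forecasts
residuals = [f - a for f, a in zip(base_fc, actual)]

comp_fc = compensate(base_fc, residuals)
before = rms(residuals)
after  = rms([f - a for f, a in zip(comp_fc, actual)])
```

Even this crude bias correction cuts the toy RMS error substantially, which is the mechanism behind the 14-33% reductions the paper reports for its (more sophisticated) compensation.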
Lee, Soohyun; Seo, Chae Hwa; Alver, Burak Han; Lee, Sanghyuk; Park, Peter J
2015-09-03
RNA-seq has been widely used for genome-wide expression profiling. RNA-seq data typically consists of tens of millions of short sequenced reads from different transcripts. However, due to sequence similarity among genes and among isoforms, the source of a given read is often ambiguous. Existing approaches for estimating expression levels from RNA-seq reads tend to compromise between accuracy and computational cost. We introduce a new approach for quantifying transcript abundance from RNA-seq data. EMSAR (Estimation by Mappability-based Segmentation And Reclustering) groups reads according to the set of transcripts to which they are mapped and finds maximum likelihood estimates using a joint Poisson model for each optimal set of segments of transcripts. The method uses nearly all mapped reads, including those mapped to multiple genes. With an efficient transcriptome indexing based on modified suffix arrays, EMSAR minimizes the use of CPU time and memory while achieving accuracy comparable to the best existing methods. EMSAR is a method for quantifying transcripts from RNA-seq data with high accuracy and low computational cost. EMSAR is available at https://github.com/parklab/emsar.
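The first step described above, grouping reads by the set of transcripts they map to, can be sketched directly; the joint Poisson estimation over these groups is not reproduced here, and the read-to-transcript mappings below are invented for illustration:

```python
# Sketch of read grouping into equivalence classes keyed by the set of
# transcripts each read maps to (the "segmentation" input EMSAR estimates
# over). Mappings are toy data.
from collections import Counter

def equivalence_classes(read_mappings):
    """read_mappings: dict read_id -> iterable of transcript ids.
    Returns a Counter mapping frozenset(transcripts) -> read count."""
    classes = Counter()
    for transcripts in read_mappings.values():
        classes[frozenset(transcripts)] += 1
    return classes

reads = {
    "r1": ["tx1"],
    "r2": ["tx1", "tx2"],   # ambiguous: consistent with either isoform
    "r3": ["tx1", "tx2"],
    "r4": ["tx2"],
    "r5": ["tx1"],
}
classes = equivalence_classes(reads)
```

Grouping by mapped-transcript set lets ambiguous reads contribute to estimation without being discarded or arbitrarily assigned, which is how the method uses nearly all mapped reads, including multi-gene ones.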
Early use of Space Station Freedom for NASA's Microgravity Science and Applications Program
NASA Technical Reports Server (NTRS)
Rhome, Robert C.; O'Malley, Terence F.
1992-01-01
The paper describes microgravity science opportunities inherent in the restructured Space Station and presents a synopsis of the scientific utilization plan for the first two years of ground-tended operations. In the ground-tended utilization mode the Space Station is a large free-flyer providing a continuous microgravity environment unmatched by any other platform within any existing U.S. program. The importance of this period of early Space Station mixed-mode utilization, between crew-tended and ground-tended approaches, is such that Station-based microgravity science experiments may become benchmarks for the disciplines involved. The traffic model currently being pursued is designed to maximize this opportunity for the U.S. microgravity science community.
Link prediction based on nonequilibrium cooperation effect
NASA Astrophysics Data System (ADS)
Li, Lanxi; Zhu, Xuzhen; Tian, Hui
2018-04-01
Link prediction in complex networks has become a common focus of many researchers. But most existing methods concentrate on neighbors and rarely consider the degree heterogeneity of the two endpoints. Node degree represents the importance or status of an endpoint. We describe large-degree heterogeneity as nonequilibrium between nodes. This nonequilibrium facilitates stable cooperation between endpoints, so that two endpoints with large degree heterogeneity tend to connect stably. We name this phenomenon the nonequilibrium cooperation effect. This paper therefore proposes a link prediction method based on the nonequilibrium cooperation effect to improve accuracy. We first present a theoretical analysis and then perform experiments on 12 real-world networks, comparing our indices with mainstream methods through numerical analysis.
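The paper's exact indices are not given in the abstract; as a purely hypothetical illustration of the idea, a common-neighbour score can be boosted by the normalized degree difference of the two endpoints, so that heterogeneous pairs score higher than homogeneous pairs with the same neighbourhood overlap.

```python
# Toy link-prediction score rewarding degree heterogeneity (hypothetical
# index, not the paper's). The graph is an undirected adjacency-set dict.
G = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c", "e"},
    "e": {"d"},
}

def hetero_score(u, v, alpha=0.5):
    """Common neighbours boosted by the normalized degree difference of u, v."""
    cn = len(G[u] & G[v])
    du, dv = len(G[u]), len(G[v])
    het = abs(du - dv) / (du + dv)  # 0 for equal degrees, -> 1 for strong imbalance
    return cn * (1 + alpha * het)
```

For instance, the pair ("b", "d") shares two neighbours and has moderate degree imbalance, so it outscores the pair ("a", "e"), which shares only one.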
Han, Aaron L-F; Wong, Derek F; Chao, Lidia S; He, Liangye; Lu, Yi
2014-01-01
With the rapid development of machine translation (MT), MT evaluation has become very important for telling us in a timely manner whether an MT system is making progress. Conventional MT evaluation methods calculate the similarity between hypothesis translations offered by automatic translation systems and reference translations offered by professional translators. There are several weaknesses in existing evaluation metrics. First, incomprehensively designed factors result in a language-bias problem: the metrics perform well on some language pairs but weakly on others. Second, they tend to use either no linguistic features or too many; using none draws criticism from linguists, while using too many weakens the model's repeatability. Third, the reference translations employed are very expensive and sometimes not available in practice. In this paper, the authors propose an unsupervised MT evaluation metric using a universal part-of-speech tagset without relying on reference translations. The authors also explore the performance of the designed metric on traditional supervised evaluation tasks. Both the supervised and unsupervised experiments show that the designed methods yield higher correlation scores with human judgments.
Network structure control of binary mixed langmuir monolayers of homo-PS and PS-b-P2VP.
Wen, Gangyao
2010-03-25
Our recent work showed that a composition window exists for mixed Langmuir monolayers of homopolystyrene (h-PS) and a symmetric diblock copolymer, polystyrene-block-poly(2-vinylpyridine) (PS-b-P2VP), to form necklace-network structures at the air/water interface. To further study the possible mechanism and to control the network structure (i.e., surface coverage and nanoaggregate diameter), the effects of spreading solution concentration and volume, subphase temperature, and transfer pressure on the network structure were studied by the Langmuir monolayer technique and tapping-mode atomic force microscopy. With increasing transfer pressure, the nanoaggregate diameter exhibited a novel nonlinear behavior, first increasing, then decreasing, and finally increasing again, while the surface coverage tended to increase stepwise. Moreover, with the elevation of temperature, chain motion between adjoining nanoaggregates was enhanced and thus the nanoaggregate diameter tended to become more uniform.
Method selection for sustainability assessments: The case of recovery of resources from waste water.
Zijp, M C; Waaijers-van der Loop, S L; Heijungs, R; Broeren, M L M; Peeters, R; Van Nieuwenhuijzen, A; Shen, L; Heugens, E H W; Posthuma, L
2017-07-15
Sustainability assessments provide scientific support in decision procedures towards sustainable solutions. However, in order to contribute to identifying and choosing sustainable solutions, the sustainability assessment has to fit the decision context. Two complicating factors exist. First, different stakeholders tend to have different views on what a sustainability assessment should encompass. Second, a plethora of sustainability assessment methods exists, due to the multi-dimensional character of the concept, and different methods represent sustainability differently. Based on a literature review, we present a protocol to facilitate method selection together with stakeholders. The protocol guides the exploration of i) the decision context, ii) the different views of stakeholders and iii) the selection of pertinent assessment methods. In addition, we present an online tool for method selection. This tool identifies assessment methods that meet the specifications obtained with the protocol, and currently contains characteristics of 30 sustainability assessment methods. The utility of the protocol and the tool were tested in a case study on the recovery of resources from domestic waste water. In several iterations, a combination of methods was selected, followed by execution of the selected sustainability assessment methods. The assessment results can be used in the first phase of the decision procedure that leads to a strategic choice for sustainable resource recovery from waste water in the Netherlands.
Gait Analysis Methods for Rodent Models of Osteoarthritis
Jacobs, Brittany Y.; Kloefkorn, Heidi E.; Allen, Kyle D.
2014-01-01
Patients with osteoarthritis (OA) primarily seek treatment due to pain and disability, yet the primary endpoints for rodent OA models tend to be histological measures of joint destruction. The discrepancy between clinical and preclinical evaluations is problematic, given that radiographic evidence of OA in humans does not always correlate to the severity of patient-reported symptoms. Recent advances in behavioral analyses have provided new methods to evaluate disease sequelae in rodents. Of particular relevance to rodent OA models are methods to assess rodent gait. While obvious differences exist between quadrupedal and bipedal gait sequences, the gait abnormalities seen in humans and in rodent OA models reflect similar compensatory behaviors that protect an injured limb from loading. The purpose of this review is to describe these compensations and current methods used to assess rodent gait characteristics, while detailing important considerations for the selection of gait analysis methods in rodent OA models. PMID:25160712
Lengths of Orthologous Prokaryotic Proteins Are Affected by Evolutionary Factors
Tatarinova, Tatiana; Salih, Bilal; Dien Bard, Jennifer; Cohen, Irit; Bolshoy, Alexander
2015-01-01
Proteins of the same functional family (for example, kinases) may have significantly different lengths. It is an open question whether such variation in length is random or appears as a response to unknown evolutionary driving factors. The main purpose of this paper is to demonstrate the existence of factors affecting prokaryotic gene lengths. We believe that ranking genomes according to the lengths of their genes, followed by calculation of coefficients of association between genome rank and genome properties, is a reasonable approach to revealing such evolutionary driving factors. As we demonstrated earlier, our chosen approach, Bubble Sort, combines stability, accuracy, and computational efficiency compared to other ranking methods. Application of Bubble Sort to a set of 1390 prokaryotic genomes confirmed that genes of Archaeal species are generally shorter than Bacterial ones. We observed that gene lengths are affected by various factors: within each domain, different phyla have preferences for short or long genes; thermophiles tend to have shorter genes than soil-dwellers; halophiles tend to have longer genes. We also found that species with overrepresentation of cytosines and guanines in the third codon position (GC3 content) tend to have longer genes than species with low GC3 content. PMID:26114113
Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne
2016-01-05
In genome-wide association studies (GWAS) of complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and thus is a practically important area that requires methodological work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes, including multivariate analysis of variance (MANOVA), principal component analysis (PCA), generalized estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of a statistical analysis of the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long-tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and to specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has great applications in GWAS on complex diseases with multiple phenotypes such as substance abuse disorders.
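The classical Fisher combination test that the review starts from is simple enough to sketch directly. Under independence, the statistic X = -2 Σ ln p follows a chi-square distribution with 2k degrees of freedom; for an even number of degrees of freedom the chi-square survival function has a closed form, so no statistics library is needed. (This is the textbook test, i.e., the independence assumption the proposed method relaxes.)

```python
import math

def fisher_combine(pvals):
    """Classical Fisher combination of k independent p-values.

    X = -2 * sum(ln p) ~ chi-square with df = 2k under the null and
    independence. For even df the survival function has the closed form
    P(X > x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!.
    """
    k = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    half = x / 2.0
    p_combined = math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))
    return x, p_combined
```

Combining two p-values of 0.5 gives a combined p of about 0.60 (no evidence), while two p-values of 0.01 combine to roughly 0.001, illustrating the accumulation of weak per-phenotype signals.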
Dynamic stability analysis of fractional order leaky integrator echo state neural networks
NASA Astrophysics Data System (ADS)
Pahnehkolaei, Seyed Mehdi Abedi; Alfi, Alireza; Tenreiro Machado, J. A.
2017-06-01
The Leaky integrator echo state neural network (Leaky-ESN) is an improved model of the recurrent neural network (RNN) and adopts an interconnected recurrent grid of processing neurons. This paper presents a new proof for the convergence of a Lyapunov candidate function to zero when time tends to infinity by means of the Caputo fractional derivative with order lying in the range (0, 1). The stability of Fractional-Order Leaky-ESN (FO Leaky-ESN) is then analyzed, and the existence, uniqueness and stability of the equilibrium point are provided. A numerical example demonstrates the feasibility of the proposed method.
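For reference, the Caputo derivative used in the abstract and the standard fractional Lyapunov comparison result can be stated as follows; the lemma is quoted from the general fractional-calculus literature, not copied from the paper.

```latex
% Caputo fractional derivative of order 0 < \alpha < 1:
{}^{C}\!D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)}
  \int_{0}^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}} \, d\tau .
% Standard fractional Lyapunov argument: if V(t) \ge 0 and
% {}^{C}\!D^{\alpha} V(t) \le -c\, V(t) for some c > 0, then
% V(t) \le V(0)\, E_{\alpha}(-c\, t^{\alpha}) \longrightarrow 0
% \quad \text{as } t \to \infty ,
% where E_{\alpha} is the one-parameter Mittag-Leffler function.
```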
Thermal Convection in Two-Dimensional Soap Films
NASA Astrophysics Data System (ADS)
Zhang, Jie; Wu, X. L.
2002-11-01
Thermal convection in a fluid is a common phenomenon. Due to thermal expansion, the light, warm fluid at the bottom tends to rise and the cold, heavier fluid at the top tends to fall. This so-called thermal convection exists in Earth's atmosphere and in the oceans. It is also an important mechanism by which energy is transported in stars. In this study we investigate thermal convection in a vertical soap film.
Comparison of eigensolvers for symmetric band matrices.
Moldaschl, Michael; Gansterer, Wilfried N
2014-09-15
We compare different algorithms for computing eigenvalues and eigenvectors of a symmetric band matrix across a wide range of synthetic test problems. Of particular interest is a comparison of state-of-the-art tridiagonalization-based methods as implemented in Lapack or Plasma on the one hand, and the block divide-and-conquer (BD&C) algorithm as well as the block twisted factorization (BTF) method on the other hand. The BD&C algorithm does not require tridiagonalization of the original band matrix at all, and the current version of the BTF method tridiagonalizes the original band matrix only for computing the eigenvalues. Avoiding the tridiagonalization process sidesteps the cost of backtransformation of the eigenvectors. Beyond that, we discovered another disadvantage of the backtransformation process for band matrices: In several scenarios, a lot of gradual underflow is observed in the (optional) accumulation of the transformation matrix and in the (obligatory) backtransformation step. According to the IEEE 754 standard for floating-point arithmetic, this implies many operations with subnormal (denormalized) numbers, which causes severe slowdowns compared to the other algorithms without backtransformation of the eigenvectors. We illustrate that in these cases the performance of existing methods from Lapack and Plasma reaches a competitive level only if subnormal numbers are disabled (and thus the IEEE standard is violated). Overall, our performance studies illustrate that if the problem size is large enough relative to the bandwidth, BD&C tends to achieve the highest performance of all methods if the spectrum to be computed is clustered. For test problems with well separated eigenvalues, the BTF method tends to become the fastest algorithm with growing problem size.
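As a small self-contained illustration of the problem class (using NumPy's dense `eigh` as the reference solver, not the banded Lapack/Plasma routines discussed above), one can assemble a symmetric band matrix from random diagonals and verify the eigendecomposition residual:

```python
import numpy as np

# Build a random symmetric matrix with bandwidth 2 (main diagonal plus two
# symmetric off-diagonal pairs), then verify a dense eigensolver on it.
n, bw = 8, 2
rng = np.random.default_rng(1)
A = np.zeros((n, n))
for k in range(bw + 1):
    vals = rng.normal(size=n - k)
    A += np.diag(vals, k)
    if k:
        A += np.diag(vals, -k)  # mirror the superdiagonal to keep A symmetric

w, V = np.linalg.eigh(A)        # eigenvalues ascending, orthonormal eigenvectors
residual = np.linalg.norm(A @ V - V * w)   # || A V - V diag(w) ||
```

A residual near machine precision confirms the decomposition; dedicated band solvers aim for the same accuracy while exploiting the zero structure that the dense routine ignores.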
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Maughmer, Mark D.
1988-01-01
The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated in the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting the prediction of the long bubble. In addition, an integral boundary-layer method is considered more desirable than a finite-difference approach: while the two achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic energy integral equations, a short-bubble model compatible with these equations is most preferable.
ERIC Educational Resources Information Center
Berbary, Lisbeth A.
2012-01-01
While multiple and competing understandings of sororities exist in popular culture, academic research on sororities tends to homogenize the experience of sorority women, simplifying their existence to a quantitative understanding of specific behaviors such as those associated with binge drinking, eating disorders, and heterosexuality.…
Backenroth, Daniel; He, Zihuai; Kiryluk, Krzysztof; Boeva, Valentina; Pethukova, Lynn; Khurana, Ekta; Christiano, Angela; Buxbaum, Joseph D; Ionita-Laza, Iuliana
2018-05-03
We describe a method based on a latent Dirichlet allocation model for predicting functional effects of noncoding genetic variants in a cell-type- and/or tissue-specific way (FUN-LDA). Using this unsupervised approach, we predict tissue-specific functional effects for every position in the human genome in 127 different tissues and cell types. We demonstrate the usefulness of our predictions by using several validation experiments. Using eQTL data from several sources, including the GTEx project, Geuvadis project, and TwinsUK cohort, we show that eQTLs in specific tissues tend to be most enriched among the predicted functional variants in relevant tissues in Roadmap. We further show how these integrated functional scores can be used for (1) deriving the most likely cell or tissue type causally implicated for a complex trait by using summary statistics from genome-wide association studies and (2) estimating a tissue-based correlation matrix of various complex traits. We found large enrichment of heritability in functional components of relevant tissues for various complex traits, and FUN-LDA yielded higher enrichment estimates than existing methods. Finally, using experimentally validated functional variants from the literature and variants possibly implicated in disease by previous studies, we rigorously compare FUN-LDA with state-of-the-art functional annotation methods and show that FUN-LDA has better prediction accuracy and higher resolution than these methods. In particular, our results suggest that tissue- and cell-type-specific functional prediction methods tend to have substantially better prediction accuracy than organism-level prediction methods. Scores for each position in the human genome and for each ENCODE and Roadmap tissue are available online (see Web Resources).
Code of Federal Regulations, 2010 CFR
2010-10-01
... STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH... obtained during a research misconduct proceeding that tends to prove or disprove the existence of an...
Code of Federal Regulations, 2011 CFR
2011-10-01
... STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH... obtained during a research misconduct proceeding that tends to prove or disprove the existence of an...
Accessory mental foramina and nerves: Application to periodontal, periapical, and implant surgery.
Iwanaga, Joe; Watanabe, Koichi; Saga, Tsuyoshi; Tabira, Yoko; Kitashima, Sadaharu; Kusukawa, Jingo; Yamaki, Koh-Ichi
2016-05-01
Recent studies investigating accessory mental foramina using developments in diagnostic imaging have primarily defined the morphology of the foramina; however, few studies have described the structures passing through them. Additional clinical knowledge of the foramina is therefore required for preoperative diagnosis prior to surgery, including implant, periodontal and periapical surgery. In this study, we investigated the accessory mental foramina and the associated nerves and arteries in donated cadaveric mandibles using anatomical and radiological observation methods. We examined 63 mandibles with overlying soft tissue by cone-beam computed tomography and noted the existence of the accessory mental foramina. Mandibles with accessory mental foramina were subsequently analyzed. Additionally, the neurovascular bundles passing through these foramina were dissected using anatomical methods. The incidence of accessory mental foramina was 14.3%. The larger foramina tended to be located anteriorly or superiorly and proximal to the mental foramen, while the smaller foramina tended to be located posterosuperiorly and distal to the mental foramen. The mental foramen ipsilateral to the accessory mental foramen was smaller than the one contralateral to it. The comparatively distant and large accessory mental foramen included an artery. This study elucidated the relationship between accessory mental foramina and the associated nerves and arteries. We believe that the results will contribute to the clinical dentistry field.
Crash characteristics at work zones
DOT National Transportation Integrated Search
2001-05-01
Work zones tend to cause hazardous conditions for vehicle drivers and construction workers, since they generate conflicts between construction activities and traffic and thereby aggravate existing traffic conditions.
Proof of age required--estimating age in adults without birth records.
Phillips, Christine; Narayanasamy, Shanti
2010-07-01
Many adults from refugee source countries do not have documents of birth, either because they have been lost in flight, or because the civil infrastructure is too fragile to support routine recording of birth. In Western countries, date of birth is used as a basic identifier, and access to services and support tends to be age regulated. Doctors are not infrequently asked to write formal reports estimating the true age of adult refugees; however, there are no existing guidelines to assist in this task. To provide an overview of methods to estimate age in living adults, and outline recommendations for best practice. Age should be estimated through physical examination; life history, matching local or national events with personal milestones; and existing nonformal documents. Accuracy of age estimation should be subject to three tests: biological plausibility, historical plausibility, and corroboration from reputable sources.
Machiavellian tendencies of nonprofit health care employees.
Richmond, Kelly A; Smith, Pamela C
2005-01-01
Federal and state regulators have heightened scrutiny of nonprofit hospital operations, particularly in billing collections. The move for hospitals to adopt more compassionate methods within their business functions drives the need to examine the ethical reasoning of their employees. The purpose of this study is to assess the existence of Machiavellian propensities among health care employees. People defined as Machiavellian are impersonal, rational, and strategy-oriented rather than person-oriented. Results indicate employee participants exhibit these propensities, and tend to agree with questionable scenarios. Knowledge of the ethical propensities of employees may serve as a crucial factor to the success of any plan in establishing an ethical work environment.
Fungal Infections: The Stubborn Cases
Adam, John E.
1982-01-01
Despite the development of numerous antifungal preparations, mycotic infections persist because of inaccurate diagnosis leading to inappropriate therapy, drug failure, non-compliance, or resistance of the organism to antifungal medication. Direct KOH examination is the simplest method of proving the existence of a fungus. Fungal infections tend to be overdiagnosed; disorders which do not improve with three to four weeks of treatment should be reassessed before being labelled 'stubborn'. Griseofulvin is effective treatment for all dermatophytes, but has certain side effects. Newer topical antifungals are also effective, but no single drug cures all fungal infections. PMID:20469387
Reiff, Marian; Bugos, Eva; Giarelli, Ellen; Bernhardt, Barbara A; Spinner, Nancy B; Sankar, Pamela L; Mulchandani, Surabhi
2017-05-01
Despite increasing utilization of chromosomal microarray analysis (CMA) for autism spectrum disorders (ASD), limited information exists about how results influence parents' beliefs about etiology and prognosis. We conducted in-depth interviews and surveys with 57 parents of children with ASD who received CMA results categorized as pathogenic, negative or variant of uncertain significance. Parents tended to incorporate their child's CMA results within their existing beliefs about the etiology of ASD, regardless of CMA result. However, parents' expectations for the future tended to differ depending on results; those who received genetic confirmation for their children's ASD expressed a sense of concreteness, acceptance and permanence of the condition. Some parents expressed hope for future biomedical treatments as a result of genetic research.
Frequently Asked Questions (FAQs) about Malaria
... areas tend to have less commercial activity, less population density, more green space, and agriculture ... that no more malaria exists in the world. Malaria has been eliminated from many developed countries ...
2014-01-01
Automatic reconstruction of metabolic pathways for an organism from genomics and transcriptomics data has been a challenging and important problem in bioinformatics. Traditionally, known reference pathways can be mapped into organism-specific ones based on genome annotation and protein homology. However, this simple knowledge-based mapping method may produce incomplete pathways and generally cannot predict unknown new relations and reactions. In contrast, ab initio metabolic network construction methods can predict novel reactions and interactions, but their accuracy tends to be low, leading to many false positives. Here we combine existing pathway knowledge and a new ab initio Bayesian probabilistic graphical model in a novel fashion to improve automatic reconstruction of metabolic networks. Specifically, we built a knowledge database containing known individual gene/protein interactions and metabolic reactions extracted from existing reference pathways. Known reactions and interactions were then used as constraints for Bayesian network learning methods to predict metabolic pathways. Using individual reactions and interactions extracted from different pathways of many organisms to guide pathway construction is new and improves both the coverage and accuracy of metabolic pathway construction. We applied this probabilistic knowledge-based approach to construct metabolic networks from yeast gene expression data and compared its results with 62 known metabolic networks in the KEGG database. The experiment showed that the method improved the coverage of metabolic network construction over the traditional reference pathway mapping method and was more accurate than pure ab initio methods. PMID:25374614
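A deliberately simplified stand-in for the knowledge-constrained learning described above (hypothetical code, far simpler than the paper's Bayesian graphical model): candidate gene-gene edges are scored by blending a prior from reference-pathway edges with data-driven expression correlation, so known interactions anchor the network while expression data can still propose new ones.

```python
import numpy as np

# Hypothetical toy: 4 genes x 50 expression samples; gene 1 is constructed
# to be strongly co-expressed with gene 0.
rng = np.random.default_rng(2)
expr = rng.normal(size=(4, 50))
expr[1] = expr[0] + rng.normal(0, 0.1, 50)

known = {(0, 1)}   # edge present in a reference pathway (prior knowledge)
prior_w = 0.5      # hypothetical weight given to prior knowledge

def edge_score(i, j):
    """Blend reference-pathway prior with absolute expression correlation."""
    corr = abs(np.corrcoef(expr[i], expr[j])[0, 1])
    prior = prior_w if (i, j) in known or (j, i) in known else 0.0
    return prior + (1 - prior_w) * corr
```

The known, co-expressed pair (0, 1) scores near 1, while an unrelated pair such as (0, 2) scores low; in the paper's full model the constraints play an analogous role during Bayesian network structure learning.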
Retaliatory Strike or Fired with Cause: A Case Study of Gay Identity Disclosure and Law Enforcement
ERIC Educational Resources Information Center
Collins, Joshua C.
2016-01-01
A relatively small amount of HRD research has focused on issues for lesbian, gay, bisexual, and transgender (LGBT) people. The majority that does exist tends to focus on issues assumed to cut across the entire LGBT community. However, a need exists for research that identifies and articulates the varied experiences of each of these identity…
Zhang, Hui; Zhou, Guo Mo; Bai, Shang Bin; Wang, Yi Xiang; You, Yu Jie; Zhu, Ting Ting; Zhang, Hua Feng
2017-05-18
The typical natural secondary shrub community was chosen in Lin'an, Zhejiang Province, to assess the possibility of its restoration to arbor forest under three forest management models: no care as control, closed forest management, and target tree tending. After four years of growth, compared with the control, closed forest management significantly increased average DBH and height by 130% and 50%, respectively, versus 260% and 110% for target tree tending. In target tree tending plots, larger trees emerged in the 4.5-8.5 cm diameter class and 4.5-8.5 m height class, forming a new storey at 4 m compared with the control. The species biodiversity indexes at the shrub layer were significantly increased in closed management plots, and did not decrease in target tree tending plots. Closed forest management did not change the tree species composition, following its previous succession direction. However, target tree tending increased the importance value of target species, with a high potential succession direction towards mixed coniferous-broadleaved forest. The results revealed that the secondary shrub community under target tree tending achieved more of the desired goals in DBH and height growth of dominant trees and in species composition improvement than closed management. Where operational conditions exist, the target tree tending model should be selected to accelerate restoration of the shrub community toward arbor forest.
A new method for constructing analytic elements for groundwater flow.
NASA Astrophysics Data System (ADS)
Strack, O. D.
2007-12-01
The analytic element method is based upon the superposition of analytic functions that are defined throughout the infinite domain and can be used to meet a variety of boundary conditions. Analytic elements have been used successfully for a number of problems, mainly dealing with the Poisson equation (see, e.g., Theory and Applications of the Analytic Element Method, Reviews of Geophysics, 41, 2/1005, 2003, by O.D.L. Strack). The majority of these analytic elements consists of functions that exhibit jumps along lines or curves. Such linear analytic elements have also been developed for other partial differential equations, e.g., the modified Helmholtz equation and the heat equation, and were constructed by integrating elementary solutions, the point sink and the point doublet, along a line. This approach is limiting for two reasons. First, it requires the existence of the elementary solutions; second, the integration tends to limit the range of solutions that can be obtained. We present a procedure for generating analytic elements that requires merely the existence of a harmonic function with the desired properties; such functions exist in abundance. The procedure to be presented generalizes this harmonic function in such a way that the resulting expression satisfies the applicable differential equation. The approach will be applied, along with numerical examples, to the modified Helmholtz equation and the heat equation, while it is noted that the method is in no way restricted to these equations. The procedure is carried out entirely in terms of complex variables, using Wirtinger calculus.
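The superposition principle behind analytic elements can be illustrated with the elementary well (point-sink) potential in complex form. Locations and discharges below are hypothetical; the sketch checks numerically that the superposed real part remains harmonic away from the wells, which is the property the method exploits.

```python
import cmath

# Complex discharge potential for superposed wells (point sinks):
#   Omega(z) = sum over wells of Q_w / (2*pi) * ln(z - z_w)
# Real part: discharge potential; imaginary part: stream function.
wells = [(0 + 0j, 1.0), (2 + 1j, 0.5)]   # (location z_w, discharge Q_w), hypothetical

def omega(z):
    return sum(q / (2 * cmath.pi) * cmath.log(z - zw) for zw, q in wells)

def potential(z):
    return omega(z).real

# Harmonicity check: the discrete five-point Laplacian of the real part
# vanishes away from the wells, as required for a solution of Laplace's equation.
h = 1e-3
z0 = 5 + 5j
lap = (potential(z0 + h) + potential(z0 - h)
       + potential(z0 + 1j * h) + potential(z0 - 1j * h)
       - 4 * potential(z0)) / h**2
```

Because each term is analytic away from its singularity, any superposition of such functions is again a valid flow solution, which is why elements can be added freely to meet boundary conditions.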
Efficient full-chip SRAF placement using machine learning for best accuracy and improved consistency
NASA Astrophysics Data System (ADS)
Wang, Shibing; Baron, Stanislas; Kachwala, Nishrin; Kallingal, Chidam; Sun, Dezheng; Shu, Vincent; Fong, Weichun; Li, Zero; Elsaid, Ahmad; Gao, Jin-Wei; Su, Jing; Ser, Jung-Hoon; Zhang, Quan; Chen, Been-Der; Howell, Rafael; Hsu, Stephen; Luo, Larry; Zou, Yi; Zhang, Gary; Lu, Yen-Wen; Cao, Yu
2018-03-01
Various computational approaches, from rule-based to model-based methods, exist to place Sub-Resolution Assist Features (SRAFs) in order to increase the process window for lithography. Each method has its advantages and drawbacks, and typically requires the user to make a trade-off between development time, accuracy, consistency, and cycle time. Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC, and DTCO). Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore, these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model that is sensitive to the pattern placement on the native simulation grid, and can be impacted by related grid-dependency effects. Those undesirable effects tend to become stronger when more iterations or complexity are needed in the algorithm to achieve the required accuracy. ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full-chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images. These CTM images have been fully optimized using the Tachyon inverse mask optimization engine. The neural-network-generated SRAF guidance map is then used to place SRAFs on the full chip.
This is different from our existing full-chip MB-SRAF approach, which utilizes an SRAF guidance map (SGM) of mask sensitivity to improve the contrast of the optical image at the target pattern edges. In this paper, we demonstrate that machine-learning-assisted SRAF placement can achieve a superior process window compared to the SGM model-based SRAF method, while keeping the full-chip runtime affordable and maintaining consistency of SRAF placement. We describe the current status of this machine-learning-assisted SRAF technique, demonstrate its application to full-chip mask synthesis, and discuss how it can extend the computational lithography roadmap.
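As an illustration of only the final placement step (all names, values, and the threshold are hypothetical; in the paper the guidance map is produced by the trained DCNN from the target layout and CTM images, not by random data), a guidance map can be thresholded into discrete SRAF sites while keeping a margin around the main feature:

```python
import numpy as np

# Hypothetical sketch: convert an SRAF guidance map into discrete SRAF
# sites. `guidance` stands in for the DCNN output on a 64x64 clip.
rng = np.random.default_rng(0)
guidance = rng.random((64, 64))          # stand-in for the network output
target = np.zeros((64, 64), dtype=bool)
target[28:36, 28:36] = True              # main feature to be printed
keep_out = np.zeros_like(target)
keep_out[24:40, 24:40] = True            # feature plus a 4-pixel margin
srafs = (guidance > 0.9) & ~keep_out     # candidate assist-feature sites
```

In a real flow the thresholded sites would additionally be snapped to mask-rule-compliant rectangles; that step is omitted here.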
Spatio-temporal colour correction of strongly degraded movies
NASA Astrophysics Data System (ADS)
Islam, A. B. M. Tariqul; Farup, Ivar
2011-01-01
The archives of motion pictures represent an important part of our precious cultural heritage. Unfortunately, these cinematographic collections are vulnerable to different distortions, such as colour fading, which is beyond the capability of the photochemical restoration process. Spatial colour algorithms such as Retinex and ACE provide helpful tools for restoring strongly degraded colour films, but there are some challenges associated with these algorithms. We present an automatic colour correction technique for digital colour restoration of strongly degraded movie material. The method is based upon the existing STRESS algorithm. In order to cope with the problem of highly correlated colour channels, we implemented a preprocessing step in which saturation enhancement is performed in a PCA space. Spatial colour algorithms tend to emphasize all details in the images, including dust and scratches. Surprisingly, we found that the presence of these defects does not affect the behaviour of the colour correction algorithm. Although the STRESS algorithm is already in itself more efficient than traditional spatial colour algorithms, it is still computationally expensive. To speed it up further, we went beyond the spatial domain of the frames and extended the algorithm to the temporal domain. This way, we were able to achieve an 80 percent reduction in computational time compared to processing every single frame individually. We performed two user experiments and found that the visual quality of the resulting frames was significantly better than with existing methods. Thus, our method outperforms the existing ones in terms of both visual quality and computational efficiency.
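The PCA preprocessing idea can be sketched as follows (a minimal illustration with synthetic "faded" pixels; the boost factor 1.5 and the data are assumptions, not the paper's values): the channels are decorrelated, the weak chromatic components are amplified, and the result is projected back to RGB.

```python
import numpy as np

# Faded film: R, G, B are nearly identical, so almost all variance lies
# along one luminance-like axis. PCA separates that axis from the weak
# chromatic residuals, which we boost before going back to RGB.
rng = np.random.default_rng(1)
base = rng.random((1000, 1))
pixels = np.hstack([base, base, base]) + 0.05 * rng.random((1000, 3))
mean = pixels.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(pixels - mean, rowvar=False))
pc = (pixels - mean) @ eigvecs   # eigh: ascending, so columns 0,1 are weak axes
pc[:, :2] *= 1.5                 # saturation boost on the chromatic axes
enhanced = pc @ eigvecs.T + mean # back to RGB
```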
Heo, Lim; Lee, Hasup; Seok, Chaok
2016-08-18
Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex.
Analytical N beam position monitor method
NASA Astrophysics Data System (ADS)
Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.
2017-11-01
Measurement and correction of focusing errors is of great importance for the performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
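For context, the underlying three-BPM measurement infers the β-function at BPM 1 from the measured phase advances φ₁₂, φ₁₃ to the two following BPMs, scaled by the design model. A standard form of this relation (quoted here as background; it is not stated in the abstract) is

```latex
\beta_1 \;=\; \beta_1^{\mathrm{mdl}}\,
\frac{\cot\varphi_{12}-\cot\varphi_{13}}
     {\cot\varphi_{12}^{\mathrm{mdl}}-\cot\varphi_{13}^{\mathrm{mdl}}},
```

where the superscript "mdl" denotes model values; the N-BPM family combines many such estimates from different BPM combinations with an error-based weighting.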
Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C
2013-05-01
Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by current computational methods. In this paper, we propose a robust sparse perfusion deconvolution (SPD) method to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that our method achieves superior performance compared to existing methods and can potentially improve the differentiation between normal and ischemic tissue in the brain. Copyright © 2013 Elsevier B.V. All rights reserved.
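The deconvolution step can be sketched in a toy form: the tissue curve is modeled as c = A r, with A the convolution matrix built from the arterial input function (AIF), and the residue function r is recovered by regularized least squares. The paper regularizes with a learned sparse dictionary; a plain ridge (Tikhonov) penalty stands in for that prior here, and all data are synthetic.

```python
import numpy as np

# Toy deconvolution for CBF estimation (ridge stands in for the paper's
# sparse dictionary prior).
n = 40
t = np.arange(n, dtype=float)
aif = t * np.exp(-t / 4.0)                           # synthetic AIF
A = np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])                    # lower-triangular convolution
r_true = np.exp(-t / 8.0)                            # true residue function
rng = np.random.default_rng(2)
c = A @ r_true + 0.01 * rng.standard_normal(n)       # noisy tissue curve
lam = 1.0
r_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ c)
cbf_proxy = r_hat.max()                              # CBF is proportional to the residue peak
```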
Lazy collaborative filtering for data sets with missing values.
Ren, Yongli; Li, Gang; Zhang, Jun; Zhou, Wanlei
2013-12-01
As one of the biggest challenges in research on recommender systems, the data sparsity issue is mainly caused by the fact that users tend to rate only a small proportion of the huge number of available items. This issue becomes even more problematic for neighborhood-based collaborative filtering (CF) methods, as even fewer ratings are available in the neighborhood of the query item. In this paper, we aim to address the data sparsity issue in the context of neighborhood-based CF. For a given query (user, item), a set of key ratings is first identified by taking the historical information of both the user and the item into account. Then, an auto-adaptive imputation (AutAI) method is proposed to impute the missing values in the set of key ratings. We present a theoretical analysis to show that the proposed imputation method effectively improves the performance of conventional neighborhood-based CF methods. The experimental results show that our new method of CF with AutAI outperforms six existing recommendation methods in terms of accuracy.
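The overall pipeline (impute missing ratings, then run neighborhood-based CF) can be sketched on a made-up matrix. AutAI imputes adaptively from key ratings; simple item-mean imputation stands in for it in this illustration.

```python
import numpy as np

# Rows: users, columns: items, 0 = missing rating (toy data).
R = np.array([[5., 4., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [0., 1., 5., 4.]])
mask = R > 0
item_means = (R * mask).sum(axis=0) / np.maximum(mask.sum(axis=0), 1)
R_imp = np.where(mask, R, item_means)    # fill missing entries (AutAI stand-in)

# Cosine similarity between users, then predict user 1's rating of item 2.
norms = np.linalg.norm(R_imp, axis=1)
cos = (R_imp @ R_imp.T) / np.outer(norms, norms)
u, i = 1, 2
others = [k for k in range(R.shape[0]) if k != u]
pred = sum(cos[u, k] * R_imp[k, i] for k in others) / sum(cos[u, k] for k in others)
```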
Liu, Jinjun; Leng, Yonggang; Lai, Zhihui; Fan, Shengbo
2018-04-25
Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal with existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal, as well as its second and higher harmonic frequencies, tend to be large parameters. To solve this problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using a filtering technique and Single SideBand (SSB) modulation. This new method overcomes the limitation imposed by the "sampling ratio", i.e., the ratio of the sampling frequency to the frequency of the target signal. It also ensures that multi-frequency target signals can be processed so as to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance in detecting a multi-frequency signal with a low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of the method.
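The frequency-exchange idea can be illustrated in its simplest form: mixing the signal with a complex exponential shifts its spectrum, so a large characteristic frequency (120 Hz here) moves down to a small one (1 Hz) that satisfies small-parameter SR conditions. All frequencies and values below are illustrative assumptions, not the paper's; the paper additionally uses filtering and SSB modulation to do this on real-valued signals.

```python
import numpy as np

fs = 1000.0                                  # sampling frequency, Hz
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * 120.0 * t)            # fault component at 120 Hz
y = x * np.exp(-2j * np.pi * 119.0 * t)      # spectrum shifted down by 119 Hz
spec = np.abs(np.fft.fft(y))
peak_bin = int(np.argmax(spec[:500]))        # dominant positive-frequency bin
```

With a 1 s window, FFT bin k corresponds to k Hz, so the dominant positive-frequency component lands in bin 1.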
What Should Stay Put? Campus Landscape Planning for the Long Term.
ERIC Educational Resources Information Center
Yahres, Mike Van
2000-01-01
Discusses campus landscape long-term planning and design decision making during campus alterations and upgrades. Campus landscape elements that tend to remain in place, and planning for their continued existence, are discussed. (GR)
Christine L. Lane
1998-01-01
Export constraints affecting North American west coast logs have existed intermittently since 1831. Recent developments have tended toward tighter restrictions. National, Provincial, and State rules are described.
Generation of Neo Octaploid Switchgrass
USDA-ARS?s Scientific Manuscript database
Switchgrass (Panicum virgatum L.) exists as multiple cytotypes with octaploid and tetraploid populations occupying distinct, overlapping ranges. These cytotypes tend to show differences in adaptation, yield potential, and other characters, but the specific result of whole genome duplication is not ...
This paper provides an overview of existing statistical methodologies for the estimation of site-specific and regional trends in wet deposition. The interaction of atmospheric processes and emissions tends to produce wet deposition data patterns that show large spatial and tempora...
Uncovering the information core in recommender systems
NASA Astrophysics Data System (ADS)
Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhou, Tao
2014-08-01
With the rapid growth of the Internet and the overwhelming amount of information that people are confronted with, recommender systems have been developed to effectively support users' decision-making process in online systems. So far, much attention has been paid to designing new recommendation algorithms and improving existing ones. However, few works have considered the different contributions of different users to the performance of a recommender system. Such studies can help us improve recommendation efficiency by excluding irrelevant users. In this paper, we argue that in each online system there exists a group of core users who carry most of the information needed for recommendation. With them alone, a recommender system can already generate satisfactory recommendations. Our core-user extraction method enables recommender systems to achieve 90% of the accuracy of the top-L recommendation while taking only 20% of the users into account. A detailed investigation reveals that these core users are not necessarily the large-degree users. Moreover, they tend to select high-quality objects, and their selections are well diversified.
Advantages of later motherhood.
Myrskylä, M; Barclay, K; Goisis, A
2017-01-01
In high-income countries childbearing has been increasingly postponed since the 1970s, and it is crucial to understand the consequences of this demographic shift. The literature has tended to characterize later motherhood as a significant health threat for children and parents. We contribute to this debate with a comprehensive review of recent evidence suggesting that an older maternal age can also have positive effects, covering literature that links the age at parenthood with the sociodemographic characteristics of the parents, with macrolevel interactions, and with subjective well-being. Recent studies show that there can also be advantages associated with later motherhood. First, whilst in the past older mothers had low levels of education and large families, currently older mothers tend to have higher education and smaller families than their younger peers. Consequently, children born to older mothers in the past tended to have worse outcomes than children born to younger mothers, whilst the opposite is true in recent cohorts. Second, postponement of childbearing means that the child is born at a later date and in a later birth cohort, and may benefit from secular changes in the macroenvironment. Evidence shows that when the positive trends in the macroenvironment are strong, they outweigh the negative effects of reproductive ageing. Third, existing studies show that happiness increases around and after childbirth among older mothers, whereas for younger mothers the effect does not exist or is short-lived. There are important sociodemographic pathways associated with postponement of childbearing which might compensate, or even more than compensate, for the biological disadvantages associated with reproductive ageing.
Application of Persistent Scatterer Radar Interferometry to the New Orleans delta region
NASA Astrophysics Data System (ADS)
Lohman, R.; Fielding, E.; Blom, R.
2007-12-01
Subsidence in New Orleans and along the Gulf Coast is currently monitored using a variety of ground- and satellite-based methods, and extensive geophysical modeling of the area seeks to understand the contributions to subsidence rates from sediment compaction, salt evacuation, oxidation, and anthropogenic forcings such as the withdrawal or injection of subsurface fluids. A better understanding of the temporal and spatial variability of these subsidence rates can help improve civic planning and disaster mitigation efforts, with the goal of protecting lives and property over the long term. Existing ground-based surveys indicate that subsidence gradients of 1 cm/yr or more over length scales of several tens of kilometres exist in the region, especially in the vicinity of the city of New Orleans. Modeling results based on sediment inputs and post-glacial sea level change tend to predict lower gradients, presumably because there is a large input from unmodeled crustal faults and anthropogenic activity. The broad spatial coverage of InSAR can both add to the existing network of ground-based geodetic surveys and help to identify areas that are deforming anomalously with respect to their surroundings. Here we present the use of a modified point scatterer method applied to radar data from the Radarsat satellite for New Orleans and the Gulf Coast. Point target analysis of InSAR data has already been successfully applied to the New Orleans area by Dixon et al. (2006). Our method is similar to the Stanford Method for PS (StaMPS) developed by Andy Hooper, adapted to rely on combinations of small orbital baselines and the inclusion of coherent regions from the time span of each interferogram during phase unwrapping, rather than only using points that are stable within all interferograms.
Attentional effects on gaze preference for salient loci in traffic scenes.
Sakai, Hiroyuki; Shin, Duk; Kohama, Takeshi; Uchiyama, Yuji
2012-01-01
Alerting drivers for self-regulation of attention might decrease crash risks attributable to absent-minded driving. However, no reliable method exists for monitoring driver attention. We therefore examined attentional effects on gaze preference for salient loci (GPS) in traffic scenes. In an active viewing (AV) condition requiring endogenous attention for traffic scene comprehension, participants identified appropriate driving speeds for presented traffic scene images. In a passive viewing (PV) condition requiring no endogenous attention, participants passively viewed traffic scene images. GPS was quantified by the mean saliency value averaged across fixation locations. Results show that GPS was lower during AV than during PV. Additionally, gaze dwell time on signboards was shorter for AV than for PV. These results suggest that, in the absence of endogenous attention for traffic scene comprehension, gaze tends to concentrate on irrelevant salient loci in a traffic environment. Increased GPS can therefore indicate absent-minded driving, which is otherwise difficult to detect.
Critical exponent analysis of lightly germanium-doped La0.7Ca0.3Mn1-xGexO3 (x = 0.05 and x = 0.07)
NASA Astrophysics Data System (ADS)
Nanto, Dwi; Kurniawan, Budhy; Soegijono, Bambang; Ghosh, Nilotpal; Hwang, Jong-Soon; Yu, Seong-Cho
2018-04-01
We have carried out a critical behavior study of La0.7Ca0.3MnO3 (LCMO) manganite perovskites whose Mn sites have been doped with Ge in order to explore the magnetic interactions. Light Ge doping of 5 or 7 percent tended to produce LCMOs with second-order magnetic transitions. The critical parameters of the 5- and 7-percent Ge-doped LCMO were determined via the modified Arrott plot method to be TC = 185 K, β = 0.331 ± 0.019, and γ = 1.15 ± 0.017, and TC = 153 K, β = 0.496 ± 0.011, and γ = 1.03 ± 0.046, respectively. Isothermal magnetization data collected near the Curie temperature (TC) collapse onto a universal function with two branches, M(H, ε) = |ε|^β f±(H/|ε|^(β+γ)), where ε = (T - TC)/TC is the reduced temperature; f+ applies for T > TC, while f− applies for T < TC.
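The scaling collapse can be illustrated numerically: if the data obey the universal form, then M|ε|^−β plotted against H|ε|^−(β+γ) falls on a single branch for all isotherms on one side of TC. The exponents below are the paper's x = 0.05 values; the scaling function f+ and the field range are toy assumptions used only to generate synthetic data.

```python
import numpy as np

beta, gamma, Tc = 0.331, 1.15, 185.0
f_plus = lambda u: u / (1.0 + u)       # assumed toy scaling function
H = np.linspace(0.1, 5.0, 50)

def rescaled(T):
    """Synthetic isotherm obeying M = |eps|^beta * f(H / |eps|^(beta+gamma))."""
    eps = abs(T - Tc) / Tc
    M = eps ** beta * f_plus(H / eps ** (beta + gamma))
    return M / eps ** beta, H / eps ** (beta + gamma)   # collapse coordinates

m1, h1 = rescaled(190.0)               # two T > Tc isotherms
m2, h2 = rescaled(195.0)
```

Both rescaled curves lie on the single branch f+, which is what the collapse test checks on real data.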
2010-01-01
Background: In a recent study, two-dimensional (2D) network layouts were used to visualize and quantitatively analyze the relationship between chronic renal diseases and regulated genes. The results revealed complex relationships between disease type, gene specificity, and gene regulation type, which led to important insights about the underlying biological pathways. Here we describe an attempt to extend our understanding of these complex relationships by reanalyzing the data using three-dimensional (3D) network layouts, displayed through 2D and 3D viewing methods. Findings: The 3D network layout (displayed through the 3D viewing method) revealed that genes implicated in many diseases (non-specific genes) tended to be predominantly down-regulated, whereas genes regulated in a few diseases (disease-specific genes) tended to be up-regulated. This new global relationship was quantitatively validated through comparison to 1000 random permutations of networks of the same size and distribution. Our new finding appeared to be the result of using specific features of the 3D viewing method to analyze the 3D renal network. Conclusions: The global relationship between gene regulation and gene specificity is the first clue from human studies that there exist common mechanisms across several renal diseases, which suggest hypotheses for the underlying mechanisms. Furthermore, the study suggests hypotheses for why the 3D visualization helped to make salient a new regularity that was difficult to detect in 2D. Future research that tests these hypotheses should enable a more systematic understanding of when and how to use 3D network visualizations to reveal complex regularities in biological networks. PMID:21070623
Development of a Family of Ultra-High Performance Concrete Pi-Girders
DOT National Transportation Integrated Search
2014-01-01
Ultra-high performance concrete (UHPC) is an advanced cementitious composite material, which tends to exhibit superior properties such as exceptional durability, increased strength, and long-term stability. (See references 1-4.) The use of existing s...
Existence of ground state of an electron in the BDF approximation
NASA Astrophysics Data System (ADS)
Sok, Jérémy
2014-05-01
The Bogoliubov-Dirac-Fock (BDF) model allows us to describe relativistic electrons interacting with the Dirac sea. It can be seen as a mean-field approximation of Quantum Electrodynamics (QED) in which photons are neglected. This paper treats the case of an electron together with the Dirac sea in the absence of any external field. Such a system is described by its one-body density matrix, an infinite-rank, self-adjoint operator. The parameters of the model are the coupling constant α > 0 and the ultraviolet cut-off Λ > 0: we consider the subspace of square-integrable functions whose Fourier transform vanishes outside the ball B(0, Λ). We prove the existence of minimizers of the BDF energy under the charge constraint of one electron and no external field, provided that α, Λ^(-1), and α log(Λ) are sufficiently small. The interpretation is the following: in this regime the electron creates a polarization in the Dirac vacuum which allows it to bind. We then study the non-relativistic limit of such a system, in which the speed of light tends to infinity (or, equivalently, α tends to zero) with α log(Λ) fixed: after rescaling and translation, the electronic solution tends to a Choquard-Pekar ground state.
Metallized Nanotube Polymer Composite (MNPC) and Methods for Making Same
NASA Technical Reports Server (NTRS)
Harrison, Joycelyn S. (Inventor); Lowther, Sharon E. (Inventor); Lillehei, Peter T. (Inventor); Park, Cheol (Inventor); Taylor, Larry (Inventor); Kang, Jin Ho (Inventor); Nazem, Negin (Inventor); Kim, Jae-Woo (Inventor); Sauti, Godfrey (Inventor)
2017-01-01
A novel method is presented to develop highly conductive functional materials that can effectively shield various electromagnetic effects (EMEs) and harmful radiation. Metallized nanotube polymer composites (MNPCs) are composed of a lightweight polymer matrix, super-strong nanotubes (NTs), and functional nanoparticle inclusions. MNPC is prepared by supercritical fluid infusion of various metal precursors (Au, Pt, Fe, and Ni salts), incorporated simultaneously or sequentially, into a solid NT-polymer composite, followed by thermal reduction. The infused metal precursor tends to diffuse preferentially toward the nanotube surfaces as well as the surfaces of the NT-polymer matrix, and is reduced to form nanometer-scale metal particles or metal coatings. The conductivity of the MNPC increases with metallization, which provides better shielding against various EMEs and radiation by reflecting and absorbing EM waves more efficiently. Furthermore, the supercritical fluid infusion process helps to improve the toughness of the composite films significantly, regardless of the presence of metal.
A visual model for object detection based on active contours and level-set method.
Satoh, Shunji
2006-09-01
A visual model for object detection is proposed. In order to make its detection ability comparable with existing technical methods for object detection, an evolution equation of neurons in the model is derived from the computational principle of active contours. The hierarchical structure of the model emerges naturally from the evolution equation. A drawback of active contours, their sensitivity to initial values, is alleviated by introducing and formulating convexity, a visual property. Numerical experiments show that the proposed model detects objects with complex topologies and that it is tolerant of noise. A visual attention model is then introduced into the proposed model. Further simulations show that the visual properties of the model are consistent with the results of psychological experiments on the relation between figure-ground reversal and visual attention. We also demonstrate that the model tends to perceive smaller regions as figures, a characteristic observed in human visual perception.
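A minimal level-set evolution, of the kind underlying active-contour computations, can be sketched as follows (grid size, speed, and time step are illustrative choices, not the model's): the zero level set of a signed-distance function is moved uniformly inward by evolving φ_t = F|∇φ|.

```python
import numpy as np

n = 64
xs = np.linspace(-1.0, 1.0, n)
x, y = np.meshgrid(xs, xs)
phi0 = np.sqrt(x**2 + y**2) - 0.5         # zero level set: circle of radius 0.5
phi = phi0.copy()
h = xs[1] - xs[0]                         # grid spacing
F, dt = 1.0, 0.01
for _ in range(20):                       # contour radius shrinks by ~F*dt*20 = 0.2
    gy, gx = np.gradient(phi, h)
    phi = phi + dt * F * np.sqrt(gx**2 + gy**2)
inside = int((phi < 0).sum())             # pixels still enclosed by the contour
```

A full active-contour model adds image-dependent speed terms and curvature; this sketch shows only the transport mechanics.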
Iris recognition: on the segmentation of degraded images acquired in the visible wavelength.
Proença, Hugo
2010-08-01
Iris recognition imaging constraints are receiving increasing attention. There are several proposals to develop systems that operate in the visible wavelength and in less constrained environments. These imaging conditions introduce noise artifacts that lead to severely degraded images, making iris segmentation a major issue. Having observed that existing iris segmentation methods tend to fail in these challenging conditions, we present a segmentation method that can handle degraded images acquired in less constrained conditions. We offer the following contributions: 1) we consider the sclera the most easily distinguishable part of the eye in degraded images; 2) we propose a new type of feature that measures the proportion of sclera in each direction and is fundamental in segmenting the iris; and 3) the entire procedure runs in deterministically linear time with respect to the size of the image, making it suitable for real-time applications.
A general purpose feature extractor for light detection and ranging data.
Li, Yangming; Olson, Edwin B
2010-01-01
Feature extraction is a central step in processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general-purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of the environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results on both software simulation and standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset.
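The Kanade-Tomasi corner response used as the starting point here is the smallest eigenvalue of the local structure tensor of image gradients; a corner is where gradient energy is strong in two directions. A minimal single-scale sketch on a synthetic grid (the paper applies a multi-scale version to rasterized LIDAR data, not to this toy image):

```python
import numpy as np

rng = np.random.default_rng(3)
img = np.zeros((32, 32))
img[16:, 16:] = 1.0                         # an L-shaped corner near (16, 16)
img += 0.01 * rng.standard_normal(img.shape)
gy, gx = np.gradient(img)

def min_eig_response(gx, gy, r=2):
    """Smallest structure-tensor eigenvalue over a (2r+1)^2 window."""
    h, w = gx.shape
    resp = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            wx = gx[i - r:i + r + 1, j - r:j + r + 1].ravel()
            wy = gy[i - r:i + r + 1, j - r:j + r + 1].ravel()
            a, b, c = (wx * wx).sum(), (wx * wy).sum(), (wy * wy).sum()
            # eigenvalues of [[a, b], [b, c]]; keep the smaller one
            resp[i, j] = (a + c) / 2 - np.sqrt(((a - c) / 2) ** 2 + b ** 2)
    return resp

resp = min_eig_response(gx, gy)
ci, cj = np.unravel_index(np.argmax(resp), resp.shape)   # strongest corner
```

Along a straight edge only one gradient direction is strong, so the minimum eigenvalue stays small; it peaks only where the two edges meet.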
Noise radiation directivity from a wind-tunnel inlet with inlet vanes and duct wall linings
NASA Technical Reports Server (NTRS)
Soderman, P. T.; Phillips, J. D.
1986-01-01
The acoustic radiation patterns from a 1/15th-scale model of the Ames 80- by 120-Foot Wind Tunnel test section and inlet have been measured with a noise source installed in the test section. Data were acquired without airflow in the duct. Sound-absorbent inlet vanes, oriented parallel to each other or splayed with variable incidence relative to the duct's long axis, were evaluated along with duct wall linings. Results show that splayed vanes tend to spread the sound to greater angles than those measured with the open inlet. Parallel vanes narrowed the high-frequency radiation pattern. Duct wall linings had a strong effect on acoustic directivity by attenuating wall reflections. Vane insertion loss was measured. Directivity results are compared with existing data from square ducts. Two prediction methods for duct radiation directivity are described: one is an empirical method based on the test data, and the other is an analytical method based on ray acoustics.
A uniform Tauberian theorem in dynamic games
NASA Astrophysics Data System (ADS)
Khlopin, D. V.
2018-01-01
Antagonistic dynamic games, including games represented in normal form, are considered. The asymptotic behaviour of the value in these games is investigated as the game horizon tends to infinity (Cesàro mean) and as the discounting parameter tends to zero (Abel mean). The corresponding Abelian-Tauberian theorem is established: it is demonstrated that in both families the game value uniformly converges to the same limit, provided that at least one of the limits exists. Analogues of one-sided Tauberian theorems are obtained. An example shows that the requirements are essential even for control problems. Bibliography: 31 titles.
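For a bounded running payoff v(t), the two averaging schemes referred to above take the standard forms

```latex
V_T \;=\; \frac{1}{T}\int_0^T v(t)\,dt \quad\text{(Ces\`aro mean)},
\qquad
V_\lambda \;=\; \lambda\int_0^\infty e^{-\lambda t}\,v(t)\,dt \quad\text{(Abel mean)},
```

and the Tauberian statement is that the limits of V_T as T → ∞ and of V_λ as λ → 0+ coincide whenever either exists; in the game setting considered here the convergence is additionally uniform over initial states.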
The One-Child Policy Needs an Overhaul
ERIC Educational Resources Information Center
Jing, Yijia
2013-01-01
China's one-child policy has remained intact for over three decades, despite the vast socioeconomic changes emerging during this period. While the pressure of population growth still exists, the current control-focused policy has aroused problems and damages that tend to offset its gains. The legitimacy of…
Culture: Copying, Compression, and Conventionality
ERIC Educational Resources Information Center
Tamariz, Mónica; Kirby, Simon
2015-01-01
Through cultural transmission, repeated learning by new individuals transforms cultural information, which tends to become increasingly compressible (Kirby, Cornish, & Smith, 2008; Smith, Tamariz, & Kirby, 2013). Existing diffusion chain studies include in their design two processes that could be responsible for this tendency: learning…
1983-12-01
problem equipment, supply support deficiencies, maintenance difficulties, etc., which tend to reduce the combat readiness of the Navy. CASREPs are...with Operation Reports Publication NVP 7. The severity codes are as follows: C-2 - (Substantially Ready) A deficiency exists in mission essential...equipment which causes a minor degradation in any primary mission area. C-3 - (Marginally Ready) A deficiency exists in mission essential equipment which
Nanostructured Composites: Effective Mechanical Property Determination of Nanotube Bundles
NASA Technical Reports Server (NTRS)
Saether, E.; Pipes, R. B.; Frankland, S. J. V.
2002-01-01
Carbon nanotubes naturally tend to form crystals in the form of hexagonally packed bundles or ropes that should exhibit a transversely isotropic constitutive behavior. Although the intratube axial stiffness is on the order of 1 TPa due to a strong network of delocalized bonds, the intertube cohesive strength is orders of magnitude less, being controlled by weak, nonbonding van der Waals interactions. An accurate determination of the effective mechanical properties of nanotube bundles is important to assess potential structural applications such as reinforcement in future composite material systems. A direct method for calculating effective material constants is developed in the present study. The Lennard-Jones potential is used to model the nonbonding cohesive forces. A complete set of transverse moduli is obtained and compared with existing data.
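The nonbonding intertube cohesion mentioned above is commonly modeled with the standard 12-6 Lennard-Jones pair potential. The sketch below uses reduced, illustrative units (epsilon = sigma = 1), not the carbon-carbon parameters of the study, and verifies the well-known equilibrium separation r_min = 2^(1/6) * sigma:

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential (epsilon, sigma in illustrative reduced units)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Locate the potential minimum numerically; analytically it sits at r = 2^(1/6)*sigma.
r = np.linspace(0.9, 3.0, 20001)
u = lennard_jones(r)
r_min_numeric = r[np.argmin(u)]
print(r_min_numeric)                      # close to 2**(1/6) ≈ 1.1225
```

The depth of the well at r_min equals -epsilon, which is the cohesive energy scale that makes the intertube response so much softer than the covalently bonded intratube direction.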
Optimizing Aspect-Oriented Mechanisms for Embedded Applications
NASA Astrophysics Data System (ADS)
Hundt, Christine; Stöhr, Daniel; Glesner, Sabine
As applications for small embedded mobile devices are getting larger and more complex, it becomes inevitable to adopt more advanced software engineering methods from the field of desktop application development. Aspect-oriented programming (AOP) is a promising approach due to its advanced modularization capabilities. However, existing AOP languages tend to add a substantial overhead in both execution time and code size which restricts their practicality for small devices with limited resources. In this paper, we present optimizations for aspect-oriented mechanisms at the level of the virtual machine. Our experiments show that these optimizations yield a considerable performance gain along with a reduction of the code size. Thus, our optimizations establish the base for using advanced aspect-oriented modularization techniques for developing Java applications on small embedded devices.
[The future of forensic DNA analysis for criminal justice].
Laurent, François-Xavier; Vibrac, Geoffrey; Rubio, Aurélien; Thévenot, Marie-Thérèse; Pène, Laurent
2017-11-01
In the criminal framework, the analysis of approximately 20 DNA microsatellites enables the establishment of a genetic profile with a high statistical power of discrimination. This technique gives us the possibility to establish or exclude a match between a biological trace detected at a crime scene and a suspect whose DNA was collected via an oral swab. However, conventional techniques tend to complicate the interpretation of difficult DNA samples, such as degraded DNA and DNA mixtures. The aim of this review is to highlight the power of new forensic DNA methods (including high-throughput sequencing and single-cell sequencing) to facilitate the expert's interpretation in full compliance with existing French legislation. © 2017 médecine/sciences – Inserm.
Towards Online Multiresolution Community Detection in Large-Scale Networks
Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim
2011-01-01
The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. Many existing methods tend to be accurate only when a priori assumptions about network properties hold and predefined parameters are well chosen. In this paper, we introduce a new quality function of local community and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325
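The general idea of local expansion from a seed vertex can be sketched as follows. This is a minimal illustration using a simple internal-degree-fraction quality on a toy graph; it is not the paper's quality function or algorithm:

```python
from collections import defaultdict

def quality(community, adj):
    """Internal-edge fraction: internal degree / total degree of the community."""
    internal = sum(1 for u in community for v in adj[u] if v in community)  # each internal edge counted twice
    total = sum(len(adj[u]) for u in community)
    return internal / total if total else 0.0

def local_expand(seed, adj):
    """Greedily grow a community from `seed`, adding the neighbor that most improves quality."""
    community = {seed}
    while True:
        frontier = {v for u in community for v in adj[u]} - community
        best, best_q = None, quality(community, adj)
        for v in frontier:
            q = quality(community | {v}, adj)
            if q > best_q:
                best, best_q = v, q
        if best is None:
            return community
        community.add(best)

# Two triangles (0-1-2 and 3-4-5) joined by a single bridge edge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
print(sorted(local_expand(0, adj)))  # recovers the seed's triangle: [0, 1, 2]
```

Expansion stops at the bridge because absorbing vertex 3 would lower the internal-edge fraction, which is the essential mechanism behind seed-based local community detection.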
Atkinson, David A.
2002-01-01
Methods and apparatus for ion mobility spectrometry and an analyte detection and identification verification system are disclosed. The apparatus is configured to be used in an ion mobility spectrometer and includes a plurality of reactant reservoirs configured to contain a plurality of reactants which can be reacted with the sample to form adducts having varying ion mobilities. A carrier fluid, such as air or nitrogen, is used to carry the sample into the spectrometer. The plurality of reactants are configured to be selectively added to the carrier stream by use of inlet and outlet manifolds in communication with the reagent reservoirs, the reservoirs being selectively isolatable by valves. The invention further includes a spectrometer having the reagent system described. In the method, a first reactant is used with the sample. Following a positive result, a second reactant is used to determine whether a predicted response occurs. The occurrence of the second predicted response tends to verify the existence of a component of interest within the sample. A third reactant can also be used to provide further verification of the existence of a component of interest. A library can be established of known responses of compounds of interest with various reactants, and the results of a specific multi-reactant survey of a sample can be compared against the library to determine whether a component detected in the sample is likely to be a specific component of interest.
Kusters, Koen; Buck, Louise; de Graaf, Maartje; Minang, Peter; van Oosten, Cora; Zagt, Roderick
2018-07-01
Integrated landscape initiatives typically aim to strengthen landscape governance by developing and facilitating multi-stakeholder platforms. These are institutional coordination mechanisms that enable discussions, negotiations, and joint planning between stakeholders from various sectors in a given landscape. Multi-stakeholder platforms tend to involve complex processes with diverse actors, whose objectives and focus may be subject to periodic re-evaluation, revision or reform. In this article we propose a participatory method to aid planning, monitoring, and evaluation of such platforms, and we report on experiences from piloting the method in Ghana and Indonesia. The method comprises three components. The first can be used to look ahead, identifying priorities for future multi-stakeholder collaboration in the landscape. It is based on the identification of four aspirations that are common across multi-stakeholder platforms in integrated landscape initiatives. The second can be used to look inward. It focuses on the processes within an existing multi-stakeholder platform in order to identify areas for possible improvement. The third can be used to look back, identifying the main outcomes of an existing platform and comparing them to the original objectives. The three components can be implemented together or separately. They can be used to inform planning and adaptive management of the platform, as well as to demonstrate performance and inform the design of new interventions.
Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.
Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi
2015-04-22
Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single-marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set-specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene-set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.
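The kernel PCA step used to improve computational efficiency can be sketched in a few lines of numpy: double-center the Gram matrix, then project onto its leading eigenvectors. The RBF kernel, data, and parameter values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(K, n_components=2):
    """Scores of the top principal components of the centered Gram matrix."""
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # double centering in feature space
    vals, vecs = np.linalg.eigh(Kc)               # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # component scores = eigenvectors scaled by sqrt(eigenvalue)
    return vecs[:, :n_components] * np.sqrt(np.maximum(vals[:n_components], 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                      # 30 subjects, 5 markers in one gene-set
scores = kernel_pca(rbf_kernel(X), n_components=3)
print(scores.shape)                               # (30, 3)
```

In a two-stage setting like the one described, such low-dimensional scores per gene-set would then feed the stage-II regularized aggregation.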
Leading an Entrepreneurial Workforce: Development or Decline?
ERIC Educational Resources Information Center
Clargo, Paul; Tunstall, Richard
2011-01-01
Purpose: This paper analyses entrepreneurial activity within existing organisations. Research tends to limit entrepreneurial behaviour to owner-managers, corporate senior and middle managers and frequently presents intrapreneurship as a positive phenomenon. This paper seeks to broaden the focus of studies of intrapreneurship and corporate…
Evaluating the Interdisciplinary Discoverability of Data
NASA Astrophysics Data System (ADS)
Gordon, S.; Habermann, T.
2017-12-01
Documentation needs are similar across communities. Communities tend to agree on many of the basic concepts necessary for discovery. Shared concepts such as a title or a description of the data exist in most metadata dialects. Many dialects have been designed and recommendations implemented to create metadata valuable for data discovery. These implementations can create barriers to discovering the right data. How can we ensure that the documentation we curate will be discoverable and understandable by researchers outside of our own disciplines and organizations? Since communities tend to use and understand many of the same documentation concepts, the barriers to interdisciplinary discovery are caused by differences in implementation. Thus, tools and methods that operate at the conceptual layer, evaluating records for documentation concepts regardless of dialect, can be effective in identifying opportunities for improvement and providing guidance. The Metadata Evaluation Web Service combined with a Jupyter Notebook interface allows a user to gather insight about a collection of records with respect to different communities' conceptual recommendations. It accomplishes this via data visualizations and provides links to implementation-specific guidance on the ESIP Wiki for each recommendation applied to the collection. By utilizing these curation tools as part of an iterative process, the data's impact can be increased by making it discoverable to a greater scientific and research community. Due to the conceptual focus of the methods and tools used, they can be utilized by any community or organization regardless of their documentation dialect or tools.
Detecting and Characterizing Genomic Signatures of Positive Selection in Global Populations
Liu, Xuanyao; Ong, Rick Twee-Hee; Pillai, Esakimuthu Nisha; Elzein, Abier M.; Small, Kerrin S.; Clark, Taane G.; Kwiatkowski, Dominic P.; Teo, Yik-Ying
2013-01-01
Natural selection is a significant force that shapes the architecture of the human genome and introduces diversity across global populations. The question of whether advantageous mutations have arisen in the human genome as a result of single or multiple mutation events remains unanswered except for a handful of genes, such as those that confer lactase persistence, affect skin pigmentation, or cause sickle cell anemia. We have developed a long-range-haplotype method for identifying genomic signatures of positive selection to complement existing methods, such as the integrated haplotype score (iHS) or cross-population extended haplotype homozygosity (XP-EHH), for locating signals across the entire allele frequency spectrum. Our method also locates the founder haplotypes that carry the advantageous variants and infers their corresponding population frequencies. This presents an opportunity to systematically interrogate, across the whole human genome, whether a selection signal shared across different populations is the consequence of a single mutation process followed subsequently by gene flow between populations, or of convergent evolution due to the occurrence of multiple independent mutation events either at the same variant or within the same gene. The application of our method to data from 14 populations across the world revealed that positive-selection events tend to cluster in populations of the same ancestry. Comparing the founder haplotypes for events that are present across different populations revealed that convergent evolution is a rare occurrence and that the majority of shared signals stem from the same evolutionary event. PMID:23731540
Transversally periodic solitary gravity–capillary waves
Milewski, Paul A.; Wang, Zhan
2014-01-01
When both gravity and surface tension effects are present, surface solitary water waves are known to exist in both two- and three-dimensional infinitely deep fluids. We describe here solutions bridging these two cases: travelling waves which are localized in the propagation direction and periodic in the transverse direction. These transversally periodic gravity–capillary solitary waves are found to be of either elevation or depression type, tend to plane waves below a critical transverse period and tend to solitary lumps as the transverse period tends to infinity. The waves are found numerically in a Hamiltonian system for water waves simplified by a cubic truncation of the Dirichlet-to-Neumann operator. This approximation has been proved to be very accurate for both two- and three-dimensional computations of fully localized gravity–capillary solitary waves. The stability properties of these waves are then investigated via the time evolution of perturbed wave profiles. PMID:24399922
Leng, Yonggang; Fan, Shengbo
2018-01-01
Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal through the existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal, as well as its second and higher harmonic frequencies, tend to be large parameters. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using a filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of the "sampling ratio", which is the ratio of the sampling frequency to the frequency of the target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with a low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method. PMID:29693577
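The re-scaling half of the idea is purely a relabeling of the time axis: keeping the samples but dividing the nominal sampling rate by a ratio R maps a large target frequency f0 to the small parameter f0/R. A minimal numpy sketch (all values illustrative, and omitting the filtering/SSB frequency-exchange and the SR system itself):

```python
import numpy as np

fs, f0, R = 20000.0, 1000.0, 100.0      # sampling rate, target frequency, re-scaling ratio (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * f0 * t)          # stand-in for a fault characteristic component

# Re-scaling: keep the samples, relabel the sampling rate as fs/R. The target
# frequency then appears as the small parameter f0/R, suitable for classical SR.
fs_scaled = fs / R
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs_scaled)
f_peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f_peak)                            # ≈ f0 / R = 10.0
```

After SR processing at the small parameter, multiplying detected frequencies back by R recovers the physical values.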
NASA Astrophysics Data System (ADS)
Zhou, Q.; Liu, L.
2017-12-01
Quantifying past mantle dynamic processes represents a major challenge in understanding the temporal evolution of the solid earth. Mantle convection modeling with data assimilation is one of the most powerful tools to investigate the dynamics of plate subduction and mantle convection. Although various data assimilation methods, both forward and inverse, have been created, these methods all have limitations in their capabilities to represent the real earth. Pure forward models tend to miss important mantle structures due to the incorrect initial condition and thus may lead to incorrect mantle evolution. In contrast, pure tomography-based models cannot effectively resolve the fine slab structure and would fail to predict important subduction-zone dynamic processes. Here we propose a hybrid data assimilation method that combines the unique power of the sequential and adjoint algorithms, which can properly capture the detailed evolution of the downgoing slab and the tomographically constrained mantle structures, respectively. We apply this new method to reconstructing mantle dynamics below the western U.S. while considering large lateral viscosity variations. By comparing this result with those from several existing data assimilation methods, we demonstrate that the hybrid modeling approach recovers realistic 4-D mantle dynamics best.
Review of the Strength and Capacity Data for Manual Material Handling Activities.
1979-11-01
Industrial Association Journal, 1, 1-12, 1973. KEYWORDS: posture, lordosimetry, lumbosacral adjustments, ectomorphs, endomorphs METHODS: Ten women served as...no-load condition, the average lumbosacral angle is close to 130 degrees, and the range of its variation is less for the ectomorphs. Lumbosacral...in reaction to light load application, ectomorphs tend to arch, and endomorphs tend to straighten. At greater levels of loading, the ectomorphs tend
Marihuana, Alcohol and Tobacco: Reassessment of a Presumed Relationship.
ERIC Educational Resources Information Center
Dull, R. Thomas; Williams, Franklin P., III
1981-01-01
Concludes that little relationship exists among the three substances: marihuana, alcohol, and tobacco. Youthful subjects tend to overestimate the relationships between the three substances, and the findings cannot be generalized to other populations. Suggests an explanation of this youthful association that focuses on simultaneous experimentation rather than causal…
An Introduction to Multicultural Counseling.
ERIC Educational Resources Information Center
Lee, Wanda M. L.
When client and counselor are from different cultural backgrounds, they tend to view things from disparate perspectives. Though a background in multiculturalism is required for program accreditation, most existing texts limit coverage to ethnicity, without the emphasis of broad concepts such as discrimination and acculturation, or coverage of…
NASA Astrophysics Data System (ADS)
Fiedler, Sabine; Illich, Bernhard; Berger, Jochen; Graw, Matthias
2009-07-01
Ground-penetrating radar (GPR) is a geophysical method that is commonly used in archaeological and forensic investigations, including the determination of the exact location of graves. Whilst the method is rapid and does not involve disturbance of the graves, the interpretation of GPR profiles is nevertheless difficult and often leads to incorrect results. Incorrect identifications could hinder criminal investigations and complicate burials in cemeteries that have no information on the location of previously existing graves. In order to increase the number of unmarked graves that are identified, the GPR results need to be verified by comparing them with the soil and vegetation properties of the sites examined. We used a modern cemetery to assess the results obtained with GPR, which we then compared with previously obtained tachymetric data and with an excavation of the graves where doubt existed. Certain soil conditions tended to make the application of GPR difficult on occasions, but a rough estimation of the location of the graves was always possible. The two different methods, GPR survey and tachymetry, both proved suitable for correctly determining the exact location of the majority of graves. The present study thus shows that GPR is a reliable method for determining the exact location of unmarked graves in modern cemeteries. However, the method did not allow statements to be made on the stage of decay of the bodies. Such information would assist in deciding what should be done with graves where ineffective degradation creates a problem for reusing graves following the standard resting time of 25 years.
Health insurance selection in Chile: a cross-sectional and panel analysis
Pardo, Cristian; Schott, Whitney
2014-01-01
In Chile, workers are mandated to choose either public or private health insurance coverage. Although private insurance premiums depend on health risk, public insurance premiums are solely linked to income. This structure implies that individuals with higher health risks may tend to avoid private insurance, leaving the public insurance system responsible for their care. This article attempts to explore the determinants of health insurance selection (private vs public) by individuals in Chile and to test empirically whether adverse selection indeed exists. We use panel data from Chile’s ‘Encuesta de Proteccion Social’ survey, which allows us to control for a rich set of individual observed and unobserved characteristics using both a cross-sectional analysis and fixed-effect methods. Results suggest that age, sex, job type, income quintile and self-reported health are the most important factors in explaining the type of insurance selected by individuals. Asymmetry in insurance mobility caused by restrictions on pre-existing conditions may explain why specific illnesses have an unambiguous relationship with insurance selection. Empirical evidence tends to indicate that some sorting by health risk and income levels takes place in Chile. In addition, by covering a less healthy population with higher utilization of general health consultations, the public insurance system may be incurring disproportionate expenses. Results suggest that if decreasing segmentation and unequal access to health services are important policy objectives, special emphasis should be placed on asymmetries in the premium structure and inter-system mobility within the health care system. Preliminary analysis of the impact of the ‘Garantias Explicitas de Salud’ plan (explicit guarantees on health care plan) on insurance selection is also considered. PMID:23558960
Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.; O'Brien, Grady M.
2004-01-01
We develop a new observation‐prediction (OPR) statistic for evaluating the importance of system state observations to model predictions. The OPR statistic measures the change in prediction uncertainty produced when an observation is added to or removed from an existing monitoring network, and it can be used to guide refinement and enhancement of the network. Prediction uncertainty is approximated using a first‐order second‐moment method. We apply the OPR statistic to a model of the Death Valley regional groundwater flow system (DVRFS) to evaluate the importance of existing and potential hydraulic head observations to predicted advective transport paths in the saturated zone underlying Yucca Mountain and underground testing areas on the Nevada Test Site. Important existing observations tend to be far from the predicted paths, and many unimportant observations are in areas of high observation density. These results can be used to select locations at which increased observation accuracy would be beneficial and locations that could be removed from the network. Important potential observations are mostly in areas of high hydraulic gradient far from the paths. Results for both existing and potential observations are related to the flow system dynamics and coarse parameter zonation in the DVRFS model. If system properties in different locations are as similar as the zonation assumes, then the OPR results illustrate a data collection opportunity whereby observations in distant, high‐gradient areas can provide information about properties in flatter‐gradient areas near the paths. If this similarity is suspect, then the analysis produces a different type of data collection opportunity involving testing of model assumptions critical to the OPR results.
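For a linearized model, the first-order second-moment prediction variance is g·(JᵀWJ)⁻¹·g, where J is the observation sensitivity (Jacobian) matrix, W the observation weights, and g the sensitivity of the prediction to the parameters. The OPR statistic is then the percent change in prediction standard deviation when an observation is dropped. The sketch below uses a hypothetical toy Jacobian, not the DVRFS model:

```python
import numpy as np

def prediction_variance(J, w, g):
    """First-order second-moment variance of a prediction with sensitivity g,
    given observation Jacobian J (n_obs x n_params) and weights w."""
    JtWJ = J.T @ (w[:, None] * J)
    return float(g @ np.linalg.solve(JtWJ, g))

def opr_remove(J, w, g, i):
    """Percent increase in prediction standard deviation when observation i is removed."""
    s_base = np.sqrt(prediction_variance(J, w, g))
    keep = np.ones(len(w), dtype=bool)
    keep[i] = False
    s_drop = np.sqrt(prediction_variance(J[keep], w[keep], g))
    return 100.0 * (s_drop - s_base) / s_base

rng = np.random.default_rng(1)
J = rng.normal(size=(8, 3))       # sensitivities of 8 hypothetical head observations to 3 parameters
w = np.ones(8)                    # unit observation weights
g = np.array([1.0, -0.5, 2.0])    # sensitivity of a predicted transport path to the parameters
opr = [opr_remove(J, w, g, i) for i in range(8)]
print(all(v >= -1e-9 for v in opr))   # to first order, removing data cannot decrease uncertainty
```

Ranking observations by this statistic is what identifies candidates for network refinement: large values mark observations worth keeping or measuring more accurately, near-zero values mark candidates for removal.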
Detecting subnetwork-level dynamic correlations.
Yan, Yan; Qiu, Shangzhao; Jin, Zhuxuan; Gong, Sihong; Bai, Yun; Lu, Jianwei; Yu, Tianwei
2017-01-15
The biological regulatory system is highly dynamic. The correlations between many functionally related genes change over different biological conditions. Finding dynamic relations on the existing biological network may reveal important regulatory mechanisms. Currently no method is available to detect subnetwork-level dynamic correlations systematically on the genome-scale network. Two major issues have hampered this development. The first is that gene expression profiling data usually do not contain time-course measurements to facilitate the analysis of dynamic relations; this can be partially addressed by using certain genes as indicators of biological conditions. Secondly, it is unclear how to effectively delineate subnetworks and define dynamic relations between them. Here we propose a new method named LANDD (Liquid Association for Network Dynamics Detection) to find subnetworks that show substantial dynamic correlations, as defined by subnetwork A being concentrated with Liquid Association scouting genes for subnetwork B. The method produces easily interpretable results because of its focus on subnetworks that tend to comprise functionally related genes. Also, the collective behaviour of genes in a subnetwork is a much more reliable indicator of underlying biological conditions compared to using single genes as indicators. We conducted extensive simulations to validate the method's ability to detect subnetwork-level dynamic correlations. Using a real gene expression dataset and the human protein-protein interaction network, we demonstrate that the method links subnetworks of distinct biological processes, with both confirmed relations and plausible new functional implications. We also found that signal transduction pathways tend to show extensive dynamic relations with other functional groups.
The R package is available at https://cran.r-project.org/web/packages/LANDD. Contacts: yunba@pcom.edu, jwlu33@hotmail.com or tianwei.yu@emory.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
The "Canadian" in Canadian Children's Literature.
ERIC Educational Resources Information Center
Bainbridge, Joyce; Wolodko, Brenda
2001-01-01
Notes that a rich body of Canadian children's literature exists that reflects the country's literary and socio-cultural values, beliefs, themes and images, including those of geography, history, language and identity. Discusses how Canadians tend to identify themselves first by region or province and then by nation. (SG)
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-01-01
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190
Ismaïl, Rached; Aviat, Florence; Michel, Valérie; Le Bayon, Isabelle; Gay-Perret, Perrine; Kutnik, Magdalena; Fédérighi, Michel
2013-01-01
Various types of surfaces are used today in the food industry, such as plastic, stainless steel, glass, and wood. These surfaces are subject to contamination by microorganisms responsible for the cross-contamination of food by contact with working surfaces. The HACCP-based processes are now widely used for the control of microbial hazards to prevent food safety issues. This preventive approach has resulted in the use of microbiological analyses of surfaces as one of the tools to control the hygiene of products. A method of recovering microorganisms from different solid surfaces is necessary as a means of health prevention. No regulation exists for surface microbial contamination, but food companies tend to establish technical specifications to add value to their products and limit contamination risks. The aim of this review is to present the most frequently used methods: swabbing, friction or scrubbing, printing, rinsing or immersion, sonication and scraping or grinding and describe their advantages and drawbacks. The choice of the recovery method has to be suitable for the type and size of the surface tested for microbiological analysis. Today, quick and cheap methods have to be standardized and especially easy to perform in the field. PMID:24240728
Microtubule defects influence kinesin-based transport in vitro.
NASA Astrophysics Data System (ADS)
Xu, Jing
Microtubules are protein polymers that form "molecular highways" for long-range transport within living cells. Molecular motors actively step along microtubules to shuttle cellular materials between the nucleus and the cell periphery; this transport is critical for the survival and health of all eukaryotic cells. Structural defects in microtubules exist, but whether these defects impact molecular motor-based transport remains unknown. Here we report an approach, to our knowledge new, that allowed us to directly investigate the impact of such defects. Using a modified optical-trapping method, we examined the group function of a major molecular motor, conventional kinesin, when transporting cargos along individual microtubules. We found that microtubule defects influence kinesin-based transport in vitro. The effects depend on motor number: cargos driven by a few motors tended to unbind prematurely from the microtubule, whereas cargos driven by more motors tended to pause. To our knowledge, our study provides the first direct link between microtubule defects and kinesin function. The effects uncovered in our study may have physiological relevance in vivo. Supported by UC Merced (to J.X.), NIH (NS048501 to S.J.K.), NSF (EF-1038697 to A.G.), and the James S. McDonnell Foundation (to A.G.). Work carried out at the Aspen Center for Physics was supported by NSF Grant PHY-1066293.
Two methods of Haustral fold detection from computed tomographic virtual colonoscopy images
NASA Astrophysics Data System (ADS)
Chowdhury, Ananda S.; Tan, Sovira; Yao, Jianhua; Linguraru, Marius G.; Summers, Ronald M.
2009-02-01
Virtual colonoscopy (VC) has gained popularity as a colon diagnostic method over the last decade. VC is a less invasive alternative to conventional optical colonoscopy for screening of colorectal polyps and cancer, the second leading cause of cancer-related deaths in industrialized nations. Haustral (colonic) folds serve as important landmarks for virtual endoscopic navigation in the existing computer-aided-diagnosis (CAD) system. In this paper, we propose and compare two different methods of haustral fold detection from volumetric computed tomographic virtual colonoscopy images. The colon lumen is segmented from the input using modified region growing and fuzzy connectedness. The first method for fold detection uses a level set that evolves on a mesh representation of the colon surface; the surface is obtained from the segmented colon lumen using the Marching Cubes algorithm. The second method, based on a combination of heat diffusion and the fuzzy c-means algorithm, operates on the segmented colon volume; folds obtained on the colon volume are then transferred to the corresponding colon surface. Experiments on several datasets yield promising results, and also demonstrate that the first method tends to slightly under-segment the folds while the second tends to slightly over-segment them.
NASA Astrophysics Data System (ADS)
Kahnert, Michael
2016-07-01
Numerical solution methods for electromagnetic scattering by non-spherical particles comprise a variety of different techniques, which can be traced back to different assumptions and solution strategies applied to the macroscopic Maxwell equations. One can distinguish between time- and frequency-domain methods; further, one can divide numerical techniques into finite-difference methods (which are based on approximating the differential operators), separation-of-variables methods (which are based on expanding the solution in a complete set of functions, thus approximating the fields), and volume integral-equation methods (which are usually solved by discretisation of the target volume and invoking the long-wave approximation in each volume cell). While existing reviews of the topic often tend to have a target audience of program developers and expert users, this tutorial review is intended to accommodate the needs of practitioners as well as novices to the field. The required conciseness is achieved by limiting the presentation to a selection of illustrative methods, and by omitting many technical details that are not essential at a first exposure to the subject. On the other hand, the theoretical basis of numerical methods is explained with few compromises in mathematical rigour; the rationale is that a good grasp of numerical light scattering methods is best achieved by understanding their foundation in Maxwell's theory.
Life-span perspective of personality in dementia.
Kolanowski, A M; Whall, A L
1996-01-01
PURPOSE: To propose an alternative view of personality change in dementia by presenting existing evidence for the continuity of personality. As the population continues to age, dementing illnesses will account for a greater proportion of morbidity and mortality; the care of these people will have a significant effect on the health care system. ORGANIZING CONSTRUCT: Life-span perspective of personality continuity. SCOPE/METHOD: Review of current literature on personality in dementia using Medline, 1980-1994; CINAHL, 1990-1994; and PsychLit, 1980-1994. FINDINGS: Although there are systematic shifts in personality with dementia, individuals tend to maintain their unique pattern of premorbid personality traits. The personalities of dementia patients seem to reflect adaptive patterns that served them in the past. CONCLUSIONS: Use of a life-span perspective can enhance individualized care for demented patients and advance theory development.
Stoller, Eleanor Palo; Webster, Noah J.; Blixen, Carol E.; McCormick, Richard A.; Hund, Andrew J.; Perzynski, Adam T.; Kanuch, Stephanie W.; Thomas, Charles L.; Kercher, Kyle; Dawson, Neal V.
2009-01-01
Most studies of decisions to curtail alcohol consumption reflect experiences of abusing drinkers. We employ an exploratory sequential research design to explore the applicability of this research to the experience of nonabusing drinkers advised to curtail alcohol consumption after a Hepatitis C diagnosis. A qualitative component identified 17 new decision factors not reflected in an inventory of factors based on synthesis of existing scales. We triangulated qualitative data by supplementing semi-structured interviews with Internet postings. A quantitative component estimated prevalence and association with current drinking of these new decision factors. Patients who quit drinking tended to attribute post-diagnosis drinking to occasional triggers, whereas patients who were still drinking were more likely to endorse rationales not tied to specific triggers. PMID:20046861
Some stylized facts of the Bitcoin market
NASA Astrophysics Data System (ADS)
Bariviera, Aurelio F.; Basgall, María José; Hasperué, Waldo; Naiouf, Marcelo
2017-10-01
In recent years a new type of tradable asset has appeared, generically known as cryptocurrencies. Among them, the most widespread is Bitcoin. Given its novelty, this paper investigates some statistical properties of the Bitcoin market. The study compares the dynamics of Bitcoin and standard currencies, focusing on the analysis of returns at different time scales. We test for the presence of long memory in return time series from 2011 to 2017, using transaction data from one Bitcoin platform. We compute the Hurst exponent by means of the Detrended Fluctuation Analysis method, using a sliding window in order to measure long-range dependence. We find that the Hurst exponent changed significantly during the first years of Bitcoin's existence, tending to stabilize in recent times. Additionally, multiscale analysis shows a similar behavior of the Hurst exponent, implying a self-similar process.
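The sliding-window DFA estimation described above can be sketched in a few lines. This is an illustrative implementation with made-up window sizes and synthetic data, not the authors' exact configuration or the Bitcoin series:

```python
import numpy as np

def hurst_dfa(series, scales=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent via Detrended Fluctuation Analysis:
    integrate the series, detrend it in windows of several sizes, and
    read H off the log-log slope of fluctuation vs. window size."""
    x = np.cumsum(series - np.mean(series))  # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(x) // s
        f2 = []
        t = np.arange(s)
        for i in range(n_seg):
            seg = x[i * s:(i + 1) * s]
            # detrend each window with a linear least-squares fit
            coeffs = np.polyfit(t, seg, 1)
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # F(s) ~ s^H, so H is the slope in log-log coordinates
    h, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return h

# white noise has no long memory, so H should come out near 0.5
rng = np.random.default_rng(0)
h_noise = hurst_dfa(rng.standard_normal(4096))
```

Applying this estimator over a sliding window of returns, as the paper does, then traces how H evolves through time.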
An Earth Orbiting Satellite Service and Repair Facility
NASA Technical Reports Server (NTRS)
Berndt, Andrew; Cardoza, Mike; Chen, John; Daley, Gunter; Frizzell, Andy; Linton, Richard; Rast, Wayne
1989-01-01
A conceptual design was produced for the Geosynchronous Satellite Servicing Platform (GSSP), an orbital facility capable of repairing and servicing satellites in geosynchronous orbit. The GSSP is a man-tended platform consisting of a habitation module, operations module, service bay, and truss assembly. This design review includes an analysis of life support systems, thermal and power requirements, robotic and automated systems, control methods and navigation, and communications systems. The GSSP will utilize existing technology available at the time of construction, focusing mainly on modifying and integrating existing systems. The entire facility, along with two satellite retrieval vehicles (SRVs), will be placed in geosynchronous orbit by the Advanced Launch System. The SRVs will be used to ferry satellites to and from the GSSP. Technicians will be transferred from Earth to the GSSP and back in an Apollo-derived Crew Transfer Capsule (CTC). These missions will use advanced telerobotic equipment to inspect and service satellites. Four missions are tentatively scheduled per year; at this rate, the GSSP will service over 650 satellites during its projected 25-year lifespan.
An exotic long-term pattern in stock price dynamics.
Wei, Jianrong; Huang, Jiping
2012-01-01
To accurately predict the movement of stock prices is always of both academic importance and practical value. A great deal of research has been reported to help understand the behavior of stock prices, yet some existing theories lead to the belief that stock price time series are unpredictable on a long-term timescale. The question arises whether long-term predictability exists in stock price dynamics. In this work, we analyze price reversals in the US and Chinese stock markets on the basis of a renormalization method. The price reversals are divided into two types: retracements (downward trends after upward trends) and rebounds (upward trends after downward trends), whose intensities are described by the dimensionless quantities R(t) and R(b), respectively. We reveal that for both mature and emerging markets, the distribution of either retracements R(t) or rebounds R(b) shows two characteristic values, 0.335 and 0.665, both of which are robust over the long term. The methodology presented here provides a way to quantify stock price reversals. Our findings strongly support the existence of long-term predictability in stock price dynamics, and may offer a hint on how to predict the long-term movement of stock prices.
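As a minimal sketch of the retracement quantity, one plausible dimensionless normalization is the fraction of a preceding rise that is given back by the following decline; the exact definition used in the paper may differ, and the prices below are invented:

```python
def retracement_ratio(low, high, pullback_low):
    """Dimensionless retracement intensity, sketched as the fraction of
    the preceding upward move (low -> high) that is given back by the
    subsequent decline (high -> pullback_low)."""
    return (high - pullback_low) / (high - low)

# a rise from 100 to 130 followed by a pullback to 120 gives back 1/3
r = retracement_ratio(100.0, 130.0, 120.0)
```

In the paper's framing, the empirical distribution of such ratios over many reversals clusters around the characteristic values 0.335 and 0.665.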
Drive in Living Matter to Perfect Itself
ERIC Educational Resources Information Center
Szent-Gyoergyi, Albert
1977-01-01
There is mounting evidence for the existence of the principle: syntropy--or "negative entropy"--through the influence of which forms tend to reach higher and higher levels of organization, order, and dynamic harmony. Presented at the Symposium on the Relationship between the Biological and Physical Sciences at Columbia University.…
"That's Not Quite the Way We See It": The Epistemological Challenge of Visual Data
ERIC Educational Resources Information Center
Wall, Kate; Higgins, Steve; Hall, Elaine; Woolner, Pam
2013-01-01
In research textbooks, and in much of the research practice they describe, qualitative processes and interpretivist epistemologies tend to dominate visual methodology. This article challenges the assumptions behind this dominance. Using exemplification from three existing visual data sets produced through one large education research project, this…
Enhancing Newspaper's Value as Local Advertising Medium.
ERIC Educational Resources Information Center
Prater, Bruce W.; And Others
1994-01-01
Finds that, to enhance the value of a newspaper's advertising, newspaper executives seek to broaden the scope of their operations by adding new subscribers and advertisers, while marketers tend to prefer narrowing the focus of operations, serving the existing customer base or concentrating exclusively on select market segments. (SR)
The Shadows of Difference: Ethnicity and Young Children's Friendships
ERIC Educational Resources Information Center
Barron, Ian
2011-01-01
This paper explores the interaction of ethnicity and friendship in a kindergarten in England. Existing literature from different traditions, such as developmental psychology, sociocultural theory and postmodernism, suggests that pre-school children tend to choose friends from the same ethnic group. The research was carried out using an…
Apprentissage naturel et apprentissage guide (Natural Learning and Guided Learning).
ERIC Educational Resources Information Center
Veronique, Daniel
1984-01-01
Although second language pedagogy has tended increasingly toward simulation, role-playing, and natural communication, it has not profited from existing research on natural learning in second languages. The emphasis should be on understanding how the processes of guided learning and natural learning differ, psychologically and sociologically, and…
The Advantages of Hierarchical Linear Modeling. ERIC/AE Digest.
ERIC Educational Resources Information Center
Osborne, Jason W.
This digest introduces hierarchical data structure, describes how hierarchical models work, and presents three approaches to analyzing hierarchical data. Hierarchical, or nested, data present several problems for analysis. People or creatures that exist within hierarchies tend to be more similar to each other than people randomly sampled from the…
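The clustering problem the digest describes can be made concrete with a small simulation: members of the same group share a common effect, so the intraclass correlation (ICC) is non-zero and ordinary regression's independence assumption fails. The variances and group sizes below are arbitrary illustrations, not from the digest:

```python
import numpy as np

# Simulate nested data: 50 groups of 20 people, each group sharing a
# random group effect on top of individual noise.
rng = np.random.default_rng(1)
n_groups, n_per = 50, 20
group_effect = rng.normal(0.0, 1.0, n_groups)  # between-group sd = 1
data = group_effect[:, None] + rng.normal(0.0, 1.0, (n_groups, n_per))

# One-way ANOVA estimate of the intraclass correlation ICC(1):
ms_between = data.mean(axis=1).var(ddof=1) * n_per  # mean square between
ms_within = np.mean(data.var(axis=1, ddof=1))       # mean square within
icc = (ms_between - ms_within) / (ms_between + (n_per - 1) * ms_within)
# true ICC here is 1 / (1 + 1) = 0.5, so the estimate should be close
```

A non-trivial ICC like this is exactly what makes single-level regression misstate standard errors and motivates hierarchical linear models.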
Detecting Item Drift in Large-Scale Testing
ERIC Educational Resources Information Center
Guo, Hongwen; Robin, Frederic; Dorans, Neil
2017-01-01
The early detection of item drift is an important issue for frequently administered testing programs because items are reused over time. Unfortunately, operational data tend to be very sparse and do not lend themselves to frequent monitoring analyses, particularly for on-demand testing. Building on existing residual analyses, the authors propose…
Rethinking Validation in Complex High-Stakes Assessment Contexts
ERIC Educational Resources Information Center
Koch, Martha J.; DeLuca, Christopher
2012-01-01
In this article we rethink validation within the complex contexts of high-stakes assessment. We begin by considering the utility of existing models for validation and argue that these models tend to overlook some of the complexities inherent to assessment use, including the multiple interpretations of assessment purposes and the potential…
Ecosystem services and amenities are undeniably valuable. However, their values are poorly recognized and, as a result, ecosystem services and amenities tend to be treated as though valueless. A need thus exists to increase both knowledge and recognition of these values. In the ...
The Skylab concentrated atmospheric radiation project
NASA Technical Reports Server (NTRS)
Kuhn, P. M.; Marlatt, W. E.; Whitehead, V. S. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Comparison of several existing infrared radiative transfer models under somewhat controlled conditions and with atmospheric observations of Skylab's S191 and S192 radiometers illustrated that the models tend to over-compute atmospheric attenuation in the window region of the atmospheric infrared spectra.
Convolutional networks for vehicle track segmentation
NASA Astrophysics Data System (ADS)
Quach, Tu-Thach
2017-10-01
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images of the same scene taken at different times, rely on simple and fast models to label track pixels. These models, however, are unable to capture natural track features, such as continuity and parallelism. More powerful but computationally expensive models can be used in offline settings. We present an approach that uses dilated convolutional networks consisting of a series of 3×3 convolutions to segment vehicle tracks. The design of our networks reflects the fact that remote sensing applications tend to operate with low power and limited training data. As a result, we aim for small and efficient networks that can be trained end-to-end to learn natural track features entirely from limited training data. We demonstrate that our six-layer network, trained on just 90 images, is computationally efficient and improves the F-score on a standard dataset to 0.992, up from 0.959 obtained by the current state-of-the-art method.
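A quick way to see why stacked dilated 3×3 convolutions suit long structures like tracks is to compute the network's receptive field: each layer with dilation d and kernel k adds d·(k−1) pixels. The dilation schedule below is a hypothetical example, not the paper's six-layer architecture:

```python
def receptive_field(dilations, kernel=3):
    """Receptive field of a stack of dilated convolutions with stride 1:
    each layer with dilation d adds d * (kernel - 1) to the field."""
    rf = 1
    for d in dilations:
        rf += d * (kernel - 1)
    return rf

# doubling dilations grow the field exponentially with depth,
# while plain convolutions grow it only linearly
rf_dilated = receptive_field([1, 2, 4, 8, 16, 32])  # -> 127 pixels
rf_plain = receptive_field([1, 1, 1, 1, 1, 1])      # -> 13 pixels
```

This is how a small, parameter-light network can still capture context such as continuity and parallelism across a wide neighborhood.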
Object-graphs for context-aware visual category discovery.
Lee, Yong Jae; Grauman, Kristen
2012-02-01
How can knowing about some categories help us to discover new ones in unlabeled images? Unsupervised visual category discovery is useful to mine for recurring objects without human supervision, but existing methods assume no prior information and thus tend to perform poorly for cluttered scenes with multiple objects. We propose to leverage knowledge about previously learned categories to enable more accurate discovery, and address challenges in estimating their familiarity in unsegmented, unlabeled images. We introduce two variants of a novel object-graph descriptor to encode the 2D and 3D spatial layout of object-level co-occurrence patterns relative to an unfamiliar region and show that by using them to model the interaction between an image’s known and unknown objects, we can better detect new visual categories. Rather than mine for all categories from scratch, our method identifies new objects while drawing on useful cues from familiar ones. We evaluate our approach on several benchmark data sets and demonstrate clear improvements in discovery over conventional purely appearance-based baselines.
Detecting and characterizing genomic signatures of positive selection in global populations.
Liu, Xuanyao; Ong, Rick Twee-Hee; Pillai, Esakimuthu Nisha; Elzein, Abier M; Small, Kerrin S; Clark, Taane G; Kwiatkowski, Dominic P; Teo, Yik-Ying
2013-06-06
Natural selection is a significant force that shapes the architecture of the human genome and introduces diversity across global populations. Whether advantageous mutations have arisen in the human genome through single or multiple mutation events remains unanswered, except for a handful of genes such as those that confer lactase persistence, affect skin pigmentation, or cause sickle cell anemia. We have developed a long-range-haplotype method for identifying genomic signatures of positive selection to complement existing methods, such as the integrated haplotype score (iHS) or cross-population extended haplotype homozygosity (XP-EHH), for locating signals across the entire allele frequency spectrum. Our method also locates the founder haplotypes that carry the advantageous variants and infers their corresponding population frequencies. This presents an opportunity to systematically interrogate, across the whole human genome, whether a selection signal shared across different populations is the consequence of a single mutation event followed by gene flow between populations, or of convergent evolution due to multiple independent mutation events either at the same variant or within the same gene. The application of our method to data from 14 populations across the world revealed that positive-selection events tend to cluster in populations of the same ancestry. Comparing the founder haplotypes for events that are present across different populations revealed that convergent evolution is a rare occurrence and that the majority of shared signals stem from the same evolutionary event. Copyright © 2013 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
A powerful score-based test statistic for detecting gene-gene co-association.
Xu, Jing; Yuan, Zhongshang; Ji, Jiadong; Zhang, Xiaoshuai; Li, Hongkai; Wu, Xuesen; Xue, Fuzhong; Liu, Yanxun
2016-01-29
The genetic variants identified by genome-wide association studies (GWAS) can only account for a small proportion of the total heritability of complex disease. The existence of gene-gene joint effects, which contain the main effects and their co-association, is one possible explanation for the "missing heritability" problem. Gene-gene co-association refers to the extent to which the joint effects of two genes differ from the main effects, due not only to the traditional interaction under a nearly independent condition but also to the correlation between genes. Generally, genes tend to work collaboratively within a specific pathway or network contributing to the disease, and specific disease-associated loci will often be highly correlated (e.g. single nucleotide polymorphisms (SNPs) in linkage disequilibrium). Therefore, we propose a novel score-based statistic (SBS) as a gene-based method for detecting gene-gene co-association. Various simulations illustrate that, under different sample sizes, marginal effects of causal SNPs, and co-association levels, the proposed SBS performs better than other existing methods, including single-SNP-based and principal component analysis (PCA)-based logistic regression models, the statistics based on canonical correlations (CCU), kernel canonical correlation analysis (KCCU), partial least squares path modeling (PLSPM), and the delta-square (δ²) statistic. A real data analysis of rheumatoid arthritis (RA) further confirmed its advantages in practice. SBS is a powerful and efficient gene-based method for detecting gene-gene co-association.
Anomalous change of Airy disk with changing size of spherical particles
NASA Astrophysics Data System (ADS)
Pan, Linchao; Zhang, Fugen; Meng, Rui; Xu, Jie; Zuo, Chenze; Ge, Baozhen
2016-02-01
Laser diffraction is considered a reliable and mature technique for measuring particle size distributions. It is generally accepted that, for a given relative refractive index, the size of the scattering pattern (the Airy disk) of spherical particles monotonically decreases with increasing particle size; this behavior forms the foundation of the laser diffraction method. Here we show that the Airy disk size of non-absorbing spherical particles becomes larger with increasing particle size in certain size ranges. To learn more about this anomalous change of Airy disk (ACAD), we present images of the Airy disk and curves of Airy disk size versus particle size for spherical particles of different relative refractive indices using Mie theory. These figures reveal that ACAD occurs periodically for non-absorbing particles and disappears when the absorbing efficiency is higher than a certain value. Then, using the geometrical optics (GO) approximation, we derive analytical formulae for the bounds of the size ranges where ACAD occurs. From the formulae, we obtain the following laws of ACAD: (1) for non-absorbing particles, ACAD occurs periodically, and as the particle size tends to infinity the period tends to a certain value. As the relative refractive index increases, (2) the particle size ranges where ACAD occurs shift to smaller values, (3) the period of ACAD becomes smaller, and (4) the width of the size ranges where ACAD occurs becomes narrower. In addition, we can predict from the formulae that ACAD also exists for particles whose relative refractive index is smaller than 1.
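The classical diffraction baseline that ACAD violates can be stated compactly: for a circular aperture of diameter D, the first dark ring of the Airy pattern in the focal plane sits at r = 1.22 λ f / D, so the disk shrinks monotonically as the particle grows. A sketch of that baseline (the parameters are illustrative, and this is the pure-diffraction result, not the Mie computation of the paper):

```python
def airy_disk_radius(wavelength, diameter, focal_length):
    """Radius of the first dark ring of the Airy pattern for a circular
    aperture: r = 1.22 * lambda * f / D (pure Fraunhofer diffraction)."""
    return 1.22 * wavelength * focal_length / diameter

# under pure diffraction, doubling the particle size halves the disk radius
r_small = airy_disk_radius(633e-9, 10e-6, 0.1)  # 10 um particle, HeNe laser
r_large = airy_disk_radius(633e-9, 20e-6, 0.1)  # 20 um particle
```

The paper's point is that for certain non-absorbing spheres the full Mie solution periodically breaks this monotonic shrinkage.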
NASA Astrophysics Data System (ADS)
Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.
2015-12-01
A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job) and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality in auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites.
Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.
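On the client side, the alien-cache arrangement described above is configured through a few CVMFS client parameters. The sketch below follows the documented alien-cache settings, but the mount path is hypothetical and sites should consult the CVMFS client documentation for their storage backend:

```shell
# /etc/cvmfs/default.local -- sketch of an alien-cache client setup
# (path is illustrative; it would point at the site's shared
# high-bandwidth filesystem, e.g. a Lustre or HDFS-FUSE mount)
CVMFS_ALIEN_CACHE=/mnt/shared-data/cvmfs-alien-cache
CVMFS_SHARED_CACHE=no   # alien cache requires the shared local cache off
CVMFS_QUOTA_LIMIT=-1    # cache cleanup is managed externally, not by cvmfs
```

With this in place, every worker node at the site reads through the same shared cache instead of duplicating files on local disks and squids.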
Fact and fictions in FX arbitrage processes
NASA Astrophysics Data System (ADS)
Cross, Rod; Kozyakin, Victor
2015-02-01
The efficient markets hypothesis (EMH) implies that arbitrage opportunities in markets such as those for foreign exchange (FX) would be, at most, short-lived. The present paper surveys the fragmented nature of FX markets, revealing that information in these markets is also likely to be fragmented. The "quant" workforce in the hedge fund featured in The Fear Index novel by Robert Harris would have little or no reason to exist in an EMH world. The four-currency combinatorial analysis of arbitrage sequences contained in [1] is then considered; its results suggest that arbitrage processes, rather than being self-extinguishing, tend to be periodic in nature. This helps explain the fact that arbitrage dealing tends to be endemic in FX markets.
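The kind of cyclical arbitrage sequence analyzed in [1] can be illustrated by multiplying quoted rates around a currency cycle; a product above 1 signals a riskless profit before transaction costs. The quotes below are invented, with a deliberate misalignment in the JPY→USD leg:

```python
def cycle_return(rates, cycle):
    """Gross return from converting one unit of the first currency all
    the way around the cycle and back, where rates[(a, b)] is the number
    of units of b received per unit of a."""
    factor = 1.0
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        factor *= rates[(a, b)]
    return factor

# Hypothetical quotes; the JPY->USD rate is set slightly too high, so the
# four-currency cycle ends with more than it started with.
rates = {
    ("USD", "EUR"): 0.90,
    ("EUR", "GBP"): 0.85,
    ("GBP", "JPY"): 190.0,
    ("JPY", "USD"): 0.0070,
}
factor = cycle_return(rates, ["USD", "EUR", "GBP", "JPY"])  # ~ 1.017
```

Under the EMH such a factor should be arbitraged back to 1 almost instantly; the paper's combinatorial analysis suggests the correction process is instead periodic.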
Mimvec: a deep learning approach for analyzing the human phenome.
Gan, Mingxin; Li, Wenran; Zeng, Wanwen; Wang, Xiaojian; Jiang, Rui
2017-09-21
The human phenome has been widely used with a variety of genomic data sources in the inference of disease genes. However, most existing methods derive phenotype similarity from biomedical databases using the traditional term frequency-inverse document frequency (TF-IDF) formulation. This framework, though intuitive, not only ignores semantic relationships between words but also tends to produce high-dimensional vectors, and hence lacks the ability to precisely capture intrinsic semantic characteristics of biomedical documents. To overcome these limitations, we propose a framework called mimvec to analyze the human phenome by making use of state-of-the-art deep learning techniques in natural language processing. We converted 24,061 records in the Online Mendelian Inheritance in Man (OMIM) database to low-dimensional vectors using our method. We demonstrated that the vector representation not only effectively enabled classification of phenotype records against gene ones, but also succeeded in discriminating diseases of different inheritance styles and different mechanisms. We further derived pairwise phenotype similarities between 7988 human inherited diseases using their vector representations. With a joint analysis of this phenome with multiple genomic data sources, we showed that phenotype overlap indeed implies genotype overlap. We finally used the derived phenotype similarities with genomic data to prioritize candidate genes and demonstrated advantages of this method over existing ones. Our method is capable of both capturing semantic relationships between words in biomedical records and alleviating the curse of dimensionality that accompanies the traditional TF-IDF framework. With the approach of precision medicine, there will be abundant electronic records of medicine and health awaiting deep analysis, and we expect to see a wide spectrum of applications borrowing the idea of our method in the near future.
Utilizing Expert Knowledge in Estimating Future STS Costs
NASA Technical Reports Server (NTRS)
Fortner, David B.; Ruiz-Torres, Alex J.
2004-01-01
A method of estimating the costs of future space transportation systems (STSs) involves classical activity-based cost (ABC) modeling combined with systematic use of the knowledge and opinions of experts to extend the process-flow knowledge of existing systems to systems that involve new materials and/or new architectures. The expert knowledge is particularly helpful in filling gaps that arise in computational models of processes because of inconsistencies in historical cost data. Heretofore, the costs of planned STSs have been estimated following a "top-down" approach that tends to force the architectures of new systems to incorporate process flows like those of the space shuttles. In this ABC-based method, one makes assumptions about the processes, but otherwise follows a "bottom-up" approach that does not force the new system architecture to incorporate a space-shuttle-like process flow. Prototype software has been developed to implement this method. Through further development of the software, it should be possible to extend the method beyond the space program to almost any setting in which there is a need to estimate the costs of a new system and to extend the applicable knowledge base in order to make the estimate.
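The bottom-up ABC idea reduces to summing rate × driver volume over the activities in a process flow, with expert judgment supplying rates where historical data are inconsistent. The activities and figures below are invented for illustration and are not actual STS cost data:

```python
# Minimal activity-based costing sketch: total cost is built bottom-up
# from each activity's cost-driver rate and driver volume, rather than
# scaled top-down from a legacy vehicle's total.
activities = {
    # activity: (cost per driver unit in $, driver volume per flow)
    "tile_inspection":  (500.0, 120),   # $/panel, panels inspected
    "engine_checkout":  (20000.0, 3),   # $/engine, engines checked
    "propellant_load":  (15000.0, 1),   # $/load, loads performed
}

total_cost = sum(rate * volume for rate, volume in activities.values())
```

Swapping in a new architecture then just means editing the activity list and its drivers, which is exactly the flexibility the top-down approach lacks.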
A Hybrid Approach to Data Assimilation for Reconstructing the Evolution of Mantle Dynamics
NASA Astrophysics Data System (ADS)
Zhou, Quan; Liu, Lijun
2017-11-01
Quantifying past mantle dynamic processes represents a major challenge in understanding the temporal evolution of the solid Earth. Mantle convection modeling with data assimilation is one of the most powerful tools for investigating the dynamics of plate subduction and mantle convection. Although various data assimilation methods, both forward and inverse, have been created, these methods all have limitations in their capability to represent the real Earth. Pure forward models tend to miss important mantle structures due to incorrect initial conditions and thus may lead to incorrect mantle evolution. In contrast, pure tomography-based models cannot effectively resolve fine slab structure and would fail to predict important subduction-zone dynamic processes. Here we propose a hybrid data assimilation approach that combines the unique power of the sequential and adjoint algorithms, which properly capture the detailed evolution of the downgoing slab and the tomographically constrained mantle structures, respectively. We apply this new method to reconstructing mantle dynamics below the western U.S. while considering large lateral viscosity variations. By comparing this result with those from several existing data assimilation methods, we demonstrate that the hybrid modeling approach best recovers the realistic 4-D mantle dynamics.
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between model-calculated and measured arrivals. The results of numerical examples and in situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
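The core idea, a continuous objective whose maximum marks the common intersection of the sensor-pair hyperboloids, can be sketched in a few lines. The geometry, wave speed, kernel, and error magnitude below are all invented, and the Gaussian-kernel objective is an illustrative stand-in for the published VFOM formulation. Note how a deliberately corrupted pick on one sensor degrades only the terms involving that sensor instead of wrecking a least-squares sum.

```python
import math
from itertools import combinations

V = 5000.0  # assumed wave speed, m/s
sensors = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0), (1000.0, 1000.0)]
true_src = (320.0, 540.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Measured arrivals: travel times plus one large picking error (LPE).
arrivals = [dist(true_src, s) / V for s in sensors]
arrivals[3] += 0.05  # gross picking error on sensor 3

def field(p, sigma=0.01):
    # Sum of Gaussian kernels of the pairwise time-difference residuals:
    # continuous everywhere, peaked where the sensor-pair hyperbolas
    # intersect, and a single bad pick only weakens a few terms.
    total = 0.0
    for i, j in combinations(range(len(sensors)), 2):
        model = (dist(p, sensors[i]) - dist(p, sensors[j])) / V
        meas = arrivals[i] - arrivals[j]
        total += math.exp(-((model - meas) / sigma) ** 2)
    return total

# Brute-force grid search for the field maximum.
best = max(((x, y) for x in range(0, 1001, 20) for y in range(0, 1001, 20)),
           key=field)
print(best)
```

Despite the 50 ms error on sensor 3, the three clean sensor pairs still dominate the field and the grid maximum lands on the true source.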
Azuaje, Francisco; Zheng, Huiru; Camargo, Anyela; Wang, Haiying
2011-08-01
The discovery of novel disease biomarkers is a crucial challenge for translational bioinformatics. Demonstrating both their classification power and their reproducibility across independent datasets is an essential requirement for assessing their potential clinical relevance. Small datasets and the multiplicity of putative biomarker sets may explain the lack of predictive reproducibility. Studies based on pathway-driven discovery approaches have suggested that, despite such discrepancies, the resulting putative biomarkers tend to be implicated in common biological processes. Investigations of this problem have mainly focused on datasets derived from cancer research. We investigated the predictive and functional concordance of five methods for discovering putative biomarkers in four independently generated datasets from the cardiovascular disease domain. A diversity of biosignatures was identified by the different methods. However, we found strong biological process concordance between them, especially in the case of methods based on gene set analysis. With a few exceptions, we observed a lack of classification reproducibility across independent datasets. Partial overlaps exist between our putative sets of biomarkers and those of the primary studies. Despite the observed limitations, pathway-driven or gene set analysis can predict potentially novel biomarkers and can jointly point to biomedically relevant underlying molecular mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Pfefer, Joshua; Agrawal, Anant
2012-03-01
In recent years there has been increasing interest in the development of consensus, tissue-phantom-based approaches for the assessment of biophotonic imaging systems, with the primary goal of facilitating clinical translation of novel optical technologies. Well-characterized test methods based on tissue phantoms can provide useful tools for performance assessment, thus enabling standardization and device inter-comparison during preclinical development as well as quality assurance and re-calibration in the clinical setting. In this review, we study the role of phantom-based test methods as described in consensus documents such as international standards for established imaging modalities including X-ray CT, MRI and ultrasound. Specifically, we focus on three image quality characteristics - spatial resolution, spatial measurement accuracy and image uniformity - and summarize the terminology, metrics, phantom design/construction approaches and measurement/analysis procedures used to assess these characteristics. The phantom approaches described are those in routine clinical use; they tend to have simplified morphology and biologically relevant physical parameters. Finally, we discuss the potential for applying knowledge gained from existing consensus documents in the development of standardized, phantom-based test methods for optical coherence tomography.
NASA Astrophysics Data System (ADS)
Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas
2018-02-01
Upscaling the permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and best fit the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when the multiple boundary method is used. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple-scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept. This makes it attractive for incorporation into any existing flow-based upscaling procedure, helping to reduce the uncertainty of groundwater models.
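For readers outside the upscaling literature, the basic object being computed, an equivalent block permeability that depends on flow direction, can be illustrated with the classic layered-medium bounds. This is a textbook sketch with invented layer values, not the multiple boundary method itself.

```python
# Single-block illustration of why upscaled (equivalent) permeability
# depends on flow direction, the basic fact any upscaling scheme
# (Oda, volume averaging, multiple boundary method) must reproduce.
# Layer permeabilities are invented values in millidarcy, equal thickness.
layers = [500.0, 5.0, 120.0, 0.5]

def k_parallel(ks):
    # Flow along the layering: fluxes add, so the arithmetic mean applies.
    return sum(ks) / len(ks)

def k_series(ks):
    # Flow across the layering: pressure drops add, so the harmonic mean
    # applies and the least-permeable layer dominates.
    return len(ks) / sum(1.0 / k for k in ks)

print(k_parallel(layers), k_series(layers))
```

The equivalent permeability of any heterogeneous block lies between these two bounds; strong directional contrast like this is why the equivalent permeability is a tensor, and why its diagonal components carry the distributions the abstract reports.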
New Trends in Television Consumption.
ERIC Educational Resources Information Center
Richeri, Giuseppe
A phenomenon which tends to transform the function and methods of traditional television consumption is the gradual reduction of its "mass" dimensions, which tend to disappear for an increasing share of the audience. This reduction of the mass dimension ranges from fragmentation of the audience to its segmentation, and, in the most…
Bayesian calibration for forensic age estimation.
Ferrante, Luigi; Skrami, Edlira; Gesuita, Rosaria; Cameriere, Roberto
2015-05-10
Forensic medicine is increasingly called upon to assess the age of individuals. Forensic age estimation is mostly required in relation to illegal immigration and the identification of bodies or skeletal remains. A variety of age estimation methods are based on dental samples and use regression models, in which the age of an individual is predicted from morphological tooth changes that take place over time. From the medico-legal point of view, regression models with age as the dependent random variable entail that age tends to be overestimated in the young and underestimated in the old. To overcome this bias, we describe a new full Bayesian calibration method (asymmetric Laplace Bayesian calibration) for forensic age estimation that uses the asymmetric Laplace distribution as the probability model. The method was compared with three existing approaches (two Bayesian and one classical method) using simulated data. Although its accuracy was comparable with that of the other methods, the asymmetric Laplace Bayesian calibration appears to be significantly more reliable and robust in the case of misspecification of the probability model. The proposed method was also applied to a real dataset of pulp chamber measurements of the right lower premolar taken from x-ray scans of individuals of known age. Copyright © 2015 John Wiley & Sons, Ltd.
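The calibration idea the abstract contrasts with regression can be sketched briefly: instead of regressing age on the tooth measurement (which pulls estimates toward the sample mean), model the measurement given age and invert with Bayes' rule over an age grid. A plain normal likelihood stands in here for the paper's asymmetric Laplace model, and the forward model and all numbers are invented for illustration.

```python
import math

def likelihood(x, age):
    # Hypothetical forward model: pulp-chamber ratio shrinks with age.
    mean = 0.9 - 0.01 * age
    sd = 0.05
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

ages = [a / 2 for a in range(20, 181)]   # age grid: 10 to 90 years
prior = [1.0 / len(ages)] * len(ages)    # flat prior over the grid

def posterior_mean(x):
    # Bayes' rule on the grid: weight each candidate age by how well it
    # explains the observed measurement, then average.
    w = [p * likelihood(x, a) for p, a in zip(prior, ages)]
    s = sum(w)
    return sum(wi * a for wi, a in zip(w, ages)) / s

# A measurement typical of a ~40-year-old under the forward model:
print(round(posterior_mean(0.5), 1))
```

Because the forward model, not the age distribution of the training sample, drives the estimate, the regression-to-the-mean bias (overestimating the young, underestimating the old) does not arise.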
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed to optimize classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite-sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the extended BIC (EBIC). The model selected with the CV-AUC criterion tends to have a larger predictive AUC and a smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
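The selection criterion itself is simple to state in code. Below, a pure-Python Mann-Whitney AUC is paired with a toy "model family" in which an invented noise level stands in for the MCP tuning parameter; the data and model are fabricated, and only the CV-AUC selection logic mirrors the abstract.

```python
import random

def auc(labels, scores):
    # Mann-Whitney formulation of AUC: the probability that a random
    # positive is scored above a random negative (ties count half).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
y = [random.randint(0, 1) for _ in range(200)]
signal = [yi + random.gauss(0, 1) for yi in y]

def cv_auc(noise_sd, folds=5):
    # Toy model family: scores are the signal corrupted by noise_sd,
    # which plays the role of a tuning parameter. Average the AUC over
    # held-out folds, exactly as the CV-AUC criterion prescribes.
    rng = random.Random(1)
    scores = [s + rng.gauss(0, noise_sd) for s in signal]
    n = len(y)
    aucs = []
    for k in range(folds):
        idx = [i for i in range(n) if i % folds == k]  # held-out fold
        aucs.append(auc([y[i] for i in idx], [scores[i] for i in idx]))
    return sum(aucs) / folds

# Pick the tuning value with the largest cross-validated AUC.
grid = [0.0, 0.5, 1.0, 2.0, 4.0]
best = max(grid, key=cv_auc)
print(best)
```

The same loop works unchanged when `cv_auc` refits a penalized model per fold; the criterion only needs held-out labels and scores.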
ERIC Educational Resources Information Center
Han, Chunyan; Wu, Zongjie
2015-01-01
The legitimate forms of knowledge recognised for language teachers in Chinese universities are differentiated by strong academic boundaries, subject only to disciplinary authority and fashion. Classroom practice on the whole tends not to be informed by research. For many language teachers, a disconnect exists between the academic discipline in…
School-Based Health Education Provision for Young People in Northern Ireland
ERIC Educational Resources Information Center
McAleavy, Gerry; McCrystal, Patrick
2007-01-01
In Northern Ireland, young people exist in a health environment where the experience of social disadvantage is translated into serious risks to health and personal development. The years of political conflict have tended to obscure these health problems, and it is important that the difficulties faced by young people are examined and…
Perceived Gender Based Stereotypes in Educational Technology Advertisements
ERIC Educational Resources Information Center
Bolliger, Doris U.
2008-01-01
Researchers point out gender differences in the adoption and use of technology. Men tend to be the early adopters of computer technologies, whereas women are thought of as laggards. Several writings exist that identified ads in the media as gender biased. Thomas and Treiber, who examined race, gender, and status in popular magazines, indicate that…
One Step Forward, Two Steps Back? Work Experience, Equal Opportunities and TVEI.
ERIC Educational Resources Information Center
Heath, Sue
1995-01-01
A case study of work experience provided in a British project committed to gender equality shows that the nature of work experience--its alliance with labor market needs--makes it virtually impossible to meet equal opportunity objectives. Work experience tends to reinforce existing gender divisions in the labor market. (SK)
NASA Technical Reports Server (NTRS)
Hermans, Thomas C. (Inventor); Wakeman, Thomas G. (Inventor); Hauser, Ambrose A. (Inventor)
1993-01-01
In one type of aircraft propulsion system, propeller blades are mounted on a ring which surrounds a turbine. An annular space exists between the turbine and the ring. If a propeller blade should break free, the unbalanced centrifugal load tends to deform the ring. The invention reduces the deformation, as by locating spacers between the turbine and the ring.
Appreciative Evaluation of Restorative Approaches in Schools
ERIC Educational Resources Information Center
Bevington, Terence J.
2015-01-01
A restorative approach to conflict is being increasingly applied in schools around the world. Existing evaluation evidence has tended to focus on the impact on quantifiable outcomes such as number of behaviour incidents and rates of attendance and exclusion. This case study aimed to broaden the evidence base to capture a richer picture of the…
ERIC Educational Resources Information Center
Lucek, Linda E.
Although the early settlements in cyberspace have tended to be male-dominated, diversity does exist on the Internet. In fact, a 1994-95 study revealed that women comprise 34% of Internet users. Feminism, as it came of age in the 1960s, often equated technoscience with the Vietnam War and with forces in opposition to nature and life. Postmodern…
The Financial Equity Debate. Pennsylvania Educational Policy Studies, Number 15.
ERIC Educational Resources Information Center
Cooley, William W.; Pomponio, Debra
Discussion of inequity in funding of Pennsylvania schools has tended to focus on differences between wealthy and poor school districts. In Pennsylvania, 180 school districts have filed a lawsuit challenging the constitutionality of the existing public school funding scheme. A study of the state's 500 school districts, grouped by market value of…
Symbolic Resources and Marketing Strategies in Ontario Higher Education: A Comparative Analysis
ERIC Educational Resources Information Center
Pizarro Milian, Roger; Davidson, Cliff
2018-01-01
Existing research on marketing within PSE tends to focus on homogeneous groups of high-status organisations. This study ameliorates this gap in the literature, conducting a comparative analysis of promotional materials produced by public universities and community colleges in Ontario, Canada. We find that these two groups draw on unique strategies…
Diversity in American Higher Education: Toward a More Comprehensive Approach
ERIC Educational Resources Information Center
Stulberg, Lisa M., Ed.; Weinberg, Sharon Lawner, Ed.
2011-01-01
Diversity has been a focus of higher education policy, law, and scholarship for decades, continually expanding to include not only race, ethnicity and gender, but also socioeconomic status, sexual and political orientation, and more. However, existing collections still tend to focus on a narrow definition of diversity in education, or in relation…
Writing Histories of Disability in India: Strategies of Inclusion
ERIC Educational Resources Information Center
Buckingham, Jane
2011-01-01
Existing historical understandings of disability are dominated by European and American experience and tend to assume Judeo-Christian ideas of stigma and exclusion are universal norms. This paper emphasises the unique experience of disability in India and the role of poverty, gender, caste and community in compounding the marginalisation felt by…
Quality and Equality in Internet-Based Higher Education: Benchmarks for Success.
ERIC Educational Resources Information Center
Merisotis, Jamie P.
The Institute for Higher Education Policy reviewed the research on quality and equality in Internet-based higher education and found a relative paucity of original research dedicated to explaining or predicting phenomena related to distance learning. The research that does exist has tended to emphasize student outcomes for individual courses,…
Enzyme processes for pulp and paper : a review of recent developments
William R. Kenealy; Thomas W. Jeffries
2003-01-01
The pulp and paper industry is applying new, ecologically sound technology in its manufacturing processes. Many interesting enzymatic applications have been proposed in the literature. Implemented technologies tend to change the existing industrial process as little as possible. Commercial applications include xylanases in prebleaching kraft pulps and various enzymes...
Communicating about Matter with Symbols: Evolving from Alchemy to Chemistry
ERIC Educational Resources Information Center
Fabbrizzi, Luigi
2008-01-01
Modern chemists know that alchemists were their historical predecessors, yet they are not proud of this relationship, which chemists today tend to hide or forget. However, no discontinuity exists between alchemy and chemistry and we still use laboratory techniques that were invented by alchemists hundreds or thousands of years ago. Alchemists used…
Revisiting Jack Goody to Rethink Determinisms in Literacy Studies
ERIC Educational Resources Information Center
Collin, Ross
2013-01-01
This article revisits Goody's arguments about literacy's influence on social arrangements, culture, cognition, economics, and other domains of existence. Whereas some of his arguments tend toward technological determinism (i.e., literacy causes change in the world), other of his arguments construe literacy as a force that shapes and is shaped by…
Student and Instructor Use of Comments on Business Communication Papers.
ERIC Educational Resources Information Center
Winter, Janet K.; Neal, Joan C.; Waner, Karen K.
1996-01-01
Surveys college students regarding their use of instructor comments written on their papers. Finds all students tend to use comments; no significant correlations exist between students' ability levels and their propensity to review, understand, and use comments; students were likely to review comments if they had to rewrite assignments; and…
African-American Women and Dissertation Chairs: Portraits of Successful Advising Relationships
ERIC Educational Resources Information Center
Kohlman, Antoinette
2013-01-01
By focusing on the problem of graduate student persistence, researchers have tended to either discount or ignore the impact and value of advising relationships as a context for the successful completion of a doctoral program. Little information exists regarding the advising experiences and relationships between African-American female doctoral…
Is there a role for termite alates in colony expansion in Wisconsin?
Frederick Green III; Rachel A. Arango; Glenn R. Esenther; Thomas G. Shelton
2014-01-01
Termite colonies in Wisconsin tend to be large, widely spread out geographically, and separated by distances of up to 1342 km. We recently completed a study to determine the genetic diversity and population substructure of thirteen existing colonies of Reticulitermes flavipes using amplified fragment length polymorphism to determine patterns of...
Geostatistics as a tool to define various categories of resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabourin, R.
1983-02-01
Definitions of 'measured' and 'indicated' resources tend to be vague. Yet the calculation of such categories of resources in a mineral deposit calls for specific technical criteria. The author discusses how a geostatistical methodology provides the technical criteria required to classify reasonably assured resources by level of assurance of their existence.
Integrating Computer- and Teacher-Based Scaffolds in Science Inquiry
ERIC Educational Resources Information Center
Wu, Hui-Ling; Pedersen, Susan
2011-01-01
Because scaffolding is a crucial form of support for students engaging in complex learning environments, it is important that researchers determine which of the numerous kinds of scaffolding will allow them to educate students most effectively. The existing literature tends to focus on computer-based scaffolding by itself rather than integrating…
Degree Mobility from the Nordic Countries: Background and Employability
ERIC Educational Resources Information Center
Wiers-Jenssen, Jannecke
2013-01-01
Full-degree mobility from Western countries is a topic that has been little researched. Existing literature tends to be normative; mobility is seen as an advantage per se. In this article it is questioned whether mobility is an advantage when investigating degree mobility and employability of students from the Nordic countries. Results show that…
Impact of Incentives on the Use of Feedback in Educational Videogames. CRESST Report 813
ERIC Educational Resources Information Center
Delacruz, Girlie C.
2012-01-01
Educational videogames can be designed to provide instructional feedback that is responsive to specific actions. However, existing research indicates that students tend to ignore videogame feedback and subsequently use less effective help-seeking strategies. Research on help-seeking in learning environments has primarily focused on the role of…
The Adaptation of Migrant Children
ERIC Educational Resources Information Center
Portes, Alejandro; Rivas, Alejandro
2011-01-01
Alejandro Portes and Alejandro Rivas examine how young immigrants are adapting to life in the United States. They begin by noting the existence of two distinct pan-ethnic populations: Asian Americans, who tend to be the offspring of high-human-capital migrants, and Hispanics, many of whose parents are manual workers. Vast differences in each, both…
A Comparative Study of Parental and Filial Role Definitions
ERIC Educational Resources Information Center
Safilios Rothschild, Constantina; Georgiopoulos, John
1970-01-01
Data analysis for the American and Greek cultures indicates the following trends: (1) parents of both sexes tend to define roles in terms of instrumental and expressive components, suggesting that the Parsonian typology, if valid at all, applies more to the lower and working classes; (2) no significant social differences exist; and (3) family modernization along…
NASA Astrophysics Data System (ADS)
Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti
2016-04-01
Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud is not sufficiently dense. These limits can be overcome by deformation analyses that work directly on the original 3D point clouds under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds were acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good first-order estimation of the displacement fields, but it shows limitations such as the inability to track some deformation patterns, and its perspective projection does not maintain the original angles and distances in the correlated images.
Results obtained with 3D point cloud comparison algorithms (C2C, ICP, M3C2) bring additional information on the displacement fields. Displacement fields derived from the two approaches are then combined to provide a better understanding of the landslide kinematics.
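The simplest of the point-cloud comparison algorithms mentioned, cloud-to-cloud (C2C) distance, can be sketched directly. The terrain patch and the 0.3 m shift below are invented, and real TLS clouds would need a spatial index (k-d tree or octree) rather than this brute-force nearest-neighbour search. The result also illustrates why C2C alone underestimates displacement: distances snap to the closest surviving point, not to the true homologous point.

```python
import math

def c2c(reference, moved):
    # For each point of the later scan, the distance to its nearest
    # neighbour in the reference scan (brute force, fine for toy clouds).
    return [min(math.dist(p, q) for q in reference) for p in moved]

# A flat 5 m x 5 m patch sampled every 0.5 m, then the same patch
# shifted 0.3 m downslope along x.
reference = [(x * 0.5, y * 0.5, 0.0) for x in range(10) for y in range(10)]
moved = [(x + 0.3, y, z) for x, y, z in reference]

d = c2c(reference, moved)
print(round(min(d), 3), round(max(d), 3))
```

Most nearest-neighbour distances come out as 0.2 m rather than the true 0.3 m displacement, because the shifted points snap to the next grid point; this is one reason C2C results are best combined with feature tracking or ICP, as done above.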
NASA Astrophysics Data System (ADS)
Ree, J. H.; Kim, S.; Yoon, H. S.; Choi, B. K.; Park, P. H.
2017-12-01
The GPS-determined pre-, co- and post-seismic crustal deformations of the Korean peninsula with respect to the 2011 Tohoku-Oki earthquake (Baek et al., 2012, Terra Nova; Kim et al., 2015, KSCE Jour. of Civil Engineering) are all stretching (extensional; horizontal stretching rate larger than horizontal shortening rate). However, focal mechanism solutions of earthquakes indicate that South Korea has been in a compressional regime dominated by strike-slip and reverse-slip faulting. We reevaluated the velocity field of the GPS data to assess the effect of the Tohoku-Oki earthquake on Korean crustal deformation and seismicity. To calculate the velocity gradient tensor at the GPS sites, we used a gridding method based on least-square collocation (LSC). The LSC method overcomes shortcomings of segmentation methods, including the triangulation method; for example, in segmentation methods an undesirable, abrupt change in the components of the velocity field occurs at segment boundaries. The LSC method is also known to be more useful for evaluating deformation patterns in intraplate areas with relatively small displacements. Velocity vectors of South Korea, pointing in general toward 113° before the Tohoku-Oki earthquake, instantly changed direction toward the epicenter (82° on average) during the earthquake, and then gradually returned to the original direction about 2 years afterwards. Our calculation of velocity gradient tensors after the Tohoku-Oki earthquake shows that the stretching and rotating fields are quite heterogeneous, and that both stretching and shortening areas exist in South Korea. In particular, after the post-seismic relaxation ceased (i.e., from two years after the Tohoku-Oki earthquake), regions with thicker and thinner crusts tend to be shortening and stretching, respectively. Furthermore, the straining rate is larger in the regions with thinner crust.
Although there is no meaningful correlation between seismicity and crustal straining pattern of South Korea at present, the seismicity tends to be localized along boundaries between areas with opposite vorticity, particularly for velocity field for one year after the Tohoku-Oki earthquake.
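The quantities discussed above (stretching, shortening, vorticity) all derive from the velocity gradient tensor. As a minimal illustration, with invented station coordinates and velocities and an exact three-station fit in place of the paper's least-square collocation, one can recover the 2-D gradient and split it into strain rate and rotation:

```python
def gradient_from_three(stations, velocities):
    # Fit v = v0 + L x exactly through three stations by solving the
    # 2x2 linear system for each velocity component (Cramer's rule).
    (x0, y0), (x1, y1), (x2, y2) = stations
    dx1, dy1 = x1 - x0, y1 - y0
    dx2, dy2 = x2 - x0, y2 - y0
    det = dx1 * dy2 - dx2 * dy1
    L = []
    for comp in range(2):  # east component, then north component
        dv1 = velocities[1][comp] - velocities[0][comp]
        dv2 = velocities[2][comp] - velocities[0][comp]
        dvdx = (dv1 * dy2 - dv2 * dy1) / det
        dvdy = (dx1 * dv2 - dx2 * dv1) / det
        L.append([dvdx, dvdy])
    return L

# Invented triangle of stations (km) and velocities (mm/yr).
stations = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0)]
velocities = [(3.0, 1.0), (3.5, 1.0), (3.0, 0.6)]

L = gradient_from_three(stations, velocities)
strain_xx = L[0][0]                    # positive: stretching (mm/yr per km)
strain_yy = L[1][1]                    # negative: shortening
vorticity = 0.5 * (L[1][0] - L[0][1])  # rigid-body rotation rate
print(strain_xx, strain_yy, vorticity)
```

The symmetric part of L gives the stretching and shortening rates, and its antisymmetric part gives the vorticity whose sign boundaries the seismicity is said to follow; LSC generalises this local fit to many stations with spatial covariances instead of an exact triangle.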
The dynamical properties of a Rydberg hydrogen atom between two parallel metal surfaces
NASA Astrophysics Data System (ADS)
Liu, Wei; Li, Hong-Yun; Yang, Shan-Ying; Lin, Sheng-Lu
2011-03-01
This paper presents the dynamical properties of a Rydberg hydrogen atom between two metal surfaces using phase space analysis methods. The dynamical behaviour of the excited hydrogen atom depends sensitively on the atom-surface distance d. There exists a critical atom-surface distance dc = 1586 a.u. When the atom-surface distance d is larger than the critical distance dc, the image charge potential is less important than the Coulomb potential, the system is near-integrable and the electron motion is regular. As the distance d decreases, the system will tend to be non-integrable and unstable, and the electron might be captured by the metal surfaces. Project supported by the National Natural Science Foundation of China (Grant No. 10774093) and the Natural Science Foundation of Shandong Province (Grant No. ZR2009FZ006).
Impact of prepackaging antimalarial drugs on cost to patients and compliance with treatment.
Yeboah-Antwi, K.; Gyapong, J. O.; Asare, I. K.; Barnish, G.; Evans, D. B.; Adjei, S.
2001-01-01
OBJECTIVE: To examine the extent to which district health teams could reduce the burden of malaria, a continuing major cause of mortality and morbidity, in a situation where severe resource constraints existed and integrated care was provided. METHODS: Antimalarial drugs were prepackaged into unit doses in an attempt to improve compliance with full courses of chemotherapy. FINDINGS: Compliance improved by approximately 20% in both adults and children. There were 50% reductions in cost to patients, waiting time at dispensaries and drug wastage at facilities. The intervention, which tended to improve both case and drug management at facilities, was well accepted by health staff and did not involve them in additional working time. CONCLUSION: The prepackaging of antimalarials at the district level offers the prospect of improved compliance and a reduction in the spread of resistance. PMID:11417034
Ancient Cosmology, superfine structure of the Universe and Anthropological Principle
NASA Astrophysics Data System (ADS)
Arakelyan, Hrant; Vardanyan, Susan
2015-07-01
Modern cosmology, in its spirit and in its conception of the Big Bang, is closer to ancient cosmology than to the cosmological paradigm of the nineteenth century. Repeating the speculations of the ancients, but at the same time using subtle mathematical methods and relying on steadily accumulating empirical material, modern theory tends toward a quantitative description of nature, in which the numerical ratios between the physical constants play an increasing role. Detailed analysis of the influence of the numerical values of physical quantities on the physical state of the universe has revealed remarkable relations called fine and hyperfine tuning. To explain why the observable universe corresponds to a particular set of interrelated fundamental parameters, a speculative anthropic principle was proposed, which centers on the fact of the existence of sentient beings.
Influence of silica nanospheres on corrosion behavior of magnesium matrix syntactic foam
NASA Astrophysics Data System (ADS)
Qureshi, W.; Kannan, S.; Vincent, S.; Eddine, N. N.; Muhammed, A.; Gupta, M.; Karthikeyan, R.; Badari, V.
2018-04-01
Over the years, the development of magnesium alloys as biodegradable implants has seen significant advances. Magnesium-based materials tend to provide numerous advantages for biomedical implants over existing materials such as titanium or stainless steel. The present research focuses on the corrosion behavior of magnesium reinforced with different volume percentages of hollow silica nanospheres (HSNS). This behavior was tested in two simulated body fluids (SBF), namely Hanks' Balanced Salt Solution (HBSS) and Phosphate Buffered Saline (PBS). The corrosion study used the method of electrochemical polarization with a three-electrode configuration. Comparative studies were established by testing pure Mg, which provided critical information on the effects of the reinforcing material. The HSNS-reinforced Mg displayed desirable characteristics in the corrosion experiments; increased corrosion resistance was observed with higher volume percentages of HSNS.
Raghava, Smita; Barua, Bipasha; Singh, Pradeep K.; Das, Mili; Madan, Lalima; Bhattacharyya, Sanchari; Bajaj, Kanika; Gopal, B.; Varadarajan, Raghavan; Gupta, Munishwar N.
2008-01-01
Many recombinant eukaryotic proteins tend to form insoluble aggregates called inclusion bodies, especially when expressed in Escherichia coli. We report the first application of the technique of three-phase partitioning (TPP) to obtain correctly refolded active proteins from solubilized inclusion bodies. TPP was used for refolding 12 different proteins overexpressed in E. coli. In each case, the protein refolded by TPP gave either higher refolding yield than the earlier reported method or succeeded where earlier efforts have failed. TPP-refolded proteins were characterized and compared to conventionally purified proteins in terms of their spectral characteristics and/or biological activity. The methodology is scaleable and parallelizable and does not require subsequent concentration steps. This approach may serve as a useful complement to existing refolding strategies of diverse proteins from inclusion bodies. PMID:18780821
Investigation of the Helmholtz-Kohlrausch effect using wide-gamut display
NASA Astrophysics Data System (ADS)
Oh, Semin; Kwak, Youngshin
2015-01-01
The aim of this study is to investigate whether the Helmholtz-Kohlrausch effect exists among images having various luminance and chroma levels. First, five images were selected. Each image was then adjusted to have four different average CIECAM02 C levels and five different average CIECAM02 J levels, generating 20 test images from each original image for the psychophysical experiment. The psychophysical experiment was done in a dark room using an LCD display. To evaluate the overall perceived brightness of the images, a magnitude estimation method was used. Fifteen participants evaluated the brightness of each image by comparing it with the reference image. Participants tended to rate brightness higher as the average CIECAM02 J and CIECAM02 C of the image increased, confirming the Helmholtz-Kohlrausch effect in images.
"The Door Opens and the Tiger Leaps": Theory and Method in Comparative Education in the Global Era.
ERIC Educational Resources Information Center
Marginson, Simon; Mollis, Marcela
The field of international comparative education is constructed by relations of power and conflict. Comparative education contains an intrinsic tension between "sameness" and "difference." The dominant approach tends toward sameness and the elimination of variation, while one critique of the dominant approach tends toward an…
McClellan, Frank M; White, Augustus A; Jimenez, Ramon L; Fahmy, Sherin
2012-05-01
There is a perception that socioeconomically disadvantaged patients tend to sue their doctors more frequently. As a result, some physicians may be reluctant to treat poor patients or may treat such patients differently from other patient groups in terms of the medical care provided. We (1) examined the existing literature to refute the notion that poor patients are inclined to sue doctors more than other patients, (2) explored unconscious bias as an explanation for why the perception of the poor as more litigious may persist despite evidence to the contrary, and (3) assessed the role of culturally competent awareness and knowledge in confronting physician bias. We reviewed the medical and social literature to identify studies that have examined differences in litigation rates and related medical malpractice claims between socioeconomically disadvantaged patients and other groups of patients. Contrary to popular perception, existing studies show that poor patients in fact tend to sue physicians less often. This may be related to a relative lack of access to legal resources and to the nature of the contingency fee system in medical malpractice claims. Misperceptions such as the one examined in this article, which assume a relationship between patient poverty and medical malpractice litigation, may arise from unconscious physician bias and other social variables. Cultural competency can be helpful in mitigating such bias, improving medical care, and addressing the risk of medical malpractice claims.
Anota, Amélie; Barbieri, Antoine; Savina, Marion; Pam, Alhousseiny; Gourgou-Bourgade, Sophie; Bonnetain, Franck; Bascoul-Mollevi, Caroline
2014-12-31
Health-Related Quality of Life (HRQoL) is an important endpoint in oncology clinical trials aiming to investigate the clinical benefit of new therapeutic strategies for the patient. However, the longitudinal analysis of HRQoL remains complex and unstandardized, and there is a clear need for accessible statistical methods that yield meaningful results for clinicians. The objective of this study was to compare three strategies for the longitudinal analysis of HRQoL data in oncology clinical trials through a simulation study. The methods proposed were the score and mixed model (SM); a survival analysis approach based on the time to HRQoL score deterioration (TTD); and the longitudinal partial credit model (LPCM). Simulations compared the methods in terms of type I error and statistical power of the test of an interaction effect between treatment arm and time. Several simulation scenarios were explored based on the EORTC HRQoL questionnaires, varying the number of patients (100, 200 or 300), items (1, 2 or 4) and response categories per item (4 or 7). Five or 10 measurement times were considered, with correlations between measures ranging from low to high. The impact of informative missing data on these methods was also studied, to reflect the reality of most clinical trials. With complete data, the type I error rate was close to the expected value (5%) for all methods, while the SM method was the most powerful, followed by LPCM. The power of TTD was low for single-item dimensions, because only four possible values exist for the score. As the number of items increased, the power of the SM approach remained stable, that of the TTD method increased, and that of the LPCM remained stable. With 10 measurement times, the LPCM was less efficient. With informative missing data, the statistical power of SM and TTD tended to decrease, while that of LPCM tended to increase.
To conclude, the SM model was the most powerful model, irrespective of the scenario considered, and the presence or not of missing data. The TTD method should be avoided for single-item dimensions of the EORTC questionnaire. While the LPCM model was more adapted to this kind of data, it was less efficient than the SM model. These results warrant validation through comparisons on real data.
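The type I error comparison described in the study above can be sketched in miniature. The toy simulation below is only an illustration, not the authors' code: a two-sample t-test on per-patient least-squares slopes stands in for the mixed-model interaction test, and all parameter values (score scale, variances, sample sizes) are assumptions chosen for the sketch.

```python
import numpy as np
from scipy import stats

def simulate_type1_error(n_patients=100, n_times=5, n_sims=300, seed=1):
    """Estimate the type I error of an arm-by-time interaction test.

    Under H0 both arms share the same time trend; the test compares
    per-patient least-squares slopes between arms with a t-test.
    """
    rng = np.random.default_rng(seed)
    times = np.arange(n_times)
    rejections = 0
    for _ in range(n_sims):
        arm = rng.integers(0, 2, n_patients)
        # random intercept per patient + common time trend + noise
        intercepts = 50 + rng.normal(0, 8, n_patients)
        slopes = np.empty(n_patients)
        for i in range(n_patients):
            y = intercepts[i] + 1.0 * times + rng.normal(0, 10, n_times)
            slopes[i] = np.polyfit(times, y, 1)[0]  # fitted slope
        _, p = stats.ttest_ind(slopes[arm == 0], slopes[arm == 1])
        rejections += p < 0.05
    return rejections / n_sims
```

With data simulated under the null, the rejection rate should hover near the nominal 5%, which is the property the study checks for each of the SM, TTD and LPCM approaches.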
Cosmology with nonminimal kinetic coupling and a Higgs-like potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsumoto, Jiro; Sushkov, Sergey V., E-mail: jmatsumoto@kpfu.ru, E-mail: sergey_sushkov@mail.ru
2015-11-01
We consider cosmological dynamics in the theory of gravity with a scalar field possessing a nonminimal kinetic coupling to curvature, given as κG^{μν}φ_{,μ}φ_{,ν}, and a Higgs-like potential. Using the dynamical system method, we analyze stationary points, their stability, and all possible asymptotic regimes of the model under consideration. We show that the Higgs field with the kinetic coupling provides for the existence of accelerated regimes of the Universe's evolution. There are three possible cosmological scenarios with acceleration: (i) the late-time de Sitter epoch, when the Hubble parameter tends to a constant value as t → ∞ while the scalar field tends to zero, φ(t) → 0, so that the Higgs potential reaches its local maximum; (ii) the Big Rip, when H(t) ∼ (t_* − t)^{−1} → ∞ and φ(t) ∼ (t_* − t)^{−2} → ∞ as t → t_*; (iii) the Little Rip, when H(t) ∼ t^{1/2} → ∞ and φ(t) ∼ t^{1/4} → ∞ as t → ∞. Also, we derive modified slow-roll conditions for the Higgs field and demonstrate that they lead to the Little Rip scenario.
Brown, Patrick; Elston, Mary Ann; Gabe, Jonathan
2015-12-01
This article contributes to sociological debates about trends in the power and status of medical professionals, focussing on claims that deferent patient relations are giving way to a more challenging consumerism. Analysing data from a mixed methods study involving general practitioners in England, we found some support for the idea that an apparent 'golden age' of patient deference is receding. Although not necessarily expressing nostalgia for such doctor-patient relationships, most GPs described experiencing disruptive or verbally abusive interactions at least occasionally and suggested that these were becoming more common. Younger doctors tended to rate patients as less respectful than their older colleagues did, but were also more likely to be egalitarian in attitude. Our data suggest that GPs, especially younger ones, tend towards a more informal yet limited engagement with their patients and with the communities in which they work. These new relations might be a basis for mutual respect between professionals and patients in the consulting room, but may also generate uncertainty and misunderstanding. Such shifts are understood through an Eliasian framework as the functional democratisation of patient-doctor relations via civilising processes, with this shift existing alongside decivilising tendencies involving growing social distance across broader social figurations.
Obesity and industry self-regulation of food and beverage marketing: a literature review.
Ronit, K; Jensen, J D
2014-07-01
Obesity is a growing concern at national and international levels, and it is increasingly recognised that industry has a role in, and hence needs to be involved in, halting the obesity epidemic. The objective of this study is to describe, analyse and evaluate research on industry self-regulation regarding food and beverage marketing and nutrition labelling. Five databases were searched for combinations of the search terms (obesity, nutrition, food, beverages, industry, self-regulation, labelling, advertising and marketing), and papers were selected on the basis of their titles and, subsequently, their abstracts. Of the 4978 identified publications, 22 were included in the final review. The studies show that commitments in industry self-regulation schemes tend to be relatively vague and permissive, that the measurable effects of self-regulation tend to be relatively small, and that some extent of public regulation may catalyse the effectiveness of industry self-regulation. Although the reviewed studies vary in analytic units and methods applied, they generally stress the ineffectiveness of existing self-regulation schemes. Food industry self-regulation in relation to obesity prevention is an emerging field of research, and further research is needed on such schemes' definitions of regulatory standards, their monitoring and sanctioning mechanisms, and their interactions with public regulation, if industry self-regulation of marketing behaviour is to become an effective and credible approach.
Dual diagnosis clients' treatment satisfaction - a systematic review
2011-01-01
Background The aim of this systematic review is to synthesize existing evidence about treatment satisfaction among clients with substance misuse and mental health co-morbidity (dual diagnoses, DD). Methods We examined satisfaction with treatment received, variations in satisfaction levels by type of treatment intervention and by diagnosis (i.e. DD clients vs. single diagnosis clients), and the influence of factors other than treatment type on satisfaction. Peer-reviewed studies published in English since 1970 were identified by searching electronic databases using pre-defined search strings. Results Across the 27 studies that met inclusion criteria, high average satisfaction scores were found. In most studies, integrated DD treatment yielded greater client satisfaction than standard treatment without explicit DD focus. In standard treatment without DD focus, DD clients tended to be less satisfied than single diagnosis clients. Whilst the evidence base on client and treatment variables related to satisfaction is small, it suggested client demographics and symptom severity to be unrelated to treatment satisfaction. However, satisfaction tended to be linked to other treatment process and outcome variables. Findings are limited in that many studies had very small sample sizes, did not use validated satisfaction instruments and may not have controlled for potential confounders. A framework for further research in this important area is discussed. Conclusions High satisfaction levels with current treatment provision, especially among those in integrated treatment, should enhance therapeutic optimism among practitioners dealing with DD clients. PMID:21501510
Krentel, Alison; Aunger, Robert
2012-08-01
Many public health programmes require individuals to comply with behaviours that are novel to them, for example, acquiring new eating habits, accepting immunizations or taking a new medication. In particular, mass drug administration programmes only work to reduce the prevalence of a disease if significant proportions of the target population take the drug in question. In such cases, knowledge of the factors most likely to lead to high levels of compliance is crucial to a programme's success. Existing models of compliance tend to address interpersonal, organizational or psychological causes independently. Here, the authors present a formal method for analysing relevant factors in the situational context of the compliant behaviour, identifying how these factors may interact within the individual. The method was developed from semantic network analysis, augmented to include environmental and demographic variables so as to show causal linkages, hence the name 'causal chain mapping'. The ability of this method to provide significant insight into the actual behaviour of individuals is demonstrated with examples from a mass drug administration for lymphatic filariasis in Alor District, Indonesia. The use of this method should help identify key components influencing compliance, and thus make any public health programme reliant on the adoption of novel behaviours more effective.
ERIC Educational Resources Information Center
Hartnett, Maggie; St. George, Alison; Dron, John
2011-01-01
Existing research into motivation in online environments has tended to use one of two approaches. The first adopts a trait-like model that views motivation as a relatively stable, personal characteristic of the learner. Research from this perspective has contributed to the notion that online learners are, on the whole, intrinsically motivated. The…
The Experience of Being a Junior Minority Female Faculty Member
ERIC Educational Resources Information Center
Boyd, Tammy; Cintron, Rosa; Alexander-Snow, Mia
2010-01-01
Much has been written about the trials and tribulations of junior tenure-track faculty; much has also been written about the difficulties faced by women and minority faculty. However, there is very little research about the experiences of minority women faculty who are also tenure-earning, but untenured; what little research does exist tends to…
Elder Abuse and Black Americans: Incidence, Correlates, Treatment and Prevention.
ERIC Educational Resources Information Center
Cazenave, Noel A.
Existing evidence on family violence rates by age and race as well as the available data on race and physical elder abuse incidence rates suggests that because such data are not based on random or representative samples and tend to reflect a "sampling artifact" of the particular client populations served by the professionals surveyed,…
ERIC Educational Resources Information Center
Azul, David
2016-01-01
Background: Transmasculine people assigned female gender at birth but who do not identify with this classification have traditionally received little consideration in the voice literature. Existing analyses tend to be focused on evaluating speaker voice characteristics, whereas other factors that contribute to the production of vocal gender have…
ERIC Educational Resources Information Center
Raptis, Helen
2011-01-01
Little empirical research has investigated the integration of Canada's Aboriginal children into provincial school systems. Furthermore, the limited existing research has tended to focus on policymakers and government officials at the national level. Thus, the policy shift from segregation to integration has generally been attributed to Canada's…
ERIC Educational Resources Information Center
Chowdhury, Monali; Benson, Betsey A.; Hillier, Ashleigh
2010-01-01
The existing literature suggests that while impairments in Autism Spectrum Disorders (ASDs) continue into adulthood, some behavioral symptoms tend to abate with age. However, there is a dearth of research examining changes in ASD symptoms from childhood to adulthood, especially for Restricted Repetitive Behaviors (RRBs). We examined age-related…
The American College Student Cell Phone Survey
ERIC Educational Resources Information Center
Emanuel, Richard C.
2013-01-01
This article reports on a study of cell phone use among college students. This group is considered particularly important because college students tend to be among the first to try new technology, are the group most likely to innovate new ways of using existing technology, and are most vocal about what they need and/or want to see changed…
A Social Role Theory Perspective on Gender Gaps in Political Attitudes
ERIC Educational Resources Information Center
Diekman, Amanda B.; Schneider, Monica C.
2010-01-01
Men and women tend to espouse different political attitudes, as widely noted by both journalists and social scientists. A deeper understanding of why and when gender gaps exist is necessary because at least some gender differences in the political realm are both pervasive and impactful. In this article, we apply a social role theory framework to…
Communicating for Quality in School Age Care Services
ERIC Educational Resources Information Center
Cartmel, Jennifer; Grieshaber, Susan
2014-01-01
School Age Care (SAC) services have existed in Australia for over 100 years but they have tended to take a back seat when compared with provision for school-aged children and those under school age using early childhood education and care (ECEC) services. Many SAC services are housed in shared premises and many children attending preparatory or…
Thomas J. Brandeis; Maria Del Rocio Suarez Rozo
2005-01-01
Total aboveground live tree biomass in Puerto Rican lower montane wet, subtropical wet, subtropical moist and subtropical dry forests was estimated using data from two forest inventories and published regression equations. Multiple potentially-applicable published biomass models existed for some forested life zones, and their estimates tended to diverge with increasing...
ERIC Educational Resources Information Center
Nusche, Deborah
2008-01-01
Higher education institutions (HEIs) have experienced increasing pressures to provide accountability data and consumer information on the quality of teaching and learning. Existing ratings and rankings of HEIs tend to neglect information on student learning outcomes. Instead, they focus on inputs, activities and research outputs, such as resources…
ERIC Educational Resources Information Center
Babb, Jeffry S., Jr.; Abdullat, Amjad
2012-01-01
Disruptive technologies, such as mobile applications development, will always present a dilemma for Information Systems educators as dominant paradigms in our environment will tend to favor the existing sustaining technologies that we have become known for in our discipline. In light of this friction, we share our approach in investigating and…
Teaching the Puritan Captivity Narrative: A History of the American Hero.
ERIC Educational Resources Information Center
Buckley, J. F.
How educators teach and talk about the Puritans tends to promulgate a view of them that does not exist in all their texts. From the beginning of the Puritans' arrival in 1630 in New England until Cotton Mather's 1702 publication "Magnalia Christi Americana," there are literary treatments of the idealism and the hardship constituting…
Teaching Autonomy: The Obligations of Liberal Education in Plural Societies
ERIC Educational Resources Information Center
Kerr, Donald
2006-01-01
Existing conceptions of autonomy tend to fall to one of two criticisms: they either fail to capture our intuitive understanding that autonomy implies an ability to act congruently with the demands of justice and equality, or they are unclear as to whether particular actions must be good by some standard to be considered autonomous. In this article…
NASA Astrophysics Data System (ADS)
Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.
2018-03-01
Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.
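The point-by-point correction idea above can be illustrated with a small sketch. This is not the study's TreeNet pipeline: it uses scikit-learn's stochastic gradient boosting as a stand-in, and the feature names, synthetic bias model, and all numbers are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n = 600

# Hypothetical per-point predictors (illustrative, not the paper's exact set):
# waveform width, return intensity, distance from shoreline, DEM elevation
X = rng.normal(size=(n, 4))

# Synthetic vegetation-driven vertical bias in the lidar elevations
bias = 0.15 + 0.05 * X[:, 0] - 0.03 * X[:, 2] + rng.normal(0, 0.02, n)
lidar_z = rng.uniform(0.0, 1.5, n)   # uncorrected lidar elevations (m)
true_z = lidar_z - bias              # ground-truth (e.g. RTK GPS) elevations

# Learn a location-specific correction from the lidar-derived features
model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                                  max_depth=3, random_state=0)
model.fit(X, lidar_z - true_z)       # target: observed vertical error
corrected_z = lidar_z - model.predict(X)

rmse_before = np.sqrt(np.mean((lidar_z - true_z) ** 2))
rmse_after = np.sqrt(np.mean((corrected_z - true_z) ** 2))
```

The key design point, mirroring the abstract, is that the correction is regressed per point from lidar-derived variables rather than applied as a single global or per-habitat constant.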
Prediction of hot spots in protein interfaces using a random forest model with hybrid features.
Wang, Lin; Liu, Zhi-Ping; Zhang, Xiang-Sun; Chen, Luonan
2012-03-01
Prediction of hot spots in protein interfaces provides crucial information for the research on protein-protein interaction and drug design. Existing machine learning methods generally judge whether a given residue is likely to be a hot spot by extracting features only from the target residue. However, hot spots usually form a small cluster of residues which are tightly packed together at the center of the protein interface. With this in mind, we present a novel method to extract hybrid features which incorporate a wide range of information on the target residue and its spatially neighboring residues, i.e. the nearest contact residue in the other face (mirror-contact residue) and the nearest contact residue in the same face (intra-contact residue). We provide a novel random forest (RF) model to effectively integrate these hybrid features for predicting hot spots in protein interfaces. Our method can achieve accuracy (ACC) of 82.4% and Matthews correlation coefficient (MCC) of 0.482 on the Alanine Scanning Energetics Database, and ACC of 77.6% and MCC of 0.429 on the Binding Interface Database. In a comparison study, the performance of our RF model exceeds that of other existing methods, such as Robetta, FOLDEF, KFC, KFC2, MINERVA and HotPoint. Of our hybrid features, three physicochemical features of target residues (mass, polarizability and isoelectric point), the relative side-chain accessible surface area and the average depth index of mirror-contact residues are found to be the main discriminative features in hot spot prediction. We also confirm that hot spots tend to form large contact surface areas between two interacting proteins. Source data and code are available at: http://www.aporc.org/doc/wiki/HotSpot.
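The RF-with-hybrid-features approach can be sketched as follows. This is a toy stand-in, not the authors' model: the five feature names are borrowed from the abstract for illustration, and the synthetic labels and all parameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 400

# Hypothetical hybrid features per interface residue (illustrative):
# 0: target-residue mass, 1: polarizability, 2: isoelectric point,
# 3: mirror-contact relative side-chain ASA, 4: mirror-contact depth index
X = rng.normal(size=(n, 5))

# Synthetic hot-spot labels, loosely tied to the mirror-contact features
logit = 1.5 * X[:, 3] + 1.0 * X[:, 4] + rng.normal(0, 0.5, n)
y = (logit > 0).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X, y)

acc = clf.score(X, y)                      # in-sample accuracy on toy data
importances = clf.feature_importances_     # which features discriminate
```

Inspecting `feature_importances_` parallels the abstract's identification of the main discriminative features; on real data one would of course report cross-validated ACC and MCC rather than in-sample accuracy.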
Vivodtzev, Isabelle; Gagnon, Philippe; Pepin, Véronique; Saey, Didier; Laviolette, Louis; Brouillard, Cynthia; Maltais, François
2011-01-01
Rationale The endurance time (Tend) during constant-workrate cycling exercise (CET) is highly variable in COPD. We investigated pulmonary and physiological variables that may contribute to these variations in Tend. Methods Ninety-two patients with COPD completed a CET performed at 80% of peak workrate capacity (Wpeak). Patients were divided into tertiles of Tend [Group 1: <4 min; Group 2: 4–6 min; Group 3: >6 min]. Disease severity (FEV1), aerobic fitness (Wpeak, peak oxygen consumption [VO2peak], ventilatory threshold [VT]), quadriceps strength (MVC), symptom scores at the end of CET and exercise intensity during CET (ratio of heart rate at the end of CET to heart rate at peak incremental exercise [HRCET/HRpeak]) were analyzed as potential variables influencing Tend. Results Wpeak, VO2peak, VT, MVC, leg fatigue at end of CET, and HRCET/HRpeak were lower in group 1 than in group 2 or 3 (p≤0.05). VT and leg fatigue at end of CET independently predicted Tend in multiple regression analysis (r = 0.50, p = 0.001). Conclusion Tend was independently related to aerobic fitness and to tolerance of leg fatigue at the end of exercise. A large fraction of the variability in Tend was not explained by the physiological parameters assessed in the present study. Individualization of exercise intensity during CET should help reduce variations in Tend among patients with COPD. PMID:21386991
Correcting for Optimistic Prediction in Small Data Sets
Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.
2014-01-01
The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
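The bootstrap optimism correction discussed above (one of the unbiased approaches, in Harrell's style) can be sketched on synthetic data. All data and parameters here are illustrative assumptions; the clinical data sets in the study are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n, p = 120, 8                      # small sample, several predictors: overfitting-prone
X = rng.normal(size=(n, p))
y = (0.8 * X[:, 0] + rng.logistic(size=n) > 0).astype(int)

# Apparent (resubstitution) C statistic: optimistic in small samples
model = LogisticRegression(max_iter=1000).fit(X, y)
apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism: refit on each resample, compare its AUC on the
# resample (optimistic) with its AUC on the original data
optimisms = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimisms.append(auc_boot - auc_orig)

corrected = apparent - np.mean(optimisms)
```

The corrected value sits below the apparent C statistic, quantifying how much of the apparent performance was an artifact of fitting and evaluating on the same small data set.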
Niarchos, Athanasios; Siora, Anastasia; Konstantinou, Evangelia; Kalampoki, Vasiliki; Lagoumintzis, George; Poulas, Konstantinos
2017-01-01
Over the last few decades, recombinant protein expression has found more and more applications. The cloning of protein-coding genes into expression vectors is required to be directional for proper expression, and versatile in order to facilitate gene insertion into multiple different vectors for expression tests. In this study, the TA-GC cloning method is proposed as a new, simple and efficient method for the directional cloning of protein-coding genes in expression vectors. The presented method offers several advantages over existing methods, which tend to be relatively more labour-intensive, inflexible or expensive. The proposed method relies on the complementarity between single A- and G-overhangs of the protein-coding gene, obtained after a short incubation with T4 DNA polymerase, and T- and C-overhangs of the novel vector pET-BccI, created after digestion with the restriction endonuclease BccI. The novel protein-expression vector pET-BccI also facilitates the screening of transformed colonies for recombinant transformants. Evaluation experiments showed that 81% of the transformed colonies contained recombinant pET-BccI plasmids, and 98% of the recombinant colonies expressed the desired protein. This demonstrates that TA-GC cloning could be a valuable method for cloning protein-coding genes in expression vectors.
Marine Corps Dining Concepts in the 1990’s. Volume 3. The Systems Analysis
1988-10-01
which require immediate and long-term solutions. Among those problems are substantial competition created by introduction of fast food franchises on...method of cooking and tends to dry the surface of foods. This tends to work well with meats and bakery items. This is a highly used form of cookery
ERIC Educational Resources Information Center
Brown, Nicola
2017-01-01
While teaching methods tend to be updated frequently, the implementation of new, innovative assessment tools is much slower. For example, project-based learning has become popular as a teaching technique; however, assessment tends to remain via traditional reports. This paper reports on the implementation and evaluation of using website development…
Predictors of urinary flame retardant concentration among pregnant women
Hoffman, Kate; Lorenzo, Amelia; Butt, Craig; Adair, Linda; Herring, Amy H.; Stapleton, Heather M.; Daniels, Julie
2016-01-01
Background Organophosphate compounds are commonly used in residential furniture, electronics, and baby products as flame retardants, and are also used in other consumer products as plasticizers. Although levels of exposure biomarkers are generally higher among children and decrease with age, relatively little is known about the individual characteristics associated with higher levels of exposure. Here, we investigate urinary metabolites of several organophosphate flame retardants (PFRs) in a cohort of pregnant women to evaluate patterns of exposure. Methods Pregnant North Carolina women (n=349) provided information on their individual characteristics (e.g. age and body mass index (BMI)) as part of the Pregnancy Infection and Nutrition Study (2002–2005). Women also provided second-trimester urine samples in which six PFR metabolites were measured using mass spectrometry methods. Results PFR metabolites were detected in every urine sample, with BDCIPP, DPHP, ip-PPP and BCIPHIPP detected in >80% of samples. Geometric mean concentrations were higher than those reported previously for similarly timed cohorts. Women with higher pre-pregnancy BMI tended to have higher levels of urinary metabolites. For example, women classified as obese at the start of pregnancy had ip-PPP levels 1.52 times as high as those of women in the normal weight range (95% confidence interval: 1.23, 1.89). Women without previous children also tended to have higher urinary levels of DPHP, but lower levels of ip-PPP. In addition, we saw strong evidence of seasonal trends in metabolite concentrations (e.g. higher DPHP, BDCIPP, and BCIPHIPP in summer, and evidence of increasing ip-PPP between 2002 and 2005). Conclusions Our results indicate ubiquitous exposure to PFRs among NC women in the early 2000s. Additionally, our work suggests that individual characteristics are related to exposure and that temporal variation, both seasonal and annual, may exist. PMID:27745946
Latent Patient Cluster Discovery for Robust Future Forecasting and New-Patient Generalization
Qian, Ting; Masino, Aaron J.
2016-01-01
Commonly referred to as predictive modeling, the use of machine learning and statistical methods to improve healthcare outcomes has recently gained traction in biomedical informatics research. Given the vast opportunities enabled by large Electronic Health Records (EHR) data and powerful resources for conducting predictive modeling, we argue that it is yet crucial to first carefully examine the prediction task and then choose predictive methods accordingly. Specifically, we argue that there are at least three distinct prediction tasks that are often conflated in biomedical research: 1) data imputation, where a model fills in the missing values in a dataset, 2) future forecasting, where a model projects the development of a medical condition for a known patient based on existing observations, and 3) new-patient generalization, where a model transfers the knowledge learned from previously observed patients to newly encountered ones. Importantly, the latter two tasks—future forecasting and new-patient generalizations—tend to be more difficult than data imputation as they require predictions to be made on potentially out-of-sample data (i.e., data following a different predictable pattern from what has been learned by the model). Using hearing loss progression as an example, we investigate three regression models and show that the modeling of latent clusters is a robust method for addressing the more challenging prediction scenarios. Overall, our findings suggest that there exist significant differences between various kinds of prediction tasks and that it is important to evaluate the merits of a predictive model relative to the specific purpose of a prediction task. PMID:27636203
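The latent-cluster approach described in this abstract can be illustrated with a minimal sketch. All names, cluster parameters, and data below are hypothetical, and the per-cluster models are deliberately simplified to straight-line progressions: a new patient is assigned to the cluster whose trend best fits their early observations, and that cluster's model produces the future forecast.

```python
# Hypothetical sketch of latent-cluster forecasting: each cluster is a
# simple linear progression model (intercept, slope); a new patient is
# assigned to the cluster that best explains their early observations,
# and that cluster's model produces the forecast.

CLUSTERS = {
    "mild": (5.0, 0.5),      # (intercept, slope per year) -- made-up values
    "moderate": (15.0, 1.5),
    "rapid": (25.0, 4.0),
}

def predict(cluster, year):
    intercept, slope = CLUSTERS[cluster]
    return intercept + slope * year

def assign_cluster(observations):
    """Pick the cluster minimizing squared error on (year, value) pairs."""
    def sse(cluster):
        return sum((predict(cluster, yr) - val) ** 2 for yr, val in observations)
    return min(CLUSTERS, key=sse)

def forecast(observations, future_year):
    """New-patient generalization plus future forecasting via the latent cluster."""
    return predict(assign_cluster(observations), future_year)

early = [(0, 24.0), (1, 29.5), (2, 33.0)]    # e.g. hearing-loss scores
print(assign_cluster(early))                  # -> "rapid"
print(forecast(early, 5))
```

This separates the three tasks the abstract distinguishes: imputation would fill gaps in `early`, forecasting extrapolates a known patient, and generalization is the cluster assignment for a previously unseen patient.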
An examination of the challenges influencing science instruction in Florida elementary classrooms
NASA Astrophysics Data System (ADS)
North, Stephanie Gwinn
It has been shown that the mechanical properties of thin films tend to differ from those of their bulk counterparts. Specifically, bulge and microtensile testing of thin films used in MEMS has revealed that these films demonstrate an inverse relationship between thickness and strength. A film dimension is not a material property, but it evidently does affect the mechanical performance of materials at very small thicknesses. One hypothetical explanation for this phenomenon is that as the thickness of the film decreases, it becomes statistically less likely that imperfections exist in the material. A very small thickness (or volume) is required to limit imperfections in a material, which is why this phenomenon is seen in films with thicknesses on the order of 100 nm to a few microns. Another hypothesized explanation is that the surface tension that exists in bulk material also exists in thin films but has a greater impact at such a small scale. The goal of this research is to identify a theoretical prediction of the strength of thin films based on microstructural properties such as grain size and film thickness. This would minimize the need for expensive and complicated tests such as the bulge and microtensile tests. In this research, data were collected from bulge and microtensile testing of copper, aluminum, gold, and polysilicon free-standing thin films. Statistical testing of these data revealed a definitive inverse relationship between thickness and strength, as well as between grain size and strength, as expected. However, because no standardized method exists for either test, there were significant variations in the data. This research compares and analyzes the methods used by other researchers to develop a suggested set of instructions for a standardized bulge test and a standardized microtensile test.
The most important parameters to be controlled in each test were found to be strain rate, temperature, film deposition method, film length, and strain measurement.
Synthesis of sub-millimeter calcite from aqueous solution
NASA Astrophysics Data System (ADS)
Reimi, M. A.; Morrison, J. M.; Burns, P. C.
2011-12-01
A novel aqueous synthesis that leads to the formation of calcite (CaCO3) crystals, up to 500 μm in diameter, will be used to facilitate the study of contaminant transport in aqueous environmental systems. Existing processes tend to be complicated and often yield nanometer-sized or amorphous CaCO3. The synthesis method presented here, which involves slow mixing of concentrated solutions of CaCl2 and (NH4)2CO3, produces single crystals of rhombohedral calcite in 2 to 4 days. Variations on the experimental method, including changes in pH and solution concentration, were explored to optimize the synthesis. Scanning electron microscope images show the differences in size and purity observed when the crystals are grown at pH values ranging from 2 to 6. The crystals grown from solutions of pH 2 were large (up to 500 micrometers in diameter) with minimal polycrystalline calcium carbonate, while crystals grown from solutions with pH values above 4 were smaller (up to 100 micrometers in diameter) with significant polycrystalline calcium carbonate. The synthesis method, materials characterization, and use in future actinide contaminant studies will be discussed.
Application of Visual Attention in Seismic Attribute Analysis
NASA Astrophysics Data System (ADS)
He, M.; Gu, H.; Wang, F.
2016-12-01
Seismic attributes have been shown to be useful for reservoir prediction, and combining multiple attributes with geostatistics, data mining, and artificial intelligence has further advanced seismic attribute analysis. However, existing methods tend to suffer from non-unique solutions and insufficient generalization ability, mainly due to the complex relationship between seismic data and geological information, and partly to the methods themselves. Visual attention is a mechanism model of the human visual system that can rapidly concentrate on a few significant visual objects, even in a cluttered scene; it offers good target detection and recognition ability. In our study, the targets to be predicted are treated as visual objects, and an object representation based on well data is built in the attribute dimensions. In the same attribute space, this representation then serves as a criterion to search for potential targets away from the wells. This method does not predict properties by building a complicated relation between attributes and reservoir properties; instead, it predicts with reference to the previously determined standard. It therefore has good generalization ability, and the problem of non-unique solutions can be mitigated by defining a similarity threshold.
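The search step described in this abstract can be sketched as follows. This is an illustration only, not the authors' implementation: the target "object" is an attribute vector calibrated at a well, the similarity measure is assumed to be cosine similarity, and all grid values are made up.

```python
import math

# Hypothetical sketch of the attention-style search: a target "object"
# is a vector of seismic attributes measured at a well; other locations
# are flagged when their attribute vectors are sufficiently similar
# (cosine similarity above a chosen threshold).

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def find_candidates(target, locations, threshold=0.99):
    """Return ids of locations whose attribute vector resembles the target."""
    return [loc_id for loc_id, attrs in locations.items()
            if cosine(target, attrs) >= threshold]

target = [0.8, 1.2, 0.5]            # attribute vector calibrated at a well
grid = {
    "A": [0.79, 1.18, 0.52],        # close match
    "B": [1.5, 0.2, 2.0],           # dissimilar
    "C": [0.82, 1.25, 0.48],        # close match
}
print(find_candidates(target, grid))  # -> ["A", "C"]
```

The threshold plays the role of the similarity criterion mentioned at the end of the abstract: raising it tightens the match and further reduces spurious candidates.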
NASA Astrophysics Data System (ADS)
Sævik, P. N.; Nixon, C. W.
2017-11-01
We demonstrate how topology-based measures of connectivity can be used to improve analytical estimates of effective permeability in 2-D fracture networks, which is one of the key parameters necessary for fluid flow simulations at the reservoir scale. Existing methods in this field usually compute fracture connectivity using the average fracture length. This approach is valid for ideally shaped, randomly distributed fractures, but is not immediately applicable to natural fracture networks. In particular, natural networks tend to be more connected than randomly positioned fractures of comparable lengths, since natural fractures often terminate in each other. The proposed topological connectivity measure is based on the number of intersections and fracture terminations per sampling area, which for statistically stationary networks can be obtained directly from limited outcrop exposures. To evaluate the method, numerical permeability upscaling was performed on a large number of synthetic and natural fracture networks, with varying topology and geometry. The proposed method was seen to provide much more reliable permeability estimates than the length-based approach, across a wide range of fracture patterns. We summarize our results in a single, explicit formula for the effective permeability.
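The node-counting idea can be sketched numerically. The formulas below are a common formulation in fracture-network topology (nodes classified as I, isolated tips; Y, fractures terminating against another; X, crossings) and should be read as an assumption for illustration, not necessarily the exact expressions used in this paper.

```python
# Sketch of a topology-based connectivity measure from node counts in a
# sampling area. Assumed formulation: an I-node terminates 1 branch, a
# Y-node 3 branches, an X-node 4 branches, and every branch has 2 ends.

def branch_count(n_i, n_y, n_x):
    """Number of branches implied by the node counts."""
    return (n_i + 3 * n_y + 4 * n_x) / 2.0

def connections_per_branch(n_i, n_y, n_x):
    """Average connectivity: connecting node-ends per branch (0 to 2)."""
    return (3 * n_y + 4 * n_x) / branch_count(n_i, n_y, n_x)

# Example: an outcrop sampling area with 40 tips, 30 abutments, 10 crossings.
print(branch_count(40, 30, 10))                       # -> 85.0
print(round(connections_per_branch(40, 30, 10), 3))
```

Because the measure depends only on node counts per sampling area, it can be read directly off a limited outcrop exposure, which is the practical advantage the abstract highlights over length-based connectivity estimates.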
Iterative dictionary construction for compression of large DNA data sets.
Kuruppu, Shanika; Beresford-Smith, Bryan; Conway, Thomas; Zobel, Justin
2012-01-01
Genomic repositories increasingly include individual as well as reference sequences, which tend to share long identical and near-identical strings of nucleotides. However, the sequential processing used by most compression algorithms, and the volumes of data involved, mean that these long-range repetitions are not detected. An order-insensitive, disk-based dictionary construction method can detect this repeated content and use it to compress collections of sequences. We explore a dictionary construction method that improves repeat identification in large DNA data sets. Our adaptation of an existing disk-based method, COMRAD, identifies exact repeated content in collections of sequences with similarities within and across the set of input sequences. COMRAD compresses the data over multiple passes, which is an expensive process, but allows COMRAD to compress large data sets within reasonable time and space. COMRAD allows random access to individual sequences and subsequences without decompressing the whole data set. COMRAD has no competitor in terms of the size of data sets that it can compress (extending to many hundreds of gigabytes) and, even for smaller data sets, the results are competitive compared to alternatives; as an example, 39 S. cerevisiae genomes compressed to 0.25 bits per base.
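The dictionary idea behind this family of methods can be sketched in a few lines. This is a toy illustration of repeat-based encoding, not the COMRAD algorithm itself (which works on disk, over multiple passes, with variable-length repeats): frequent fixed-length substrings across the whole collection become dictionary entries, and each sequence is re-encoded as dictionary references plus literals.

```python
from collections import Counter

# Toy sketch of dictionary-based compression for a DNA collection:
# count non-overlapping k-mers across all sequences, keep the frequent
# ones as a dictionary, then encode sequences as dictionary indices
# (for repeated content) or literal strings (for everything else).

def build_dictionary(sequences, k=4, min_count=2):
    counts = Counter()
    for seq in sequences:
        for i in range(0, len(seq) - k + 1, k):   # non-overlapping k-mers
            counts[seq[i:i + k]] += 1
    return {kmer: idx for idx, (kmer, n) in enumerate(counts.items())
            if n >= min_count}

def encode(seq, dictionary, k=4):
    out = []
    for i in range(0, len(seq), k):
        chunk = seq[i:i + k]
        out.append(dictionary[chunk] if chunk in dictionary else chunk)
    return out

seqs = ["ACGTACGTTTTT", "ACGTTTTTACGT"]
d = build_dictionary(seqs)
print(d)
print(encode(seqs[0], d))
```

Because the dictionary is built from counts over the whole collection rather than a sequential scan, repeats shared across distant sequences are captured; that order-insensitivity is the property the abstract emphasizes.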
An argument for renewed focus on epidemiology for public health
Rogawski, Elizabeth T.; Gray, Christine L.; Poole, Charles
2016-01-01
Purpose While epidemiology has an indispensable role in serving public health, the relative emphasis of applications of epidemiology often tends toward individual-level medicine over public health in terms of resources and impact. Methods We make distinctions between public health and medical applications of epidemiology to raise awareness among epidemiologists, many of whom came to the field with public health in mind. We discuss reasons for the overemphasis on medical epidemiology and suggest ways to counteract these incentives. Results Public health epidemiology informs interventions that are applied to populations or that confer benefits beyond the individual, while medical epidemiology informs interventions that improve the health of treated individuals. Available resources, new biomedical technologies, and existing epidemiologic methods favor medical applications of epidemiology. Focus on public health impact and on methods suited to answering public health questions can create better balance and promote population-level improvements in public health. Conclusions By deliberately reflecting on research motivations and long-term goals, we hope the distinctions presented here will facilitate critical discussion and a greater consciousness of our potential impact on both individual- and population-level health. Renewed intentions towards public health can help epidemiologists navigate potential projects and ultimately contribute to an epidemiology of consequence. PMID:27659585
Functional Linear Model with Zero-value Coefficient Function at Sub-regions.
Zhou, Jianhui; Wang, Nae-Yuh; Wang, Naisyin
2013-01-01
We propose a shrinkage method to estimate the coefficient function in a functional linear regression model when the value of the coefficient function is zero within certain sub-regions. Besides identifying the null region in which the coefficient function is zero, we also aim to perform estimation and inference for the nonparametrically estimated coefficient function without over-shrinking the values. Our proposal consists of two stages. In stage one, the Dantzig selector is employed to provide an initial location of the null region. In stage two, we propose a group SCAD approach to refine the estimated location of the null region and to provide the estimation and inference procedures for the coefficient function. Our considerations have certain advantages in this functional setup. One goal is to reduce the number of parameters employed in the model. With a one-stage procedure, a large number of knots would be needed to precisely identify the zero-coefficient region; however, the variation and estimation difficulties increase with the number of parameters. Owing to the additional refinement stage, we avoid this necessity and our estimator achieves superior numerical performance in practice. We show that our estimator enjoys the Oracle property; it identifies the null region with probability tending to 1, and it achieves the same asymptotic normality for the estimated coefficient function on the non-null region as the functional linear model estimator when the non-null region is known. Numerically, our refined estimator overcomes the shortcomings of the initial Dantzig estimator, which tends to under-estimate the absolute scale of non-zero coefficients. The performance of the proposed method is illustrated in simulation studies.
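A toy illustration of the null-region idea, under the strong simplifying assumption that the coefficient function has already been estimated on a grid (standing in for the Dantzig-selector stage; the threshold and data below are made up): flag grid points whose estimated coefficient is near zero and report the longest contiguous run as the candidate null region, which a refinement stage would then sharpen.

```python
# Toy sketch of null-region identification for a coefficient function
# beta(t) estimated on a grid. This is an illustrative stand-in for the
# paper's Dantzig-selector / group-SCAD stages, not their algorithm.

def null_region(grid, beta_hat, tau=0.05):
    """Return (start, end) of the longest contiguous run with |beta| < tau."""
    best, cur_start, cur_len, best_run = None, None, 0, 0
    for t, b in zip(grid, beta_hat):
        if abs(b) < tau:
            if cur_start is None:
                cur_start = t
            cur_len += 1
            if cur_len > best_run:
                best_run, best = cur_len, (cur_start, t)
        else:
            cur_start, cur_len = None, 0
    return best

grid = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
beta = [0.8, 0.4, 0.02, -0.01, 0.03, 0.5, 0.9]
print(null_region(grid, beta))   # -> (0.2, 0.4)
```

In the paper's setting the point of the second stage is precisely that a crude threshold like `tau` over-shrinks near the boundary; the group SCAD refinement recovers the boundary and restores inference on the non-null region.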
We apply the method in an analysis of data collected by the Johns Hopkins Precursors Study, where the primary interests are in estimating the strength of association between body mass index in midlife and the quality of life in physical functioning at old age, and in identifying the effective age ranges where such associations exist.
Appalachian residents’ experiences with and management of multiple morbidity
Schoenberg, Nancy E.; Bardach, Shoshana H.; Manchikanti, Kavita N.; Goodenow, Anne C.
2011-01-01
Approximately three-quarters of middle-aged and older adults have at least two simultaneously occurring chronic conditions (“multiple morbidity” or MM), a trend expected to increase dramatically throughout the world. Rural residents, who tend to have fewer personal and health resources, are more likely to experience MM. To improve our understanding of the ways in which vulnerable, rural residents in the U.S. experience and manage MM, we interviewed twenty rural Appalachian residents with MM. We identified the following themes: (a) MM has multifaceted challenges and is viewed as more than the sum of its parts; (b) numerous challenges exist to optimal MM self-management, particularly in a rural, under-resourced context; however, (c) participants described strategic methods of managing multiple chronic conditions, including prioritizing certain conditions and management strategies and drawing heavily on assistance from informal and formal sources. PMID:21263063
Elosúa, M. Rosa; Ciudad, María José; Contreras, María José
2017-01-01
Background/Aims To date, there are few studies on gender differences in patients with mild cognitive impairment (MCI) and Alzheimer disease (AD). In the present study, the existence of differences between sexes in verbal and visuospatial working memory tasks in the evolution of cognitive and pathological aging was examined. Method Ninety participants took part in this study: 30 AD, 30 MCI, and 30 healthy elderly participants (50% men and 50% women). Results There were no significant differences between men and women with AD in visuospatial tasks, whereas these differences were found within the MCI group, with the average of men achieving significantly higher results than women. In verbal tasks, there were no differences between sexes for any of the groups. Conclusion Execution in visuospatial tasks tends to depend on gender, whereas this does not occur for verbal tasks. PMID:28553312
Predicting Upscaled Behavior of Aqueous Reactants in Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Wright, E. E.; Hansen, S. K.; Bolster, D.; Richter, D. H.; Vesselinov, V. V.
2017-12-01
When modeling reactive transport, reaction rates are often overestimated due to the improper assumption of perfect mixing at the support scale of the transport model. In reality, fronts tend to form between participants in thermodynamically favorable reactions, leading to segregation of reactants into islands or fingers. When such a configuration arises, reactions are limited to the interface between the reactive solutes. Closure methods for estimating control-volume-effective reaction rates in terms of quantities defined at the control volume scale do not presently exist, but their development is crucial for effective field-scale modeling. We attack this problem through a combination of analytical and numerical means. Specifically, we numerically study reactive transport through an ensemble of realizations of two-dimensional heterogeneous porous media. We then employ regression analysis to calibrate an analytically-derived relationship between reaction rate and various dimensionless quantities representing conductivity-field heterogeneity and the respective strengths of diffusion, reaction and advection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Yun; Zhao, Mingyang; Khalid, Syed
The high-voltage cathode material LiMn1.6Ni0.4O4 was prepared by a polymer-assisted method. The novelty of this work is the substitution of Ni with Mn, which already exists in the crystal structure, rather than with other isovalent metal-ion dopants, which would result in capacity loss. Electrochemical performance, including stability and rate capability, was evaluated. Temperature was found to affect the valence state and structure of the cathode materials; specifically, manganese tends to be reduced at a high temperature of 800 °C, leading to structural changes. The manganese-substituted LiMn1.5Ni0.5O4 (LMN) proved to be a good candidate material for Li-ion battery cathodes, displaying good rate capability and capacity retention. Finally, the cathode materials processed at 550 °C showed stable performance with negligible capacity loss over 400 cycles.
Team resilience for young restaurant workers: research-to-practice adaptation and assessment.
Bennett, Joel B; Aden, Charles A; Broome, Kirk; Mitchell, Kathryn; Rigdon, William D
2010-07-01
This paper describes a method for taking a known prevention intervention and modifying it to suit young restaurant workers. Such workers are at high risk for alcohol and other drug (AOD) abuse according to national surveys. While evidence-based programs for AOD prevention exist, they have not been delivered to restaurants. Accordingly, an adaptation methodology was developed by integrating curricula from a previous evidence-based program with research on resilience and input from stakeholders, such as young restaurant workers, their managers, trainers, and subject matter experts. A new curriculum (Team Resilience) maintained fidelity to the original program while incorporating stakeholder insights. At the end of each of three training sessions, participants (n = 124) rated their awareness of AOD risks, help-seeking orientation, and personal resilience. Ratings tended to increase across sessions, showing participants perceived benefits from Team Resilience. Discussion highlights the need for research-to-practice protocols in occupational health psychology.
A Generalized Mixture Framework for Multi-label Classification
Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos
2015-01-01
We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd|X) using a product of posterior distributions over components of the output space. Our approach captures different input–output and output–output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods. PMID:26613069
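The mixture combination step can be sketched in isolation. This is a hedged illustration of the mixtures-of-experts idea only; the paper's chain decompositions and learning algorithms are not reproduced, and the experts and weights below are hypothetical: each expert scores full label vectors, the mixture averages those scores with expert weights, and the prediction is the highest-scoring label vector.

```python
# Illustrative sketch of a mixtures-of-experts combination for
# multi-label prediction: each expert maps an input x to a distribution
# over full label vectors; the mixture averages these distributions
# with expert weights and predicts the argmax label vector.

def mixture_predict(experts, weights, x):
    """experts: list of functions x -> {label_vector: probability}."""
    combined = {}
    for expert, w in zip(experts, weights):
        for y, p in expert(x).items():
            combined[y] = combined.get(y, 0.0) + w * p
    return max(combined, key=combined.get)

# Two hypothetical experts capturing different output-output relations:
e1 = lambda x: {(0, 0): 0.1, (0, 1): 0.2, (1, 1): 0.7}
e2 = lambda x: {(0, 0): 0.1, (0, 1): 0.6, (1, 1): 0.3}
print(mixture_predict([e1, e2], [0.5, 0.5], None))  # -> (1, 1)
```

Predicting over whole label vectors, rather than per label independently, is what lets the mixture capture output-output dependencies that a single simplified model would miss.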
Laser shock microforming of aluminum foil with fs laser
NASA Astrophysics Data System (ADS)
Ye, Yunxia; Feng, Yayun; Xuan, Ting; Hua, Xijun; Hua, Yinqun
2014-12-01
Laser shock microforming of aluminum (Al) foil with a fs laser was investigated in this paper. The influences of the confining layer, clamping method, and number of impacts on the induced dent depths were studied experimentally. The microstructure of the fs-laser-shock-formed Al foil was observed by transmission electron microscopy (TEM). Under tight clamping, dent depths increase with the number of impacts and finally tend to saturate. A new confining layer, whose main component is polypropylene, was also applied; its confining effect is better because of its higher impedance. TEM results show that dislocation motion is one of the main deformation mechanisms of fs laser shock forming of Al foil. Notably, most dislocations exist in the form of short, discrete dislocation lines. Parallel straight dislocation slip lines were also observed. We infer that these unique dislocation arrangements are due to the ultra-high strain rate induced by the fs laser.
Population Health Management for Older Adults
Tkatch, Rifky; Musich, Shirley; MacLeod, Stephanie; Alsgaard, Kathleen; Hawkins, Kevin; Yeh, Charlotte S.
2016-01-01
Background: The older adult population is expanding, living longer, with multiple chronic conditions. Understanding and managing their needs over time is an integral part of defining successful aging. Population health is used to describe the measurement and health outcomes of a population. Objectives: To define population health as applied to older adults, summarize lessons learned from current research, and identify potential interventions designed to promote successful aging and improved health for this population. Method: Online search engines were utilized to identify research on population health and health interventions for older adults. Results: Population health management (PHM) is one strategy to promote the health and well-being of target populations. Interventions promoting health across a continuum tend to be disease, risk, or health behavior specific rather than encompassing a global concept of health. Conclusion: Many existing interventions for older adults are simply research based with limited generalizability; as such, further work in this area is warranted. PMID:28680938
Eady, J. J.; Orta, T.; Dennis, M. F.; Stratford, M. R.; Peacock, J. H.
1995-01-01
Large fluctuations in glutathione content were observed on a daily basis using the Tietze enzyme recycling assay in a panel of six human cell lines of varying radiosensitivity. Glutathione content tended to increase to a maximum during exponential cell proliferation, and then decreased at different rates as the cells approached plateau phase. By reference to high-performance liquid chromatography and flow cytometry of the fluorescent bimane derivative we were able to verify that these changes were real. However, the Tietze assay was occasionally unable to detect glutathione in two of our cell lines (MGH-U1 and AT5BIVA), although the other methods indicated its presence. The existence of an inhibitory activity responsible for these anomalies was confirmed through spiking our samples with known amounts of glutathione. We were unable to detect a direct relationship between cellular glutathione concentration and aerobic radiosensitivity in our panel of cell lines. PMID:7577452
Physics Instructional Resource Usage by High-, Medium-, and Low-Skilled MOOC Students
ERIC Educational Resources Information Center
Balint, Trevor A.; Teodorescu, Raluca; Colvin, Kimberly; Choi, Youn-Jeng; Pritchard, David
2017-01-01
In this paper we examine how different types of participants in a physics Massive Open Online Course (MOOC) tend to use the existing course resources. We use data from the 2013 offering of the Massive Open Online Course 8.MReVx designed by the RELATE (REsearch in Learning Assessing and Tutoring Effectively) Group at the Massachusetts Institute of…
Intercultural Competency at the Geographic Combatant Command Level
2011-05-04
side. Individualistic cultures value self-reliance and initiative. Competition is generally accepted and expected. Collectivistic cultures tend to...investors. A significant historic example of collectivistic culture is the Japanese. During World War II, their emphasis was on the empire. This...to incorporate available intercultural competency assets at the GCC level to leverage existing cultural expertise: Foreign Area Officers (FAOs), DoD
Sensory and Perceptual Deprivation
1964-04-22
stimulation even in inane forms, and -- were more effectively persuaded by lectures advocating the existence of ghosts, poltergeists and extrasensory ... perception phenomena. These provocative experiments at McGill were completed just about 10 years ago. What has happened in the decade since? Research...shown a greater change among isolated Ss in interest and belief in extrasensory perception topics (29, 56). Recent experiments have tended to confirm
ERIC Educational Resources Information Center
Van Ryzin, Mark J.; Vincent, Claudia G.
2017-01-01
Because students from American Indian/Alaska Native (AI/AN) backgrounds tend to lag behind their peers in academic achievement, researchers have recommended integrating Native Language and Culture (NLC) into instruction. However, existing evidence from large-scale studies finds a "negative" effect of the use of NLC on achievement,…
Back Eddies of Learning in the Recognition of Prior Learning: A Case Study
ERIC Educational Resources Information Center
Peruniak, Geoff; Powell, Rick
2007-01-01
The limited research that exists in the area of prior learning assessment (PLA) has tended to be descriptive and conceptual in nature. Where empirical studies have been done, they have focussed mainly on PLA as a means of credentialing rather than as a learning experience. Furthermore, there has been very little empirical research into the…
Critical Science Literacy: What Citizens and Journalists Need to Know to Make Sense of Science
ERIC Educational Resources Information Center
Priest, Susanna
2013-01-01
Increasing public knowledge of science is a widely recognized goal, but what that knowledge might consist of is rarely unpacked. Existing measures of science literacy tend to focus on textbook knowledge of science. Yet constructing a meaningful list of facts, even facts in application, is not only difficult but less than satisfying as an indicator…
ERIC Educational Resources Information Center
Vasi, Ion Bogdan
2007-01-01
The study of the adoption of activities to protect the natural environment has tended to focus on the role of organizational fields. This article advances existing research by simultaneously examining conflicting processes that operate in nested organizational fields at local, national and supra-national levels. It examines the recent spread of an…
ERIC Educational Resources Information Center
McCreath, Graham A.; Linehan, Cormac M. J.; Mar, Raymond A.
2017-01-01
Individuals who read more tend to have stronger verbal skills than those who read less. Interestingly, what you read may make a difference. Past studies have found that reading narrative fiction, but not expository nonfiction, predicts verbal ability. Why this difference exists is not known. Here we investigate one possibility: whether fiction…
ERIC Educational Resources Information Center
Skinner, Kate
2010-01-01
This article takes as its starting point a strike among African trainee literacy workers in the Northern Territories of the Gold Coast (now Ghana) in 1952. While the existing literature tends to concentrate on the tensions and contradictions in British colonial education policy, this article uses the strike to investigate how these agendas were…
ERIC Educational Resources Information Center
Friedman, Jonathan Z.; Miller-Idriss, Cynthia
2015-01-01
Existing studies of the internationalization of higher education have detailed the broad contours of change in the new "global" era, but they have told us much less about the individuals and processes underpinning these transformations. Moreover, they tend to treat internationalization as a recent or new phenomenon. There have been prior…
The Music of Form: Rethinking Organization in Writing
ERIC Educational Resources Information Center
Elbow, Peter
2006-01-01
Written words are laid out in space and exist on the page all at once, but a reader can only read a few words at a time. For readers, written words are trapped in the medium of time. So how can we best organize writing for readers? Traditional techniques of organization tend to stress the arrangement of parts in space and certain metadiscoursal…
USDA-ARS?s Scientific Manuscript database
The availability of whole genome sequence (WGS) data has made it possible to discover protein variants in silico. However, existing bovine WGS databases do not show data in a form conducive to protein variant analysis, and tend to underrepresent the breadth of genetic diversity in U.S. beef cattle...
ERIC Educational Resources Information Center
Felce, D.; Kerr, M.; Hastings, R. P.
2009-01-01
Background: Existing studies tend to show a positive association between mental illness and challenging behaviour among adults with intellectual disabilities (ID). However, whether the association is direct or artefactual is less clear. The purpose was to explore the association between psychiatric status and level of challenging behaviour, while…
Ungoverned Areas and Threats from Safe Havens
2008-01-01
reasonably well developed transportation and communication infrastructures tend to be more attractive to illicit actors than undeveloped places, for...into a broader UGA/SH strategy — or sequentially, to help strategists, planners, and regional or country teams develop a comprehensive UGA/SH strategy...need a reference for developing or revising existing products such as: " a country report on counterterrorism, drug enforcement, stabilization
ERIC Educational Resources Information Center
Sunardi; Maryardi; Sugini
2014-01-01
The UNESCO-initiated policy of inclusive education has been adopted by the Indonesian government since 2003. As a new policy, inclusion will require many changes in the existing system of education, which tends to be segregative. This research investigated the effects of a two-day workshop on parents' attitudes, teachers' competence and…
Patterns of non-firearm homicide.
Henderson, J P; Morgan, S E; Patel, F; Tiplady, M E
2005-06-01
Sixty-two recent non-firearm homicides dealt with by an inner London public mortuary were studied. The majority of homicides involved stabbing--usually multiple wounds to the trunk. These were followed by blunt instrument homicides--nearly all involved multiple blows to the head, and asphyxiation--usually consisting of strangulation with a ligature being employed in the majority of cases. Homicides tended to occur during the evening and night in spring and early summer. Most victims were found to be in the 20-39 age group, with male victims outnumbering females in a 2:1 ratio. A marked difference in homicide pattern existed between the male and female victims. Males tended to fall victim to strangers encountered while socialising in and around bars and clubs. Females were most often killed by close acquaintances in domestic disputes at home.
Convolutional networks for vehicle track segmentation
Quach, Tu-Thach
2017-08-19
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are unable to capture natural track features such as continuity and parallelism. More powerful, but computationally expensive models can be used in offline settings. We present an approach that uses dilated convolutional networks consisting of a series of 3-by-3 convolutions to segment vehicle tracks. The design of our networks considers the fact that remote sensing applications tend to operate in low power and have limited training data. As a result, we aim for small, efficient networks that can be trained end-to-end to learn natural track features entirely from limited training data. We demonstrate that our 6-layer network, trained on just 90 images, is computationally efficient and improves the F-score on a standard dataset to 0.992, up from 0.959 obtained by the current state-of-the-art method.
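The dilated 3-by-3 convolutions this abstract describes grow a network's receptive field without adding parameters. As a minimal sketch of the building block (in plain NumPy with a hypothetical function name, not the authors' implementation), a single dilated 2-D convolution with "same" padding can be written as:

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Single-channel 'same'-padded 2-D convolution with a dilated kernel.

    A 3x3 kernel with dilation d samples the input on a grid spaced d
    pixels apart, giving an effective receptive field of (2d+1) x (2d+1).
    """
    kh, kw = kernel.shape
    # Effective extent of the dilated kernel determines the padding.
    eff_h = (kh - 1) * dilation + 1
    eff_w = (kw - 1) * dilation + 1
    pad_h, pad_w = eff_h // 2, eff_w // 2
    padded = np.pad(image, ((pad_h, pad_h), (pad_w, pad_w)))
    out = np.zeros_like(image, dtype=float)
    # Accumulate one shifted copy of the input per kernel tap.
    for i in range(kh):
        for j in range(kw):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * padded[di:di + image.shape[0],
                                         dj:dj + image.shape[1]]
    return out
```

Stacking such layers with increasing dilation rates is what lets a small 6-layer network cover long, continuous track structures.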
Safer one-pot synthesis of the ‘SHAPE’ reagent 1-methyl-7-nitroisatoic anhydride (1m7)
Turner, Rushia; Shefer, Kinneret; Ares, Manuel
2013-01-01
Estimating the reactivity of 2′-hydroxyl groups along an RNA chain of interest aids in the modeling of the folded RNA structure; flexible loops tend to be reactive, whereas duplex regions are generally not. Among the most useful reagents for probing 2′-hydroxyl reactivity is 1-methyl-7-nitroisatoic anhydride (1m7), but the absence of a reliable, inexpensive source has prevented widespread adoption. An existing protocol for the conversion of an inexpensive precursor 4-nitroisatoic anhydride (4NIA) recommends the use of NaH in dimethylformamide (DMF), a reagent combination that most molecular biology labs are not equipped to handle, and that does not scale safely in any case. Here we describe a safer, one-pot method for bulk conversion of 4NIA to 1m7 that reduces costs and bypasses the use of NaH. We show that 1m7 produced by this method is free of side products and can be used to probe RNA structure in vitro. PMID:24141619
Skin image illumination modeling and chromophore identification for melanoma diagnosis
NASA Astrophysics Data System (ADS)
Liu, Zhao; Zerubia, Josiane
2015-05-01
The presence of illumination variation in dermatological images has a negative impact on the automatic detection and analysis of cutaneous lesions. This paper proposes a new illumination modeling and chromophore identification method to correct lighting variation in skin lesion images, as well as to extract melanin and hemoglobin concentrations of human skin, based on an adaptive bilateral decomposition and a weighted polynomial curve fitting, with the knowledge of a multi-layered skin model. Different from state-of-the-art approaches based on the Lambert law, the proposed method, considering both specular reflection and diffuse reflection of the skin, enables us to address highlight and strong shading effects usually existing in skin color images captured in an uncontrolled environment. The derived melanin and hemoglobin indices, directly relating to the pathological tissue conditions, tend to be less influenced by external imaging factors and are more efficient in describing pigmentation distributions. Experiments show that the proposed method gave better visual results and superior lesion segmentation when compared to two other illumination correction algorithms, both designed specifically for dermatological images. For computer-aided diagnosis of melanoma, sensitivity reaches 85.52% when using our chromophore descriptors, which is 8-20% higher than those derived from other color descriptors. This demonstrates the benefit of the proposed method for automatic skin disease analysis.
Shang, Ce; Chaloupka, Frank J; Zahra, Nahleen; Fong, Geoffrey T
2013-01-01
Background The distribution of cigarette prices has rarely been studied and compared under different tax structures. Descriptive evidence on price distributions by countries can shed light on opportunities for tax avoidance and brand switching under different tobacco tax structures, which could impact the effectiveness of increased taxation in reducing smoking. Objective This paper aims to describe the distribution of cigarette prices by countries and to compare these distributions based on the tobacco tax structure in these countries. Methods We employed data for 16 countries taken from the International Tobacco Control Policy Evaluation Project to construct survey-derived cigarette prices for each country. Self-reported prices were weighted by cigarette consumption and described using a comprehensive set of statistics. We then compared these statistics for cigarette prices under different tax structures. In particular, countries of similar income levels and countries that impose similar total excise taxes using different tax structures were paired and compared in mean and variance using a two-sample comparison test. Findings Our investigation illustrates that, compared with specific uniform taxation, other tax structures, such as ad valorem uniform taxation, mixed (a tax system using ad valorem and specific taxes) uniform taxation, and tiered tax structures of specific, ad valorem and mixed taxation tend to have price distributions with greater variability. Countries that rely heavily on ad valorem and tiered taxes also tend to have greater price variability around the median. Among mixed taxation systems, countries that rely more heavily on the ad valorem component tend to have greater price variability than countries that rely more heavily on the specific component. In countries with tiered tax systems, cigarette prices are skewed more towards lower prices than are prices under uniform tax systems. 
The analyses presented here demonstrate that more opportunities exist for tax avoidance and brand switching when the tax structure departs from a uniform specific tax. PMID:23792324
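The paired comparisons of mean and variance this study performs can be sketched with two generic statistics, Welch's t (for means under unequal variances) and a simple variance ratio; this is an illustrative reimplementation, not the study's actual analysis code:

```python
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic: difference of sample means scaled by the
    standard error under unequal variances."""
    nx, ny = len(x), len(y)
    vx, vy = variance(x), variance(y)   # sample (n-1) variances
    return (mean(x) - mean(y)) / ((vx / nx + vy / ny) ** 0.5)

def variance_ratio(x, y):
    """F-style ratio of sample variances, larger over smaller, as a
    simple descriptor of which price distribution is more variable."""
    vx, vy = variance(x), variance(y)
    return max(vx, vy) / min(vx, vy)
```

Applied to consumption-weighted price samples from two countries with comparable total excise burdens, a larger variance ratio would correspond to the greater price variability the authors report under ad valorem and tiered structures.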
Gender gaps and gendered action in a first-year physics laboratory
NASA Astrophysics Data System (ADS)
Day, James; Stang, Jared B.; Holmes, N. G.; Kumar, Dhaneesh; Bonn, D. A.
2016-12-01
[This paper is part of the Focused Collection on Gender in Physics.] It is established that male students outperform female students on almost all commonly used physics concept inventories. However, there is significant variation in the factors that contribute to the gap, as well as the direction in which they influence it. It is presently unknown if such a gender gap exists on the relatively new Concise Data Processing Assessment (CDPA) and, therefore, whether gendered actions in the teaching lab might influence—or be influenced by—the gender gap. To begin to get an estimate of the gap, its predictors, and its correlates, we have measured performance on the CDPA at the pretest and post-test level. We have also made observations of how students in mixed-gender partnerships divide their time in the lab. We find a gender gap on the CDPA that persists from pre- to post-test and that is as big as, if not bigger than, similar reported gaps. We also observe compelling differences in how students divide their time in the lab. In mixed-gender pairs, male students tend to monopolize the computer, female and male students tend to share the equipment equally, and female students tend to spend more time on other activities that are not the equipment or computer, such as writing or speaking to peers. We also find no correlation between computer use, when students are presumably working with their data, and performance on the CDPA post-test. In parallel to our analysis, we scrutinize some of the more commonly used approaches to similar data. We argue in favor of more explicitly checking the assumptions associated with the statistical methods that are used and improved reporting and contextualization of effect sizes. Ultimately, we claim no evidence that female students are less capable of learning than their male peers, and we suggest caution when using gain measures to draw conclusions about differences in science classroom performance across gender.
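Where this abstract argues for improved reporting and contextualization of effect sizes, the standard effect size for a score gap between two groups is Cohen's d with a pooled standard deviation. The snippet below is a generic sketch of that formula, not the paper's analysis code:

```python
from statistics import mean, variance

def cohens_d(x, y):
    """Cohen's d: standardized mean difference between two samples,
    using the pooled (n-1 weighted) sample variance."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    return (mean(x) - mean(y)) / pooled_var ** 0.5
```

Reporting d alongside raw score differences lets a gap on one instrument (such as the CDPA) be compared with gaps reported for other concept inventories.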
Pesaranghader, Ahmad; Matwin, Stan; Sokolova, Marina; Beiko, Robert G
2016-05-01
Measures of protein functional similarity are essential tools for function prediction, evaluation of protein-protein interactions (PPIs) and other applications. Several existing methods perform comparisons between proteins based on the semantic similarity of their GO terms; however, these measures are highly sensitive to modifications in the topological structure of GO, tend to be focused on specific analytical tasks and concentrate on the GO terms themselves rather than considering their textual definitions. We introduce simDEF, an efficient method for measuring semantic similarity of GO terms using their GO definitions, which is based on the Gloss Vector measure commonly used in natural language processing. The simDEF approach builds optimized definition vectors for all relevant GO terms, and expresses the similarity of a pair of proteins as the cosine of the angle between their definition vectors. Relative to existing similarity measures, when validated on a yeast reference database, simDEF improves correlation with sequence homology by up to 50%, shows a correlation improvement >4% with gene expression in the biological process hierarchy of GO and increases PPI predictability by >2.5% in F1 score for the molecular function hierarchy. Datasets, results and source code are available at http://kiwi.cs.dal.ca/Software/simDEF. Contact: ahmad.pgh@dal.ca or beiko@cs.dal.ca. Supplementary data are available at Bioinformatics online.
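The final similarity computation simDEF describes, the cosine of the angle between two definition vectors, can be sketched in a few lines; building the optimized Gloss Vectors themselves is the substance of the method and is not reproduced here:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length definition vectors:
    dot product divided by the product of the Euclidean norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Identical vectors score 1.0 and orthogonal (completely unrelated) vectors score 0.0, which is what makes the measure usable directly as a protein-pair similarity.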
Mechanical and Statistical Evidence of Human-Caused Earthquakes - A Global Data Analysis
NASA Astrophysics Data System (ADS)
Klose, C. D.
2012-12-01
The causality of large-scale geoengineering activities and the occurrence of earthquakes with magnitudes of up to M=8 is discussed, and mechanical and statistical evidence is provided. The earthquakes were caused by artificial water reservoir impoundments, underground and open-pit mining, coastal management, hydrocarbon production and fluid injections/extractions. The presented global earthquake catalog has recently been published in the Journal of Seismology and is available to the public at www.cdklose.com. The data show evidence that geomechanical relationships exist with statistical significance between a) seismic moment magnitudes of observed earthquakes, b) anthropogenic mass shifts on the Earth's crust, and c) lateral distances of the earthquake hypocenters to the locations of the mass shifts. Research findings depend on uncertainties, in particular of source parameter estimations of seismic events before instrumental recording. First analyses, however, indicate that small- to medium-size earthquakes (
Fatal child abuse in Japan: does a trend exist toward tougher sentencing?
Nambu, Saori; Nasu, Ayako; Nishimura, Shigeru; Nishimura, Akiyoshi; Fujiwara, Satoshi
2011-07-01
It has been pointed out in Japan that criminal punishment in domestic homicide cases, especially in fatal child abuse cases, tends to be more lenient than in public homicide cases that occur outside the home. In recent news accounts of fatal child abuse cases, however, the media has reported that court-imposed sentences have tended to be stricter every year. Using the online databases of three major Japanese newspapers, we collected articles about fatal child abuse cases that had been published from January 2008 to December 2009. We analyzed these articles to determine whether a tendency towards tougher penalties, as put forward by the media, actually exists at the present time in the criminal system in Japan. We found 24 cases, of which 20 involved only one offender and 4 involved two offenders. These 28 offenders comprised nine biological fathers, 11 biological mothers, and eight other male relatives of the child victims. We found that the sentences handed down by the court clearly tended to be more lenient for female offenders. A new system of criminal jurisprudence, the so-called saiban-in system wherein citizens serve as "lay judges" in criminal trials involving serious crimes, was implemented in Japan at the start of 2009. Each district court has gradually adopted this new system after a preparation period of approximately five years starting in 2004. Many figures in the Japanese media predicted that the gap between social expectations and court sentences in domestic homicide cases would be closed during the present transitional period of the Japanese criminal system. However, the present study found no significant difference in sentencing in fatal child abuse cases before and after the preparation period of the saiban-in system.
Radar studies of the atmosphere using spatial and frequency diversity
NASA Astrophysics Data System (ADS)
Yu, Tian-You
This work provides results from a thorough investigation of atmospheric radar imaging including theory, numerical simulations, observational verification, and applications. The theory is generalized to include the existing imaging techniques of coherent radar imaging (CRI) and range imaging (RIM), which are shown to be special cases of three-dimensional imaging (3D Imaging). Mathematically, the problem of atmospheric radar imaging is posed as an inverse problem. In this study, the Fourier, Capon, and maximum entropy (MaxEnt) methods are proposed to solve the inverse problem. After the introduction of the theory, numerical simulations are used to test, validate, and exercise these techniques. Statistical comparisons of the three methods of atmospheric radar imaging are presented for various signal-to-noise ratio (SNR), receiver configuration, and frequency sampling. The MaxEnt method is shown to generally possess the best performance for low SNR. The performance of the Capon method approaches the performance of the MaxEnt method for high SNR. In limited cases, the Capon method actually outperforms the MaxEnt method. The Fourier method generally tends to distort the model structure due to its limited resolution. Experimental justification of CRI and RIM is accomplished using the Middle and Upper (MU) Atmosphere Radar in Japan and the SOUnding SYstem (SOUSY) in Germany, respectively. A special application of CRI to the observation of polar mesosphere summer echoes (PMSE) is used to show direct evidence of wave steepening and possibly explain gravity wave variations associated with PMSE.
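The Fourier and Capon estimators compared in this work differ in how they use the covariance matrix R of the received signals for a given steering vector e: the conventional (Fourier) beamformer projects R onto e directly, while Capon adaptively minimizes interference via R's inverse. A minimal sketch of both power estimates (illustrative only, not the dissertation's implementation):

```python
import numpy as np

def fourier_power(R, e):
    """Conventional (Fourier) beamformer power for steering vector e:
    e^H R e / N^2. Resolution is limited by the array aperture."""
    n = len(e)
    return np.real(e.conj() @ R @ e) / n**2

def capon_power(R, e):
    """Capon (minimum-variance) power: 1 / (e^H R^-1 e). Adaptively
    suppresses signals from other directions, improving resolution."""
    Rinv = np.linalg.inv(R)
    return 1.0 / np.real(e.conj() @ Rinv @ e)
```

Scanning e over candidate angles (or range weights, for RIM) and plotting each power estimate gives the brightness distribution; the abstract's observation that Capon approaches MaxEnt at high SNR reflects this adaptive weighting.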
Two-Year-Olds Exclude Novel Objects as Potential Referents of Novel Words Based on Pragmatics
ERIC Educational Resources Information Center
Grassmann, Susanne; Stracke, Maren; Tomasello, Michael
2009-01-01
Many studies have established that children tend to exclude objects for which they already have a name as potential referents of novel words. In the current study we asked whether this exclusion can be triggered by social-pragmatic context alone without pre-existing words as blockers. Two-year-old children watched an adult looking at a novel…
ERIC Educational Resources Information Center
Burnett, Rebeca Inali
2017-01-01
First-generation college students (FGCS) are a unique group and face their own set of unique challenges when they start college. For example, they may need support in basic campus navigational skills or help communicating with professors. Existing research on FGCS tends to examine topics such as socioeconomic backgrounds and graduation rates.…
ERIC Educational Resources Information Center
Heflinger, Craig Anne; Christens, Brian
2006-01-01
While psychology has tended to focus on urban issues in research and practice, rural areas have undergone a series of changes in recent years that have increased the need for behavioral health services. A variety of social and economic factors has contributed both to the increasing needs and to the inability of the existing services to meet them.…
ERIC Educational Resources Information Center
Corona, Guadalupe Rodriguez
2010-01-01
There is limited research that identifies the university, familial and community factors that support the persistence of Latinas in higher education from the first to second year. The research that does exist has tended to focus on how institutional programs and activities have failed to work for first-generation students. Therefore, there is a…
ERIC Educational Resources Information Center
Hardesty, Jacob; McWilliams, Jenna; Plucker, Jonathan A.
2014-01-01
Every country--and even every community--has populations of students who severely underperform relative to other groups and to their own potential. These performance differences are generally called achievement gaps, and they tend to focus on gaps at basic levels of academic proficiency. But such gaps also exist among the highest levels of…
Coaches Mentoring Coaches: Follow-Up on the Denver Conference on Forensics Education.
ERIC Educational Resources Information Center
Larson-Casselton, Cindy
The topic of mentoring is one which has drawn renewed interest in the forensic community. Mentoring does exist in forensic coaching. A mentor can be seen both as one who makes a map for the protege and as a guide more interested in developing the traveler than fixing the road. Experienced coaches tend to see themselves and their mentors as trusted…
Adaptive Acquisitions: Maintaining Military Dominance By Managing Innovation
2014-04-01
for the relatively unknown disruptive technologies , even for the technical experts. For example, in the early years of rocket research Jerome Hunsaker...improve along existing performance metrics.19 Since disruptive technologies generally underperform along these old value metrics, customers tend to...since the actual value of the innovation is difficult, if not impossible, to determine a priori. In fact, most of the claimed potential disruptive
Chemical warfare agents. Classes and targets.
Schwenk, Michael
2018-09-01
Synthetic toxic chemicals (toxicants) and biological poisons (toxins) were developed as chemical warfare agents in the last century. At the time of their initial consideration as chemical weapons, only restricted knowledge existed about their mechanisms of action. There exist two different types of acute toxic action: nonspecific cytotoxic mechanisms with multiple chemo-biological interactions versus specific mechanisms that tend to have just a single or a few target biomolecules. TRPV1- and TRPA-receptors are often involved as chemosensors that induce neurogenic inflammation. The present work briefly surveys classes and toxicologically relevant features of chemical warfare agents and describes mechanisms of toxic action.
The nature of expertise in fingerprint examiners.
Busey, Thomas A; Parada, Francisco J
2010-04-01
Latent print examinations involve a complex set of psychological and cognitive processes. This article summarizes existing work that has addressed how training and experience creates changes in latent print examiners. Experience appears to improve overall accuracy, increase visual working memory, and lead to configural processing of upright fingerprints. Experts also demonstrate a narrower visual filter and, as a group, tend to show greater consistency when viewing ink prints. These findings address recent criticisms of latent print evidence, but many open questions still exist. Cognitive scientists are well positioned to conduct studies that will improve the training and practices of latent print examiners, and suggestions for becoming involved in fingerprint research are provided.
Trapped particles at a magnetic discontinuity
NASA Technical Reports Server (NTRS)
Stern, D. P.
1972-01-01
At a tangential discontinuity between two constant magnetic fields, a layer of trapped particles can exist; this work examines the conditions under which the current carried by such particles tends to maintain the discontinuity. Three cases are examined. If the discontinuity separates aligned vacuum fields, the only requirement is that they be antiparallel. With arbitrary relative orientations, the fields must have equal intensities on both sides. Finally, with a guiding center plasma on both sides, the condition reduces to a relation which is also derivable from hydromagnetic theory. Arguments are presented for the occurrence of such trapped modes in the magnetopause and for the non-existence of specular particle reflection.
Peng, Wei; Wang, Jianxin; Cheng, Yingjiao; Lu, Yu; Wu, Fangxiang; Pan, Yi
2015-01-01
Prediction of essential proteins, which are crucial to an organism's survival, is important for disease analysis and drug design, as well as for the understanding of cellular life. The majority of prediction methods infer the possibility of proteins to be essential by using the network topology. However, these methods are limited by the completeness of available protein-protein interaction (PPI) data and depend on the network accuracy. To overcome these limitations, some computational methods have been proposed; however, few of them take protein domains into consideration. In this work, we first analyze the correlation between the essentiality of proteins and their domain features based on data from 13 species. We find that proteins containing more protein domain types that rarely occur in other proteins tend to be essential. Accordingly, we propose a new prediction method, named UDoNC, which combines the domain features of proteins with their topological properties in the PPI network. In UDoNC, the essentiality of proteins is decided by the number and the frequency of their protein domain types, as well as by the essentiality of their adjacent edges measured by the edge clustering coefficient. The experimental results on S. cerevisiae data show that UDoNC outperforms other existing methods in terms of area under the curve (AUC). Additionally, UDoNC can also perform well in predicting essential proteins on data of E. coli.
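The edge clustering coefficient UDoNC uses to weight adjacent edges is commonly defined as the number of triangles an edge participates in, divided by the maximum number it could participate in. A small sketch under that common definition (the paper's exact normalization may differ):

```python
def edge_clustering_coefficient(adj, u, v):
    """ECC of edge (u, v) in an undirected graph given as a dict of
    neighbor sets: triangles through the edge (shared neighbors of u
    and v) over the maximum possible, min(deg(u)-1, deg(v)-1)."""
    common = len(adj[u] & adj[v])               # triangles through (u, v)
    denom = min(len(adj[u]) - 1, len(adj[v]) - 1)
    return common / denom if denom > 0 else 0.0
```

In a triangle every edge scores 1.0, while a bridge between two otherwise unconnected nodes scores 0.0, so the coefficient rewards edges embedded in dense local clusters.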
NDRC: A Disease-Causing Genes Prioritized Method Based on Network Diffusion and Rank Concordance.
Fang, Minghong; Hu, Xiaohua; Wang, Yan; Zhao, Junmin; Shen, Xianjun; He, Tingting
2015-07-01
Prioritization of disease-causing genes is very important for understanding disease mechanisms and for biomedical applications such as drug design. Previous studies have shown that promising candidate genes are mostly ranked according to their relatedness to known disease genes or closely related disease genes. Therefore, a dangling gene (isolated gene) with no edges in the network cannot be effectively prioritized. These approaches tend to prioritize genes that are highly connected in the PPI network while performing poorly when applied to loosely connected disease genes. To address these problems, we propose a new disease-causing gene prioritization method based on network diffusion and rank concordance (NDRC). The method is evaluated by leave-one-out cross validation on 1931 diseases in which at least one gene is known to be involved, and it is able to rank the true causal gene first in 849 of all 2542 cases. The experimental results suggest that NDRC significantly outperforms other existing methods such as RWR, VAVIEN, DADA and PRINCE in identifying loosely connected disease genes and successfully puts dangling genes forward as potential candidate disease genes. Furthermore, we apply the NDRC method to study three representative diseases: Meckel syndrome 1, Protein C deficiency and Peroxisome biogenesis disorder 1A (Zellweger). Our study has also found that certain complex disease-causing genes can be divided into several modules that are closely associated with different disease phenotypes.
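Network diffusion of the kind NDRC builds on is often implemented as a random walk with restart (RWR), one of the baselines named above. The sketch below is a generic RWR, not NDRC itself, and assumes every node has at least one edge; a dangling gene would make a column sum zero, which is exactly the failure mode NDRC sets out to handle:

```python
import numpy as np

def random_walk_with_restart(W, seeds, restart=0.7, tol=1e-10):
    """Diffuse probability mass from seed nodes over a graph.

    W: symmetric adjacency matrix with no isolated nodes.
    seeds: indices of known disease genes.
    Iterates p <- (1-r) * W_norm @ p + r * p0 to convergence.
    """
    W = W / W.sum(axis=0, keepdims=True)   # column-normalize (stochastic)
    p0 = np.zeros(W.shape[0])
    p0[seeds] = 1.0 / len(seeds)           # restart distribution
    p = p0.copy()
    while True:
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
```

Candidate genes are then ranked by their steady-state probability; genes loosely connected to the seeds receive little mass, which motivates combining diffusion with rank concordance as NDRC does.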
Held, Elizabeth; Cape, Joshua; Tintle, Nathan
2016-01-01
Machine learning methods continue to show promise in the analysis of data from genetic association studies because of the high number of variables relative to the number of observations. However, few best practices exist for the application of these methods. We extend a recently proposed supervised machine learning approach for predicting disease risk by genotypes to be able to incorporate gene expression data and rare variants. We then apply 2 different versions of the approach (radial and linear support vector machines) to simulated data from Genetic Analysis Workshop 19 and compare performance to logistic regression. Method performance was not radically different across the 3 methods, although the linear support vector machine tended to show small gains in predictive ability relative to a radial support vector machine and logistic regression. Importantly, as the number of genes in the models was increased, even when those genes contained causal rare variants, model predictive ability showed a statistically significant decrease in performance for both the radial support vector machine and logistic regression. The linear support vector machine showed more robust performance to the inclusion of additional genes. Further work is needed to evaluate machine learning approaches on larger samples and to evaluate the relative improvement in model prediction from the incorporation of gene expression data.
Score As You Lift (SAYL): A Statistical Relational Learning Approach to Uplift Modeling.
Nassif, Houssam; Kuusisto, Finn; Burnside, Elizabeth S; Page, David; Shavlik, Jude; Costa, Vítor Santos
We introduce Score As You Lift (SAYL), a novel Statistical Relational Learning (SRL) algorithm, and apply it to an important task in the diagnosis of breast cancer. SAYL combines SRL with the marketing concept of uplift modeling, uses the area under the uplift curve to direct clause construction and final theory evaluation, integrates rule learning and probability assignment, and conditions the addition of each new theory rule to existing ones. Breast cancer, the most common type of cancer among women, is categorized into two subtypes: an earlier in situ stage where cancer cells are still confined, and a subsequent invasive stage. Currently older women with in situ cancer are treated to prevent cancer progression, regardless of the fact that treatment may generate undesirable side-effects, and the woman may die of other causes. Younger women tend to have more aggressive cancers, while older women tend to have more indolent tumors. Therefore older women whose in situ tumors show significant dissimilarity with in situ cancer in younger women are less likely to progress, and can thus be considered for watchful waiting. Motivated by this important problem, this work makes two main contributions. First, we present the first multi-relational uplift modeling system, and introduce, implement and evaluate a novel method to guide search in an SRL framework. Second, we compare our algorithm to previous approaches, and demonstrate that the system can indeed obtain differential rules of interest to an expert on real data, while significantly improving the data uplift.
As tall as my peers - similarity in body height between migrants and hosts.
Bogin, Barry; Hermanussen, Michael; Scheffler, Christiane
2018-01-12
Background: We define migrants as people who move from their place of birth to a new place of residence. Migration is usually driven by "Push-Pull" factors, for example to escape poor living conditions or to find more prosperous socio-economic conditions. Migrant children tend to assimilate quickly, and soon perceive themselves as peers within their new social networks. Differences exist between the growth of first-generation and second-generation migrants. Methods: We review body heights and height distributions of historic and modern migrant populations to test two hypotheses: 1) that migrant and adopted children moving from lower-status localities to higher-status localities adjust their height growth toward the mean of the dominant recipient social network, and 2) that socially dominant colonial and military migrants display growth that significantly surpasses the median height of both the conquered population and the population of origin. Our analytical framework also considered social networks. Recent publications indicate that spatial connectedness (community effects) and social competitiveness can affect human growth. Results: Migrant children and adolescents of lower social status rapidly adjust in height towards the average height of their hosts, but tend to mature earlier and are prone to overweight. The mean height of colonial/military migrants does surpass that of both the conquered population and the population of origin. Conclusion: Observations on human social networks, strategic growth adjustments in non-human animals, and competitive growth processes strengthen the concept that social connectedness is involved in the regulation of human migrant growth.
Bergerot, Benjamin; Hugueny, Bernard; Belliard, Jérôme
2013-01-01
Background: Predicting which species are likely to go extinct is perhaps one of the most fundamental yet challenging tasks for conservation biologists. This is particularly relevant for freshwater ecosystems, which tend to have the highest proportion of species threatened with extinction. According to metapopulation theories, local extinction and colonization rates of freshwater subpopulations can depend on the degree of regional occupancy, notably due to rescue effects. However, the relationships between extinction, colonization, regional occupancy and the spatial scales at which they operate are currently poorly known. Methods and Findings: We used a large dataset of annual freshwater fish censuses in 325 stream reaches to analyse how annual extinction/colonization rates of subpopulations depend on the regional occupancy of species. For this purpose, we modelled the regional occupancy of 34 fish species over the whole French river network and tested how extinction/colonization rates could be predicted by regional occupancy described at five nested spatial scales. Results show that extinction and colonization rates depend on regional occupancy, revealing the existence of a rescue effect. We also find that these effects are scale-dependent and that their absolute contribution to colonization and extinction tends to decrease from the river-section scale to larger basin scales. Conclusions: In terms of management, we show that quantifying regional occupancy allows the evaluation of local species extinction/colonization dynamics, and that reducing local extinction risks for freshwater fish species requires the preservation of suitable habitats at both local and drainage-basin scales. PMID:24367636
Constitutive model development for flows of granular materials
NASA Astrophysics Data System (ADS)
Chialvo, Sebastian
Granular flows are ubiquitous in both natural and industrial processes. When composed of dry, noncohesive particles, they manifest three different flow regimes---commonly referred to as the quasistatic, inertial, and intermediate regimes---each of which exhibits its own dependences on solids volume fraction, shear rate, and particle-level properties. The differences among these regimes can be attributed to microscale phenomena, with quasistatic flows being dominated by enduring, frictional contacts between grains, inertial flows by grain collisions, and intermediate flows by a combination of the two. Existing constitutive models for the solids-phase stress tend to focus on one or two regimes at a time, with a limited degree of success; the same is true of models for wall-boundary conditions for granular flows. Moreover, these models tend not to be based on detailed particle-level flow data, either from experiment or simulation. Clearly, a comprehensive modeling framework is lacking. The work in this thesis aims to address these issues by proposing continuum models constructed on the basis of discrete element method (DEM) simulations of granular shear flows. Specifically, we propose (a) a constitutive stress model that bridges the three dense flow regimes, (b) a modified kinetic-theory model that covers both the dense and dilute ends of the inertial regime, and (c) a boundary-condition model for dense, wall-bounded flows. These models facilitate the modeling of a wide range of flow systems of practical interest and provide ideas for further model development and refinement.
Ricker, Timothy J.; Cowan, Nelson
2014-01-01
Understanding forgetting from working memory, the memory used in ongoing cognitive processing, is critical to understanding human cognition. In the last decade a number of conflicting findings have been reported regarding the role of time in forgetting from working memory. This has led to a debate concerning whether longer retention intervals necessarily result in more forgetting. An obstacle to directly comparing conflicting reports is a divergence in methodology across studies. Studies which find no forgetting as a function of retention-interval duration tend to use sequential presentation of memory items, while studies which find forgetting as a function of retention-interval duration tend to use simultaneous presentation of memory items. Here, we manipulate the duration of retention and the presentation method of memory items, presenting items either sequentially or simultaneously. We find that these differing presentation methods can lead to different rates of forgetting because they tend to differ in the time available for consolidation into working memory. The experiments detailed here show that equating the time available for working memory consolidation equates the rates of forgetting across presentation methods. We discuss the meaning of this finding in the interpretation of previous forgetting studies and in the construction of working memory models. PMID:24059859
2013-01-01
The ability to interact with different partners is one of the most important features in proteins. Proteins that bind a large number of partners (hubs) have often been associated with intrinsic disorder. However, many examples exist of hubs with an ordered structure, and evidence of a general mechanism promoting promiscuity in ordered proteins is still elusive. An intriguing hypothesis is that promiscuous binding sites have specific dynamical properties, distinct from the rest of the interface and pre-existing in the protein's isolated state. Here, we present the first comprehensive study of the intrinsic dynamics of promiscuous residues in a large protein data set. Different computational methods, from coarse-grained elastic models to geometry-based sampling methods and to full-atom Molecular Dynamics simulations, were used to generate conformational ensembles for the isolated proteins. The flexibility and dynamic correlations of interface residues with a different degree of binding promiscuity were calculated and compared considering side chain and backbone motions, the latter both on a local and on a global scale. The study revealed that (a) promiscuous residues tend to be more flexible than nonpromiscuous ones, (b) this additional flexibility has a higher degree of organization, and (c) evolutionary conservation and binding promiscuity have opposite effects on intrinsic dynamics. Findings on simulated ensembles were also validated on ensembles of experimental structures extracted from the Protein Data Bank (PDB). Additionally, the low occurrence of single nucleotide polymorphisms observed for promiscuous residues indicated a tendency to preserve binding diversity at these positions. A case study on two ubiquitin-like proteins exemplifies how binding promiscuity in evolutionarily related proteins can be modulated by fine-tuning of the interface dynamics.
The interplay between promiscuity and flexibility highlighted here can inspire new directions in protein–protein interaction prediction and design methods. PMID:24250278
A study of the utilization of ERTS-1 data from the Wabash River Basin
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Nine projects are defined: five ERTS data applications experiments and four supporting technology tasks. The most significant applications results were achieved in the soil association mapping, earth surface feature identification, and urban land use mapping efforts. Four soil association boundaries were accurately delineated from ERTS-1 imagery. A data bank has been developed to test surface feature classifications obtained from ERTS-1 data. Preliminary forest cover classifications indicated that the estimated acreage tended to exceed the actual acreage by 25%. Urban land use analysis of ERTS-1 data indicated that highly accurate classification could be obtained for many urban categories. The wooded residential category tended to be misclassified as woods or agricultural land. Further statistical analysis revealed that these classes could be separated using sample variance.
Malaria Diagnosis Using a Mobile Phone Polarized Microscope
NASA Astrophysics Data System (ADS)
Pirnstill, Casey W.; Coté, Gerard L.
2015-08-01
Malaria remains a major global health burden, and new methods for low-cost, high-sensitivity diagnosis are essential, particularly in low-resource, remote areas around the world. In this paper, a cost-effective, optical cell-phone based transmission polarized light microscope system is presented for imaging the malaria pigment known as hemozoin. It can be difficult to distinguish the pigment from background and other artifacts, even for skilled microscopy technicians. The pigment is much easier to observe using polarized light microscopy. However, polarized light microscopy lacks widespread adoption because the existing commercial devices have complicated designs, require sophisticated maintenance, tend to be bulky, can be expensive, and would require re-training of existing microscopy technicians. To this end, a high-fidelity, high-optical-resolution cell-phone based polarized light microscopy system is presented which is comparable to larger bench-top polarized microscopy systems but at much lower cost and complexity. The detection of malaria in fixed and stained blood smears is demonstrated using both a conventional polarized microscope and our cell-phone based system. The cell-phone based polarimetric microscopy design shows the potential to offer both the resolution and the specificity to detect malaria in a low-cost, easy-to-use, modular platform.
Hallett, Allen M.; Parker, Nathan; Kudia, Ousswa; Kao, Dennis; Modelska, Maria; Rifai, Hanadi; O’Connor, Daniel P.
2015-01-01
Objectives. We developed the policy indicator checklist (PIC) to identify and measure policies for calorie-dense foods and sugar-sweetened beverages to determine how policies are clustered across multiple settings. Methods. In 2012 and 2013 we used existing literature, policy documents, government recommendations, and instruments to identify key policies. We then developed the PIC to examine the policy environments across 3 settings (communities, schools, and early care and education centers) in 8 communities participating in the Childhood Obesity Research Demonstration Project. Results. Principal components analysis revealed 5 components related to calorie-dense food policies and 4 components related to sugar-sweetened beverage policies. Communities with higher youth and racial/ethnic minority populations tended to have fewer policies and weaker policy environments concerning calorie-dense foods and healthy foods and beverages. Conclusions. The PIC was a helpful tool to identify policies that promote healthy food environments across multiple settings and to measure and compare the overall policy environments across communities. There is need for improved coordination across settings, particularly in areas with greater concentrations of youths and racial/ethnic minority populations. Policies to support healthy eating are not equally distributed across communities, and disparities continue to exist in nutrition policies. PMID:25790397
A framework for modeling scenario-based barrier island storm impacts
Mickey, Rangley; Long, Joseph W.; Dalyander, P. Soupy; Plant, Nathaniel G.; Thompson, David M.
2018-01-01
Methods for investigating the vulnerability of existing or proposed coastal features to storm impacts often rely on simplified parametric models or one-dimensional process-based modeling studies that focus on changes to a profile across a dune or barrier island. These simple studies tend to neglect the impacts on curvilinear or alongshore-varying island planforms, the influence of non-uniform nearshore hydrodynamics and sediment transport, the irregular morphology of the offshore bathymetry, and impacts from low-magnitude wave events (e.g., cold fronts). Presented here is a framework for simulating regionally specific, low- and high-magnitude scenario-based storm impacts to assess the alongshore-variable vulnerabilities of a coastal feature. Storm scenarios based on historic hydrodynamic conditions were derived and simulated using the process-based morphologic evolution model XBeach. Model results show that the scenarios predicted similar patterns of erosion and overwash when compared to observed qualitative morphologic changes from recent storm events that were not included in the dataset used to build the scenarios. The framework model simulations were capable of predicting specific areas of vulnerability in the existing feature, and the results illustrate how this storm vulnerability simulation framework could be used as a tool to help inform the decision-making process for scientists, engineers, and stakeholders involved in coastal zone management or restoration projects.
O'Connor, Amanda; Blewitt, Claire; Nolan, Andrea; Skouteris, Helen
2018-06-01
Supporting children's social and emotional learning benefits all elements of children's development and has been associated with positive mental health and wellbeing and with the development of values and life skills. However, literature on the creation of interventions designed for use within early childhood education and care settings to support children's social and emotional skills and learning is lacking. Intervention Mapping (IM) is a systematic intervention development framework that utilises participatory co-design methods, multiple theoretical approaches and existing literature to enable effective decision-making during the development process. Early childhood pedagogical programs are also shaped by these principles; however, educators tend to draw on implicit knowledge when working with families. IM offers this sector the opportunity to formally incorporate theoretical, evidence-based research into the development of early childhood education and care social and emotional interventions. Emerging literature indicates IM is useful for designing health and wellbeing interventions for children within early childhood education and care settings. Considering the similar underlying principles of IM, its existing applications within early childhood education and care, and the development of interventions beyond health behaviour change, it is recommended that IM be utilised to design early childhood education and care interventions focused on supporting children's social and emotional development. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Koch, Bernhard; Farquhar, Sarah
2015-01-01
This article proposes that there exist "glass doors" impeding men from entering and participating in ECEC work. Across developed countries, men's participation as carers and teachers in early childhood education and care (ECEC) services tends to be viewed as highly desirable and much has been written about the importance of men in ECEC.…
Cyber-Forensic Research Experimentation and Test Environment (CREATE)
2002-10-01
group. The existence of this forum would tend to support the ISO 17025 provision mentioned earlier. It would also support...criteria encompass the requirements of ISO/IEC Guide 25, and the relevant requirements of ISO 9002 (ANSI/ASQC Q92-1987) as suppliers of calibration or...there are no current standards, they have decided to abide by the International Organization for Standardization (ISO) 17025, which has a provision for
ERIC Educational Resources Information Center
Khalfaoui, Mouez
2011-01-01
The common understanding of Islam tends to consider religious conversion a matter of individual, rational belief, consisting first and foremost of attesting to the oneness of God ("shahada"). In this paper I argue that divergences exist among schools of Islamic Law concerning the modes and types of conversion. Contrary to…
ERIC Educational Resources Information Center
McCulloch, Sharon
2013-01-01
Existing studies of source use in academic student writing tend to (i) focus more on the writing than on the reading end of the reading-to-write continuum and (ii) involve the use of insufficiently "naturalistic" writing tasks. Thus, in order to explore the potential of an alternative approach, this paper describes an exploratory case study…
The failure of formal rights and equality in the clinic: a critique of bioethics.
Atkins, Chloe G K
2005-01-01
For communities which espouse egalitarian principles, the hierarchical nature of care-giving relationships poses an extraordinary challenge. Patients' accounts of their illnesses and of their medical care capture the latent tension which exists between notional, political equality and the need for dependency on care from others. I believe that the power imbalance in doctor-patient relationships has broad implications for liberal democracies. Professional and care-giving relationships almost always involve an imbalance of knowledge and expertise which no template of egalitarian moralism can suppress. When we seek help or guidance from authority figures, we are at a disadvantage politically even though we may be equal citizens theoretically and legally. Hierarchic relationships persist within democracies. Moreover, they tend to exist within a realm of privacy which is only partially visible from the social realm. In the end, traditional notions of liberal autonomy and egalitarianism do not properly describe or monitor these interactions. Liberal rhetoric (i.e., terms such as equality, rights, consent, etc.) pervades much of the bioethical literature and its interventions, but this very language tends to mask the persistence of structural hierarchies in the clinic. The doctor-patient relationship forces democratic communities to confront the problem of continuing hierarchic power relations and challenges liberalism to revise its understanding of individual autonomies.
The Effects of Framing, Reflection, Probability, and Payoff on Risk Preference in Choice Tasks.
Kühberger; Schulte-Mecklenbeck; Perner
1999-06-01
A meta-analysis of Asian-disease-like studies is presented to identify the factors which determine risk preference. First, the confoundings among probability levels, payoffs, and framing conditions are clarified in a task analysis. Then, the roles of framing, reflection, probability, and type and size of payoff are evaluated in a meta-analysis. It is shown that bidirectional framing effects exist for gains and for losses. Presenting outcomes as gains tends to induce risk aversion, while presenting outcomes as losses tends to induce risk seeking. Risk preference is also shown to depend on the size of the payoffs, on the probability levels, and on the type of good at stake (money/property vs. human lives). In general, higher payoffs lead to increasing risk aversion. Higher probabilities lead to increasing risk aversion for gains and to increasing risk seeking for losses. These findings are confirmed by a subsequent empirical test. Shortcomings of existing formal theories, such as prospect theory, cumulative prospect theory, venture theory, and Markowitz's utility theory, are identified. It is shown that it is not probabilities or payoffs, but the framing condition, which explains most of the variance. These findings are interpreted as showing that no linear combination of formally relevant predictors is sufficient to capture the essence of the framing phenomenon. Copyright 1999 Academic Press.
Existing Whole-House Solutions Case Study: Retrofitting a 1960s Split-Level Cold-Climate Home
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puttagunta, S.
2015-08-01
National programs such as Home Performance with ENERGY STAR® and numerous other utility air-sealing programs have made homeowners aware of the benefits of energy efficiency retrofits. Yet these programs tend to focus on the low-hanging fruit: air-sealing the thermal envelope and ductwork where accessible, switching to efficient lighting, and installing low-flow fixtures. At the other end of the spectrum, deep energy retrofit programs are also being encouraged by various utilities across the country. While deep energy retrofits typically seek 50% energy savings, they are often quite costly and most applicable to gut-rehab projects. A significant potential for lowering energy usage in existing homes lies between the low-hanging-fruit and deep energy retrofit approaches: retrofits that save approximately 30% in energy over the existing conditions.
NASA Astrophysics Data System (ADS)
Bie, Qunyi; Cui, Haibo; Wang, Qiru; Yao, Zheng-An
2017-10-01
The Cauchy problem for the compressible flow of nematic liquid crystals in the framework of critical spaces is considered. We first establish the existence and uniqueness of global solutions provided that the initial data are close to some equilibrium states. This result improves the work by Hu and Wu (SIAM J Math Anal 45(5):2678-2699, 2013) by relaxing the regularity requirement on the initial data in terms of the director field. Based on the global existence, we then consider the incompressible limit problem for ill-prepared initial data. We prove that as the Mach number tends to zero, the global solution to the compressible flow of liquid crystals converges to the solution of the corresponding incompressible model in suitable function spaces. Moreover, accurate convergence rates are obtained.
Why are U.S. nuclear weapon modernization efforts controversial?
NASA Astrophysics Data System (ADS)
Acton, James
2016-03-01
U.S. nuclear weapon modernization programs are focused on extending the lives of existing warheads and developing new delivery vehicles to replace ageing bombers, intercontinental ballistic missiles, and ballistic missile submarines. These efforts are contested and controversial. Some critics argue that they are largely unnecessary, financially wasteful, and potentially destabilizing. Other critics posit that they do not go far enough and that nuclear weapons with new military capabilities are required. At its core, this debate centers on three strategic questions. First, what roles should nuclear weapons be assigned? Second, what military capabilities do nuclear weapons need to fulfill these roles? Third, how severe are the unintended escalation risks associated with particular systems? Proponents of scaled-down modernization efforts generally argue for reducing the role of nuclear weapons and also that, even under existing policy, new military capabilities are not required. They also tend to stress the escalation risks of new--and even some existing--capabilities. Proponents of enhanced modernization efforts tend to advocate a more expansive role for nuclear weapons in national security strategy. They also often argue that nuclear deterrence would be enhanced by lower-yield weapons and/or so-called "bunker busters" able to destroy more deeply buried targets. The debate is further fueled by technical disagreements over many aspects of ongoing and proposed modernization efforts. Some of these disagreements--such as the need for warhead life extension programs and their necessary scope--are essentially impossible to resolve at the unclassified level. By contrast, unclassified analysis can help elucidate--though not answer--other questions, such as the potential value of bunker busters.
Local coding based matching kernel method for image classification.
Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
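The general idea of a coding-based matching kernel can be sketched in a few lines of numpy: soft-assign each local descriptor over a small codebook, pool the resulting codes, and compare images with a linear kernel on the pooled codes, so matching cost grows linearly with the number of features. The codebook size, the soft-assignment weights, and average pooling below are illustrative assumptions, not the paper's exact LCMK formulation.

```python
import numpy as np

def encode(descriptors, codebook, k=3):
    """Soft-assign each descriptor to its k nearest codewords."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = np.zeros_like(d2)
    nearest = np.argsort(d2, axis=1)[:, :k]
    for i, idx in enumerate(nearest):
        w = np.exp(-(d2[i, idx] - d2[i, idx].min()))  # stable softmax-style weights
        codes[i, idx] = w / w.sum()
    return codes

def pooled(descriptors, codebook):
    return encode(descriptors, codebook).mean(axis=0)  # average-pool the codes

def match_kernel(desc_a, desc_b, codebook):
    """Linear kernel on pooled codes: cost is linear in the feature count."""
    return float(pooled(desc_a, codebook) @ pooled(desc_b, codebook))

rng = np.random.default_rng(1)
codebook = rng.normal(size=(16, 8))   # 16 codewords for 8-D local descriptors
img_a = rng.normal(size=(40, 8))      # 40 local descriptors from "image A"
img_b = rng.normal(size=(30, 8))
sim = match_kernel(img_a, img_b, codebook)
```

Because each image is reduced to one pooled code vector, pairwise matching avoids the quadratic feature-to-feature comparisons of explicit match kernels.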
Silver, Matt; Montana, Giovanni
2012-01-01
Where causal SNPs (single nucleotide polymorphisms) tend to accumulate within biological pathways, the incorporation of prior pathways information into a statistical model is expected to increase the power to detect true associations in a genetic association study. Most existing pathways-based methods rely on marginal SNP statistics and do not fully exploit the dependence patterns among SNPs within pathways. We use a sparse regression model, with SNPs grouped into pathways, to identify causal pathways associated with a quantitative trait. Notable features of our “pathways group lasso with adaptive weights” (P-GLAW) algorithm include the incorporation of all pathways in a single regression model, an adaptive pathway weighting procedure that accounts for factors biasing pathway selection, and the use of a bootstrap sampling procedure for the ranking of important pathways. P-GLAW takes account of the presence of overlapping pathways and uses a novel combination of techniques to optimise model estimation, making it fast to run, even on whole genome datasets. In a comparison study with an alternative pathways method based on univariate SNP statistics, our method demonstrates high sensitivity and specificity for the detection of important pathways, showing the greatest relative gains in performance where marginal SNP effect sizes are small. PMID:22499682
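The penalty family behind P-GLAW can be illustrated with a minimal proximal-gradient solver for a weighted group lasso, with each group standing in for a pathway. The data, group weights, regularisation level, and iteration count below are assumptions for illustration; the authors' implementation additionally handles overlapping pathways, adaptive weighting, and bootstrap ranking.

```python
import numpy as np

def group_soft_threshold(v, t):
    """Proximal operator of t * ||v||_2: zeroes or shrinks a whole group."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1.0 - t / norm) * v

def group_lasso(X, y, groups, lam, weights, iters=500):
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L for the (1/2n)||Xb - y||^2 loss
    beta = np.zeros(p)
    for _ in range(iters):
        z = beta - step * X.T @ (X @ beta - y) / n
        for g, w in zip(groups, weights):  # prox applied per pathway group
            beta[g] = group_soft_threshold(z[g], step * lam * w)
    return beta

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 12))             # SNP genotype matrix (synthetic)
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
true = np.zeros(12)
true[:4] = 2.0                             # only the first "pathway" is causal
y = X @ true + 0.1 * rng.normal(size=100)
beta = group_lasso(X, y, groups, lam=0.5, weights=[1.0, 1.0, 1.0])
```

The group-wise prox either keeps or discards a pathway as a unit, which is what lets the penalty exploit dependence among SNPs within a pathway rather than relying on marginal per-SNP statistics.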
Meng, Jun; Shi, Lin; Luan, Yushi
2014-01-01
Background: Confident identification of microRNA-target interactions is significant for studying the function of microRNA (miRNA). Although some computational miRNA target prediction methods have been proposed for plants, the results of various methods tend to be inconsistent and usually lead to many false positives. To address these issues, we developed an integrated model for identifying plant miRNA-target interactions. Results: Three online miRNA target prediction toolkits and machine learning algorithms were integrated to identify and analyze Arabidopsis thaliana miRNA-target interactions. Principal component analysis (PCA) feature extraction and self-training technology were introduced to improve the performance. Results showed that the proposed model outperformed the previously existing methods. The results were validated using degradome-sequencing-supported Arabidopsis thaliana miRNA-target interactions. The proposed model, constructed on Arabidopsis thaliana, was run on Oryza sativa and Vitis vinifera to demonstrate that our model is effective for other plant species. Conclusions: The integrated model of online predictors and a local PCA-SVM classifier gained credible, high-quality miRNA-target interactions. The supervised learning algorithm of the PCA-SVM classifier was employed in plant miRNA target identification for the first time. Its performance can be substantially improved if more experimentally proven training samples are provided. PMID:25051153
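A minimal scikit-learn sketch of the PCA-SVM classifier stage described above (without the online predictors or the self-training loop); the feature vectors and labels are synthetic stand-ins, not the Arabidopsis data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 20))            # candidate miRNA-target feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labelling rule (an assumption)

# PCA reduces the feature space before the RBF-kernel SVM classifies pairs
clf = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X[:200], y[:200])
acc = clf.score(X[200:], y[200:])         # held-out accuracy
```

In a self-training extension, confidently classified unlabeled pairs would be added to the training set and the pipeline refit, which is the mechanism the abstract credits for improved performance.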
Literature-based condition-specific miRNA-mRNA target prediction.
Oh, Minsik; Rhee, Sungmin; Moon, Ji Hwan; Chae, Heejoon; Lee, Sunwon; Kang, Jaewoo; Kim, Sun
2017-01-01
miRNAs are small non-coding RNAs that regulate gene expression by binding to the 3'-UTR of genes. Many recent studies have reported that miRNAs play important biological roles by regulating specific mRNAs or genes. Many sequence-based target prediction algorithms have been developed to predict miRNA targets. However, these methods are not designed for condition-specific target predictions and produce many false positives; thus, expression-based target prediction algorithms have been developed for condition-specific target predictions. A typical strategy for utilizing expression data is to leverage the negative regulatory roles of miRNAs on genes. To control false positives, a stringent cutoff value is typically set, but in this case these methods tend to reject many true target relationships, i.e., produce false negatives. To overcome these limitations, additional information should be utilized. The literature is probably the best resource that we can utilize. Recent literature mining systems compile millions of articles with experiments designed for specific biological questions, and the systems provide a function to search for specific information. To utilize the literature information, we used a literature mining system, BEST, that automatically extracts information from the literature in PubMed and that allows the user to perform searches of the literature with any English words. By integrating omics data analysis methods and BEST, we developed Context-MMIA, a miRNA-mRNA target prediction method that combines expression data analysis results and the literature information extracted based on the user-specified context. In the pathway enrichment analysis using genes included in the top 200 miRNA-targets, Context-MMIA outperformed the four existing target prediction methods that we tested. In another test of whether prediction methods can reproduce experimentally validated target relationships, Context-MMIA outperformed the four existing target prediction methods.
In summary, Context-MMIA allows the user to specify a context of the experimental data to predict miRNA targets, and we believe that Context-MMIA is very useful for predicting condition-specific miRNA targets.
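The abstract does not specify how Context-MMIA weights its two evidence sources; a purely hypothetical sketch of one way to combine an expression-based target score with a literature-derived score is to average normalised ranks (all names here are illustrative, not from the method itself):

```python
# Hypothetical rank-averaging combination of two evidence sources for
# miRNA-target ranking. Lower combined rank = stronger candidate.
# This is NOT the Context-MMIA weighting, which the abstract leaves open.

def combined_rank(expr_scores, lit_scores):
    """Both args: dict mapping gene -> score (higher = stronger evidence).
    Returns genes ordered from strongest to weakest combined evidence."""
    def normalised_ranks(scores):
        ordered = sorted(scores, key=scores.get, reverse=True)
        denom = max(len(ordered) - 1, 1)
        return {g: i / denom for i, g in enumerate(ordered)}  # 0 = best

    re_ = normalised_ranks(expr_scores)
    rl_ = normalised_ranks(lit_scores)
    return sorted(expr_scores, key=lambda g: re_[g] + rl_[g])
```

A gene that ranks highly under both the expression analysis and the literature search rises to the top of the combined list.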
Li, Min; Li, Wenkai; Wu, Fang-Xiang; Pan, Yi; Wang, Jianxin
2018-06-14
Essential proteins are important participants in various life activities and play a vital role in the survival and reproduction of living organisms. Identifying essential proteins from protein-protein interaction (PPI) networks has great significance for the study of human complex diseases, the design of drugs, and the development of bioinformatics and computational science. Studies have shown that highly connected proteins in a PPI network tend to be essential. A series of computational methods have been proposed to identify essential proteins by analyzing the topological structure of PPI networks. However, the high noise in PPI data can degrade the accuracy of essential protein prediction. Moreover, proteins must be located in the appropriate subcellular compartment to perform their functions, and only when two proteins share a subcellular localization can they interact with each other. In this paper, we propose a new network-based essential protein discovery method, named SPP, based on sub-network partition and prioritization that integrates subcellular localization information. The proposed method was tested on two different yeast PPI networks obtained from the DIP and BioGRID databases. The experimental results show that SPP can effectively reduce the effect of false positives in PPI networks and predict essential proteins more accurately than other existing computational methods (DC, BC, CC, SC, EC, IC, and NC). Copyright © 2018 Elsevier Ltd. All rights reserved.
Multi-exposure high dynamic range image synthesis with camera shake correction
NASA Astrophysics Data System (ADS)
Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie
2017-10-01
Machine vision plays an important part in industrial online inspection. Owing to nonuniform illumination conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, when processing the image, for example for crack inspection, the algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is limited. Inevitably, camera shake results in a ghost effect, which blurs the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene; these assumptions limit the application. At present, widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time consuming. In order to rapidly obtain a high quality HDR image without the ghost effect, we propose an efficient Low Dynamic Range (LDR) image capturing approach and a registration method based on ORB (Oriented FAST and Rotated BRIEF) features and histogram equalization, which eliminates the illumination differences between the LDR images. The fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce HDR images without the ghost effect by registering and fusing four multi-exposure images.
Distance learning in academic health education.
Mattheos, N; Schittek, M; Attström, R; Lyon, H C
2001-05-01
Distance learning is an apparent alternative to traditional methods in education of health care professionals. Non-interactive distance learning, interactive courses and virtual learning environments exist as three different generations in distance learning, each with unique methodologies, strengths and potential. Different methodologies have been recommended for distance learning, varying from a didactic approach to a problem-based learning procedure. Accreditation, teamwork and personal contact between the tutors and the students during a course provided by distance learning are recommended as motivating factors in order to enhance the effectiveness of the learning. Numerous assessment methods for distance learning courses have been proposed. However, few studies report adequate tests for the effectiveness of the distance-learning environment. Available information indicates that distance learning may significantly decrease the cost of academic health education at all levels. Furthermore, such courses can provide education to students and professionals not accessible by traditional methods. Distance learning applications still lack the support of a solid theoretical framework and are only evaluated to a limited extent. Cases reported so far tend to present enthusiastic results, while more carefully-controlled studies suggest a cautious attitude towards distance learning. There is a vital need for research evidence to identify the factors of importance and variables involved in distance learning. The effectiveness of distance learning courses, especially in relation to traditional teaching methods, must therefore be further investigated.
NASA Astrophysics Data System (ADS)
Shirasaki, Masato; Takada, Masahiro; Miyatake, Hironao; Takahashi, Ryuichi; Hamana, Takashi; Nishimichi, Takahiro; Murata, Ryoma
2017-09-01
We develop a method to simulate galaxy-galaxy weak lensing by utilizing all-sky, light-cone simulations and their inherent halo catalogues. Using the mock catalogue to study the error covariance matrix of galaxy-galaxy weak lensing, we compare the full covariance with the 'jackknife' (JK) covariance, the method often used in the literature that estimates the covariance from resamples of the data itself. We show that the JK covariance varies over realizations of mock lensing measurements, while the average JK covariance over mocks gives a reasonably accurate estimate of the true covariance up to separations comparable with the size of a JK subregion. The scatter in JK covariances is found to be ∼10 per cent after we subtract the lensing measurement around random points. However, the JK method tends to underestimate the covariance at larger separations, increasingly so for a survey with a higher number density of source galaxies. We apply our method to the Sloan Digital Sky Survey (SDSS) data, and show that the 48 mock SDSS catalogues nicely reproduce the signals and the JK covariance measured from the real data. We then argue that the use of the accurate covariance, compared to the JK covariance, allows us to use the lensing signals at large scales beyond the size of a JK subregion, which contain cleaner cosmological information in the linear regime.
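The delete-one jackknife estimator discussed above can be sketched in a few lines; the "subregion" measurements and bin layout here are illustrative stand-ins, not the paper's lensing pipeline:

```python
# Sketch of a delete-one jackknife covariance estimate for a binned
# signal measured in N_sub sky subregions. Each element of `signals`
# is one subregion's binned measurement (a list of n_bin values).

def jackknife_covariance(signals):
    """Return the (n_bin x n_bin) jackknife covariance matrix."""
    n_sub = len(signals)
    n_bin = len(signals[0])
    # Delete-one means: average over all subregions except the k-th.
    deleted = []
    for k in range(n_sub):
        rest = [s for i, s in enumerate(signals) if i != k]
        deleted.append([sum(col) / (n_sub - 1) for col in zip(*rest)])
    mean = [sum(col) / n_sub for col in zip(*deleted)]
    cov = [[0.0] * n_bin for _ in range(n_bin)]
    prefac = (n_sub - 1) / n_sub  # standard jackknife normalisation
    for d in deleted:
        for i in range(n_bin):
            for j in range(n_bin):
                cov[i][j] += prefac * (d[i] - mean[i]) * (d[j] - mean[j])
    return cov
```

Because the delete-one means only differ by one subregion out of N_sub, this estimator cannot probe correlations on scales larger than a subregion, which is consistent with the underestimation at large separations reported above.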
Leibowitz, Scott F; Spack, Norman P
2011-10-01
Few interdisciplinary treatment programs that tend to the needs of youth with gender nonconforming behaviors, expressions, and identities exist in academic medical centers with formal residency training programs. Despite this, the literature provides evidence that these youth have higher rates of poor psychosocial adjustment and suicide attempts. This article explores the logistical considerations involved in developing a specialized interdisciplinary service for these gender minority youth in accordance with the existing treatment guidelines. Demographic data will be presented and treatment issues will be explored. The impact that a specialized interdisciplinary treatment program has on clinical expansion, research development, education and training, and community outreach initiatives is discussed.
Young Disadvantaged Men as Fathers
Berger, Lawrence M.; Langton, Callie
2010-01-01
This article reviews the existing literature on young disadvantaged fathers’ involvement with children. It first outlines the predominant theoretical perspectives regarding father involvement among resident (married and cohabiting) biological fathers, resident social fathers (unrelated romantic partners of children’s mothers), and nonresident biological fathers. Second, it presents a brief discussion of the ways in which fathers contribute to childrearing. Third, it describes the socioeconomic characteristics of men who enter fatherhood at a young age, highlighting that they tend to be socioeconomically disadvantaged. Fourth, it reviews the empirical research on both antecedents of father involvement and patterns of involvement across father types. Finally, it describes the limitations of existing research and provides suggestions for future research and policy. PMID:21643452
ERIC Educational Resources Information Center
Markle, Gail
2017-01-01
Undergraduate social science research methods courses tend to have higher than average rates of failure and withdrawal. Lack of success in these courses impedes students' progression through their degree programs and negatively impacts institutional retention and graduation rates. Grounded in adult learning theory, this mixed methods study…
Do `negative' temperatures exist?
NASA Astrophysics Data System (ADS)
Lavenda, B. H.
1999-06-01
A modification of the second law is required for a system with a bounded density of states and not the introduction of a `negative' temperature scale. The ascending and descending branches of the entropy versus energy curve describe particle and hole states, having thermal equations of state that are given by the Fermi and logistic distributions, respectively. Conservation of energy requires isentropic states to be isothermal. The effect of adiabatically reversing the field is entirely mechanical because the only difference between the two states is their energies. The laws of large and small numbers, leading to the normal and Poisson approximations, characterize statistically the states of infinite and zero temperatures, respectively. Since the heat capacity also vanishes in the state of maximum disorder, the third law can be generalized in systems with a bounded density of states: the entropy tends to a constant as the temperature tends to either zero or infinity.
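For context, the conventional "negative temperature" argument that this paper re-examines can be stated for the textbook case of N two-level systems; the derivation below is standard material, not taken from the paper:

```latex
% Textbook two-level-system argument (standard material, not from the paper):
% N spins, n excited at energy \epsilon, giving a bounded density of states.
S(E) \;=\; k_B \ln\binom{N}{n}
      \;\approx\; k_B\bigl[\,N\ln N - n\ln n - (N-n)\ln(N-n)\,\bigr],
\qquad E = n\epsilon .

\frac{1}{T} \;=\; \frac{\partial S}{\partial E}
            \;=\; \frac{k_B}{\epsilon}\,\ln\frac{N-n}{n}\,.
% 1/T < 0 for n > N/2, i.e. on the descending branch of S(E) -- the branch
% that the paper instead assigns to hole states obeying a logistic
% distribution, removing the need for a negative temperature scale.
```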
Personality and defensive reactions: fear, trait anxiety, and threat magnification.
Perkins, Adam M; Cooper, Andrew; Abdelall, Maura; Smillie, Luke D; Corr, Philip J
2010-06-01
The revised Reinforcement Sensitivity Theory (rRST) of personality (Gray & McNaughton, 2000) maintains that trait individual differences in the operation of defensive systems relate to facets of human personality, most notably anxiety and fear. We investigated this theory in 2 separate studies (total N=270) using a threat scenario research strategy (Blanchard, Hynd, Minke, Minemoto, & Blanchard, 2001). Consistent with rRST, results showed that individuals with high fear questionnaire scores tended to select defensive responses entailing orientation away from threat (e.g., run away) and that fear-prone individuals also tended to perceive threats as magnified. The extent of this threat magnification mediated the positive association observed between fear and orientation away from threat. Overall, results suggest that interindividual variance in defensive reactions is associated with a variety of existing personality constructs but that further research is required to determine the precise relationship between personality and defensive reactions.
East-West Cultural Differences in Context-sensitivity are Evident in Early Childhood
Imada, Toshie; Carlson, Stephanie M.; Itakura, Shoji
2018-01-01
Accumulating evidence suggests North Americans tend to focus on central objects whereas East Asians tend to pay more attention to contextual information in a visual scene. Although it is generally believed that such culturally divergent attention tendencies develop through socialization, existing evidence largely depends on adult samples. Moreover, no past research has investigated the relation between context-sensitivity and other domains of cognitive development. The present study investigated children in the United States and Japan (N = 175, age 4–9 years) to examine the developmental pattern in context-sensitivity and its relation to executive function. The study found that context-sensitivity increased with age across cultures. Nevertheless, Japanese children showed significantly greater context-sensitivity than American children. Also, context-sensitivity fully mediated the cultural difference in a set-shifting executive function task, which might help explain past findings that East-Asian children outperformed their American counterparts on executive function. PMID:23432830
Farmer, Jane; Currie, Margaret; Kenny, Amanda; Munoz, Sarah-Anne
2015-09-01
This article explores what happened, over the longer term, after a community participation exercise to design future rural service delivery models, and considers perceptions of why more follow-up actions did or did not happen. The study, which took place in 2014, revisits three Scottish communities that engaged in a community participation research method (2008-2010) intended to design rural health services. Interviews were conducted with 22 citizens, healthcare practitioners, managers and policymakers all of whom were involved in, or knew about, the original project. Only one direct sustained service change was found - introduction of a volunteer first responder scheme in one community. Sustained changes in knowledge were found. The Health Authority that part-funded development of the community participation method, through the original project, had not adopted the new method. Community members tended to attribute lack of further impact to low participation and methods insufficiently attuned to the social nuances of very small rural communities. Managers tended to blame insufficient embedding in the healthcare system and issues around power over service change and budgets. In the absence of convincing formal community governance mechanisms for health issues, rural health practitioners tended to act as conduits between citizens and the Health Authority. The study provides new knowledge about what happens after community participation and highlights a need for more exploration. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hsueh, Yu-Li; Rogge, Matthew S.; Shaw, Wei-Tao; Kim, Jaedon; Yamamoto, Shu; Kazovsky, Leonid G.
2005-09-01
A simple and cost-effective upgrade of existing passive optical networks (PONs) is proposed, which realizes service overlay by novel spectral-shaping line codes. A hierarchical coding procedure allows processing simplicity and achieves desired long-term spectral properties. Different code rates are supported, and the spectral shape can be properly tailored to adapt to different systems. The computation can be simplified by quantization of trigonometric functions. DC balance is achieved by passing the dc residual between processing windows. The proposed line codes tend to introduce bit transitions to avoid long consecutive identical bits and facilitate receiver clock recovery. Experiments demonstrate and compare several different optimized line codes. For a specific tolerable interference level, the optimal line code can easily be determined, which maximizes the data throughput. The service overlay using the line-coding technique leaves existing services and field-deployed fibers untouched but fully functional, providing a very flexible and economic way to upgrade existing PONs.
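A minimal sketch of the per-window DC-balancing idea mentioned above follows; the actual spectral-shaping line codes in the paper are more elaborate, and all names and the one-flag-bit-per-window convention here are illustrative assumptions:

```python
# Sketch of per-window DC balancing: each window is sent either as-is or
# inverted, whichever drives the running disparity (the dc residual
# passed between windows) toward zero. One flag bit per window tells
# the receiver whether to re-invert.

def dc_balance(bits, window=8):
    """bits: list of 0/1 values. Returns (encoded_bits, inversion_flags)."""
    encoded, flags, residual = [], [], 0  # residual in +/-1 units
    for start in range(0, len(bits), window):
        win = bits[start:start + window]
        disparity = sum(1 if b else -1 for b in win)
        # Invert the window if that reduces the running residual.
        if abs(residual + disparity) > abs(residual - disparity):
            win = [1 - b for b in win]
            disparity = -disparity
            flags.append(1)
        else:
            flags.append(0)
        residual += disparity
        encoded.extend(win)
    return encoded, flags

def dc_restore(encoded, flags, window=8):
    """Undo dc_balance given the per-window inversion flags."""
    out = []
    for k, start in enumerate(range(0, len(encoded), window)):
        win = encoded[start:start + window]
        out.extend([1 - b for b in win] if flags[k] else win)
    return out
```

Inverting alternate all-ones windows also introduces bit transitions, illustrating (in toy form) how DC balancing and clock-recovery friendliness can come from the same mechanism.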
Trapped Modes in a Three-Layer Fluid
NASA Astrophysics Data System (ADS)
Saha, Sunanda; Bora, Swaroop Nandan
2018-03-01
In this work, trapped mode frequencies are computed for a submerged horizontal circular cylinder with the hydrodynamic set-up involving an infinite depth three-layer incompressible fluid with layer-wise different densities. The impermeable cylinder is fully immersed in either the bottom layer or the upper layer. The effect of surface tension at the surface of separation is neglected. In this set-up, there exist three wave numbers: the lowest one on the free surface and the other two on the internal interfaces. For each wave number, there exist two modes for which trapped waves exist. The existence of these trapped modes is shown by numerical evidence. We investigate the variation of these trapped modes subject to change in the depth of the middle layer as well as the submergence depth. We show numerically that two-layer and single-layer results cannot be recovered in the double and single limiting cases of the density ratios tending to unity. The existence of trapped modes shows that in general, a radiation condition for the waves at infinity is insufficient for the uniqueness of the solution of the scattering problem.
Davis, Lloyd L
2013-11-05
Insensitive explosive compositions were prepared by reacting di-isocyanate and/or poly-isocyanate monomers with an explosive diamine monomer. Prior to a final cure, the compositions are extrudable. The di-isocyanate monomers tend to produce tough, rubbery materials while polyfunctional monomers (i.e. having more than two isocyanate groups) tend to form rigid products. The extrudable form of the composition may be used in a variety of applications including rock fracturing.
Davis, Lloyd L.
2015-07-28
Insensitive explosive compositions were prepared by reacting di-isocyanate and/or poly-isocyanate monomers with an explosive diamine monomer. Prior to a final cure, the compositions are extrudable. The di-isocyanate monomers tend to produce tough, rubbery materials while polyfunctional monomers (i.e. having more than two isocyanate groups) tend to form rigid products. The extrudable form of the composition may be used in a variety of applications including rock fracturing.
Howe, Adina; Chain, Patrick S. G.
2015-07-09
Metagenomic investigations hold great promise for informing the genetics, physiology, and ecology of environmental microorganisms. Current challenges for metagenomic analysis are related to our ability to connect the dots between sequencing reads, their population of origin, and their encoded functions. Assembly-based methods reduce dataset size by extending overlapping reads into larger contiguous sequences (contigs), providing contextual information for genetic sequences that does not rely on existing references. These methods, however, tend to be computationally intensive and are again challenged by sequencing errors as well as by genomic repeats. While numerous tools have been developed based on these methodological concepts, they present confounding choices and training requirements to metagenomic investigators. To help with accessibility to assembly tools, this review also includes an IPython Notebook metagenomic assembly tutorial. The tutorial has instructions for execution on any operating system using Amazon Elastic Compute Cloud and guides users through downloading, assembling, and mapping reads to contigs of a mock microbiome metagenome. Despite its challenges, metagenomic analysis has already revealed novel insights into many environments on Earth. As software, training, and data continue to emerge, metagenomic data access and its discoveries will continue to grow.
NASA Astrophysics Data System (ADS)
Zhou, Ruchao; Si, Shaoxiong; Zhang, Qiyi
2012-02-01
A novel and effective method for the preparation of water-dispersible nano-hydroxyapatite (nHAp) particles is reported. nHAp was prepared in the presence of grape seed polyphenol (GSP) solutions of different concentrations. A chemical precipitation method was adopted to produce pure nHAp and modified nHAp (nHAp-GSP) at 60 °C for 2 h. The chemical nature of the products was examined by Fourier transform infrared spectroscopy (FTIR) and thermal gravimetric analysis (TGA), and the crystal structure and morphology of the particles were confirmed by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The results indicated that spherical nHAp particles with a diameter of 20-50 nm could be synthesized at 60 °C. The zeta potential values of pure nHAp and nHAp-GSP are -0.36 mV and -26.1 mV, respectively. According to the sedimentation time, the colloidal stability of nHAp-GSP in water improved dramatically with increasing GSP content, and the particles tended to exist as dispersed nanoparticles without aggregation. All the results indicated that GSP binds strongly to nHAp and enhances the colloidal stability of nHAp particles.
A comprehensive comparison of network similarities for link prediction and spurious link elimination
NASA Astrophysics Data System (ADS)
Zhang, Peng; Qiu, Dan; Zeng, An; Xiao, Jinghua
2018-06-01
Identifying missing interactions in complex networks, known as link prediction, is realized by estimating the likelihood of the existence of a link between two nodes according to the observed links and the nodes' attributes. Similar approaches have also been employed to identify and remove spurious links in networks, which is crucial for improving the reliability of network data. In network science, the likelihood of two nodes having a connection strongly depends on their structural similarity. The key to addressing these two problems thus becomes how to objectively measure the similarity between nodes in networks. In the literature, numerous network similarity metrics have been proposed, and their accuracy has been discussed independently in previous works. In this paper, we systematically compare the accuracy of 18 similarity metrics in both link prediction and spurious link elimination when the observed networks are very sparse or consist of inaccurate linking information. Interestingly, although some methods have high prediction accuracy, they tend to perform poorly in identifying spurious interactions. We further find that the methods can be classified into several clusters according to their behaviors. This work is useful for guiding future use of these similarity metrics for different purposes.
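Two of the classic structural similarity metrics in this family, common neighbours and the Jaccard index, can be sketched as follows; the adjacency-dict representation and toy ranking helper are illustrative, not the paper's benchmark code:

```python
# Two classic structural similarity metrics for link prediction:
# common neighbours (CN) and the Jaccard index. The graph is given as
# an adjacency dict: node -> set of neighbouring nodes.

def common_neighbours(adj, u, v):
    return len(adj[u] & adj[v])

def jaccard(adj, u, v):
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

def rank_missing_links(adj, score):
    """Rank all non-edges by similarity, most likely link first."""
    nodes = sorted(adj)
    cands = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]
             if v not in adj[u]]
    return sorted(cands, key=lambda e: score(adj, *e), reverse=True)
```

Spurious-link elimination uses the same scores in reverse: observed edges with the lowest similarity are the candidates for removal.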
Visual Tracking Based on Extreme Learning Machine and Sparse Representation
Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen
2015-01-01
Existing sparse representation-based visual trackers mostly suffer from being time consuming and having poor robustness. To address these issues, a novel tracking method is presented that combines sparse representation with an emerging learning technique, namely the extreme learning machine (ELM). Specifically, visual tracking is divided into two consecutive processes. First, ELM is utilized to find the optimal separating hyperplane between the target observations and background ones. The trained ELM classification function is thus able to efficiently remove most of the candidate samples related to background content, thereby reducing the total computational cost of the subsequent sparse representation. Second, to further combine ELM and sparse representation, the resulting confidence values (i.e., probabilities of being a target) of samples under the ELM classification function are used to construct a new manifold learning constraint term in the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used to derive the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be evaluated in parallel, thereby leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
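The ELM itself is what makes the pre-filtering step cheap: a random, untrained hidden layer followed by a linear readout fitted in closed form. A generic sketch (not the tracker's specific features, sizes, or training data) is:

```python
import numpy as np

# Generic extreme learning machine (ELM) sketch: random input weights,
# tanh hidden layer, and a linear readout solved with a pseudoinverse.
# Sizes and the toy regression target below are illustrative only.

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=20):
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (fixed)
    b = rng.normal(size=n_hidden)                # random biases (fixed)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # closed-form least-squares readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is learned, and in closed form, training costs one pseudoinverse, which is why an ELM can serve as a fast background-rejection stage ahead of the expensive sparse coding.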
Maskless and low-destructive nanofabrication on quartz by friction-induced selective etching
2013-01-01
A low-destructive friction-induced nanofabrication method is proposed to produce three-dimensional nanostructures on a quartz surface. Without any template, nanofabrication can be achieved by low-destructive scanning of a target area followed by post-etching in a KOH solution. Various nanostructures, such as slopes, hierarchical stages and chessboard-like patterns, can be fabricated on the quartz surface. Although raising the etching temperature can improve fabrication efficiency, fabrication depth depends only upon contact pressure and the number of scanning cycles. With increasing contact pressure during scanning, the selective etching thickness of the scanned area increases from 0 to 2.9 nm before the quartz surface yields, and then tends to stabilise once wear appears. Refabrication on existing nanostructures can be carried out to produce deeper structures on the quartz surface. Based on Arrhenius fitting of the etching rate and transmission electron microscopy characterization of the nanostructures, the fabrication mechanism can be attributed to the selective etching of the friction-induced amorphous layer on the quartz surface. As a maskless and low-destructive technique, the proposed friction-induced method opens up new possibilities for further nanofabrication. PMID:23531381
The relationship between physical fitness and academic achievement among adolescents in South Korea.
Han, Gun-Soo
2018-04-01
[Purpose] The purpose of this study was to identify the relationship between physical fitness level and academic achievement in middle school students. [Subjects and Methods] A total of 236 students aged 13-15 from three middle schools in D city, South Korea, were selected using a random sampling method. Academic achievement was measured by students' 2014 fall-semester final exam scores, and the level of physical fitness was determined according to the PAPS (Physical Activity Promotion System) score administered by the Korean Ministry of Education. A Pearson correlation test with SPSS 20.0 was employed. [Results] The Pearson correlation test revealed a significant correlation between physical fitness and academic achievement. Specifically, students with higher levels of physical fitness tend to have higher academic performance. In addition, final exam scores of core subjects (e.g., English, mathematics, and science) were significantly related to the PAPS score. [Conclusion] Results of this study can be used to develop more effective physical education curricula. In addition, the data can also be applied to recreation and sport programs for other populations (e.g., children and adults) as well as existing national physical fitness data in various countries.
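The Pearson correlation used in the study is straightforward to sketch; the data values below are made up for illustration only:

```python
import math

# Pearson product-moment correlation coefficient r, as used to relate
# a fitness score to an exam score. Illustrative data only.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)
```

r close to +1 corresponds to the study's finding that fitter students tend to score higher; r near 0 would indicate no linear relationship.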
Rothenberger, Aribert; Fillmer-Heise, Anke; Roessner, Veit; Sergeant, Joseph; Tannock, Rosemary; Banaschewski, Tobias
2017-01-01
Objective Attention Deficit / Hyperactivity Disorder (ADHD) and Chronic Tic Disorder (CTD) are two common and frequently co-existing disorders, probably following an additive model. But this is not yet clear for the basic sensory function of colour processing sensitive to dopaminergic functioning in the retina and higher cognitive functions like attention and interference control. The latter two reflect important aspects for psychoeducation and behavioural treatment approaches. Methods Colour discrimination using the Farnsworth-Munsell 100-hue Test, sustained attention during the Frankfurt Attention Inventory (FAIR), and interference liability during Colour- and Counting-Stroop-Tests were assessed to further clarify the cognitive profile of the co-existence of ADHD and CTD. Altogether 69 children were classified into four groups: ADHD (N = 14), CTD (N = 20), ADHD+CTD (N = 20) and healthy Controls (N = 15) and compared in cognitive functioning in a 2×2-factorial statistical model. Results Difficulties with colour discrimination were associated with both ADHD and CTD factors following an additive model, but in ADHD these difficulties tended to be more pronounced on the blue-yellow axis. Attention problems were characteristic for ADHD but not CTD. Interference load was significant in both Colour- and Counting-Stroop-Tests and unrelated to colour discrimination. Compared to Controls, interference load in the Colour-Stroop was higher in pure ADHD and in pure CTD, but not in ADHD+CTD, following a sub-additive model. In contrast, interference load in the Counting-Stroop did not reveal ADHD or CTD effects. Conclusion The co-existence of ADHD and CTD is characterized by additive as well as sub-additive performance impairments, suggesting that their co-existence may show simple additive characteristics of both disorders or a more complex interaction, depending on demand. 
The equivocal findings on interference control may indicate limited validity of the Stroop paradigm for clinical assessments. PMID:28594866
Guidance/Navigation Requirements Study Final Report. Volume III. Appendices
1978-04-30
shown Figure G-2. The free-flight simulation program FFSIM uses quaternions to calculate the body attitude as a function of time. To calculate the...the lack of open-loop damping, the existence of a feedback controller which will stabilize the closed-loop system depends upon the satisfaction of a...re-entry vehicle has dynamic peculiarities which tend to discourage the use of "linear-quadratic" feedback regulators in guidance. The disadvantageous
Survival Evasion Resistance Escape (SERE) Operations
2017-03-27
result of fear and anger, for example, tend to increase alertness and provide extra energy to either run away or fight. These and other mechanisms can...People have been known to complete a fight with a fractured hand, to run on a fractured or sprained ankle, to land an aircraft despite severely burned...IP cannot run away from fear and must take action to control it. Appropriate actions include: • Understanding fear. • Admitting that it exists
Cohen, J; Stewart, I
2001-02-22
Interest in extraterrestrial life has tended to focus on a search for extrasolar planets similar to the Earth. But what of forms of intelligent life that are very different from those found on Earth? Some features of life will not be peculiar to our planet, and alien life will resemble ours in such universals. But if intelligent, non-humanoid aliens exist, where might they be? Would they wish to visit Earth and would we know if they did?
Arnold, David T; Rowen, Donna; Versteegh, Matthijs M; Morley, Anna; Hooper, Clare E; Maskell, Nicholas A
2015-01-23
In order to estimate utilities for cancer studies where the EQ-5D was not used, the EORTC QLQ-C30 can be used to estimate EQ-5D utilities via existing mapping algorithms. Several mapping algorithms exist for this transformation; however, algorithms tend to lose accuracy in patients in poor health states. The aim of this study was to test all existing mapping algorithms from the QLQ-C30 onto the EQ-5D in a dataset of patients with malignant pleural mesothelioma, an invariably fatal malignancy for which no previous mapping estimation has been published. Health related quality of life (HRQoL) data in which both the EQ-5D and QLQ-C30 were administered simultaneously were obtained from the UK-based prospective observational SWAMP (South West Area Mesothelioma and Pemetrexed) trial. In the original trial, 73 patients with pleural mesothelioma were offered palliative chemotherapy and their HRQoL was assessed across five time points. These data were used to test the nine available mapping algorithms found in the literature, comparing predicted against observed EQ-5D values. The ability of the algorithms to predict the mean, minimise error and detect clinically significant differences was assessed. The dataset had a total of 250 observations across the five time points. The linear regression mapping algorithms tested generally performed poorly, over-estimating the predicted compared to observed EQ-5D values, especially when the observed EQ-5D was below 0.5. The best performing algorithm used a response mapping method and predicted the mean EQ-5D accurately, with an average root mean squared error of 0.17 (standard deviation: 0.22). This algorithm reliably discriminated between clinically distinct subgroups seen in the primary dataset. This study tested mapping algorithms in a population with poor health states, where they have previously been shown to perform poorly. Further research into EQ-5D estimation should be directed at response mapping methods, given their superior performance in this study.
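The validation procedure described above, fitting a mapping and scoring it by root mean squared error against observed utilities, can be sketched with a deliberately simplified one-predictor linear map; the scores and coefficients below are illustrative, not from the SWAMP trial or any published algorithm:

```python
import math

# Simplified one-predictor linear mapping from a QLQ-C30 summary score
# to EQ-5D utility, plus the RMSE used to judge mapping accuracy.
# Illustrative only: published algorithms use many predictors, and the
# best performer here was a response-mapping (not linear) method.

def fit_linear_map(qlq, eq5d):
    """Ordinary least squares for eq5d = intercept + slope * qlq."""
    n = len(qlq)
    mx, my = sum(qlq) / n, sum(eq5d) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(qlq, eq5d))
             / sum((x - mx) ** 2 for x in qlq))
    return slope, my - slope * mx

def rmse(observed, predicted):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))
```

In a real validation, the map would be fitted on one dataset and the RMSE computed on held-out patients, with particular attention to observations below 0.5 where linear maps tend to over-predict.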
Enhanced Condensation Heat Transfer On Patterned Surfaces
NASA Astrophysics Data System (ADS)
Alizadeh-Birjandi, Elaheh; Kavehpour, H. Pirouz
2017-11-01
Transition from filmwise to dropwise condensation can improve the efficiency of thermal management applications and yield considerable savings in investment and operating costs, by millions of dollars every year. The methods currently available rely either on hydrophobic coatings or on nanostructured surfaces. The former adhere poorly to the substrate and tend to detach easily under working conditions; the fabrication techniques for the latter are neither cost-effective nor scalable; and both use low-thermal-conductivity materials that negate the heat transfer enhancement gained from dropwise condensation. The existing technologies therefore have limited ability to enhance vapor-to-liquid condensation. This work focuses on the development of surfaces with wettability contrast to promote dropwise condensation, whose overall heat transfer efficiency is 2-3 times that of filmwise condensation, while maintaining a high conduction rate through the surface at low manufacturing cost. The variation in interfacial energy is achieved by crafting hydrophobic patterns onto the metal surface via scalable fabrication techniques. The results of experimental and surface optimization studies are also presented.
Health system productivity change in Zambia: A focus on the child health services.
Achoki, Tom; Kinfu, Yohannes; Masiye, Felix; Frederix, Geert W J; Hovels, Anke; Leufkens, Hubert G
2017-02-01
Efficiency and productivity improvement have become central in global health debates. In this study, we explored productivity change, particularly the contribution of technological progress and efficiency gains associated with improvements in child survival in Zambia (population 15 million). Productivity was measured by applying the Malmquist productivity index on district-level panel data. The effect of socioeconomic factors was further analyzed by applying an ordinary least squares regression technique. During 2004-2009, overall productivity in Zambia increased by 5.0 per cent, a change largely attributed to technological progress rather than efficiency gains. Within-country productivity comparisons revealed wide heterogeneity in favor of more urbanized and densely populated districts. Improved cooking methods, improved sanitation, and better educated populations tended to improve productive gains, whereas larger household size had an adverse effect. Addressing such district-level factors and ensuring efficient delivery and optimal application of existing health technologies offer a practical pathway for further improving population health.
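The Malmquist decomposition into efficiency change and technical change can be sketched in the simplest one-input, one-output, constant-returns case; the output/input ratios below are illustrative, not the Zambian district data:

```python
def malmquist(ratios_t1, ratios_t2, i):
    """Malmquist productivity index for unit i between two periods, for a
    single input and output under constant returns to scale, where each
    period's frontier is the best observed output/input ratio."""
    f1, f2 = max(ratios_t1), max(ratios_t2)
    d11 = ratios_t1[i] / f1          # period-1 point vs period-1 frontier
    d22 = ratios_t2[i] / f2          # period-2 point vs period-2 frontier
    d12 = ratios_t2[i] / f1          # period-2 point vs period-1 frontier
    d21 = ratios_t1[i] / f2          # period-1 point vs period-2 frontier
    ec = d22 / d11                               # efficiency change
    tc = ((d12 / d22) * (d11 / d21)) ** 0.5      # technical change
    return ec * tc, ec, tc

# District 0 keeps pace with a frontier that shifts out by 20%:
m, ec, tc = malmquist([0.5, 0.8, 1.0], [0.6, 1.0, 1.2], i=0)
print(round(m, 2), round(ec, 2), round(tc, 2))  # → 1.2 1.0 1.2
```

In this toy case all productivity growth (M = 1.2) comes from technical change rather than efficiency gains, the same pattern the study reports for Zambia.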
Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen
2016-08-18
The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between the road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the areas outside and inside the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. Correlation analysis between the different categories of hospitals and street centrality shows that the distribution of these hospitals correlates highly with street centrality, and that the correlations are higher for private and small hospitals than for public and large hospitals. These comprehensive analysis results could help assess the rationality of the existing urban healthcare facility distribution and optimize the location of new healthcare facilities.
The Sequelae of Acute Purulent Meningitis in Childhood
Hutchison, Patricia A.; Kovacs, Michael C.
1963-01-01
Of a series of 122 children suffering from acute purulent meningitis at the Children's Hospital, Winnipeg, in the years 1952-56, 12 (9.8%) succumbed, all deaths occurring in those 12 months of age or less. Forty-one of the survivors were re-studied 2.5 to 7.5 years after their acute illness to assess the nature and incidence of sequelae, the relationship of sequelae to the severity of the acute illness, and the correlation between the various methods of identifying sequelae. Five children exhibited psychiatric evidence of organic brain damage; seven, neurological abnormality; 11, electroencephalographic abnormality. Three had defective intelligence and nine psychological test evidence of organic brain damage. Children with sequelae tended to have several abnormal test results, the total number with neuropsychiatric and/or psychological sequelae being 11 (26%). There was a positive correlation between the severity of the acute illness and the presence of neuropsychiatric sequelae; also between neuropsychiatric sequelae, defective intelligence and psychological evidence of brain damage. No correlation existed between the electroencephalographic abnormality and neuropsychiatric defect. PMID:13955939
On the discretization and control of an SEIR epidemic model with a periodic impulsive vaccination
NASA Astrophysics Data System (ADS)
Alonso-Quesada, S.; De la Sen, M.; Ibeas, A.
2017-01-01
This paper deals with the discretization and control of an SEIR epidemic model. Such a model describes the transmission of an infectious disease among a time-varying host population. The model assumes mortality from causes related to the disease. Our study proposes a discretization method including a free-design parameter to be adjusted for guaranteeing the positivity of the resulting discrete-time model. Such a method provides a discrete-time model close to the continuous-time one without the need for the sampling period to be as small as other commonly used discretization methods require. This fact makes possible the design of impulsive vaccination control strategies with less burden of measurements and related computations if one uses the proposed instead of other discretization methods. The proposed discretization method and the impulsive vaccination strategy designed on the resulting discretized model are the main novelties of the paper. The paper includes (i) the analysis of the positivity of the obtained discrete-time SEIR model, (ii) the study of stability of the disease-free equilibrium point of a normalized version of such a discrete-time model and (iii) the existence and the attractivity of a globally asymptotically stable disease-free periodic solution under a periodic impulsive vaccination. Concretely, the exposed and infectious subpopulations asymptotically converge to zero as time tends to infinity while the normalized subpopulations of susceptible and recovered by immunization individuals oscillate in the context of such a solution. Finally, a numerical example illustrates the theoretic results.
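A minimal sketch of the positivity idea, assuming a standard SEIR formulation with recruitment and mortality: treating each state's loss terms implicitly yields a discretization whose subpopulations stay nonnegative for any step size, and a periodic impulse moves a fraction of susceptibles to the immunized class. This is generic and illustrative, not the paper's exact free-parameter scheme; all rate values are made up.

```python
def seir_step(S, E, I, R, h, beta, sigma, gamma, mu, Lam):
    """One step of a positivity-preserving (nonstandard finite-difference)
    SEIR discretization: each state's loss terms appear in its denominator,
    so S, E, I, R stay nonnegative for any step size h > 0."""
    N = S + E + I + R
    S1 = (S + h * Lam) / (1.0 + h * (beta * I / N + mu))
    E1 = (E + h * beta * S1 * I / N) / (1.0 + h * (sigma + mu))
    I1 = (I + h * sigma * E1) / (1.0 + h * (gamma + mu))
    R1 = (R + h * gamma * I1) / (1.0 + h * mu)
    return S1, E1, I1, R1

def simulate(steps, h=2.0, vacc_period=10, p=0.3):
    """Every vacc_period steps, an impulsive vaccination immunizes a
    fraction p of the susceptibles."""
    S, E, I, R = 990.0, 5.0, 5.0, 0.0
    for k in range(steps):
        S, E, I, R = seir_step(S, E, I, R, h,
                               beta=0.4, sigma=0.2, gamma=0.1,
                               mu=0.01, Lam=10.0)
        if (k + 1) % vacc_period == 0:   # impulsive vaccination instant
            S, R = (1 - p) * S, R + p * S
    return S, E, I, R

print(all(x >= 0 for x in simulate(200)))          # → True
print(all(x >= 0 for x in simulate(50, h=10.0)))   # positivity even for large h
```

Because positivity holds by construction for any h, the sampling period need not be small, which is the practical advantage the paper emphasizes for reducing the measurement and computation burden.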
Flanders, W Dana; Strickland, Matthew J; Klein, Mitchel
2017-05-15
Methods exist to detect residual confounding in epidemiologic studies. One requires a negative control exposure with 2 key properties: 1) conditional independence of the negative control and the outcome (given modeled variables) absent confounding and other model misspecification, and 2) associations of the negative control with uncontrolled confounders and the outcome. We present a new method to partially correct for residual confounding: When confounding is present and our assumptions hold, we argue that estimators from models that include a negative control exposure with these 2 properties tend to be less biased than those from models without it. Using regression theory, we provide theoretical arguments that support our claims. In simulations, we empirically evaluated the approach using a time-series study of ozone effects on asthma emergency department visits. In simulations, effect estimators from models that included the negative control exposure (ozone concentrations 1 day after the emergency department visit) had slightly or modestly less residual confounding than those from models without it. Theory and simulations show that including the negative control can reduce residual confounding, if our assumptions hold. Our method differs from available methods because it uses a regression approach involving an exposure-based indicator rather than a negative control outcome to partially correct for confounding. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
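A toy simulation (not the authors' ozone analysis) showing why the method gives partial correction: the negative control Z carries information about the unmeasured confounder U, so partialling Z out of the exposure-outcome regression shrinks, without eliminating, the bias in the estimated effect of X on Y, which is truly zero here. All variable names and parameter values are made up.

```python
import random

def slope(y, x):
    """OLS slope of y on a single regressor x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def residuals(y, x):
    """Residuals of y after regressing on x (used to partial x out)."""
    b = slope(y, x)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return [yi - my - b * (xi - mx) for xi, yi in zip(x, y)]

random.seed(7)
n = 20000
U = [random.gauss(0, 1) for _ in range(n)]   # uncontrolled confounder
X = [u + random.gauss(0, 1) for u in U]      # exposure of interest
Z = [u + random.gauss(0, 1) for u in U]      # negative control exposure
Y = [u + random.gauss(0, 1) for u in U]      # outcome; true effect of X is 0

naive = slope(Y, X)                                 # confounded estimate
adjusted = slope(residuals(Y, Z), residuals(X, Z))  # Z partialled out (Frisch-Waugh)
print(abs(adjusted) < abs(naive))  # → True
```

With these unit variances the confounded slope converges to about 0.5 and the partially corrected one to about 1/3: residual confounding is reduced but not removed, matching the paper's claim of partial correction.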
Guo, Wei-Feng; Zhang, Shao-Wu; Shi, Qian-Qian; Zhang, Cheng-Ming; Zeng, Tao; Chen, Luonan
2018-01-19
The advances in target control of complex networks not only offer new insights into the general control dynamics of complex systems, but are also useful for practical applications in systems biology, such as discovering new therapeutic targets for disease intervention. In many cases, e.g. drug target identification in biological networks, we require target control of a subset of nodes (i.e., disease-associated genes) with minimum cost, and we further expect as many driver nodes as possible to coincide with certain well-selected network nodes (i.e., prior-known drug-target genes). Motivated by this fact, we pose and address a new and practical problem, the target control problem with objectives-guided optimization (TCO): how can we control the interested variables (or targets) of a system with optional driver nodes, minimizing the total number of drivers while maximizing the number of constrained nodes among those drivers? Here, we design an efficient algorithm (TCOA) to find the optional driver nodes for controlling targets in complex networks. We apply our TCOA to several real-world networks, and the results show that TCOA can identify more precise driver nodes than the existing control-focused approaches. Furthermore, we have applied TCOA to two biomolecular expert-curated networks. Source code for TCOA is freely available from http://sysbio.sibcb.ac.cn/cb/chenlab/software.htm or https://github.com/WilfongGuo/guoweifeng . In previous theoretical research on full control, it has been observed that driver nodes tend to be low-degree nodes. However, for target control of biological networks, we find interestingly that the driver nodes tend to be high-degree nodes, which is more consistent with biological experimental observations.
Furthermore, our results supply novel insights into how to efficiently target-control a complex system, and in particular much evidence of the practical strategic utility of TCOA for incorporating prior drug information into potential drug-target forecasts. Our method thus paves a novel and efficient way to identify drug targets for leading phenotype transitions of underlying biological networks.
Predictability of Conversation Partners
NASA Astrophysics Data System (ADS)
Takaguchi, Taro; Nakamura, Mitsuhiro; Sato, Nobuo; Yano, Kazuo; Masuda, Naoki
2011-08-01
Recent developments in sensing technologies have enabled us to examine the nature of human social behavior in greater detail. By applying an information-theoretic method to the spatiotemporal data of cell-phone locations, [C. Song et al., Science 327, 1018 (2010)] found that human mobility patterns are remarkably predictable. Inspired by their work, we address a similar predictability question in a different kind of human social activity: conversation events. The predictability in the sequence of one’s conversation partners is defined as the degree to which one’s next conversation partner can be predicted given the current partner. We quantify this predictability by using the mutual information. We examine the predictability of conversation events for each individual using the longitudinal data of face-to-face interactions collected from two company offices in Japan. Each subject wears a name tag equipped with an infrared sensor node, and conversation events are marked when signals are exchanged between sensor nodes in close proximity. We find that conversation events are predictable to a certain extent; knowing the current partner decreases the uncertainty about the next partner by 28.4% on average. Much of the predictability is explained by long-tailed distributions of interevent intervals. However, predictability remains in the data beyond the contribution of their long-tailed nature. In addition, an individual’s predictability is correlated with the position of the individual in the static social network derived from the data. Individuals confined in a community—in the sense of an abundance of surrounding triangles—tend to have low predictability, and those bridging different communities tend to have high predictability.
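The entropy-based measure can be sketched as the mutual information between current and next partner, normalized by the entropy of the next partner. The toy sequence below is illustrative; the 28.4% figure comes from the real sensor data, not from this sketch.

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution given as a Counter."""
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

def predictability_gain(partners):
    """Fractional reduction in uncertainty about the next conversation
    partner from knowing the current one: I(next; current) / H(next)."""
    pairs = list(zip(partners, partners[1:]))
    H = entropy(Counter(b for _, b in pairs))   # H(next)
    n = len(pairs)
    Hcond = 0.0                                 # H(next | current)
    for cur in set(a for a, _ in pairs):
        sub = Counter(b for a, b in pairs if a == cur)
        Hcond += sum(sub.values()) / n * entropy(sub)
    return (H - Hcond) / H

seq = list("ABABACABAB")   # toy partner sequence for one individual
print(round(predictability_gain(seq), 2))  # → 0.71
```

A fully alternating sequence gives a gain of 1 (the next partner is certain given the current one), while an i.i.d. sequence gives a gain near 0.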
Measuring the food service environment: development and implementation of assessment tools.
Minaker, Leia M; Raine, Kim D; Cash, Sean B
2009-01-01
The food environment is increasingly being implicated in the obesity epidemic, though few reported measures of it exist. In order to assess the impact of the food environment on food intake, valid measures must be developed and tested. The current study describes the development of a food service environment assessment tool and its implementation in a community setting. A descriptive study with mixed qualitative and quantitative methods at a large, North American university campus was undertaken. Measures were developed on the basis of a conceptual model of nutrition environments. Measures of community nutrition environment were the number, type and hours of operation of each food service outlet on campus. Measures of consumer nutrition environment were food availability, food affordability, food promotion and nutrition information availability. Seventy-five food service outlets within the geographic boundaries were assessed. Assessment tools could be implemented in a reasonable amount of time and showed good face and content validity. The food environments were described and measures were grouped so that food service outlet types could be compared in terms of purchasing convenience, cost/value, healthy food promotion and health. Food service outlet types that scored higher in purchasing convenience and cost/value tended to score lower in healthy food promotion and health. This study adds evidence that food service outlet types that are convenient to consumers and supply high value (in terms of calories per dollar) tend to be less health-promoting. Results from this study also suggest the possibility of characterizing the food environment according to the type of food service outlet observed.
Terror Attacks Increase the Risk of Vascular Injuries
Heldenberg, Eitan; Givon, Adi; Simon, Daniel; Bass, Arie; Almogy, Gidon; Peleg, Kobi
2014-01-01
Objectives: Extensive literature exists on military trauma, as opposed to the very limited literature regarding terror-related civilian trauma. However, terror-related vascular trauma (VT), as a unique type of injury, is yet to be addressed. Methods: A retrospective analysis of the Israeli National Trauma Registry was performed. All patients in the registry from 09/2000 to 12/2005 were included. The subgroup of patients with documented VT (N = 1,545) was analyzed and further subdivided into those suffering from terror-related vascular trauma (TVT) and non-terror-related vascular trauma (NTVT). Both groups were analyzed according to mechanism of trauma, type and severity of injury, and treatment. Results: Out of 2,446 terror-related trauma admissions, 243 sustained TVT (9.9%), compared to 1,302 VT patients from non-terror trauma (1.1%). TVT injuries tend to be more complex, and most patients were operated on. Intensive care unit admissions and hospital length of stay were higher in the TVT group. Penetrating trauma was the prominent cause of injury in the TVT group. The TVT group had a higher proportion of patients with severe injuries (ISS ≥ 16) and higher mortality. Thorax injuries were more frequent in the TVT group. Extremity injuries were the most prevalent vascular injuries in both groups; however, the NTVT group had more upper-extremity injuries, while the TVT group had significantly more lower-extremity injuries. Conclusion: Vascular injuries are remarkably more common among terror attack victims than among non-terror trauma victims, and the injuries of terror casualties tend to be more complex. The presence of a vascular surgeon will ensure comprehensive clinical care. PMID:24910849
Cömert, Itır Tarı; Özyeşil, Zümra Atalay; Burcu Özgülük, S
2016-02-01
The aim of the current study was to investigate the contributions of sad childhood experiences, depression, anxiety, and stress, the existence of a sense of meaning, and the pursuit of meaning in explaining life satisfaction of young adults in Turkey. The sample comprised 400 undergraduate students (M age = 20.2 yr.) selected via random cluster sampling. There were no statistically significant differences between men and women in their scores on depression, existence of meaning, pursuit of meaning, and life satisfaction. However, there were statistically significant differences between men and women in sad childhood experiences, anxiety, and stress. In hierarchical regression analysis, the model as a whole was significant. Depression and existence of meaning in life made unique significant contributions to the variance in life satisfaction. Students with lower depression and with a sense of meaning in life tended to be more satisfied with life.
Rethinking our approach to gender and disasters: Needs, responsibilities, and solutions.
Montano, Samantha; Savitt, Amanda
2016-01-01
To explore how the existing literature has discussed the vulnerability and needs of women in a disaster context. It will consider the literature's suggestions of how to minimize vulnerability and address the needs of women, including who involved in emergency management should be responsible for such efforts. Empirical journal articles and book chapters from disaster literature were collected that focused on "women" or "gender," and their results and recommendations were analyzed. This review found existing empirical research on women during disasters focuses on their vulnerabilities more than their needs. Second, when researchers do suggest solutions, they tend not to be comprehensive or supported by empirical evidence. Finally, it is not clear from existing research who is responsible for addressing these needs and implementing solutions. Future research should study the intersection of gender and disasters in terms of needs and solutions including who is responsible for implementing solutions.
Social media for public health: an exploratory policy analysis.
Fast, Ingrid; Sørensen, Kristine; Brand, Helmut; Suggs, L Suzanne
2015-02-01
To accomplish the aims of public health practice and policy today, new forms of communication and education are being applied. Social media are increasingly relevant for public health and used by various actors. Apart from benefits, there can also be risks in using social media, but policies regulating engagement in social media are not well researched. This study examined European public health-related organizations' social media policies and describes the main components of existing policies. This research used a mixed methods approach. A content analysis of social media policies from European institutions, non-government organizations (NGOs) and social media platforms was conducted. Next, individuals responsible for social media in their organization or projects completed a survey about their social media policy. Seventy-five per cent of institutions, NGOs and platforms had a social media policy available. The primary aspects covered within existing policies included data and privacy protection, intellectual property and copyright protection, and regulations for engagement in social media. Policies were intended to regulate staff use and to secure the liability of the institution and social responsibility. Respondents also stressed the importance of self-responsibility when using social media. This study of social media policies for public health in Europe provides a first snapshot of the existence and characteristics of social media policies among European health organizations. Policies tended to focus on legal aspects, rather than the health of the social media user. The effect of such policies on social media adoption and usage behaviour remains to be examined. © The Author 2014. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
MOLECULAR DIVERSITY OF DRINKING WATER MICROBIAL COMMUNITIES: A PHYLOGENETIC APPROACH
Culture-based methods are traditionally used to determine microbiological quality of drinking water even though these methods are highly selective and tend to underestimate the densities and diversity bacterial populations inhabiting distribution systems. In order to better under...
NASA Astrophysics Data System (ADS)
Sugiman, Gozali, M. Hulaifi; Setyawan, Paryanto Dwi
2016-03-01
Glass fiber reinforced polymer has been widely used in the chemical industry and transportation due to its light weight and cost-effective manufacturing. However, because of its ability to absorb water from the environment, its durability remains an issue of interest. This paper investigated the water uptake and the effect of absorbed water on the tensile properties and the translaminar fracture toughness of glass fiber reinforced unsaturated polyester composites (GFRP) aged in distilled and salt water for up to 30 days at a temperature of 50°C. It was shown that GFRP absorbed more water in distilled water than in salt water. In distilled water, the tensile strength of GFRP decreased steeply by 7 days and then recovered slightly with further immersion time. In salt water, the tensile strength decreased continually up to 30 days of immersion. The translaminar fracture toughness of GFRP aged in distilled and salt water showed similar behavior: it increased after 7 days of immersion and then tended to decrease beyond that immersion time. The ionic content of salt water causes a more detrimental effect on the mechanical properties of fiberglass/unsaturated polyester composites than distilled water.
The Psychology of Yoga Practitioners: A Cluster Analysis.
Genovese, Jeremy E C; Fondran, Kristine M
2017-11-01
Yoga practitioners (N = 261) completed the revised Expression of Spirituality Inventory (ESI) and the Multidimensional Body-Self Relations Questionnaire. Cluster analysis revealed three clusters: Cluster A scored high on all four spiritual constructs. They had high positive evaluations of their appearance, but a lower orientation towards their appearance. They tended to have a high evaluation of their fitness and health, and higher body satisfaction. Cluster B showed lower scores on the spiritual constructs. Like Cluster A, members of Cluster B tended to show high positive evaluations of appearance and fitness. They also had higher body satisfaction. Members of Cluster B had a higher fitness orientation and a higher appearance orientation than members of Cluster A. Members of Cluster C had low scores for all spiritual constructs. They had a low evaluation of, and unhappiness with, their appearance. They were unhappy with the size and appearance of their bodies. They tended to see themselves as overweight. There was a significant difference in years of practice between the three groups (Kruskal-Wallis, p = .0041). Members of Cluster A have the most years of yoga experience and members of Cluster B have more yoga experience than members of Cluster C. These results suggest the possible existence of a developmental trajectory for yoga practitioners. Such a developmental sequence may have important implications for yoga practice and instruction.
NASA Technical Reports Server (NTRS)
Farner, Bruce
2013-01-01
A moveable valve for controlling flow of a pressurized working fluid was designed. This valve consists of a hollow, moveable floating piston pressed against a stationary solid seat, and can use the working fluid to seal the valve. This novel open/closed valve can use metal-to-metal seats without requiring any seat sliding action, and therefore avoids the associated damage. During use, existing standard high-pressure ball valve seats tend to become damaged during rotation of the ball. Additionally, forces acting on the ball and stem create large amounts of friction. The combination of these effects can lead to system failure. In an attempt to reduce damaging effects and seat failures, soft seats in the ball valve have been eliminated; however, the sliding action of the ball across the highly loaded seat still tends to scratch the seat, causing failure. Also, in order to operate, ball valves require the use of large actuators. Positioning the metal-to-metal seats requires more loading, which tends to increase the size of the required actuator, and can also lead to failures in other areas such as the stem and bearing mechanisms, thus increasing cost and maintenance. This novel non-sliding-seat valve allows metal-to-metal seats without the damaging effects that can lead to failure, and enables large seating forces without damaging the valve. Additionally, this valve design, even in large, high-pressure applications, does not require large conventional valve actuators, and the valve stem itself is eliminated. Actuation is achieved with a small, simple solenoid valve. This design also eliminates the need for many of the seals used in existing ball valve and globe valve designs, which are themselves common causes of failure. This, coupled with the elimination of the valve stem and conventional valve actuator, improves valve reliability and seat life.
Other mechanical liftoff seats have been designed; however, they have only increased cost and introduced other reliability issues. With this novel design, the seat is lifted simply by removing the working fluid pressure that presses the piston against the seat, and no external force is required. By eliminating variables associated with existing ball and globe configurations that can damage a valve, this novel design reduces downtime in rocket engine test schedules and maintenance costs.
Scientific collaboration and endorsement: Network analysis of coauthorship and citation networks
Ding, Ying
2010-01-01
Scientific collaboration and endorsement are well-established research topics which utilize three kinds of methods: survey/questionnaire, bibliometrics, and complex network analysis. This paper combines topic modeling and path-finding algorithms to determine whether productive authors tend to collaborate with or cite researchers with the same or different interests, and whether highly cited authors tend to collaborate with or cite each other. Taking information retrieval as a test field, the results show that productive authors tend to directly coauthor with and closely cite colleagues sharing the same research interests; they do not generally collaborate directly with colleagues having different research topics, but instead directly or indirectly cite them; and highly cited authors do not generally coauthor with each other, but closely cite each other. PMID:21344057
USDA-ARS?s Scientific Manuscript database
Traditionally, regulatory monitoring of veterinary drug residues in food animal tissues involves the use of several single-class methods to cover a wide analytical scope. Multiclass, multiresidue methods of analysis tend to provide greater overall laboratory efficiency than the use of multiple meth...
Technical Matters: Method, Knowledge and Infrastructure in Twentieth-Century Life Science
Creager, Angela N. H.; Landecker, Hannah
2010-01-01
Conceptual breakthroughs in science tend to garner accolades and attention. But, as the invention of tissue culture and the development of isotopic tracers show, innovative methods open up new fields and enable the solution of longstanding problems. PMID:19953684
USDA-ARS?s Scientific Manuscript database
Near-surface geophysical methods have become have become important tools for agriculture. Geophysics employed for agriculture tends to be heavily focused on a 2 m zone directly beneath the ground surface, which includes the crop root zone and all, or at least most, of the soil profile. Resistivity...
The United States Air Force Small Business Innovation Research Program
1990-01-01
…impossible to draw, and very difficult to machine. They also tend to be expensive, so that cutting… …the satellite's mass consists of the propellant needed for orbit insertion and altitude control… …exists in the metal iridium, which is ductile and pore free. Iridium bonds to, but does not react with,… …thin coatings of iridium… …lifespan is 2400F and 10 hours respectively… …exotic materials with precisely… …record was reversed in…
Integration of the Peruvian Air Force Information Systems through an Integrated LAN/WAN
1991-03-01
…telecommunication systems are virtually indistinguishable from computer systems. These two technologies meet to work together. 3. Types of Telecommunications… …information are virtually out of control. What limits on access exist tend to be the result of habit and tradition, as well as of the sheer difficulty… …organization cannot be related to one another, it is virtually impossible for information to be shared or accessed in a timely manner. D. PERUVIAN AIR FORCE
1984-03-26
…with little likelihood of criticism from the claimant. Taken together, these factors tend to give examiners little motivation to establish wage… …claims examiners are expected to place a high priority on these tasks. Similar motivation does not exist for wage earning capacity determinations. Low…
Fluctuating asymmetry and testing isolation of Montana grizzly bear populations
Picton, Harold D.; Palmisciano, Daniel A.; Nelson, Gerald
1990-01-01
Fluctuating asymmetry of adult skulls was used to test the genetic isolation of the Yellowstone grizzly bear population from its nearest neighbor. An overall summary statistic was used in addition to 16 other parameters. Tests found the males of the Yellowstone population to be more variable than those of the Northern Continental Divide Ecosystem. Evidence for precipitation effects is also included. This test tends to support the existing management hypothesis that the Yellowstone population is isolated.
Aggregate modeling of fast-acting demand response and control under real-time pricing
Chassin, David P.; Rondeau, Daniel
2016-08-24
This paper develops and assesses the performance of a short-term demand response (DR) model for utility load control with applications to resource planning and control design. Long-term response models tend to underestimate short-term demand response when induced by prices. This has two important consequences. First, planning studies tend to undervalue DR and often overlook its benefits in utility demand management program development. Second, when DR is not overlooked, the open-loop DR control gain estimate may be too low. This can result in overuse of load resources, control instability and excessive price volatility. Our objective is therefore to develop a more accurate and better performing short-term demand response model. We construct the model from first principles about the nature of thermostatic load control and show that the resulting formulation corresponds exactly to the Random Utility Model employed in economics to study consumer choice. The model is tested against empirical data collected from field demonstration projects and is shown to perform better than alternative models commonly used to forecast demand in normal operating conditions. Finally, the results suggest that (1) existing utility tariffs appear to be inadequate to incentivize demand response, particularly in the presence of high renewables, and (2) existing load control systems run the risk of becoming unstable if utilities close the loop on real-time prices.
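The correspondence to the Random Utility Model implies a logit form for the probability that an individual thermostatic load curtails at a given price. A minimal sketch of that form follows; the function names, parameters, and numbers are illustrative assumptions, not the paper's actual model or calibration:

```python
import math

def curtailment_fraction(price, p_ref, eta):
    """Logit (Random Utility Model) probability that a thermostatic
    load curtails at the posted real-time price.

    price: current price signal
    p_ref: reference price at which half the loads choose to curtail
    eta:   price-sensitivity parameter (higher = sharper response)
    """
    return 1.0 / (1.0 + math.exp(-eta * (price - p_ref)))

def aggregate_response(n_loads, kw_per_load, price, p_ref, eta):
    """Expected curtailed demand (kW) across a homogeneous population."""
    return n_loads * kw_per_load * curtailment_fraction(price, p_ref, eta)
```

At the reference price exactly half the population curtails, so 1000 two-kilowatt loads yield an expected 1000 kW of response; the open-loop control gain discussed in the abstract corresponds to the slope of this curve at the operating price.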
NASA Astrophysics Data System (ADS)
Gillette, Tammy J.
2009-12-01
The purpose of this proposed research study was to identify actual teaching practices/instructional strategies for online science courses. The identification of these teaching practices/instructional strategies could be used to compile a set of teaching practices/instructional strategies for virtual high school and online academy science instructors. This study could assist online science instructors by determining which teaching practices/instructional strategies were preferred for the online teaching environment. The literature reviewed the role of online and face-to-face instructional strategies, then discussed and elaborated on the science instructional strategies used by teachers, specifically at the secondary level. The current literature did not reflect an integration of these areas of study. Therefore, the connectedness of these two types of instructional strategies and the creation of a set of preferred instructional practices for online science instruction was deemed necessary. For the purpose of this study, the researcher designed a survey for face-to-face and online teachers to identify preferred teaching practices, instructional strategies, and types of technology used when teaching high school science students. The survey also requested demographic data information from the faculty members, including years of experience, subject(s) taught, and whether the teacher taught in a traditional classroom or online, to determine if any of those elements affect differences in faculty perceptions with regard to the questions under investigation. The findings from the current study added to the literature by demonstrating the differences and the similarities that exist between online and face-to-face instruction. Both forms of instruction tend to rely on student-centered approaches to teaching. There were many skills that were similar in that both types of instructors tend to focus on implementing the scientific method. 
The primary difference is the use of technology tools that were used by online instructors. Online instructors tend to rely on more technological tools such as virtual labs. A list of preferred instructional practices was generated from the qualitative responses to the open-ended questions. Research concerned with this line of inquiry should continue in order to enhance both theory and practice in regard to online instruction.
Structure of initial crystals formed during human amelogenesis
NASA Astrophysics Data System (ADS)
Cuisinier, F. J. G.; Voegel, J. C.; Yacaman, J.; Frank, R. M.
1992-02-01
X-ray diffraction analysis revealed only the existence of carbonated hydroxyapatite (c.HA) during amelogenesis, whereas conventional transmission electron microscopy investigations showed that developing enamel crystals have a ribbon-like habit. The described compositional changes could indicate the presence of minerals different from c.HA. However, the failure to identify such a mineral shows the need for high resolution electron microscopy (HREM) studies of initially formed human enamel crystals. We demonstrate the existence of two crystal families involved in the early stages of biomineralization: (a) nanometer-size particles which appeared as a precursor phase; (b) ribbon-like crystals, with a structure closely related to c.HA, which by a progressive thickening process tend to attain the mature enamel crystal habit.
Development of a standardized control module for dc-to-dc converters
NASA Technical Reports Server (NTRS)
Yu, Y.; Iwens, R. I.; Lee, F. C.; Inouye, L. Y.
1977-01-01
The electrical performance of a power processor depends on the quality of its control system. Most of the existing control circuits suffer from one or more of the following imperfections that tend to restrict their respective utility: (1) inability to perform different modes of duty cycle control; (2) lack of immunity to output filter parameter changes; and (3) lack of capability to provide power component stress limiting on an instantaneous basis. The three lagging aspects of existing control circuits have been used to define the major objectives of the current Standardized Control Module (SCM) Program. Detailed information on the SCM functional block diagram, its universality, and performance features, circuit description, test results, and modeling and analysis efforts are presented.
Zhang, Huisheng; Zhang, Ying; Xu, Dongpo; Liu, Xiaodong
2015-06-01
It has been shown that, by adding a chaotic sequence to the weight update during the training of neural networks, the chaos injection-based gradient method (CIBGM) is superior to the standard backpropagation algorithm. This paper presents the theoretical convergence analysis of CIBGM for training feedforward neural networks. We consider both batch learning and online learning. Under mild conditions, we prove the weak convergence, i.e., the training error tends to a constant and the gradient of the error function tends to zero. Moreover, the strong convergence of CIBGM is also obtained with the help of an extra condition. The theoretical results are substantiated by a simulation example.
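The update rule being analyzed can be sketched in a few lines: an ordinary gradient step plus a bounded chaotic perturbation whose magnitude decays over time (the decay plays the role of the "extra condition" needed for convergence). This is a generic illustration on a scalar objective, not the paper's exact scheme; all names and constants are assumptions:

```python
def logistic_map(x):
    # Fully chaotic logistic map on (0, 1): x_{n+1} = 4 x_n (1 - x_n)
    return 4.0 * x * (1.0 - x)

def cibgm_step(w, grad, lr, chaos_state, chaos_scale):
    """One chaos injection-based gradient step (sketch):
    w_new = w - lr * grad + chaos_scale * (centered chaotic term)."""
    chaos_state = logistic_map(chaos_state)
    w_new = w - lr * grad + chaos_scale * (chaos_state - 0.5)
    return w_new, chaos_state

# Minimize E(w) = 0.5 * w^2 (so grad E = w) with decaying injection.
w, x = 5.0, 0.3
for t in range(200):
    scale = 0.5 / (t + 1)  # decaying injection magnitude -> convergence
    w, x = cibgm_step(w, w, 0.1, x, scale)
```

With the decaying scale the iterate settles near the minimizer; with a constant scale the chaotic term would keep the gradient from tending to zero, which is why the strong-convergence result needs the extra condition.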
Subotin, Michael; Davis, Anthony R
2016-09-01
Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
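The runtime step described above, iteratively blending each code's base confidence with the co-occurrence evidence from the codes currently above threshold, can be sketched as follows. The mixing weight, threshold, and data structures are illustrative assumptions, not the paper's:

```python
def rescore(base_scores, cooccur, rounds=5, weight=0.3, threshold=0.5):
    """Iteratively adjust auto-coder confidences using pairwise
    co-occurrence probabilities (sketch).

    base_scores: {code: P(code | document)} from the primary auto-coder
    cooccur:     {(a, b): P(a assigned | b assigned)} from training data
    """
    scores = dict(base_scores)
    for _ in range(rounds):
        assigned = {c for c, s in scores.items() if s >= threshold}
        new = {}
        for c, base in base_scores.items():
            others = assigned - {c}
            if others:
                # average conditional probability given the other assigned codes
                prior = sum(cooccur.get((c, o), base) for o in others) / len(others)
                new[c] = (1 - weight) * base + weight * prior
            else:
                new[c] = base
        scores = new
    return scores
```

For example, a borderline code whose companion procedure is confidently assigned gets pulled above threshold, while a code that rarely co-occurs with the assigned set is pulled down.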
Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J
2017-05-01
Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
Rodriguez, Ana C.; Burk, Robert D.; Hildesheim, Allan; Herrero, Rolando; Wacholder, Sholom; Hutchinson, Martha; Schiffman, Mark
2012-01-01
Background. Few studies have addressed the timing of cervical cytologic abnormalities and human papillomavirus (HPV) positivity during the course of an infection. It remains largely unknown how infections detected by HPV and cytology wax and wane relative to each other. The aim of this analysis was to assess the longitudinal relationship of abnormal cytology and HPV positivity in a 7-year prospective study of 2500 women in Guanacaste, Costa Rica. Methods. At each semiannual or annual visit, cervical specimens were screened using liquid-based cytology and tested for >40 HPV types with use of MY09/MY11 L1 degenerate primer polymerase chain reaction–based methods. On the basis of previous work, we separated prevalent and newly detected infections in younger and older women. Results. Among newly detected HPV- and/or cytology-positive events, HPV and cytology appeared together ∼60% of the time; when discordant, HPV tended to appear before cytology in younger and older women. Combining newly and prevalently detected events, HPV and cytology disappeared at the same time >70% of the time. When discordant, HPV tended to disappear after cytology in younger and older women. Conclusions. Detection of HPV DNA and associated cytological abnormalities tend to come and leave together; however, when discordant, detection of HPV DNA tends to precede and/or last longer than associated cytologic abnormalities. PMID:22147792
Evaluating Hierarchical Structure in Music Annotations
McFee, Brian; Nieto, Oriol; Farbood, Morwaread M.; Bello, Juan Pablo
2017-01-01
Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for “flat” descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement. PMID:28824514
Introduction strategies raise key questions.
Finger, W R; Keller, S
1995-09-01
Key issues that must be considered before a new contraceptive is introduced center on the need for a trained provider to begin or terminate the method, its side effects, duration of use, method's ability to meet users' needs and preferences, and extra training or staff requirements. Logistics and economic issues to consider are identifying a dependable way of effectively supplying commodities, planning extra services needed for the method, and cost of providing the method. Each contraceptive method presents a different side effect pattern and burdens the service delivery setting differently. The strategy developed to introduce or expand the 3-month injectable Depo-Provera (DMPA) can be used for any method. It includes a needs assessment and addresses regulatory issues, service delivery policies and procedures, information and training, evaluation, and other concerns. Viet Nam's needs assessment showed that Norplant should not be introduced until the service delivery system becomes stronger. Any needs assessment for expansion of contraceptive services should cover sexually transmitted disease/HIV issues. A World Health Organization strategy helps officials identify the best method mix for local situations. Introductory strategies must aim to improve the quality of family planning programs and expand choices. Many begin by examining existing data and conducting interviews with policymakers, users, providers, and women's health advocates. Introductory programs for Norplant focus on provider training, adequate counseling and informed consent for users, and ready access to removal. They need a well-prepared service delivery infrastructure. The first phase of the DMPA introductory strategy for the Philippines comprised a social marketing campaign and DMPA introduction at public clinics in 10 pilot areas with strong service delivery. Successful AIDS prevention programs show that people tend to use barrier methods when they are available. 
USAID is currently studying whether or not women in developing countries will use the female condom.
Hard-tip, soft-spring lithography.
Shim, Wooyoung; Braunschweig, Adam B; Liao, Xing; Chai, Jinan; Lim, Jong Kuk; Zheng, Gengfeng; Mirkin, Chad A
2011-01-27
Nanofabrication strategies are becoming increasingly expensive and equipment-intensive, and consequently less accessible to researchers. As an alternative, scanning probe lithography has become a popular means of preparing nanoscale structures, in part owing to its relatively low cost and high resolution, and a registration accuracy that exceeds most existing technologies. However, increasing the throughput of cantilever-based scanning probe systems while maintaining their resolution and registration advantages has from the outset been a significant challenge. Even with impressive recent advances in cantilever array design, such arrays tend to be highly specialized for a given application, expensive, and often difficult to implement. It is therefore difficult to imagine commercially viable production methods based on scanning probe systems that rely on conventional cantilevers. Here we describe a low-cost and scalable cantilever-free tip-based nanopatterning method that uses an array of hard silicon tips mounted onto an elastomeric backing. This method-which we term hard-tip, soft-spring lithography-overcomes the throughput problems of cantilever-based scanning probe systems and the resolution limits imposed by the use of elastomeric stamps and tips: it is capable of delivering materials or energy to a surface to create arbitrary patterns of features with sub-50-nm resolution over centimetre-scale areas. We argue that hard-tip, soft-spring lithography is a versatile nanolithography strategy that should be widely adopted by academic and industrial researchers for rapid prototyping applications.
ERIC Educational Resources Information Center
Trowler, Paul Richard
2014-01-01
Social practice theory addresses both theoretical and method/ological agendas. To date priority has been given to the former, with writing on the latter tending often to be an afterthought to theoretical expositions or fieldwork accounts. This article gives sustained attention to the method/ological corollaries of a social practice perspective. It…
Paradigm Shift In Assessment Methodology for Law Students In South Africa
ERIC Educational Resources Information Center
Joubert, Deidre
2013-01-01
This paper addresses the shortcomings of the traditional assessment method of tests and examinations, which amounts to pure regurgitation of information. Unfortunately, some lecturers tend to cling to the traditional method of assessment because it is an easy route to follow. The said method does not encourage students to become critical thinkers…
Wang, Yong; Wu, Qiao-Feng; Chen, Chen; Wu, Ling-Yun; Yan, Xian-Zhong; Yu, Shu-Guang; Zhang, Xiang-Sun; Liang, Fan-Rong
2012-01-01
Acupuncture has been practiced in China for thousands of years as part of Traditional Chinese Medicine (TCM) and has gradually been accepted in Western countries as an alternative or complementary treatment. However, the underlying mechanism of acupuncture, especially whether there exists any difference among various acupoints, remains largely unknown, which hinders its widespread use. In this study, we develop a novel Linear Programming based Feature Selection method (LPFS) to understand the mechanism of acupuncture effect, at the molecular level, by revealing the metabolite biomarkers for acupuncture treatment. Specifically, we generate and investigate the high-throughput metabolic profiles of acupuncture treatment at several acupoints in humans. To select the subsets of metabolites that best characterize the acupuncture effect for each meridian point, an optimization model is proposed to identify biomarkers from high-dimensional metabolic data from case and control samples. Importantly, we use the nearest centroid as the prototype to simultaneously minimize the number of selected features and the leave-one-out cross-validation error of the classifier. We compared the performance of LPFS to several state-of-the-art methods, such as SVM recursive feature elimination (SVM-RFE) and the sparse multinomial logistic regression approach (SMLR). We find that our LPFS method tends to reveal a small set of metabolites with small standard deviation and large shifts, which exactly serves our requirement for a good biomarker. Biologically, several metabolite biomarkers for acupuncture treatment are revealed and serve as candidates for further mechanism investigation. Also, biomarkers derived from five meridian points, Zusanli (ST36), Liangmen (ST21), Juliao (ST3), Yanglingquan (GB34), and Weizhong (BL40), are compared for their similarity and difference, which provides evidence for the specificity of acupoints.
Our results demonstrate that metabolic profiling might be a promising method to investigate the molecular mechanism of acupuncture. Compared with other existing methods, LPFS shows better performance in selecting a small set of key molecules. In addition, LPFS is a general methodology and can be applied to other high-dimensional data analyses, for example cancer genomics.
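The objective LPFS minimizes, the leave-one-out error of a nearest-centroid classifier restricted to a candidate feature subset, can be sketched directly. This is a simplified stand-in (plain enumeration of a given subset rather than the paper's linear program), with illustrative names:

```python
def loocv_error(X, y, features):
    """Leave-one-out error of a nearest-centroid classifier
    restricted to the given feature indices (sketch).

    X: list of samples (each a list of metabolite measurements)
    y: class labels (e.g., case vs. control)
    """
    n = len(X)
    errors = 0
    for i in range(n):
        # class centroids computed with sample i held out
        cents = {}
        for label in set(y):
            rows = [X[j] for j in range(n) if j != i and y[j] == label]
            cents[label] = [sum(r[f] for r in rows) / len(rows) for f in features]
        xi = [X[i][f] for f in features]
        pred = min(cents, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(xi, cents[c])))
        errors += pred != y[i]
    return errors / n
```

A feature with small within-class standard deviation and a large between-class shift, exactly the biomarker profile the abstract describes, drives this error to zero on its own, which is why the optimization favors such small subsets.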
PMID:23046877
Alternative industrial carbon emissions benchmark based on input-output analysis
NASA Astrophysics Data System (ADS)
Han, Mengyao; Ji, Xi
2016-12-01
Some problems exist in the current carbon emissions benchmark setting systems. The primary consideration for industrial carbon emissions standards relates to direct carbon emissions (power-related emissions), and only a portion of indirect emissions are considered in the current carbon emissions accounting processes. This practice is insufficient and may cause double counting to some extent due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method links direct carbon emissions with inter-industrial economic exchanges and systematically quantifies carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, at the first level of carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method relates emissions directly to each responsible party in a practical way through the measurement of complex production and supply chains, and reduces carbon emissions at their original sources. This method is expected to be developed under uncertain internal and external contexts and is further expected to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
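The standard input-output machinery behind embodied intensities is the Leontief inverse: if d is the vector of direct emissions per unit output and A the inter-industry technical coefficient matrix, the embodied intensity is e = d (I - A)^-1. A minimal sketch, with toy numbers rather than any real Beijing data:

```python
import numpy as np

def embodied_intensity(direct_intensity, A):
    """Embodied carbon intensity via input-output analysis:
    e = d (I - A)^{-1}.

    direct_intensity: direct emissions per unit output, per sector
    A: technical coefficient matrix (inputs from sector i per unit
       output of sector j)
    """
    A = np.asarray(A, dtype=float)
    d = np.asarray(direct_intensity, dtype=float)
    leontief_inverse = np.linalg.inv(np.eye(A.shape[0]) - A)
    return d @ leontief_inverse
```

Because the Leontief inverse accumulates emissions along the whole supply chain exactly once, each sector's embodied intensity is at least its direct intensity, and the double counting of mixed direct/indirect accounting is avoided by construction.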
Tan, J Y; Chua, C K; Leong, K F
2013-02-01
Advanced scaffold fabrication techniques such as Rapid Prototyping (RP) are generally recognized to be advantageous over conventional fabrication methods in terms of architectural control and reproducibility. Yet, most RP techniques tend to suffer from resolution limitations which result in scaffolds with uncontrollable, random-size pores and low porosity, albeit having interconnected channels which are characteristically present in most RP scaffolds. With the increasing number of studies demonstrating the profound influences of scaffold pore architecture on cell behavior and overall tissue growth, a scaffold fabrication method with sufficient architectural control becomes imperative. The present study demonstrates the use of RP fabrication techniques to create scaffolds having interconnected channels as well as controllable micro-size pores. Adopted from the concepts of porogen leaching and indirect RP techniques, the proposed fabrication method uses monodisperse microspheres to create an ordered, hexagonal close-packed (HCP) array of micro-pores that surrounds the existing channels of the RP scaffold. The pore structure of the scaffold is shaped using a single sacrificial construct which comprises the microspheres and a dissolvable RP mold that were sintered together. As such, the size of pores as well as the channel configuration of the scaffold can be tailored based on the design of the RP mold and the size of microspheres used. The fabrication method developed in this work can be a promising alternative way of preparing scaffolds with customized pore structures that may be required for specific studies concerning cell-scaffold interactions.
Glass wool filters for concentrating waterborne viruses and agricultural zoonotic pathogens
USDA-ARS?s Scientific Manuscript database
The key first step in evaluating pathogen levels in suspected contaminated water is concentration. Concentration methods tend to be specific for a particular pathogen group or genus, for example viruses or Cryptosporidium, requiring multiple methods if the sampling program is targeting more than on...
Determining Fuzzy Membership for Sentiment Classification: A Three-Layer Sentiment Propagation Model
Zhao, Chuanjun; Wang, Suge; Li, Deyu
2016-01-01
Enormous quantities of review documents exist in forums, blogs, twitter accounts, and shopping web sites. Analysis of the sentiment information hidden in these review documents is very useful for consumers and manufacturers. The sentiment orientation and sentiment intensity of a review can be described in more detail by using a sentiment score than by using bipolar sentiment polarity. Existing methods for calculating review sentiment scores frequently use a sentiment lexicon or the locations of features in a sentence, a paragraph, and a document. In order to achieve more accurate sentiment scores of review documents, a three-layer sentiment propagation model (TLSPM) is proposed that uses three kinds of interrelations, those among documents, topics, and words. First, we use nine relationship pairwise matrices between documents, topics, and words. In TLSPM, we suppose that sentiment neighbors tend to have the same sentiment polarity and similar sentiment intensity in the sentiment propagation network. Then, we implement the sentiment propagation processes among the documents, topics, and words in turn. Finally, we can obtain the steady sentiment scores of documents by a continuous iteration process. Intuition might suggest that documents with strong sentiment intensity make larger contributions to classification than those with weak sentiment intensity. Therefore, we use the fuzzy membership of documents obtained by TLSPM as the weight of the text to train a fuzzy support vector machine model (FSVM). As compared with a support vector machine (SVM) and four other fuzzy membership determination methods, the results show that FSVM trained with TLSPM can enhance the effectiveness of sentiment classification. In addition, FSVM trained with TLSPM can reduce the mean square error (MSE) on seven sentiment rating prediction data sets. PMID:27846225
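The propagation step at the heart of TLSPM, repeatedly mixing each node's seed sentiment with its neighbours' current scores until the scores stabilize, can be sketched with a single relation matrix. This is a generic label-propagation illustration (one matrix rather than the paper's nine document/topic/word matrices); the mixing constant and names are assumptions:

```python
import numpy as np

def propagate_sentiment(W, seed, alpha=0.5, iters=100):
    """Iterative sentiment propagation (sketch):
    s <- alpha * W s + (1 - alpha) * seed, iterated to a fixed point.

    W:    row-normalized relation matrix among nodes
          (documents, topics, or words)
    seed: initial sentiment scores (e.g., from a lexicon)
    """
    W = np.asarray(W, dtype=float)
    s = np.asarray(seed, dtype=float)
    s0 = s.copy()
    for _ in range(iters):
        s = alpha * W @ s + (1 - alpha) * s0
    return s
```

The steady scores can then serve as fuzzy memberships: documents whose propagated score has large magnitude get high weight when training the FSVM, while near-neutral documents contribute less.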
Estimating crop net primary production using inventory data and MODIS-derived parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bandaru, Varaprasad; West, Tristram O.; Ricciuto, Daniel M.
2013-06-03
National estimates of spatially-resolved cropland net primary production (NPP) are needed for diagnostic and prognostic modeling of carbon sources, sinks, and net carbon flux. Cropland NPP estimates that correspond with existing cropland cover maps are needed to drive biogeochemical models at the local scale and over national and continental extents. Existing satellite-based NPP products tend to underestimate NPP on croplands. A new Agricultural Inventory-based Light Use Efficiency (AgI-LUE) framework was developed to estimate individual crop biophysical parameters for use in estimating crop-specific NPP. The method is documented here and evaluated for corn and soybean crops in Iowa and Illinois in years 2006 and 2007. The method includes a crop-specific enhanced vegetation index (EVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS), shortwave radiation data estimated using the Mountain Climate Simulator (MTCLIM) algorithm, and crop-specific LUE per county. These variables were combined to generate spatially-resolved, crop-specific NPP estimates that correspond to the Cropland Data Layer (CDL) land cover product. The modeling framework represented well both the gradient of NPP across Iowa and Illinois and the difference in NPP between years 2006 and 2007. Average corn and soybean NPP from AgI-LUE was 980 g C m-2 yr-1 and 420 g C m-2 yr-1, respectively. These values were 2.4 and 1.1 times higher, respectively, for corn and soybean compared to the MOD17A3 NPP product. Estimates of gross primary productivity (GPP) derived from AgI-LUE were in close agreement with eddy flux tower estimates. The combination of new inputs and improved datasets enabled the development of spatially explicit and reliable NPP estimates for individual crops over large regional extents.
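The light-use-efficiency arithmetic behind such a framework can be illustrated as follows. The parameter values and the fixed respiration fraction are illustrative assumptions, not AgI-LUE's calibrated coefficients: absorbed radiation is approximated as EVI times the photosynthetically active fraction of shortwave radiation, GPP is that times a crop-specific efficiency, and NPP is GPP minus an autotrophic respiration share.

```python
def npp_lue(evi, shortwave_mj_m2, lue_g_c_per_mj,
            par_fraction=0.48, resp_fraction=0.45):
    """Light-use-efficiency NPP estimate (illustrative form).

    evi              -- enhanced vegetation index (unitless, 0-1)
    shortwave_mj_m2  -- incoming shortwave radiation, MJ m-2
    lue_g_c_per_mj   -- light use efficiency, g C per MJ of APAR
    """
    apar = evi * par_fraction * shortwave_mj_m2   # absorbed PAR, MJ m-2
    gpp = lue_g_c_per_mj * apar                   # gross primary production
    return gpp * (1.0 - resp_fraction)            # NPP = GPP - respiration

# Growing-season totals for a hypothetical corn pixel.
npp = npp_lue(evi=0.6, shortwave_mj_m2=3000.0, lue_g_c_per_mj=1.8)
```

With these made-up inputs the result is on the order of the corn NPP reported in the abstract, which is only meant to show the units work out.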
Knowledge, use and management of native wild edible plants from a seasonal dry forest (NE, Brazil)
2013-01-01
Background Despite being an ancient practice that satisfies basic human needs, the use of wild edible plants tends to be forgotten along with associated knowledge in rural communities. The objective of this work is to analyze existing relationships between knowledge, use, and management of native wild edible plants and socioeconomic factors such as age, gender, family income, individual income, past occupation and current occupation. Methods The field work took place between 2009 and 2010 in the community of Carão, Altinho municipality, in the state of Pernambuco in northeastern Brazil. We conducted semi-structured interviews with 39 members of the community regarding knowledge, use and management of 14 native wild edible plants from the Caatinga region, corresponding to 12 vegetable species. In parallel, we documented the socioeconomic aspects of the interviewed population (age, gender, family income, individual income, past occupation and current occupation). Results Knowledge about edible plants was related to age but not to current occupation or use. Current use was not associated with age, gender or occupation. The association between age and past use may indicate abandonment of these resources. Conclusion Because the species are endangered not by their use but by deforestation of the ecosystems in which these plants grow, we suggest that promoting consumption of the plants by community members is worthwhile, as it stimulates the appropriation and consequent protection of the ecosystem. To promote consumption of these plants, it is important to begin by teaching people about plant species that can be used for their alimentation, disproving existing myths about plant use, and encouraging diversification of use by motivating the invention of new preparation methods. An example of how this can be achieved is through events like the “Preserves Festival”. PMID:24279311
Education in Health Research Methodology: Use of a Wiki for Knowledge Translation
Hamm, Michele P.; Klassen, Terry P.; Scott, Shannon D.; Moher, David; Hartling, Lisa
2013-01-01
Introduction A research-practice gap exists between what is known about conducting methodologically rigorous randomized controlled trials (RCTs) and what is done. Evidence consistently shows that pediatric RCTs are susceptible to high risk of bias; therefore novel methods of influencing the design and conduct of trials are required. The objective of this study was to develop and pilot test a wiki designed to educate pediatric trialists and trainees in the principles involved in minimizing risk of bias in RCTs. The focus was on preliminary usability testing of the wiki. Methods The wiki was developed through adaptation of existing knowledge translation strategies and through tailoring the site to the identified needs of the end-users. The wiki was evaluated for usability and user preferences regarding the content and formatting. Semi-structured interviews were conducted with 15 trialists and systematic reviewers, representing varying levels of experience with risk of bias or the conduct of trials. Data were analyzed using content analysis. Results Participants found the wiki to be well organized, easy to use, and straightforward to navigate. Suggestions for improvement tended to focus on clarification of the text or on esthetics, rather than on the content or format. Participants liked the additional features of the site that were supplementary to the text, such as the interactive examples, and the components that focused on practical applications, adding relevance to the theory presented. While the site could be used by both trialists and systematic reviewers, the lack of a clearly defined target audience caused some confusion among participants. Conclusions Participants were supportive of using a wiki as a novel educational tool. The results of this pilot test will be used to refine the risk of bias wiki, which holds promise as a knowledge translation intervention for education in medical research methodology. PMID:23741424
Online Mental Health Resources in Rural Australia: Clinician Perceptions of Acceptability
Holloway, Kristi; Riley, Geoffrey; Auret, Kirsten
2013-01-01
Background Online mental health resources have been proposed as an innovative means of overcoming barriers to accessing rural mental health services. However, clinicians tend to express lower satisfaction with online mental health resources than do clients. Objective To understand rural clinicians’ attitudes towards the acceptability of online mental health resources as a treatment option in the rural context. Methods In-depth interviews were conducted with 21 rural clinicians (general practitioners, psychologists, psychiatrists, and clinical social workers). Interviews were supplemented with rural-specific vignettes, which described clinical scenarios in which referral to online mental health resources might be considered. Symbolic interactionism was used as the theoretical framework for the study, and interview transcripts were thematically analyzed using a constant comparative method. Results Clinicians were optimistic about the use of online mental health resources into the future, showing a preference for integration alongside existing services, and use as an adjunct rather than an alternative to traditional approaches. Key themes identified included perceptions of resources, clinician factors, client factors, and the rural and remote context. Clinicians favored resources that were user-friendly and could be integrated into their clinical practice. Barriers to use included a lack of time to explore resources, difficulty accessing training in the rural environment, and concerns about the lack of feedback from clients. Social pressure exerted within professional clinical networks contributed to a cautious approach to referring clients to online resources. Conclusions Successful implementation of online mental health resources in the rural context requires attention to clinician perceptions of acceptability. Promotion of online mental health resources to rural clinicians should include information about resource effectiveness, enable integration with existing services, and provide opportunities for renegotiating the socially defined role of the clinician in the eHealth era. PMID:24007949
2012-01-01
Factor VIIa tended to primarily impact clotting time, thrombin peak time, and maximum slope of the thrombin curve, whereas in the case of PCC-FVII ... constituents of existing PCCs are the four coagulation factors (F) II (prothrombin), FVII, FIX, and FX. Notably, FVII inhibits thrombin generation by ... proposed PCC composition (coagulation factors [F] II, IX, and X and the anticoagulant antithrombin), designated PCC-AT, was compared with that of
2015-08-01
average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data ... SELECT data (incident prostate cancer). We found that the odds of low-grade prostate cancer tended to increase with the number of biopsy cores with ... merged pathology-PCPT-SELECT data from the review of the H&E stained slide images (Task 7a), and performed the statistical analysis of the merged
1988-10-01
sites tend to be colonized by aquatic organisms adapted to an intertidal existence and by typical wetland plants. These plants may or may not be the ... the upland site and within the intertidal range for the wetland site. Site Selection and Design Site selection 15. Acceptable sites for upland and ... intertidal range. Along one side of the site, a sandbag dike was constructed that could be removed after filling to provide easy tidal interchange
NASA Technical Reports Server (NTRS)
1993-01-01
A complex of high pressure piping at Stennis Space Center carries rocket propellants and other fluids/gases through the Center's Component Test Facility. Conventional clamped connectors tend to leak when propellant lines are chilled to extremely low temperatures. Reflange, Inc. customized an existing piping connector to include a secondary seal more tolerant of severe thermal gradients for Stennis. The T-Con connector solved the problem, and the company is now marketing a commercial version that permits testing, monitoring or collecting any emissions that may escape the primary seal during severe thermal transition.
Tapered undulator for SASE FELs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fawley, William M.; Huang, Zhirong; Kim, Kwang-Je
We discuss the use of tapered undulators to enhance the performance of free-electron lasers (FELs) based upon self-amplified spontaneous emission (SASE), where the radiation tends to have a relatively broad bandwidth, limited temporal phase coherence, and large amplitude fluctuations. Using the polychromatic FEL simulation code GINGER, we numerically demonstrate the effectiveness of a tapered undulator for parameters corresponding to the existing Argonne low-energy undulator test line (LEUTL) FEL. We also study possible tapering options for proposed x-ray FELs such as the Linac Coherent Light Source (LCLS).
Is an immune reaction required for malignant transformation and cancer growth?
Prehn, Richmond T; Prehn, Liisa M
2012-07-01
Increasing evidence has shown that probably all malignant mouse cells, even those of spontaneous sporadic cancers, are endowed with tumor-specific antigens. Stimulation of cancer growth, rather than inhibition by the immune reaction, is seemingly the prevalent effect in the animal of origin (the autochthonous animal). Small initial dosages of even strong tumor antigens tend to produce stimulatory immune reactions rather than tumor inhibition in any animal. Thus, an immune response at a low level may be an essential growth-driving feature of nascent cancers, and this may be why all cancers apparently have tumor-specific antigens. Inasmuch as a low level of immunity is stimulatory to tumor growth while larger dosages are inhibitory, immuno-selection via this low response may tend to keep the antitumor immune reaction weak and at a nearly maximal stimulatory level throughout most of a tumor's existence. These facts suggest that both suppression of tumor immunity and a heightened immune reaction might each be therapeutic, although they are very contrasting modalities.
A Cultural Psychology of Agency: Morality, Motivation, and Reciprocity.
Miller, Joan G; Goyal, Namrata; Wice, Matthew
2017-09-01
We highlight the need to culturally broaden psychological theories of social development in providing an overview of our programs of cross-cultural research on interpersonal morality, motivation, and reciprocity. Our research demonstrates that whereas Americans tend to treat interpersonal morality as a matter of personal choice, Indians tend to treat it as a role-related duty. Furthermore, Americans associate greater satisfaction with acting autonomously than with acting to fulfill social expectations, whereas Indians associate high levels of satisfaction with both types of cases. We also demonstrate that cultural variation exists in reliance on communal norms versus reciprocal exchange norms in everyday social support interactions among American, Indian, and Japanese populations, with these norms providing a background for contrasting experiences of agency. In conclusion, we highlight the contributions of cultural research to basic psychological theory. Although cultural research provides greater awareness of diversity in psychological functioning, its fundamental value is to contribute new insights into the theoretical formulations and methodological stances adopted in the discipline more generally.
Heim, Derek; Ross, Alastair; Eadie, Douglas; MacAskill, Susan; Davies, John B; Hastings, Gerard; Haw, Sally
2009-12-01
Introduction of smoke-free legislation presents a unique opportunity to study how population-level interventions can challenge existing smoking norms. Our study examined support and opposition to the Scottish legislation and ascertained the relative importance of social and health factors in shaping attitudes among bar customers. Repeat (pre-/post-legislation) recorded and transcribed semistructured interviews with customers (n = 67/62) of eight community bars in contrasting settings were conducted, and data were analyzed thematically. While the legislation was marketed primarily in terms of gains to public and individual health, supportive and opposing responses to the legislation tended to be framed around libertarian and practical factors. Attitudes tended to be stable across both waves of data collection. It is concluded that reasons for smoking were not challenged by promotion of the legislation. In addition to a focus on health gains, social marketing of smoke-free legislation and initiatives may therefore benefit from a stronger focus on social and contextual effects of such policies.
Comparisons between data assimilated HYCOM output and in situ Argo measurements in the Bay of Bengal
NASA Astrophysics Data System (ADS)
Wilson, E. A.; Riser, S.
2014-12-01
This study evaluates the performance of data assimilated Hybrid Coordinate Ocean Model (HYCOM) output for the Bay of Bengal from September 2008 through July 2013. We find that while HYCOM assimilates Argo data, the model still suffers from significant temperature and salinity biases in this region. These biases are most severe in the northern Bay of Bengal, where the model tends to be too saline near the surface and too fresh at depth. The maximum magnitude of these biases is approximately 0.6 PSS. We also find that the model's salinity biases have a distinct seasonal cycle. The most problematic periods are the months following the summer monsoon (Oct-Jan). HYCOM's near surface temperature estimates compare more favorably with Argo, but significant errors exist at deeper levels. We argue that optimal interpolation will tend to induce positive salinity biases in the northern regions of the Bay. Further, we speculate that these biases are introduced when the model relaxes to climatology and assimilates real-time data.
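The model-minus-observation comparison described here reduces to computing biases over collocated pairs and averaging by calendar month to expose a seasonal cycle; a minimal NumPy sketch, with synthetic salinities standing in for collocated HYCOM output and Argo profiles:

```python
import numpy as np

# Synthetic collocated near-surface salinities (PSS-78) and the calendar
# month of each model/observation pair (illustrative values only).
months = np.array([10, 10, 11, 1, 6, 6])
model  = np.array([33.8, 33.9, 34.0, 33.7, 34.5, 34.6])   # HYCOM stand-in
argo   = np.array([33.2, 33.4, 33.5, 33.2, 34.5, 34.5])   # Argo stand-in

bias = model - argo   # positive values: model too saline
monthly_bias = {m: bias[months == m].mean() for m in np.unique(months)}
```

In this toy data the post-monsoon months carry the largest positive bias, mirroring the Oct-Jan pattern the study reports.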
NASA Astrophysics Data System (ADS)
Zaslavskii, O. B.
2011-06-01
Recently, it was found that in the vicinity of the horizon of a rotating black hole two particles can collide in such a way that the energy in their centre-of-mass frame becomes infinite (the so-called BSW effect). I give a brief review of the basic features of this effect and show that it is a generic property of rotating black holes. In addition, there exists a counterpart for the radial motion of charged particles in a charged black hole background. A simple kinematic explanation is suggested, based on the observation that all massive particles fall into two classes. In the first case (by definition, "usual particles"), the velocity approaches that of light on the horizon in the locally nonrotating frame due to a special relationship between the energy and the angular momentum. In the second case, it tends to some value less than the speed of light. As a result, the relative velocity also tends to the speed of light with an infinitely growing Lorentz factor.
Diversity of social ties in scientific collaboration networks
NASA Astrophysics Data System (ADS)
Shi, Quan; Xu, Bo; Xu, Xiaomin; Xiao, Yanghua; Wang, Wei; Wang, Hengshan
2011-11-01
Diversity is one of the important perspectives to characterize behaviors of individuals in social networks. It is intuitively believed that diversity of social ties accounts for competitive advantage and idea innovation. However, quantitative evidence from a real large social network can rarely be found in previous research. Thanks to the availability of scientific publication records on the Web, we can now construct a large scientific collaboration network, which gives us a chance to gain insight into the diversity of relationships in a real social network through statistical analysis. In this article, we perform an empirical analysis of a scientific collaboration network extracted from DBLP, an online bibliographic database in computer science, in a systematic way, finding the following: distributions of diversity indices tend to decay in an exponential or Gaussian way; diversity indices are not trivially correlated with existing vertex importance measures; and authors with diverse social ties tend to connect to each other, and these authors are generally more competitive than others.
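One standard way to quantify the diversity of an author's ties is Shannon entropy over the distribution of their collaborations across communities; the abstract does not specify which diversity indices were used, so this is an illustrative sketch of one common choice:

```python
import math
from collections import Counter

def tie_diversity(neighbor_communities):
    """Shannon entropy (in bits) of a node's ties across communities.

    Takes the community label of each collaboration tie; higher entropy
    means the author's ties are spread more evenly across communities.
    """
    counts = Counter(neighbor_communities)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# An author collaborating evenly across four research communities is
# maximally diverse for that degree: entropy = log2(4) = 2 bits.
diverse = tie_diversity(["ML", "DB", "IR", "HCI"])
narrow = tie_diversity(["ML", "ML", "ML", "ML"])   # all ties in one community
```

The community labels here are hypothetical; in the DBLP setting they could come from venue clusters or co-authorship communities.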
Narrating practice: reflective accounts and the textual construction of reality.
Taylor, Carolyn
2003-05-01
Two approaches dominate current thinking in health and welfare: evidence-based practice and reflective practice. Whilst there is debate about the merits of evidence-based practice, reflective practice is generally accepted without critical debate as an important educational tool. Where critique does exist it tends to adopt a Foucauldian approach, focusing on the surveillance and self-regulatory aspects of reflective practice. This article acknowledges the critical purchase on the concept of reflective practice offered by Foucauldian approaches but argues that microsociological and discourse analytic approaches can further illuminate the subject and thus serve as a complement to them. The claims of proponents of reflective practice are explored, in opposition to the technical-rational approach of evidence-based practice. Reflective practice tends to adopt a naive or romantic realist position and fails to acknowledge the ways in which reflective accounts construct the world of practice. Microsociological approaches can help us to understand reflective accounts as examples of case-talk, constructed in a narrative form in the same way as case records and presentations.
A typology for strategies to connect citizen science and management.
Freitag, Amy
2016-09-01
One of the often cited benefits of citizen science is better connecting citizens and their science to adaptive management outcomes. However, there is no consensus as to whether this is a reasonable expectation, and if so, how best to approach creating a successful link to management. This review finds cases where the citizen science-management link is explicitly discussed and places each case into a meta-analysis framework that will help define some general successful approaches to forming such a link. We categorize the types of linkages between citizen science and management along two main axes: cooperative to adversarial and deliberate to serendipitous. Cooperative and deliberate types of linkages are the most common, likely for a mix of reasons: such links are the most commonly written about in the scientific literature, they tend to persist for longer periods of time, and other types of links tend to drift toward the cooperative/deliberate approach over time.
Automated problem list generation and physicians perspective from a pilot study.
Devarakonda, Murthy V; Mehta, Neil; Tsou, Ching-Huei; Liang, Jennifer J; Nowacki, Amy S; Jelovsek, John Eric
2017-09-01
An accurate, comprehensive and up-to-date problem list can help clinicians provide patient-centered care. Unfortunately, problem lists created and maintained in electronic health records by providers tend to be inaccurate, duplicative and out of date. With advances in machine learning and natural language processing, it is possible to automatically generate a problem list from the data in the EHR and keep it current. In this paper, we describe an automated problem list generation method and report on insights from a pilot study of physicians' assessment of the generated problem lists compared to existing provider-curated problem lists in an institution's EHR system. The natural language processing and machine learning-based Watson method models clinical thinking in identifying a patient's problem list using clinical notes and structured data. This pilot study assessed the Watson method and included 15 randomly selected, de-identified patient records from a large healthcare system that were each planned to be reviewed by at least two internal medicine physicians. The physicians created their own problem lists, and then evaluated the overall usefulness of their own problem lists (P), Watson-generated problem lists (W), and the existing EHR problem lists (E) on a 10-point scale. The primary outcome was pairwise comparisons of P, W, and E. Six out of the 10 invited physicians completed 27 assessments of P, W, and E, and in the process evaluated 732 Watson-generated problems and 444 problems in the EHR system. As expected, physicians rated their own lists, P, highest. However, W was rated higher than E. In 89% of assessments, Watson identified at least one important problem that physicians missed. Cognitive computing systems like this Watson system hold the potential for accurate, problem-list-centered summarization of patient records, potentially leading to increased efficiency, better clinical decision support, and improved quality of patient care.
Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Identifying novel drug indications through automated reasoning.
Tari, Luis; Vo, Nguyen; Liang, Shanshan; Patel, Jagruti; Baral, Chitta; Cai, James
2012-01-01
With the large amount of pharmacological and biological knowledge available in literature, finding novel drug indications for existing drugs using in silico approaches has become increasingly feasible. Typical literature-based approaches generate new hypotheses in the form of protein-protein interaction networks by means of linking concepts based on their co-occurrences within abstracts. However, such approaches tend to generate too many hypotheses, and identifying new drug indications from large networks can be a time-consuming process. In this work, we developed a method that acquires the necessary facts from literature and knowledge bases, and identifies new drug indications through automated reasoning. This is achieved by encoding the molecular effects caused by drug-target interactions and links to various diseases and drug mechanisms as domain knowledge in AnsProlog, a declarative language that is useful for automated reasoning, including reasoning with incomplete information. Unlike other literature-based approaches, our approach is more fine-grained, especially in identifying indirect relationships for drug indications. To evaluate the capability of our approach in inferring novel drug indications, we applied our method to 943 drugs from DrugBank and asked if any of these drugs have potential anti-cancer activities based on information on their targets and molecular interaction types alone. A total of 507 drugs were found to have the potential to be used for cancer treatments. Among the potential anti-cancer drugs, 67 out of 81 drugs (a recall of 82.7%) are indeed known cancer drugs. In addition, 144 out of 289 drugs (a recall of 49.8%) are non-cancer drugs that are currently tested in clinical trials for cancer treatments. These results suggest that our method is able to infer drug indications (original or alternative) based on their molecular targets and interactions alone and has the potential to discover novel drug indications for existing drugs.
Motivation Gets You Going and Habit Gets You There
ERIC Educational Resources Information Center
Van Twembeke, Ellen; Goeman, Katie
2018-01-01
Background: Educational changes often face resistance as lecturers tend to stand by familiar methods of instruction. This reluctance presents a challenge for programme coordinators who wish to introduce other methods, such as flipped classrooms, and seek to motivate lecturers to embrace educational change. Research into lecturers' motivational…
Vertical Enhancement of Second-Year Psychology Research
ERIC Educational Resources Information Center
Morys-Carter, Wakefield L.; Paltoglou, Aspasia E.; Davies, Emma L.
2015-01-01
Statistics and Research Methods modules are often unpopular with psychology students; however, at Oxford Brookes University the seminar component of the second-year research methods module tends to get very positive feedback. Over half of the seminars work towards the submission of a research-based experimental lab report. This article introduces…
A COMPARISON OF FLARE FORECASTING METHODS. I. RESULTS FROM THE “ALL-CLEAR” WORKSHOP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, G.; Leka, K. D.; Dunn, T.
2016-10-01
Solar flares produce radiation that can have an almost immediate effect on the near-Earth environment, making it crucial to forecast flares in order to mitigate their negative effects. The number of published approaches to flare forecasting using photospheric magnetic field observations has proliferated, with varying claims about how well each works. Because of the different analysis techniques and data sets used, it is essentially impossible to compare the results from the literature. This problem is exacerbated by the low event rates of large solar flares. The challenges of forecasting rare events have long been recognized in the meteorology community, but have yet to be fully acknowledged by the space weather community. During the interagency workshop on “all clear” forecasts held in Boulder, CO in 2009, the performance of a number of existing algorithms was compared on common data sets, specifically line-of-sight magnetic field and continuum intensity images from the Michelson Doppler Imager, with consistent definitions of what constitutes an event. We demonstrate the importance of making such systematic comparisons, and of using standard verification statistics to determine what constitutes a good prediction scheme. When a comparison was made in this fashion, no one method clearly outperformed all others, which may in part be due to the strong correlations among the parameters used by different methods to characterize an active region. For M-class flares and above, the set of methods tends toward a weakly positive skill score (as measured with several distinct metrics), with no participating method proving substantially better than climatological forecasts.
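The "standard verification statistics" referred to here are computed from the 2×2 forecast contingency table (hits, misses, false alarms, correct negatives); a minimal sketch of two common skill scores, which is a generic illustration rather than the workshop's exact metric set:

```python
def true_skill_statistic(hits, misses, false_alarms, correct_negs):
    """TSS (Hanssen-Kuipers discriminant): POD minus POFD."""
    pod = hits / (hits + misses)                        # probability of detection
    pofd = false_alarms / (false_alarms + correct_negs) # prob. of false detection
    return pod - pofd

def heidke_skill_score(hits, misses, false_alarms, correct_negs):
    """HSS: fraction of correct forecasts beyond random chance."""
    n = hits + misses + false_alarms + correct_negs
    # Correct forecasts expected by chance, from the table's marginals.
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negs + misses) * (correct_negs + false_alarms)) / n
    correct = hits + correct_negs
    return (correct - expected) / (n - expected)

# A perfect forecast scores 1 on both; chance-level forecasts score ~0.
tss = true_skill_statistic(10, 0, 0, 90)
```

For rare events like large flares, skill scores such as these are preferred over raw accuracy, which a trivial "always no-flare" forecast can inflate.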
Supervised group Lasso with applications to microarray data analysis
Ma, Shuangge; Song, Xiao; Huang, Jian
2007-01-01
Background A tremendous amount of effort has been devoted to identifying genes for diagnosis and prognosis of diseases using microarray gene expression data. It has been demonstrated that gene expression data have cluster structure, where the clusters consist of co-regulated genes which tend to have coordinated functions. However, most available statistical methods for gene selection do not take into consideration the cluster structure. Results We propose a supervised group Lasso approach that takes into account the cluster structure in gene expression data for gene selection and predictive model building. For gene expression data without biological cluster information, we first divide genes into clusters using the K-means approach and determine the optimal number of clusters using the Gap method. The supervised group Lasso consists of two steps. In the first step, we identify important genes within each cluster using the Lasso method. In the second step, we select important clusters using the group Lasso. Tuning parameters are determined using V-fold cross validation at both steps to allow for further flexibility. Prediction performance is evaluated using leave-one-out cross validation. We apply the proposed method to disease classification and survival analysis with microarray data. Conclusion We analyze four microarray data sets using the proposed approach: two cancer data sets with binary cancer occurrence as outcomes and two lymphoma data sets with survival outcomes. The results show that the proposed approach is capable of identifying a small number of influential gene clusters and important genes within those clusters, and has better prediction performance than existing methods. PMID:17316436
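The cluster-level selection in the second step rests on the group Lasso penalty, whose proximal operator shrinks or zeroes whole groups of coefficients at once; a minimal sketch of that operator (illustrative of the mechanism, not the authors' full two-step procedure with cross-validated tuning):

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of the group Lasso penalty lam * sum_g ||beta_g||_2.

    Each group's coefficient vector is shrunk toward zero; a group whose
    Euclidean norm is below lam is removed entirely (set exactly to zero),
    which is how whole gene clusters get selected in or out.
    """
    out = beta.copy()
    labels = np.asarray(groups)
    for g in set(groups):
        idx = np.flatnonzero(labels == g)
        norm = np.linalg.norm(beta[idx])
        out[idx] = 0.0 if norm <= lam else (1 - lam / norm) * beta[idx]
    return out

beta = np.array([3.0, 4.0, 0.1, -0.2])   # cluster A strong, cluster B weak
groups = ["A", "A", "B", "B"]
shrunk = group_soft_threshold(beta, groups, lam=1.0)
# Cluster B (norm ~0.22 < 1) is dropped entirely; cluster A is shrunk.
```

In a full solver this operator is applied repeatedly inside a proximal gradient loop; the one-shot call here just shows the group-wise all-or-nothing behavior.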
Mihara, Satoko; Higuchi, Susumu
2017-07-01
The diagnostic criteria of Internet gaming disorder (IGD) have been included in section III of DSM-5. This study aims to systematically review both cross-sectional and longitudinal epidemiological studies of IGD. All publications included in PubMed and PsycINFO up to May 2016 were systematically searched to identify cross-sectional studies on prevalence and longitudinal studies of IGD. In the process of identification, articles in non-English languages and studies focusing solely on the use of gaming were excluded, and those meeting the methodological requirements set by this review were included. As a result, 37 cross-sectional and 13 longitudinal studies were selected for review. The prevalence of IGD in the total samples ranged from 0.7% to 27.5%. The prevalence was higher among males than females in the vast majority of studies and tended to be higher among younger rather than older people in some studies. Geographical region made little difference to prevalence. Factors associated with IGD were reported in 28 of 37 cross-sectional studies. These were diverse and covered gaming, demographic and familial factors, interpersonal relations, social and school functioning, personality, psychiatric comorbidity, and physical health conditions. Longitudinal studies identified risk and protective factors, and health and social consequences of IGD. The natural course of IGD was diverse but tended to be more stable among adolescents than among adults. Although existing epidemiological studies have provided useful data, differences in methodologies make it difficult to compare the findings of these studies when drawing consensus. Future international studies using reliable and uniform methods are warranted. © 2017 The Authors. Psychiatry and Clinical Neurosciences © 2017 Japanese Society of Psychiatry and Neurology.
Nonmelanoma skin cancer and risk of all-cause and cancer-related mortality: a systematic review.
Barton, Virginia; Armeson, Kent; Hampras, Shalaka; Ferris, Laura K; Visvanathan, Kala; Rollison, Dana; Alberg, Anthony J
2017-05-01
Some reports suggest that a history of nonmelanoma skin cancer (NMSC) may be associated with increased mortality. NMSCs have very low fatality rates, but the high prevalence of NMSC elevates the importance of the possibility of associated subsequent mortality from other causes. The variable methods and findings of existing studies leave the significance of these results uncertain. To provide clarity, we conducted a systematic review to characterize the evidence on the associations of NMSC with: (1) all-cause mortality, (2) cancer-specific mortality, and (3) cancer survival. Bibliographic databases were searched through February 2016. Cohort studies published in English were included if adequate data were provided to estimate mortality ratios in patients with versus without NMSC. Data were abstracted from a total of eight studies from independent data sources that met inclusion criteria (n = 3 for all-cause mortality, n = 2 for cancer-specific mortality, and n = 5 for cancer survival). For all-cause mortality, a significantly increased risk was observed for patients with a history of squamous cell carcinoma (SCC) (mortality ratio estimates [MR] 1.25 and 1.30), whereas no increased risk was observed for patients with a history of basal cell carcinoma (BCC) (MRs 0.96 and 0.97). Based on one study, the association with cancer-specific mortality was stronger for SCC (MR 2.17) than BCC (MR 1.15). Across multiple types of cancer, both SCC and BCC tended to be associated with poorer survival from second primary malignancies. Multiple studies support an association between NMSC and fatal outcomes; the associations tend to be stronger for SCC than BCC. Additional investigation is needed to more precisely characterize these associations and elucidate potential underlying mechanisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bednarz, Bryan; Xu, X. George
2008-07-15
A Monte Carlo-based procedure to assess fetal doses from 6-MV external photon beam radiation treatments has been developed to improve upon existing techniques that are based on AAPM Task Group Report 36 published in 1995 [M. Stovall et al., Med. Phys. 22, 63-82 (1995)]. Anatomically realistic models of the pregnant patient representing 3-, 6-, and 9-month gestational stages were implemented into the MCNPX code together with a detailed accelerator model that is capable of simulating scattered and leakage radiation from the accelerator head. Absorbed doses to the fetus were calculated for six different treatment plans for sites above the fetus and one treatment plan for fibrosarcoma in the knee. For treatment plans above the fetus, the fetal doses tended to increase with increasing stage of gestation. This was due to the decrease in distance between the fetal body and field edge with increasing stage of gestation. For the treatment field below the fetus, the absorbed doses tended to decrease with increasing gestational stage of the pregnant patient, due to the increasing size of the fetus and relative constant distance between the field edge and fetal body for each stage. The absorbed doses to the fetus for all treatment plans ranged from a maximum of 30.9 cGy to the 9-month fetus to 1.53 cGy to the 3-month fetus. The study demonstrates the feasibility to accurately determine the absorbed organ doses in the mother and fetus as part of the treatment planning and eventually in risk management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loisel, V.; Abbas, M., E-mail: micheline.abbas@ensiacet.fr; Masbernat, O.
Laminar pressure-driven suspension flows are studied in the situation of neutrally buoyant particles at finite Reynolds number. The numerical method is validated for homogeneous particle distribution (no lateral migration across the channel): the increase of particle slip velocities and particle stress with inertia and concentration is in agreement with former works in the literature. In the case of a two-phase channel flow with freely moving particles, migration towards the channel walls due to the Segré-Silberberg effect is observed, leading to the development of a non-uniform concentration profile in the wall-normal direction (the concentration peaks in the wall region and tends towards zero in the channel core). The particle accumulation in the region of highest shear favors the shear-induced particle interactions and agitation, the profile of which appears to be correlated to the concentration profile. A 1D model predicting particle agitation, based on the kinetic theory of granular flows in the quenched state regime when Stokes number St = O(1) and from numerical simulations when St < 1, fails to reproduce the agitation profile in the wall normal direction. Instead, the existence of secondary flows is clearly evidenced by long time simulations. These are composed of a succession of contra-rotating structures, correlated with the development of concentration waves in the transverse direction. The mechanism proposed to explain the onset of this transverse instability is based on the development of a lift force induced by spanwise gradient of the axial velocity fluctuations. The establishment of the concentration profile in the wall-normal direction therefore results from the combination of the mean flow Segré-Silberberg induced migration, which tends to stratify the suspension, and secondary flows, which tend to mix the particles over the channel cross section.
Complementing Gender Analysis Methods.
Kumar, Anant
2016-01-01
The existing gender analysis frameworks start with the premise that men and women are equal and should be treated equally. These frameworks emphasize equal distribution of resources between men and women and assume that this will bring equality, which is not always true. Despite equal distribution of resources, women tend to suffer and experience discrimination in many areas of their lives, such as the power to control resources within social relationships and the need for emotional security and reproductive rights within interpersonal relationships. These frameworks hold that patriarchy as an institution plays an important role in women's oppression and exploitation and is a barrier to their empowerment and rights. Thus, some think that by ensuring equal distribution of resources and empowering women economically, institutions like patriarchy can be challenged. These frameworks are based on a proposed equality principle that puts men and women in competing roles; thus, real equality will never be achieved. Contrary to the existing gender analysis frameworks, the Complementing Gender Analysis framework proposed by the author provides a new approach toward gender analysis which not only recognizes the role of economic empowerment and equal distribution of resources but also suggests incorporating the concept and role of social capital, equity, and doing gender in gender analysis. It is based on a perceived equity principle that puts men and women in complementing roles, which may lead to equality. In this article the author reviews the mainstream gender theories in development from the viewpoint of the complementary roles of gender. This alternative view is argued on the basis of the existing literature and an anecdote of observations made by the author. While criticizing the equality theory, the author offers equity theory for resolving the gender conflict by using the concepts of social and psychological capital.
Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B
2017-01-01
Objective: Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns as compared to preconstructed order sets. Materials and Methods: The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Results: Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < 10^-20) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., “critical care,” “pneumonia,” “neurologic evaluation”). Discussion: Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Conclusion: Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. PMID:27655861
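The orders-as-words analogy can be reproduced in miniature. The order names, counts, and two-theme structure below are invented for illustration, not taken from the study's EHR data:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(1)

# Toy "patient x clinical order" count matrix, analogous to a
# document-term matrix: orders play the role of words.
orders = ["cbc", "chest_xray", "abx", "head_ct", "eeg", "vent"]
theme = rng.integers(0, 2, size=200)   # latent clinical theme per patient
counts = np.zeros((200, 6), dtype=int)
counts[theme == 0, :3] = rng.poisson(3, size=((theme == 0).sum(), 3))  # "pneumonia"-like
counts[theme == 1, 3:] = rng.poisson(3, size=((theme == 1).sum(), 3))  # "neuro"-like

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Normalized topic-order distributions: each topic should concentrate
# its mass on one order family, which is what makes inferred topics
# clinically interpretable.
topic_order = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
```

Given a new patient's first few orders, `lda.transform` yields topic proportions from which the remaining high-probability orders can be ranked, which is the prediction task the study evaluates against order sets.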
Sedimentation in small reservoirs on the San Rafael Swell, Utah
King, Norman Julius; Mace, Mervyn M.
1953-01-01
Movement of sediment from upland areas and eventually into main drainages and rivers is by no means through continuous transportation of material from the source to the delta. Instead it consists of a series of intermittent erosional and depositional phases that present a pulsating movement. Hence, sediment carried off upland areas may be deposited in lower reaches or along main drainages if an existing combination of factors tends to effect deposition. During this period actual sediment movement out of the basin may be relatively small. Following any change in existing conditions, however, these unconsolidated alluvial fills may be subjected to rapid removal; thus, for a limited time, abnormally high sediment production rates occur until the deposits are either removed or another cycle of deposition is started.
Non-Kondo many-body physics in a Majorana-based Kondo type system
NASA Astrophysics Data System (ADS)
van Beek, Ian J.; Braunecker, Bernd
2016-09-01
We carry out a theoretical analysis of a prototypical Majorana system, which demonstrates the existence of a Majorana-mediated many-body state and an associated intermediate low-energy fixed point. Starting from two Majorana bound states, hosted by a Coulomb-blockaded topological superconductor and each coupled to a separate lead, we derive an effective low-energy Hamiltonian, which displays a Kondo-like character. However, in contrast to the Kondo model which tends to a strong- or weak-coupling limit under renormalization, we show that this effective Hamiltonian scales to an intermediate fixed point, whose existence is contingent upon teleportation via the Majorana modes. We conclude by determining experimental signatures of this fixed point, as well as the exotic many-body state associated with it.
Zhiyong Cai; Michael O. Hunt; Robert J. Ross; Lawrence A. Soltis
1999-01-01
To date, there is no standard method for evaluating the structural integrity of wood floor systems using nondestructive techniques. Current methods of examination and assessment are often subjective and therefore tend to yield imprecise or variable results. For this reason, estimates of allowable wood floor loads are often conservative. The assignment of conservatively...
Ullah, Sami; Daud, Hanita; Dass, Sarat C; Khan, Habib Nawaz; Khalil, Alamgir
2017-11-06
Ability to detect potential space-time clusters in spatio-temporal data on disease occurrences is necessary for conducting surveillance and implementing disease prevention policies. Most existing techniques use geometrically shaped (circular, elliptical or square) scanning windows to discover disease clusters. In certain situations, where the disease occurrences tend to cluster in very irregularly shaped areas, these algorithms are not feasible in practice for the detection of space-time clusters. To address this problem, a new algorithm is proposed, which uses a co-clustering strategy to detect prospective and retrospective space-time disease clusters with no restriction on shape and size. The proposed method detects space-time disease clusters by tracking the changes in space-time occurrence structure instead of an in-depth search over space. This method was utilised to detect potential clusters in the annual and monthly malaria data in Khyber Pakhtunkhwa Province, Pakistan from 2012 to 2016, visualising the results on a heat map. The results of the annual data analysis showed that the most likely hotspot emerged in three sub-regions in the years 2013-2014. The most likely hotspots in the monthly data appeared in the months of July to October in each year and showed a strong periodic trend.
Counteracting estimation bias and social influence to improve the wisdom of crowds.
Kao, Albert B; Berdahl, Andrew M; Hartnett, Andrew T; Lutz, Matthew J; Bak-Coleman, Joseph B; Ioannou, Christos C; Giam, Xingli; Couzin, Iain D
2018-04-01
Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds. © 2018 The Author(s).
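The opposing biases of the mean and median reported above can be illustrated with a log-normal estimate distribution; the bias and spread parameters below are invented for the sketch and are not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Numerosity estimates are roughly log-normal: the median sits a little
# below the true value, while the heavy right tail pulls the arithmetic
# mean above it.
true_value = 1000.0
bias, sigma = -0.1, 0.6   # hypothetical log-scale bias and spread
estimates = rng.lognormal(mean=np.log(true_value) + bias,
                          sigma=sigma, size=5000)

mean_est = estimates.mean()        # dragged upward by the right tail
median_est = np.median(estimates)  # tracks the (underestimating) median

# A "corrected" aggregate in the spirit of the paper: undo the log-scale
# bias of the geometric mean, assuming that bias is known from calibration.
corrected = np.exp(np.log(estimates).mean() - bias)
```

With these parameters the mean lands above the true value and the median below it, while the bias-corrected geometric mean recovers the truth; the paper's measures generalize this by mapping empirically measured individual bias to collective bias.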
A Content Analysis of Physical Activity in TV Shows Popular Among Adolescents
Gietzen, Megan S.; Gollust, Sarah E.; Linde, Jennifer A.; Neumark-Sztainer, Dianne; Eisenberg, Marla E.
2017-01-01
Purpose Previous research demonstrates that television has the potential to influence youth behaviors, but little evidence exists on how television depicts physical activity (PA), an important public health priority for youth. This mixed-methods study investigates depictions of television characters’ participation in PA in the top 25 favorite shows ranked by a diverse sample of 2,793 adolescents. Method Randomly selected episodes from each show were content analyzed for PA incidents, reasons and context, and in relation to the gender and weight status of participating characters. Results A total of 374 incidents of PA were coded across 75 episodes, with an average of 5.0 incidents per episode. Although male and female characters were equally likely to engage in at least one incident of PA, male characters were involved in a statistically significantly larger proportion of PA incidents than female characters and were more likely to engage in PA for competitive sport. There was no statistically significant difference in engagement in PA or the proportion of PA incidents for characters coded as overweight compared to non-overweight characters. Conclusions Although female characters tended to be underrepresented in PA, this study reveals positive messages for how gender and weight are portrayed in relation to PA on TV. PMID:28151062
Effect of Different Groundwater Levels on Seismic Dynamic Response and Failure Mode of Sandy Slope
Huang, Shuai; Lv, Yuejun; Peng, Yanju; Zhang, Lifang; Xiu, Liwei
2015-01-01
Heavy seismic damage tends to occur in slopes when groundwater is present. The main objectives of this paper are to determine the dynamic response and failure mode of a sandy slope subjected simultaneously to seismic forces and variable groundwater conditions. This paper applies the finite element method, a fast and efficient design tool in modern engineering analysis, to evaluate the dynamic response of the slope under these combined conditions. A shaking table test is conducted to analyze the failure mode and verify the accuracy of the finite element results. The research results show that the dynamic response values of the slope follow different variation rules under near-field and far-field earthquakes, and that the damage location and pattern of the slope differ with groundwater conditions. When there is no groundwater, the destruction starts at the top of the slope, which shows that the slope exhibits an obvious whipping effect under the earthquake. When groundwater levels are high, the destruction starts at the toe of the slope; meanwhile, the top of the slope shows obvious seismic subsidence after the earthquake. Furthermore, the existence of groundwater has a certain damping effect. PMID:26560103
Analyzing chromatographic data using multilevel modeling.
Wiczling, Paweł
2018-06-01
It is relatively easy to collect chromatographic measurements for a large number of analytes, especially with gradient chromatographic methods coupled with mass spectrometry detection. Such data often have a hierarchical or clustered structure. For example, analytes with similar hydrophobicity and dissociation constant tend to be more alike in their retention than a randomly chosen set of analytes. Multilevel models recognize the existence of such data structures by assigning a model for each parameter, with its parameters also estimated from data. In this work, a multilevel model is proposed to describe retention time data obtained from a series of wide linear organic modifier gradients of different gradient duration and different mobile phase pH for a large set of acids and bases. The multilevel model consists of (1) the same deterministic equation describing the relationship between retention time and analyte-specific and instrument-specific parameters, (2) covariance relationships relating various physicochemical properties of the analyte to chromatographically specific parameters through quantitative structure-retention relationship based equations, and (3) stochastic components of intra-analyte and interanalyte variability. The model was implemented in Stan, which provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods. Graphical abstract Relationships between log k and MeOH content for acidic, basic, and neutral compounds with different log P. CI credible interval, PSA polar surface area.
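The "borrowing strength" that makes a multilevel model useful for clustered retention data can be shown with a minimal numpy sketch. This is a closed-form normal-normal shrinkage with known variances, not the paper's Stan model, and every parameter below is invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each analyte has its own retention parameter drawn from a population
# distribution; we observe a few noisy replicates per analyte.
n_analytes, n_reps = 200, 2
pop_mu, tau, sigma = 10.0, 2.0, 2.0   # population mean/sd, replicate noise sd
analyte_mu = rng.normal(pop_mu, tau, size=n_analytes)
obs = rng.normal(analyte_mu[:, None], sigma, size=(n_analytes, n_reps))

raw = obs.mean(axis=1)                     # no-pooling estimates
grand = raw.mean()                         # estimated population mean
w = tau**2 / (tau**2 + sigma**2 / n_reps)  # shrinkage weight (variances known)
pooled = grand + w * (raw - grand)         # partial-pooling estimates

mse_raw = np.mean((raw - analyte_mu) ** 2)
mse_pooled = np.mean((pooled - analyte_mu) ** 2)
```

Shrinking each analyte's estimate toward the population mean reduces the average estimation error. A full multilevel model, as fitted in Stan, additionally estimates the between- and within-analyte variances from the data and replaces the single population mean with QSRR-based covariate predictions.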
Greenes, R A
1991-11-01
Education and decision-support resources useful to radiologists are proliferating for the personal computer/workstation user or are potentially accessible via high-speed networks. These resources are typically made available through a set of application programs that tend to be developed in isolation and operate independently. Nonetheless, there is a growing need for an integrated environment for access to these resources in the context of professional work, during clinical problem-solving and decision-making activities, and for use in conjunction with other information resources. New application development environments are required to provide these capabilities. One such architecture for applications, which we have implemented in a prototype environment called DeSyGNER, is based on separately delineating the component information resources required for an application, termed entities, and the user interface and organizational paradigms, or composition methods, by which the entities are used to provide particular kinds of capability. Examples include composition methods to support query, book browsing, hyperlinking, tutorials, simulations, or question/answer testing. Future steps must address true integration of such applications with existing clinical information systems. We believe that the most viable approach for evolving this capability is based on the use of new software engineering methodologies, open systems, client-server communication, and delineation of standard message protocols.
Ecoregions and ecoregionalization: geographical and ecological perspectives
Loveland, Thomas R.; Merchant, James W.
2005-01-01
Ecoregions, i.e., areas exhibiting relative homogeneity of ecosystems, are units of analysis that are increasingly important in environmental assessment and management. Ecoregions provide a holistic framework for flexible, comparative analysis of complex environmental problems. Ecoregions mapping has intellectual foundations in both geography and ecology. However, a hallmark of ecoregions mapping is that it is a truly interdisciplinary endeavor that demands the integration of knowledge from a multitude of sciences. Geographers emphasize the role of place, scale, and both natural and social elements when delineating and characterizing regions. Ecologists tend to focus on environmental processes with special attention given to energy flows and nutrient cycling. Integration of disparate knowledge from the many key sciences has been one of the great challenges of ecoregions mapping, and may lie at the heart of the lack of consensus on the “optimal” approach and methods to use in such work. Through a review of the principal existing US ecoregion maps, issues that should be addressed in order to advance the state of the art are identified. Research related to needs, methods, data sources, data delivery, and validation is needed. It is also important that the academic system foster education so that there is an infusion of new expertise in ecoregion mapping and use.
An argument for renewed focus on epidemiology for public health.
Rogawski, Elizabeth T; Gray, Christine L; Poole, Charles
2016-10-01
Although epidemiology has an indispensable role in serving public health, the relative emphasis of applications of epidemiology often tends toward individual-level medicine over public health in terms of resources and impact. We make distinctions between public health and medical applications of epidemiology to raise awareness among epidemiologists, many of whom came to the field with public health in mind. We discuss reasons for the overemphasis on medical epidemiology and suggest ways to counteract these incentives. Public health epidemiology informs interventions that are applied to populations or that confer benefits beyond the individual, whereas medical epidemiology informs interventions that improve the health of treated individuals. Available resources, new biomedical technologies, and existing epidemiologic methods favor medical applications of epidemiology. Focus on public health impact and methods suited to answer public health questions can create better balance and promote population-level improvements in public health. By deliberately reflecting on research motivations and long-term goals, we hope the distinctions presented here will facilitate critical discussion and a greater consciousness of our potential impact on both individual and population-level health. Renewed intentions towards public health can help epidemiologists navigate potential projects and ultimately contribute to an epidemiology of consequence. Copyright © 2016 Elsevier Inc. All rights reserved.
Towards the Application of Fuzzy Logic for Developing a Novel Indoor Air Quality Index (FIAQI)
JAVID, Allahbakhsh; HAMEDIAN, Amir Abbas; GHARIBI, Hamed; SOWLAT, Mohammad Hossein
2016-01-01
Background: In the past few decades, Indoor Air Pollution (IAP) has become a primary concern, to the point that it is increasingly believed to be of equal or greater importance to human health than ambient air. However, due to the lack of comprehensive indices for the integrated assessment of indoor air quality (IAQ), we aimed to develop a novel Fuzzy-Based Indoor Air Quality Index (FIAQI) to bridge the existing gap in this area. Methods: We based our index on fuzzy logic, which enables us to overcome the limitations of traditional methods used to develop environmental quality indices. Fifteen parameters, including the criteria air pollutants, volatile organic compounds, and bioaerosols, were included in the FIAQI, due mainly to their significant health effects. Weighting factors were assigned to the parameters based on the medical evidence available in the literature on their health effects. The final FIAQI consisted of 108 rules. In order to demonstrate the performance of the index, data were intentionally generated to cover a variety of quality levels. In addition, a sensitivity analysis was conducted to assess the validity of the index. Results: The FIAQI appears to be a comprehensive tool for classifying IAQ and produces accurate results. Conclusion: It seems useful and reliable enough to be considered by authorities for assessing IAQ in indoor environments. PMID:27114985
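The fuzzify-apply-rules-defuzzify pipeline behind an index like the FIAQI can be sketched with two pollutants. The membership breakpoints, rule set, and output levels below are invented for illustration and are not the published 108-rule base:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: rises over [a, b], falls over [b, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fiaqi_sketch(pm25, co2):
    # Fuzzify: degree to which each reading is "good" or "poor"
    # (breakpoints are hypothetical, not the published ones).
    good = min(tri(pm25, -1, 0, 25), tri(co2, -1, 400, 1000))
    poor = max(tri(pm25, 25, 75, 150), tri(co2, 1000, 2000, 5000))
    # Defuzzify: weighted average of rule activations (25 = good air,
    # 85 = poor air on a 0-100 index scale).
    return (good * 25 + poor * 85) / max(good + poor, 1e-9)
```

The published FIAQI aggregates fifteen weighted parameters through 108 such rules; the two-pollutant, two-rule version above only shows the mechanics of membership functions and weighted defuzzification.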
HUDSON, PARISA; HUDSON, STEPHEN D.; HANDLER, WILLIAM B.; SCHOLL, TIMOTHY J.; CHRONIK, BLAINE A.
2010-01-01
High-performance shim coils are required for high-field magnetic resonance imaging and spectroscopy. Complete sets of high-power, high-performance shim coils were designed using two different methods: the minimum inductance and the minimum power target field methods. A quantitative comparison of shim performance in terms of merit of inductance (ML) and merit of resistance (MR) was made for shim coils designed using the minimum inductance and the minimum power design algorithms. In each design case, the difference in ML and the difference in MR given by the two design methods were <15%. Comparison of wire patterns obtained using the two design algorithms shows that minimum inductance designs tend to feature oscillations within the current density, while minimum power designs tend to feature less rapidly varying current densities and lower power dissipation. Overall, the differences in coil performance obtained by the two methods are relatively small. For the specific case of shim systems customized for small animal imaging, the reduced power dissipation obtained when using the minimum power method is judged to be more significant than the improvements in switching speed obtained from the minimum inductance method. PMID:20411157
NASA Astrophysics Data System (ADS)
Diao, Yu; Liu, Lei; Xia, Sihao; Kong, Yike
2017-05-01
To investigate the influence of dangling bonds on GaN nanowire surfaces, the differences in optoelectronic properties between H-saturated and unsaturated GaN nanowires are investigated through first-principles calculations. GaN nanowires along the [0001] growth direction with diameters of 3.7, 7.5 and 9.5 Å are considered. According to the results, H-saturated GaN nanowires are more stable than the unsaturated ones. With increasing nanowire diameter, unsaturated GaN nanowires become more stable, while the stability of H-saturated GaN nanowires changes little. After geometry optimization, the atomic displacements of the unsaturated and H-saturated models are almost reversed: in the (0001) crystal plane, Ga atoms tend to move inwards and N atoms slightly outwards for the unsaturated nanowires, while Ga atoms tend to move outwards and N atoms slightly inwards for the H-saturated nanowires. Besides, with increasing nanowire diameter, the conduction band minimum of the H-saturated nanowire moves to the lower energy side, while that of the unsaturated nanowire changes slightly. The bandgaps of the H-saturated nanowires approach that of bulk GaN as the diameter increases. Absorption and reflectivity curves of the unsaturated and H-saturated nanowires exhibit the same trend with energy, except that the H-saturated models show larger variations. From these results, we can better understand the effects of dangling bonds on the optoelectronic properties of GaN nanowires and select more appropriate calculation models and methods for other calculations.
NASA Astrophysics Data System (ADS)
Yin, Huicheng; Zhao, Wenbin
2018-01-01
This paper is a continuation of the works in [35] and [37], where the authors established the global existence of smooth compressible flows in infinitely expanding balls for inviscid and viscid gases, respectively. In this paper, we are concerned with the global existence and large time behavior of compressible Boltzmann gases in an infinitely expanding ball. Such a problem is one of the interesting models in studying the theory of global smooth solutions to multidimensional compressible gases with time-dependent boundaries and vacuum states at infinite time. Due to the conservation of mass, the fluid in the expanding ball becomes rarefied and eventually tends to a vacuum state, while no vacuum domains appear in any part of the expanding ball, as is easily observed in finite time. In the present paper, we confirm this physical phenomenon for the Boltzmann equation by obtaining exact lower and upper bounds on the macroscopic density function.
Jani, Ylber; Kamberi, Ahmet; Xhunga, Sotir; Pocesta, Bekim; Ferati, Fatmir; Lala, Dali; Zeqiri, Agim; Rexhepi, Atila
2015-01-01
Objective: To assess the influence of type 2 DM and gender on the QT dispersion and Tpeak-Tend dispersion of ventricular repolarization in patients with sub-clinical left ventricular diastolic dysfunction of the heart. Background: QT dispersion reflects spatial inhomogeneity in ventricular repolarization, whereas Tpeak-Tend dispersion reflects transmural inhomogeneity in ventricular repolarization; both are increased in an early stage of cardiomyopathy and in patients with left ventricular diastolic dysfunction. Left ventricular diastolic dysfunction, a basic characteristic of diabetic heart disease (diabetic cardiomyopathy) that develops earlier than systolic dysfunction, suggests that diastolic markers might be sensitive indicators of early cardiac injury. It has also been demonstrated that gender has a complex influence on indices of myocardial repolarization abnormalities such as the QT interval and QT dispersion. Material and methods: We performed an observational study including 300 diabetic patients with similar epidemiological-demographic characteristics recruited in our institution from May 2009 to July 2014, divided into two groups. Demographic, laboratory, and echocardiographic data were obtained; from twelve-lead resting electrocardiography, the QT, QTc, and Tpeak-Tend intervals and dispersions were determined manually and compared between the groups. For statistical analysis, a t-test, chi-square test, and logistic regression were used according to the type of variables. A p value <0.05 was considered statistically significant for a confidence interval of 95%. Results: The QTc max. interval, QTc dispersion, and Tpeak-Tend dispersion were significantly higher in the diabetic group with subclinical LV (left ventricular) diastolic dysfunction than in the diabetic group with normal left ventricular diastolic function (445.24±14.7 ms vs. 433.55±14.4 ms, P<0.000; 44.98±18.78 ms vs. 32.05±17.9 ms, P<0.000; 32.60±1.6 ms vs. 17.46±2.0 ms, P<0.02). A prolonged QTc max.
interval was found in 33% of patients in the diabetic group with subclinical left ventricular diastolic dysfunction vs. 13.3% of patients in the diabetic group with normal left ventricular diastolic function (Chi-square: 16.77, P<0.0001). A prolonged QTc dispersion was found in 40.6% of patients in the diabetic group with subclinical left ventricular diastolic dysfunction vs. 20% of patients in the diabetic group with normal left ventricular diastolic function (Chi-square: 14.11, P<0.0002). A prolonged dispersion of the Tpeak-Tend interval was found in 24% of patients in the diabetic group with subclinical left ventricular diastolic dysfunction vs. 13.3% of patients in the diabetic group with normal left ventricular diastolic function (Chi-square: 12.00, P<0.005). Females in the diabetic group with subclinical left ventricular diastolic dysfunction, compared with males in the same group, more often had a prolonged mean QTc max. interval (23.3% vs. 10%, Chi-square: 12.0, P<0.005), mean QTc dispersion (27.3% vs. 13.3%, Chi-square: 10.24, P<0.001), mean Tpeak-Tend interval (10% vs. 3.3%, Chi-square: 5.77, P<0.01), and mean Tpeak-Tend dispersion (16.6% vs. 6.6%, Chi-square: 8.39, P<0.003). Conclusion: The present study has shown that the influences of type 2 diabetes and gender in diabetics with sub-clinical left-ventricular diastolic dysfunction are reflected in a set of electrophysiological parameters that indicate a prolonged and more heterogeneous repolarization than in diabetic patients with normal diastolic function. In addition, it demonstrates that differences exist between diabetic females with sub-clinical LV dysfunction and those with diabetes and normal LV function in the prevalence of an increased set of electrophysiological parameters that indicate a prolonged and more heterogeneous repolarization. PMID:26550530
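The chi-square comparisons above can be reproduced approximately from the reported prevalences. A minimal sketch, assuming an even 150/150 split of the 300 patients (the abstract does not state the actual split), which is close to but not exactly the reported statistic:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Prolonged QTc dispersion: 40.6% vs. 20%, assuming 150 patients per group
with_dd = round(0.406 * 150)     # 61 of 150 with diastolic dysfunction
without_dd = round(0.20 * 150)   # 30 of 150 with normal function
chi2 = chi_square_2x2(with_dd, 150 - with_dd, without_dd, 150 - without_dd)
print(round(chi2, 2))  # -> 15.16
```

The result (about 15.2) is in the same range as the reported 14.11; the difference presumably reflects the actual group split or a continuity correction.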
Translation: Towards a Critical-Functional Approach
ERIC Educational Resources Information Center
Sadeghi, Sima; Ketabi, Saeed
2010-01-01
The controversy over the place of translation in the teaching of English as a Foreign Language (EFL) is a thriving field of inquiry. Many older language teaching methodologies such as the Direct Method, the Audio-lingual Method, and Natural and Communicative Approaches, tended to either neglect the role of translation, or prohibit it entirely as a…
Inferring Clinical Workflow Efficiency via Electronic Medical Record Utilization
Chen, You; Xie, Wei; Gunter, Carl A; Liebovitz, David; Mehrotra, Sanjay; Zhang, He; Malin, Bradley
2015-01-01
Complexity in clinical workflows can lead to inefficiency in making diagnoses, ineffectiveness of treatment plans and uninformed management of healthcare organizations (HCOs). Traditional strategies to manage workflow complexity are based on measuring the gaps between workflows defined by HCO administrators and the actual processes followed by staff in the clinic. However, existing methods tend to neglect the influences of EMR systems on the utilization of workflows, which could be leveraged to optimize workflows facilitated through the EMR. In this paper, we introduce a framework to infer clinical workflows through the utilization of an EMR and show how such workflows roughly partition into four types according to their efficiency. Our framework infers workflows at several levels of granularity through data mining technologies. We study four months of EMR event logs from a large medical center, including 16,569 inpatient stays, and illustrate that approximately 95% of workflows are efficient and that 80% of patients are on such workflows. At the same time, we show that the remaining 5% of workflows may be inefficient due to a variety of factors, such as complex patients. PMID:26958173
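The idea of partitioning inferred workflows by how commonly they are followed can be illustrated with a toy frequency-based sketch. The function name, threshold, and "most common sequences are the efficient ones" heuristic are assumptions for illustration, not the paper's actual data-mining framework:

```python
from collections import Counter

def infer_workflows(event_logs, top_fraction=0.95):
    """Group patient stays by their EMR event sequence and flag the most
    common sequences as 'efficient' until they cover top_fraction of
    stays; the remainder are candidates for inefficiency review."""
    counts = Counter(tuple(log) for log in event_logs)
    total = sum(counts.values())
    efficient, covered = set(), 0
    for seq, c in counts.most_common():
        if covered / total >= top_fraction:
            break
        efficient.add(seq)
        covered += c
    return efficient
```

Real workflow inference would mine sequences at several granularities and score them against outcomes, but the coverage-threshold idea is the same.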
Results from flamelet and non-flamelet models for supersonic combustion
NASA Astrophysics Data System (ADS)
Ladeinde, Foluso; Li, Wenhai
2017-11-01
Air-breathing propulsion systems (scramjets) have been identified as a viable alternative to rocket engines for improved efficiency. A scramjet engine, which operates at flight Mach numbers around 7 or above, is characterized by the existence of supersonic flow conditions in the combustor. In a dual-mode scramjet, this phenomenon is possible because of the relatively low value of the equivalence ratio and the high stagnation temperature, which together inhibit thermal choking downstream of transverse injectors. The flamelet method has been our choice for turbulence-combustion interaction modeling, and we have extended the basic approach in several dimensions, with a focus on the way the pressure and progress variable are modeled. Improved results have been obtained. We have also examined non-flamelet models, including laminar chemistry (QL), the eddy dissipation concept (EDC), and the partially-stirred reactor (PaSR). The pressure/progress variable-corrected simulations give better results compared with the original model, with reaction rates that are lower than those from EDC and PaSR. In general, QL tends to over-predict the reaction rate for the supersonic combustion problems investigated in our work.
Pattenden, Jonathan
2010-01-01
This paper notes the prominence of self-help groups (SHGs) within current anti-poverty policy in India, and analyses the impacts of government- and NGO-backed SHGs in rural North Karnataka. It argues that self-help groups represent a partial neoliberalisation of civil society in that they address poverty through low-cost methods that do not challenge the existing distribution of power and resources between the dominant class and the labouring class poor. It finds that intra-group savings and loans and external loans/subsidies can provide marginal economic and political gains for members of the dominant class and those members of the labouring classes whose insecure employment patterns currently provide above poverty line consumption levels, but provide neither material nor political gains for the labouring class poor. Target-oriented SHG catalysts are inattentive to how the social relations of production reproduce poverty and tend to overlook class relations and socio-economic and political differentiation within and outside of groups, which are subject to interference by dominant class local politicians and landowners.
Novel GM animal technologies and their governance.
Bruce, Ann; Castle, David; Gibbs, Corrina; Tait, Joyce; Whitelaw, C Bruce A
2013-08-01
Scientific advances in methods of producing genetically modified (GM) animals continue, yet few such animals have reached commercial production. Existing regulations designed for early techniques of genetic modification pose formidable barriers to commercial applications. Radically improved techniques for producing GM animals invite a re-examination of current regulatory regimes. We critically examine current GM animal regulations, with a particular focus on the European Union, through a framework that recognises the importance of interactions among regulatory regimes, innovation outcomes and industry sectors. The current focus on the regulation of risk is necessary but is unable to discriminate among applications and tends to close down broad areas of application rather than facilitate innovation and positive industry interactions. Furthermore, the fields of innovative animal biosciences appear to lack networks of organisations with co-ordinated future oriented actions. Such networks could drive coherent programmes of innovation towards particular visions and contribute actively to the development of regulatory systems for GM animals. The analysis presented makes the case for regulatory consideration of each animal bioscience related innovation on the basis of the nature of the product itself and not the process by which it was developed.
Cloud computing-based TagSNP selection algorithm for human genome data.
Hung, Che-Lun; Chen, Wen-Pei; Hua, Guan-Jie; Zheng, Huiru; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2015-01-05
Single nucleotide polymorphisms (SNPs) play a fundamental role in human genetic variation and are used in medical diagnostics, phylogeny construction, and drug design. They provide the highest-resolution genetic fingerprint for identifying disease associations and human features. Haplotypes are regions of linked genetic variants that are closely spaced on the genome and tend to be inherited together. Genetics research has revealed SNPs within certain haplotype blocks that introduce few distinct common haplotypes into most of the population. Haplotype block structures are used in association-based methods to map disease genes. In this paper, we propose an efficient algorithm for identifying haplotype blocks in the genome. In chromosomal haplotype data retrieved from the HapMap project website, the proposed algorithm identified longer haplotype blocks than an existing algorithm. To enhance its performance, we extended the proposed algorithm into a parallel algorithm that copies data in parallel via the Hadoop MapReduce framework. The proposed MapReduce-paralleled combinatorial algorithm performed well on real-world data obtained from the HapMap dataset; the improvement in computational efficiency was proportional to the number of processors used.
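The block-finding idea, maximal runs of adjacent SNPs that induce only a few distinct haplotypes in the population, can be sketched as a greedy scan. This is an illustrative simplification, not the paper's combinatorial algorithm or its Hadoop MapReduce parallelization:

```python
def haplotype_blocks(haplotypes, max_distinct=4):
    """Greedily partition SNP positions into maximal half-open blocks
    [start, end) such that each block induces at most `max_distinct`
    distinct haplotype strings. `haplotypes` is a list of equal-length
    strings over the alleles {0, 1}; assumes max_distinct >= 2 so a
    single biallelic SNP always forms a valid block."""
    n_snps = len(haplotypes[0])
    blocks, start = [], 0
    while start < n_snps:
        end = start + 1
        while end + 1 <= n_snps and \
                len({h[start:end + 1] for h in haplotypes}) <= max_distinct:
            end += 1
        blocks.append((start, end))
        start = end
    return blocks
```

A MapReduce version would apply the same scan to chromosome segments in the map phase and stitch boundary blocks in the reduce phase.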
Midboe, Amanda M; Lewis, Eleanor T; Cronkite, Ruth C; Chambers, Dallas; Goldstein, Mary K; Kerns, Robert D; Trafton, Jodie A
2011-03-01
Development of clinical decision support systems (CDSs) has tended to focus on facilitating medication management. An understanding of behavioral medicine perspectives on the usefulness of a CDS for patient care can expand CDSs to improve management of chronic disease. The purpose of this study is to explore feedback from behavioral medicine providers regarding the potential for CDSs to improve decision-making, care coordination, and guideline adherence in pain management. Qualitative methods were used to analyze semi-structured interview responses from behavioral medicine stakeholders following demonstration of an existing CDS for opioid prescribing, ATHENA-OT. Participants suggested that a CDS could assist with decision-making by educating providers, providing recommendations about behavioral therapy, facilitating risk assessment, and improving referral decisions. They suggested that a CDS could improve care coordination by facilitating division of workload, improving patient education, and increasing consideration and knowledge of options in other disciplines. Clinical decision support systems are promising tools for improving behavioral medicine care for chronic pain.
Xu, Yun; Zhao, Mingyang; Khalid, Syed; ...
2017-05-09
The high-voltage cathode material LiMn1.6Ni0.4O4 was prepared by a polymer-assisted method. The novelty of this work lies in substituting Ni with Mn, which already exists in the crystal structure, rather than with other isovalent metal-ion dopants that would result in capacity loss. The electrochemical performance, including stability and rate capability, was evaluated. Temperature was found to impose a change on the valence and structure of the cathode materials. Specifically, manganese tends to be reduced at a high temperature of 800 °C, leading to structural changes. The manganese-substituted LiMn1.5Ni0.5O4 (LMN) has proved to be a good candidate material for Li-ion battery cathodes, displaying good rate capability and capacity retention. Finally, the cathode materials processed at 550 °C showed a stable performance with negligible capacity loss for 400 cycles.
Autonomous satellite command and control: A comparison with other military systems
NASA Technical Reports Server (NTRS)
Kruchten, Robert J.; Todd, Wayne
1988-01-01
Existing satellite concepts of operation depend on readily available experts and are extremely manpower intensive. Areas of expertise required include mission planning, mission data interpretation, telemetry monitoring, and anomaly resolution. The concepts of operation have evolved to their current state in part because space systems have tended to be treated more as research and development assets than as operational assets. These methods of satellite command and control will be inadequate in the future because of concerns about the availability, survivability, and capability of human experts. Because space systems have extremely high reliability and limited access, they offer challenges not found in other military systems. Thus, automation techniques used elsewhere are not necessarily applicable to space systems. A program to make satellites much more autonomous has been developed, using a variety of advanced software techniques. The problem the program is addressing, some possible solutions, the goals of the Rome Air Development Center (RADC) program, the rationale as to why the goals are reasonable, and the current program status are discussed. Also presented are some of the concepts used in the program and how they differ from more traditional approaches.
Morokuma, S; Horimoto, N; Nakano, H
2001-08-01
It is well known that 1/f characteristics in power spectral patterns exist in various biological signals, including heart rate variability. In the present study, we tried to elucidate the diurnal variation in the spectral properties of eye movement and heart rate variability in the human fetus at term, via continuous 24-h observation of both parameters. Five uncomplicated fetuses at term were studied. We observed eye movement and fetal heart rate (FHR) with real-time ultrasound and Doppler cardiotocography, respectively, and analyzed the diurnal change in spectral properties using the maximum entropy method. In four of five cases, the slope values of the power spectra for both eye movement frequency and FHR, ranging approximately between 0.5 and 1.8, showed diurnal variation, with the slopes tending to have high values during the day and low values at night. These findings suggest that, in the human fetus at term, eye movement and FHR are under the control of a common central mechanism, and that this center changes its complexity in a diurnal rhythm.
NASA Astrophysics Data System (ADS)
Markov, M.; Levin, V.; Markova, I.
2018-02-01
The paper presents an approach to determine the effective electromagnetic parameters of suspensions of ellipsoidal dielectric particles with surface conductivity. This approach takes into account the existence of critical porosity that corresponds to the maximum packing volume fraction of solid inclusions. The approach is based on the Generalized Differential Effective Medium (GDEM) method. We have introduced a model of suspensions containing ellipsoidal inclusions of two types. Inclusions of the first type (phase 1) represent solid grains, and inclusions of the second type (phase 2) contain material with the same physical properties as the host (phase 0). In this model, with increasing porosity the concentration of the host decreases, and it tends to zero near the critical porosity. The proposed model has been used to simulate the effective electromagnetic parameters of concentrated suspensions. We have compared the modeling results for electrical conductivity and dielectric permittivity with the empirical equations. The results obtained have shown that the GDEM model describes the effective electrical conductivity and dielectric permittivity of suspensions in a wide range of inclusion concentrations.
Consensus for second-order multi-agent systems with position sampled data
NASA Astrophysics Data System (ADS)
Wang, Rusheng; Gao, Lixin; Chen, Wenhai; Dai, Dameng
2016-10-01
In this paper, the consensus problem with position sampled data for second-order multi-agent systems is investigated. The interaction topology among the agents is depicted by a directed graph. Full-order and reduced-order observers with position sampled data are proposed, by which two kinds of sampled data-based consensus protocols are constructed. With the proposed sampled protocols, the consensus convergence analysis of a continuous-time multi-agent system is equivalently transformed into that of a discrete-time system. Then, by using matrix theory and a sampled control analysis method, some sufficient and necessary consensus conditions based on the coupling parameters, the spectrum of the Laplacian matrix, and the sampling period are obtained. As the sampling period tends to zero, the established necessary and sufficient conditions degenerate to those of the continuous-time protocol, consistent with the existing results for the continuous-time case. Finally, the effectiveness of the established results is illustrated by a simple simulation example. Project supported by the Natural Science Foundation of Zhejiang Province, China (Grant No. LY13F030005) and the National Natural Science Foundation of China (Grant No. 61501331).
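The role of the sampling period can be illustrated with a minimal simulation of second-order agents on an undirected 4-cycle, where the control uses neighbours' positions refreshed only at sampling instants plus local velocity damping. This is a simple hypothetical protocol for illustration, not the paper's observer-based design on a directed graph:

```python
def simulate_consensus(T=0.1, dt=0.01, steps=3000):
    """Double-integrator agents x_i'' = u_i with zero-order-hold sampled
    neighbour positions and velocity damping gain 1.5 (illustrative)."""
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # undirected 4-cycle
    x = [0.0, 1.0, 3.0, -2.0]    # initial positions
    v = [0.0] * 4                # initial velocities
    sampled = list(x)
    per = max(1, round(T / dt))  # integration steps between samples
    for k in range(steps):
        if k % per == 0:         # sampling instant: refresh positions
            sampled = list(x)
        u = [sum(sampled[j] - sampled[i] for j in adj[i]) - 1.5 * v[i]
             for i in range(4)]
        for i in range(4):       # explicit Euler integration
            x[i] += v[i] * dt
            v[i] += u[i] * dt
    return x

final = simulate_consensus()
spread = max(final) - min(final)
```

With T = 0.1 s the agents still converge to a common position; pushing T much larger eventually destabilizes the loop, which is the margin the paper's conditions characterize.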
Attributed relational graphs for cell nucleus segmentation in fluorescence microscopy images.
Arslan, Salim; Ersahin, Tulin; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem
2013-06-01
More rapid and accurate high-throughput screening in molecular cellular biology research has become possible with the development of automated microscopy imaging, for which cell nucleus segmentation commonly constitutes the core step. Although several promising methods exist for segmenting the nuclei of monolayer isolated and less-confluent cells, it still remains an open problem to segment the nuclei of more-confluent cells, which tend to grow in overlayers. To address this problem, we propose a new model-based nucleus segmentation algorithm. This algorithm models how a human locates a nucleus by identifying the nucleus boundaries and piecing them together. In this algorithm, we define four types of primitives to represent nucleus boundaries at different orientations and construct an attributed relational graph on the primitives to represent their spatial relations. Then, we reduce the nucleus identification problem to finding predefined structural patterns in the constructed graph and also use the primitives in region growing to delineate the nucleus borders. Working with fluorescence microscopy images, our experiments demonstrate that the proposed algorithm identifies nuclei better than previous nucleus segmentation algorithms.
Local structure order in Pd78Cu6Si16 liquid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yue, G. Q.; Zhang, Y.; Sun, Y.
2015-02-05
The short-range order (SRO) in Pd78Cu6Si16 liquid was studied by high-energy x-ray diffraction and ab initio molecular dynamics (MD) simulations. The calculated pair correlation functions at different temperatures agree well with the experimental results. The partial pair correlation functions from ab initio MD simulations indicate that Si atoms prefer to be uniformly distributed while Cu atoms tend to aggregate. By performing structure analysis using the Honeycutt-Andersen index, Voronoi tessellation, and the atomic cluster alignment method, we show that the icosahedral and face-centered cubic SRO increase upon cooling. The dominant SRO is the Pd-centered Pd9Si2 motif, whose structure is similar to that of the Pd-centered clusters in the Pd9Si2 crystal. The study further confirms the existence of the trigonal prism capped with three half-octahedra that is reported as a structural unit in Pd-based amorphous alloys. The majority of Cu-centered clusters are icosahedra, suggesting that the presence of Cu helps promote the glass-forming ability.
Evolution simulation of lightning discharge based on a magnetohydrodynamics method
NASA Astrophysics Data System (ADS)
Fusheng, WANG; Xiangteng, MA; Han, CHEN; Yao, ZHANG
2018-07-01
In order to solve the load problem for aircraft lightning strikes, lightning channel evolution is simulated using the key physical parameters of aircraft lightning current component C. A numerical model of the discharge channel is established, based on magnetohydrodynamics (MHD) and performed with FLUENT software. With the aid of user-defined functions and a user-defined scalar, the Lorentz force, Joule heating and material parameters of an air thermal plasma are added. A three-dimensional lightning arc channel is simulated and the arc evolution in space is obtained. The results show that the temperature distribution of the lightning channel is symmetrical and that the hottest region occurs at the center of the lightning channel. The distributions of potential and current density are obtained, showing that the difference in electric potential or energy between two points tends to make the arc channel develop downwards. The arc channel expands on the anode surface due to stagnation of the thermal plasma, and it impinges on the copper plate when it comes into contact with the anode plate.
NASA Astrophysics Data System (ADS)
Zainudin, M. N. Shah; Sulaiman, Md Nasir; Mustapha, Norwati; Perumal, Thinagaran
2017-10-01
Prior knowledge in pervasive computing has recently garnered a lot of attention due to its high demand in various application domains. Human activity recognition (HAR) is among the most widely explored applications, as it provides valuable information about humans. Accelerometer-based approaches are commonly used in HAR research because the sensors are small and already built into various types of smartphones. However, high inter-class similarity among the classes tends to degrade recognition performance. Hence, this work presents a method for activity recognition using our proposed features, a combination of spectral analysis and statistical descriptors, which is able to tackle the issue of differentiating stationary and locomotion activities. The signal is denoised using the Fourier transform before two different groups of features are extracted: spectral frequency analysis and statistical descriptors. The extracted features are then classified using random forest ensemble classifier models. The recognition results show good accuracy for stationary and locomotion activities on the USC-HAD dataset.
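The feature-combination idea, statistical descriptors plus spectral features from a Fourier transform of an accelerometer window, can be sketched as follows; the specific feature names and set are assumptions for illustration, not the paper's exact features:

```python
import cmath
import math

def extract_features(window):
    """Combine simple statistical descriptors (mean, std) with
    spectral features from a naive O(n^2) DFT of one accelerometer
    window (illustrative feature set)."""
    n = len(window)
    mean = sum(window) / n
    var = sum((v - mean) ** 2 for v in window) / n
    mags = []                       # DFT magnitudes up to Nyquist
    for k in range(n // 2):
        s = sum(window[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    dom = max(range(1, len(mags)), key=lambda k: mags[k])
    energy = sum(m * m for m in mags[1:]) / n
    return {"mean": mean, "std": math.sqrt(var),
            "dominant_bin": dom, "spectral_energy": energy}
```

Stationary activities concentrate energy near zero frequency while locomotion shows a clear dominant gait frequency, which is what makes this combination discriminative.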
Modeling misidentification errors that result from use of genetic tags in capture-recapture studies
Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.
2011-01-01
Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. © 2009 Springer Science+Business Media, LLC.
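The overestimation effect can be demonstrated with a small two-occasion simulation in which each misidentified capture creates a "ghost" identity that can never be recaptured, shrinking the observed overlap and inflating the classic Lincoln-Petersen estimate. Parameter values here are illustrative, not from the paper:

```python
import random

def lincoln_petersen(n1, n2, m):
    """Classic two-occasion abundance estimator: n1*n2 / recaptures."""
    return n1 * n2 / m

def simulate(N=500, p=0.4, misid=0.0, seed=1):
    """Simulate two capture occasions over a closed population of N
    animals, each captured with probability p; a misidentified capture
    is recorded under a fresh ghost identity."""
    rng = random.Random(seed)
    next_ghost = N
    occasions = []
    for _ in range(2):
        ids = set()
        for animal in range(N):
            if rng.random() < p:             # animal is captured
                if rng.random() < misid:     # tag misread -> ghost id
                    ids.add(next_ghost)
                    next_ghost += 1
                else:
                    ids.add(animal)
        occasions.append(ids)
    n1, n2 = len(occasions[0]), len(occasions[1])
    m = len(occasions[0] & occasions[1])     # apparent recaptures
    return lincoln_petersen(n1, n2, m)

naive = simulate()                # no misidentification: near N
inflated = simulate(misid=0.2)    # ghosts shrink m, inflating the estimate
```

Even a 20% misidentification rate produces a markedly inflated estimate, which is the bias the paper's estimators are designed to correct.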
On the nonlinear interfacial instability of rotating core-annular flow
NASA Technical Reports Server (NTRS)
Coward, Aidrian V.; Hall, Philip
1993-01-01
The interfacial stability of rotating core-annular flows is investigated. The linear and nonlinear effects are considered for the case when the annular region is very thin. Both asymptotic and numerical methods are used to solve the flow in the core and film regions which are coupled by a difference in viscosity and density. The long-term behavior of the fluid-fluid interface is determined by deriving its nonlinear evolution in the form of a modified Kuramoto-Sivashinsky equation. We obtain a generalization of this equation to three dimensions. The flows considered are applicable to a wide array of physical problems where liquid films are used to lubricate higher or lower viscosity core fluids, for which a concentric arrangement is desired. Linearized solutions show that the effects of density and viscosity stratification are crucial to the stability of the interface. Rotation generally destabilizes non-axisymmetric disturbances to the interface, whereas the centripetal forces tend to stabilize flows in which the film contains the heavier fluid. Nonlinear effects allow finite amplitude helically travelling waves to exist when the fluids have different viscosities.
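For orientation, the classical one-dimensional Kuramoto-Sivashinsky equation for an interface deflection H(x, t) takes the standard form below; the paper derives a modified version of this equation (and a three-dimensional generalization) whose additional terms encode the viscosity and density stratification and the rotation:

```latex
\partial_t H + H\,\partial_x H + \partial_x^2 H + \partial_x^4 H = 0
```

The destabilizing second-derivative term and stabilizing fourth-derivative term compete to select finite-amplitude interfacial waves.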
2013-01-01
Background Identifying the emotional state is helpful in applications involving patients with autism and other intellectual disabilities, in computer-based training, in human-computer interaction, etc. Electrocardiogram (ECG) signals, being an activity of the autonomous nervous system (ANS), reflect the underlying true emotional state of a person. However, the performance of various methods developed so far lacks accuracy, and more robust methods need to be developed to identify the emotional pattern associated with ECG signals. Methods Emotional ECG data was obtained from sixty participants by inducing the six basic emotional states (happiness, sadness, fear, disgust, surprise and neutral) using audio-visual stimuli. The non-linear feature 'Hurst' was computed using Rescaled Range Statistics (RRS) and Finite Variance Scaling (FVS) methods. New Hurst features were proposed by combining the existing RRS and FVS methods with Higher Order Statistics (HOS). The features were then classified using four classifiers: Bayesian Classifier, Regression Tree, K-nearest neighbor and Fuzzy K-nearest neighbor. Seventy percent of the features were used for training and thirty percent for testing the algorithm. Results Analysis of Variance (ANOVA) conveyed that Hurst and the proposed features were statistically significant (p < 0.001). Hurst computed using RRS and FVS methods showed similar classification accuracy. The features obtained by combining FVS and HOS performed better with a maximum accuracy of 92.87% and 76.45% for classifying the six emotional states using random and subject-independent validation, respectively. Conclusions The results indicate that the combination of non-linear analysis and HOS tends to capture the finer emotional changes that can be seen in healthy ECG data. This work can be further fine-tuned to develop a real time system. PMID:23680041
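The Rescaled Range Statistics computation of the Hurst exponent can be sketched in a few lines: average R/S over non-overlapping windows of several sizes, then fit log(R/S) against log(n) by least squares. This is the textbook R/S procedure, not necessarily the authors' exact implementation:

```python
import math
import random

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    N = len(series)
    xs, ys = [], []
    n = min_chunk
    while n <= N // 2:
        rs_vals = []
        for start in range(0, N - n + 1, n):
            chunk = series[start:start + n]
            mean = sum(chunk) / n
            cum = rmax = rmin = dev = 0.0
            for v in chunk:
                cum += v - mean           # cumulative deviation series
                rmax = max(rmax, cum)
                rmin = min(rmin, cum)
                dev += (v - mean) ** 2
            s = math.sqrt(dev / n)        # chunk standard deviation
            if s > 0.0:
                rs_vals.append((rmax - rmin) / s)
        xs.append(math.log(n))
        ys.append(math.log(sum(rs_vals) / len(rs_vals)))
        n *= 2
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(1024)]
walk, s = [], 0.0
for v in noise:
    s += v
    walk.append(s)
print(round(hurst_rs(noise), 2), round(hurst_rs(walk), 2))
```

White noise should estimate near H = 0.5 while a random walk estimates near 1, which is the kind of persistence contrast the emotion features exploit.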
Producing good font attribute determination using error-prone information
NASA Astrophysics Data System (ADS)
Cooperman, Robert
1997-04-01
We present a method to estimate font attributes in an OCR system using error-prone detectors of individual attributes. For an OCR system to preserve the appearance of a scanned document, it needs accurate detection of font attributes. However, OCR environments have noise and other sources of errors, tending to make font attribute detection unreliable. Certain assumptions about font use can greatly enhance accuracy. Attributes such as boldness and italics are more likely to change between neighboring words, while attributes such as serifness are less likely to change within the same paragraph. Furthermore, the document as a whole tends to have a limited number of sets of font attributes. These assumptions allow better use of context than the raw data alone, or than simpler methods that would oversmooth the data.
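The paragraph-level assumption for slowly varying attributes such as serifness can be sketched as a majority vote over noisy per-word detections; the word-dict layout and field names here are hypothetical, not the paper's data structures:

```python
from collections import Counter

def smooth_serifness(paragraph_words):
    """Replace noisy per-word serif detections with the paragraph's
    majority label, reflecting the assumption that serifness rarely
    changes within a paragraph. Word-level attributes such as boldness
    would deliberately NOT be smoothed this way."""
    labels = [w["serif"] for w in paragraph_words]
    majority = Counter(labels).most_common(1)[0][0]
    return [dict(w, serif=majority) for w in paragraph_words]
```

A full system would weight votes by detector confidence and also cluster the document into a small number of attribute sets, per the third assumption.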
On Time Delay Margin Estimation for Adaptive Control and Optimal Control Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2011-01-01
This paper presents methods for estimating the time delay margin for adaptive control of input delay systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent an adaptive law by a locally bounded linear approximation within a small time window. The time delay margin of this input delay system represents a local stability measure and is computed analytically by three methods: Padé approximation, the Lyapunov-Krasovskii method, and the matrix measure method. These methods are applied to the standard model-reference adaptive control, the s-modification adaptive law, and the optimal control modification adaptive law. The windowing analysis results in non-unique estimates of the time delay margin since it is dependent on the length of a time window and parameters which vary from one time window to the next. The optimal control modification adaptive law overcomes this limitation in that, as the adaptive gain tends to infinity and if the matched uncertainty is linear, then the closed-loop input delay system tends to an LTI system. A lower bound of the time delay margin of this system can then be estimated uniquely without the need for the windowing analysis. Simulation results demonstrate the feasibility of the bounded linear stability method for time delay margin estimation.
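The Padé route to a time delay margin estimate can be illustrated on the scalar system x'(t) = -x(t - tau), whose exact margin is pi/2. This toy example is for orientation only, not the paper's adaptive-control model:

```python
import math

# For x'(t) = -x(t - tau), the exact time delay margin is pi/2. A
# first-order Pade approximant e^{-tau*s} ~ (1 - tau*s/2)/(1 + tau*s/2)
# turns the characteristic equation s + e^{-tau*s} = 0 into the
# polynomial (tau/2)*s**2 + (1 - tau/2)*s + 1 = 0.

def pade1_stable(tau):
    # a quadratic with positive leading coefficient is Hurwitz iff all
    # of its coefficients are positive
    return tau > 0 and (1 - tau / 2) > 0

exact_margin = math.pi / 2   # about 1.571
pade_margin = 2.0            # stability boundary of the Pade polynomial
```

Here the first-order approximant overestimates the margin (2 vs. about 1.571), illustrating why higher-order approximants or the Lyapunov-Krasovskii route can give tighter, safer estimates.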
Applicability of ambient toxicity testing to national or regional water-quality assessment
Elder, J.F.
1989-01-01
Comprehensive assessment of the quality of natural waters requires a multifaceted approach. Based on experimentation designed to monitor responses of organisms to environmental stresses, toxicity testing may serve diverse purposes in water quality assessments. These purposes may include identification of waters that warrant further study because of poor water quality or unusual ecological features, verification of other types of monitoring, or assessment of contaminant effects on aquatic communities. A wide variety of toxicity test methods have been developed to fulfill the needs of diverse applications. The methods differ primarily in the selections made relative to four characteristics: (1) test species, (2) endpoints (acute or chronic), (3) test enclosure type, and (4) the test substance (toxicant) that functions as the environmental stress. Toxicity test approaches vary in their capacity to meet the needs of large-scale assessments of existing water quality. Ambient testing is more likely to meet these needs than are procedures that call for exposure of the test organisms to known concentrations of a single toxicant. However, meaningful interpretation of ambient test results depends on the existence of accompanying chemical analyses of the ambient media. The ambient test substance may be water or sediments. Sediment tests have had limited application, but they are useful because most toxicants tend to accumulate in sediments, and many test species either inhabit the sediments or are in frequent contact with them. Biochemical testing methods, which have been developing rapidly in recent years, are likely to be among the most useful procedures for large-scale water quality assessments. They are relatively rapid and simple, and more importantly, they focus on biochemical changes that are the initial responses of virtually all organisms to environmental stimuli.
Most species are sensitive to relatively few toxicants, and their sensitivities vary as conditions change. One of the most informative approaches to toxicity testing is to combine biochemical tests with other test methods in a 'battery of tests' that is diversified enough to characterize different types of toxicants and different trophic levels. (Lantz-PTT)
NASA Astrophysics Data System (ADS)
Dougherty, Andrew W.
Metal oxides are a staple of the sensor industry. The combination of their sensitivity to a number of gases and the electrical nature of their sensing mechanism makes them particularly attractive in solid state devices. The high temperature stability of the ceramic material also makes them ideal for detecting combustion byproducts where exhaust temperatures can be high. However, problems do exist with metal oxide sensors. They are not very selective, as they all tend to be sensitive to a number of reduction and oxidation reactions on the oxide's surface. This makes arrays with large numbers of sensors interesting to study as a method for introducing orthogonality to the system. Also, the sensors tend to suffer from long term drift for a number of reasons. In this thesis I will develop a system for intelligently modeling metal oxide sensors and determining their suitability for use in large arrays designed to analyze exhaust gas streams. It will introduce prior knowledge of the metal oxide sensors' response mechanisms in order to produce a response function for each sensor from sparse training data. The system will use the same technique to model and remove any long term drift from the sensor response. It will also provide an efficient means for determining the orthogonality of the sensors to determine whether they are useful in gas sensing arrays. The system is based on least squares support vector regression using the reciprocal kernel. The reciprocal kernel is introduced along with a method of optimizing the free parameters of the reciprocal kernel support vector machine. The reciprocal kernel is shown to be simpler and to perform better than an earlier kernel, the modified reciprocal kernel. Least squares support vector regression is chosen as it uses all of the training points, and an emphasis was placed throughout this research on extracting the maximum information from very sparse data.
The reciprocal kernel is shown to be effective in modeling the sensor responses in the time, gas and temperature domains, and the dual representation of the support vector regression solution is shown to provide insight into the sensor's sensitivity and potential orthogonality. Finally, the dual weights of the support vector regression solution to the sensor's response are suggested as a fitness function for a genetic algorithm, or some other method for efficiently searching large parameter spaces.
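The core machinery can be illustrated in a few lines. The sketch below is a generic least squares support vector regression fit in numpy; the exact form of the thesis's reciprocal kernel is not reproduced here, so an illustrative kernel of the form 1/(1 + |x - z|/c) is assumed, and the sensor data are a toy stand-in.

```python
import numpy as np

def reciprocal_kernel(X, Z, c=1.0):
    # Illustrative "reciprocal" form only; the thesis's exact kernel may differ.
    return 1.0 / (1.0 + np.abs(X[:, None] - Z[None, :]) / c)

def lssvr_fit(x, y, gamma=10.0, c=1.0):
    # LS-SVM regression: solve the saddle-point linear system
    # [ 0   1^T         ] [b]     [0]
    # [ 1   K + I/gamma ] [alpha] [y]
    n = len(x)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = reciprocal_kernel(x, x, c) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]              # bias b, dual weights alpha

def lssvr_predict(x_train, b, alpha, x_new, c=1.0):
    return reciprocal_kernel(x_new, x_train, c) @ alpha + b

# Very sparse training data, in the spirit of the thesis
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.exp(-x)                          # toy "sensor response"
b, alpha = lssvr_fit(x, y)
pred = lssvr_predict(x, b, alpha, x)
```

The dual weights `alpha` are the quantities the thesis proposes to inspect for sensitivity and orthogonality; every training point carries a nonzero weight, which is why LS-SVR suits sparse data.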
Benjafield, John G
2016-05-01
The digital humanities are being applied with increasing frequency to the analysis of historically important texts. In this study, the methods of G. K. Zipf are used to explore the digital history of the vocabulary of psychology. Zipf studied a great many phenomena, from word frequencies to city sizes, showing that they tend to have a characteristic distribution in which there are a few cases that occur very frequently and many more cases that occur very infrequently. We find that the numbers of new words and word senses that writers contribute to the vocabulary of psychology have such a Zipfian distribution. Moreover, those who make the most contributions, such as William James, tend also to invent new metaphorical senses of words rather than new words. By contrast, those who make the fewest contributions tend to invent entirely new words. The use of metaphor makes a text easier for a reader to understand. While the use of new words requires more effort on the part of the reader, it may lead to more precise understanding than does metaphor. On average, new words and word senses become a part of psychology's vocabulary in the time leading up to World War I, suggesting that psychology was "finding its language" (Danziger, 1997) during this period. (c) 2016 APA, all rights reserved.
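The Zipfian rank-frequency check described above can be sketched briefly: sort contribution counts, regress log count on log rank, and read off the exponent. The author names and counts below are hypothetical placeholders, not data from the study.

```python
import numpy as np
from collections import Counter

# Hypothetical counts of new words/word senses contributed per author
contributions = Counter({
    "James": 120, "Baldwin": 60, "Hall": 40, "Titchener": 30,
    "Dewey": 24, "Cattell": 20, "A": 3, "B": 2, "C": 2, "D": 1, "E": 1, "F": 1,
})

counts = np.array(sorted(contributions.values(), reverse=True), dtype=float)
ranks = np.arange(1, len(counts) + 1)

# Zipf's law predicts log(count) ~ -s * log(rank); estimate s by least squares
slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
s = -slope    # a positive exponent indicates a Zipf-like distribution
```

A few heavy contributors and a long tail of single contributions, as in the study's vocabulary data, yield a clearly positive exponent.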
Top-down and bottom-up definitions of human failure events in human reliability analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald Laurids
2014-10-01
In the probabilistic risk assessments (PRAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question is crucial, however, as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PRAs tend to be top-down, defined as a subset of the PRA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) often tend to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.
Simoens, Steven
2013-01-01
Objectives This paper aims to assess the methodological quality of economic evaluations included in Belgian reimbursement applications for Class 1 drugs. Materials and Methods For 19 reimbursement applications submitted during 2011 and Spring 2012, a descriptive analysis assessed the methodological quality of the economic evaluation, evaluated the assessment of that economic evaluation by the Drug Reimbursement Committee and the response to that assessment by the company. Compliance with methodological guidelines issued by the Belgian Healthcare Knowledge Centre was assessed using a detailed checklist of 23 methodological items. The rate of compliance was calculated based on the number of economic evaluations for which the item was applicable. Results Economic evaluations tended to comply with guidelines regarding perspective, target population, subgroup analyses, comparator, use of comparative clinical data and final outcome measures, calculation of costs, incremental analysis, discounting and time horizon. However, more attention needs to be paid to the description of limitations of indirect comparisons, the choice of an appropriate analytic technique, the expression of unit costs in values for the current year, the estimation and valuation of outcomes, the presentation of results of sensitivity analyses, and testing the face validity of model inputs and outputs. Also, a large variation was observed in the scope and depth of the quality assessment by the Drug Reimbursement Committee. Conclusions Although general guidelines exist, pharmaceutical companies and the Drug Reimbursement Committee would benefit from the existence of a more detailed checklist of methodological items that need to be reported in an economic evaluation. PMID:24386474
Learning of Multimodal Representations With Random Walks on the Click Graph.
Wu, Fei; Lu, Xinyan; Song, Jun; Yan, Shuicheng; Zhang, Zhongfei Mark; Rui, Yong; Zhuang, Yueting
2016-02-01
In multimedia information retrieval, most classic approaches tend to represent different modalities of media in the same feature space. With the click data collected from the users' searching behavior, existing approaches take either one-to-one paired data (text-image pairs) or ranking examples (text-query-image and/or image-query-text ranking lists) as training examples, which do not make full use of the click data, particularly the implicit connections among the data objects. In this paper, we treat the click data as a large click graph, in which vertices are images/text queries and edges indicate the clicks between an image and a query. We consider learning a multimodal representation from the perspective of encoding the explicit/implicit relevance relationship between the vertices in the click graph. By minimizing both the truncated random walk loss as well as the distance between the learned representation of vertices and their corresponding deep neural network output, the proposed model, named multimodal random walk neural network (MRW-NN), can be applied not only to learn robust representations of the existing multimodal data in the click graph, but also to deal with unseen queries and images to support cross-modal retrieval. We evaluate the latent representation learned by MRW-NN on a public large-scale click log data set Clickture and further show that MRW-NN achieves much better cross-modal retrieval performance on the unseen queries/images than the other state-of-the-art methods.
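The truncated-random-walk ingredient can be sketched in isolation: walks of fixed length over a bipartite query-image click graph, with transition probabilities proportional to click counts. The neural representation learning is omitted, and the graph, node names, and click counts below are invented for illustration.

```python
import random

# Hypothetical bipartite click graph: queries <-> images, weighted by click counts
clicks = {
    "q:dog":   {"img:1": 5, "img:2": 1},
    "q:puppy": {"img:1": 3},
    "img:1":   {"q:dog": 5, "q:puppy": 3},
    "img:2":   {"q:dog": 1},
}

def truncated_walk(graph, start, length, rng):
    """One click-weighted random walk of fixed (truncated) length."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph[walk[-1]]
        nodes, weights = zip(*neighbors.items())
        walk.append(rng.choices(nodes, weights=weights, k=1)[0])
    return walk

rng = random.Random(0)
walks = [truncated_walk(clicks, v, 5, rng) for v in clicks for _ in range(10)]
```

Because the graph is bipartite, every walk alternates between queries and images; in the full model such walks supply the co-occurrence pairs whose loss the network minimizes.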
In situ alkali-silica reaction observed by x-ray microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurtis, K.E.; Monteiro, P.J.M.; Brown, J.T.
1997-04-01
In concrete, alkali metal ions and hydroxyl ions contributed by the cement and reactive silicates present in aggregate can participate in a destructive alkali-silica reaction (ASR). This reaction of the alkalis with the silicates produces a gel that tends to imbibe water found in the concrete pores, leading to swelling of the gel and eventual cracking of the affected concrete member. Over 104 cases of alkali-aggregate reaction in dams and spillways have been reported around the world. At present, no method exists to arrest the expansive chemical reaction which generates significant distress in the affected structures. Most existing techniques available for the examination of concrete microstructure, including ASR products, demand that samples be dried and exposed to high pressure during the observation period. These sample preparation requirements present a major disadvantage for the study of alkali-silica reaction. Given the nature of the reaction and the effect of water on its products, it is likely that the removal of water will affect the morphology, creating artifacts in the sample. The purpose of this research is to observe and characterize the alkali-silica reaction, including each of the specific reactions identified previously, in situ without introducing sample artifacts. For observation of unconditioned samples, x-ray microscopy offers an opportunity for such an examination of the alkali-silica reaction. Currently, this investigation is focusing on the effect of calcium ions on the alkali-silica reaction.
Analysis of an age structured model for tick populations subject to seasonal effects
NASA Astrophysics Data System (ADS)
Liu, Kaihui; Lou, Yijun; Wu, Jianhong
2017-08-01
We investigate an age-structured hyperbolic equation model by allowing the birth and death functions to be density dependent and periodic in time with the consideration of seasonal effects. By studying the integral form solution of this general hyperbolic equation obtained through the method of integration along characteristics, we give a detailed proof of the uniqueness and existence of the solution in light of the contraction mapping theorem. With additional biologically natural assumptions, using the tick population growth as a motivating example, we derive an age-structured model with time-dependent periodic maturation delays, which is quite different from the existing population models with time-independent maturation delays. For this periodic differential system with seasonal delays, the basic reproduction number R0 is defined as the spectral radius of the next generation operator. Then, we show that the tick population tends to die out when R0 < 1 and remains persistent if R0 > 1. When there is no intra-specific competition among immature individuals due to the sufficient availability of immature tick hosts, the global stability of the positive periodic state for the whole model system of four delay differential equations can be obtained with the observation that a scalar subsystem for the adult stage size can be decoupled. The challenge for the proof of such a global stability result can be overcome by introducing a new phase space, based on which, a periodic solution semiflow can be defined which is eventually strongly monotone and strictly subhomogeneous.
Problematic gaming exists and is an example of disordered gaming
Griffiths, Mark D.; Kuss, Daria J.; Lopez-Fernandez, Olatz; Pontes, Halley M.
2017-01-01
Background The recent paper by Aarseth et al. (2016) questioned whether problematic gaming should be considered a new disorder particularly because “Gaming Disorder” (GD) has been identified as a disorder to be included in the next (11th) revision of the World Health Organization’s International Classification of Diseases (ICD-11). Methods This study uses contemporary literature to argue why GD should be included in the ICD-11. Results Aarseth and colleagues acknowledge that there is much literature (including papers by some of the authors themselves) that some individuals experience serious problems with video gaming. How can such an activity be seriously problematic yet not disordered? Similar to other addictions, gaming addiction is relatively rare and is in essence a syndrome (i.e., a condition or disorder characterized by a set of associated symptoms that tend to occur under specific circumstances). Consequently, not everyone will exhibit exactly the same set of symptoms and consequences, and this partly explains why those working in the problematic gaming field often disagree on symptomatology. Conclusions Research into gaming is not about pathologizing healthy entertainment, but about pathologizing excessive and problematic behaviors that cause significant psychological distress and impairment in an individual’s life. These are two related, but (ultimately) very distinct phenomena. While being aware that gaming is a pastime activity which is enjoyed non-problematically by many millions of individuals worldwide, it is concluded that problematic gaming exists and that it is an example of disordered gaming. PMID:28816501
Apparent Motives for Aggression in the Social Context of the Bar
Graham, Kathryn; Bernards, Sharon; Osgood, D. Wayne; Parks, Michael; Abbey, Antonia; Felson, Richard B.; Saltz, Robert F.; Wells, Samantha
2013-01-01
Objective Little systematic research has focused on motivations for aggression and most of the existing research is qualitative and atheoretical. This study increases existing knowledge by using the theory of coercive actions to quantify the apparent motives of individuals involved in barroom aggression. Objectives were to examine: gender differences in the use of compliance, grievance, social identity, and excitement motives; how motives change during an aggressive encounter; and the relationship of motives to aggression severity. Method We analyzed 844 narrative descriptions of aggressive incidents observed in large late-night drinking venues as part of the Safer Bars evaluation. Trained coders rated each type of motive for the 1,507 bar patrons who engaged in aggressive acts. Results Women were more likely to be motivated by compliance and grievance, many in relation to unwanted sexual overtures from men; whereas men were more likely to be motivated by social identity concerns and excitement. Aggressive acts that escalated tended to be motivated by identity or grievance, with identity motivation especially associated with more severe aggression. Conclusions A key factor in preventing serious aggression is to develop approaches that focus on addressing identity concerns in the escalation of aggression and defusing incidents involving grievance and identity motives before they escalate. In bars, this might include training staff to recognize and defuse identity motives and eliminating grievance-provoking situations such as crowd bottlenecks and poorly managed queues. Preventive interventions generally need to more directly address the role of identity motives, especially among men. PMID:24224117
Ariane Transfer Vehicle in service of man in orbit
NASA Astrophysics Data System (ADS)
Deutscher, N.; Schefold, K.; Cougnet, C.
1988-10-01
The Ariane Transfer Vehicle (ATV), an unmanned propulsion system that is designed to be carried by the Ariane 5 launch vehicle, will undertake the logistical support required by the International Space Station and the Man-Tended Free Flyer, carrying both pressurized and unpressurized cargo to these spacecraft and carrying away wastes. The ATV is an expendable vehicle, disposed of by burn-up during reentry, and will be available for initial operations in 1996. In order to minimize development costs and recurrent costs, the ATV design will incorporate existing hardware and software.
NASA Technical Reports Server (NTRS)
Grugel, R. N.; Brush, L. N.
1996-01-01
Highly segregated macrostructures tend to develop during processing of hypermonotectic alloys because of the density difference existing between the two liquid phases. The approximately 4.6 seconds of low-gravity provided by Marshall Space Flight Center's 105 meter drop tube was utilized to minimize density-driven separation and promote uniform microstructures in hypermonotectic Ag-Ni and Ag-Mn alloys. For the Ag-Ni alloys a numerical model was developed to track heat flow and solidification of the bi-metal drop configuration. Results, potential applications, and future work are presented.
Quasi-neutral limit of Euler–Poisson system of compressible fluids coupled to a magnetic field
NASA Astrophysics Data System (ADS)
Yang, Jianwei
2018-06-01
In this paper, we consider the quasi-neutral limit of a three-dimensional Euler-Poisson system of compressible fluids coupled to a magnetic field. We prove that, as the Debye length tends to zero, periodic initial-value problems of the model have unique smooth solutions existing in the time interval where the ideal incompressible magnetohydrodynamic equations have a smooth solution. Meanwhile, it is proved that smooth solutions converge to solutions of the incompressible magnetohydrodynamic equations with a sharp convergence rate in the process of the quasi-neutral limit.
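Schematically, and with notation assumed here rather than taken from the paper, the scaled system and its limit have the following shape, where λ denotes the Debye length:

```latex
\begin{aligned}
&\partial_t n + \nabla\cdot(n u) = 0,\\
&\partial_t u + (u\cdot\nabla)u + \frac{\nabla p(n)}{n}
  = \nabla\phi + \frac{(\nabla\times B)\times B}{n},\\
&\partial_t B - \nabla\times(u\times B) = 0,\qquad \nabla\cdot B = 0,\\
&\lambda^{2}\,\Delta\phi = n - 1 .
\end{aligned}
```

As λ → 0 the Poisson equation formally forces n → 1, the continuity equation then gives ∇·u → 0, and the system reduces to the ideal incompressible MHD equations.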
Three dimensional thermal stresses in angle-ply composite laminates
NASA Technical Reports Server (NTRS)
Griffin, O. Hayden, Jr.
1988-01-01
The room temperature stress distributions and shapes of a family of angle ply graphite/epoxy laminates have been obtained using a three-dimensional linear finite element analysis. The sensitivity of the corners to fiber angle variations is examined, in addition to the errors introduced by assuming planes of symmetry which do not exist in angle-ply laminates. The results show that angle ply laminates with 'clustered' plies will tend to delaminate at diagonally opposite corners, and that matrix cracks in this family of laminates will be initiated in the laminate interior.
Butterworth, C. E.; Baugh, C. M.; Krumdieck, Carlos
1969-01-01
The absorption and metabolism of synthetic polyglutamates of folic acid have been compared with free pteroylglutamic acid in four subjects having chronic lymphatic leukemia and one with Hodgkin's granuloma. Pteroylpolyglutamates containing either three or seven glutamate residues were prepared by the solid-phase method permitting placement of carbon-14 labels in either the pteridine ring or in a selected glutamate unit of the gamma peptide chain. Complete dissociation was observed between biological folate activity and radioactivity of plasma after ingestion of pteroyltriglutamate labeled in the middle glutamate. This indicates cleavage to the monoglutamate form at the time of absorption from the intestine or very soon thereafter. A large portion of radioactivity liberated from the middle glutamate is recoverable as carbon dioxide in the exhaled air. Fecal losses of folate tended to be greater with increasing length of the poly-γ-glutamyl chain. Higher blood levels and greater urinary losses of folate tended to occur after ingestion of mono- and triglutamates than with the heptaglutamate. Calculations based on radioactivity determinations in feces plus urinary folate losses, judged by either radioactivity or microbiological assays, indicated net retention of 37-67% of the dose irrespective of chain length ingested and major avenue of loss. During the peak of absorption the folate circulating in plasma was active for both Streptococcus fecalis and Lactobacillus casei and carried specific radioactivity which was virtually identical with that of the administered dose. This suggests that neither methylation, conjugation, nor displacement of nonradioactive folate occurred to any significant extent during the 1st 2 hr. The specific radioactivity of 24-hr urine specimens as measured with L. casei corresponded closely with that of the administered dose. 
Evidence exists that methylation of the radioactive folate may occur, but significant displacement of nonradioactive methylfolate was not observed under the conditions of this study. Since 50-75% of administered heptaglutamate appears to be absorbable in man, estimates of dietary intake should include this fraction as well as the “free” folate. PMID:4977032
Hypochondriasis and its relationship to obsessive-compulsive disorder.
Fallon, B A; Qureshi, A I; Laje, G; Klein, B
2000-09-01
Hypochondriasis is a heterogeneous disorder. This was well demonstrated in the study by Kellner et al, which showed that patients with high levels of disease fear tended to be more anxious or phobic, whereas patients with high levels of disease conviction tended to have more numerous and more severe somatic symptoms. Little comorbidity exists to support the statement that hypochondriasis is an obsessive-compulsive spectrum disorder. Although patients exist whose hypochondriac concerns are identical in quality to the intrusive thoughts of patients with OCD, as a group, patients with hypochondriasis do not share a comorbidity profile comparable with that of patients with OCD. The data support a closer relationship between hypochondriasis and somatization disorder than between hypochondriasis and OCD. The family history data are limited by the lack of adequate studies. Using comparable methods of the family history approach, Black's study reported a higher frequency of GAD but not OCD among the relatives of OCD patients--a finding similar to what Noyes found among the relatives of hypochondriac patients; however, using the direct interview method, somatization disorder was the only statistically more common disorder among relatives of female hypochondriac patients. Therefore, although the parallel in overlap with GAD is suggestive of a commonality between OCD, GAD, and hypochondriasis, the finding of a greater frequency of somatization disorder leans against the hypothesis that hypochondriasis is best considered an OCD spectrum disorder. The pharmacologic treatment data are the one type of biologic evidence that supports a bridge to OCD. The pharmacologic studies suggest that for patients with general hypochondriasis, TCAs are not effective and that higher dosages and longer trials of the SRIs are needed. These pharmacologic observations are comparable with the ones made for patients with OCD but dissimilar to the observations made for depression.
The benefit of imipramine among patients with illness phobia must be assessed in placebo-controlled trials among illness phobics and among hypochondriacs. Even more valuable would be a direct comparison of a TCA (e.g., imipramine or desipramine) and a selective SRI (e.g., fluoxetine) to determine whether the response to selective SRIs is greater. Although the pharmacologic data are compelling in supporting the hypothesis that hypochondriasis is an obsessive-compulsive spectrum disorder, the comorbidity data are equally compelling in dispelling that hypothesis. Perhaps future studies will clarify the subtypes of hypochondriasis, be they "phobic, obsessive, and depressive," "chronic and episodic," "early onset versus late onset," or some other as yet undetermined subtype. Such clarification may be aided by better instruments to assess the obsessive-compulsive and hypochondriasis spectra within individuals and families and by neuropsychological or pharmacologic challenge and neuroimaging studies.
NASA Technical Reports Server (NTRS)
Sellers, J. F.; Daniele, C. J.
1975-01-01
The DYNGEN, a digital computer program for analyzing the steady state and transient performance of turbojet and turbofan engines, is described. The DYNGEN is based on earlier computer codes (SMOTE, GENENG, and GENENG 2) which are capable of calculating the steady state performance of turbojet and turbofan engines at design and off-design operating conditions. The DYNGEN has the combined capabilities of GENENG and GENENG 2 for calculating steady state performance; to these the further capability for calculating transient performance was added. The DYNGEN can be used to analyze one- and two-spool turbojet engines or two- and three-spool turbofan engines without modification to the basic program. A modified Euler method is used by DYNGEN to solve the differential equations which model the dynamics of the engine. This new method frees the programmer from having to minimize the number of equations which require iterative solution. As a result, some of the approximations normally used in transient engine simulations can be eliminated. This tends to produce better agreement when answers are compared with those from purely steady state simulations. The modified Euler method also permits the user to specify large time steps (about 0.10 sec) to be used in the solution of the differential equations. This saves computer execution time when long transients are run. Examples of the use of the program are included, and program results are compared with those from an existing hybrid-computer simulation of a two-spool turbofan.
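The "modified Euler method" the abstract refers to is commonly the Heun predictor-corrector scheme; a minimal sketch under that interpretation is shown below, integrating a toy two-spool relaxation model with the large 0.10 s step the abstract mentions. The dynamics and time constants are hypothetical stand-ins, not DYNGEN's actual engine equations.

```python
def modified_euler(f, y, t, dt):
    """Heun's (modified Euler) step: predict with forward Euler, then
    correct with the average of the slopes at both ends of the step."""
    k1 = f(t, y)
    y_pred = [yi + dt * ki for yi, ki in zip(y, k1)]
    k2 = f(t + dt, y_pred)
    return [yi + 0.5 * dt * (k1i + k2i) for yi, k1i, k2i in zip(y, k1, k2)]

def spool_dynamics(t, y):
    # Toy model: each spool speed relaxes toward its demanded value of 1.0
    n1, n2 = y
    return [(1.0 - n1) / 0.5, (1.0 - n2) / 1.2]   # hypothetical time constants (s)

y = [0.2, 0.2]                 # initial normalized spool speeds
t, dt = 0.0, 0.10              # the large time step the abstract cites
for _ in range(100):           # a 10 s transient
    y = modified_euler(spool_dynamics, y, t, dt)
    t += dt
```

Because the corrector averages the two slope evaluations, the scheme stays stable and accurate at steps much larger than forward Euler would tolerate, which is the execution-time saving the abstract describes for long transients.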
Polgár, L; Soós, P; Lajkó, E; Láng, O; Merkely, B; Kőhidai, L
2018-06-01
Thrombogenesis plays an important role in today's morbidity and mortality. Antithrombotics are among the most frequently prescribed drugs. Thorough knowledge of platelet function is needed for optimal clinical care. Platelet adhesion is a separate subprocess of platelet thrombus formation; still, no well-standardized technique for the isolated measurement of platelet adhesion exists. Impedimetry is one of the most reliable, state-of-the-art techniques to analyze cell adhesion, proliferation, viability, and cytotoxicity. We propose impedimetry as a feasible novel method for the isolated measurement of 2 significant platelet functions: adhesion and spreading. Laboratory reference platelet agonists (epinephrine, ADP, and collagen) were applied to characterize platelet functions by impedimetry using the xCELLigence SP system. Platelet samples were obtained from 20 healthy patients under no drug therapy. Standard laboratory parameters and clinical patient history were also analyzed. Epinephrine and ADP increased platelet adhesion in a concentration-dependent manner, while collagen tended to have a negative effect. Serum sodium and calcium levels and age had a negative correlation with platelet adhesion induced by epinephrine and ADP, while increased immunoreactivity connected with allergic diseases was associated with increased platelet adhesion induced by epinephrine and ADP. ADP increased platelet spreading in a concentration-dependent manner. Impedimetry proved to be a useful and sensitive method for the qualitative and quantitative measurement of platelet adhesion, even differentiating between subgroups of a healthy population. This novel technique is offered as an important method in the further investigation of platelet function. © 2018 John Wiley & Sons Ltd.
Evaluation of free modeling targets in CASP11 and ROLL.
Kinch, Lisa N; Li, Wenlin; Monastyrskyy, Bohdan; Kryshtafovych, Andriy; Grishin, Nick V
2016-09-01
We present an assessment of 'template-free modeling' (FM) in CASP11 and ROLL. Community-wide server performance suggested the use of automated scores similar to previous CASPs would provide a good system of evaluating performance, even in the absence of comprehensive manual assessment. The CASP11 FM category included several outstanding examples, including successful prediction by the Baker group of a 256-residue target (T0806-D1) that lacked sequence similarity to any existing template. The top server model prediction by Zhang's Quark, which was apparently selected and refined by several manual groups, encompassed the entire fold of target T0837-D1. Methods from the same two groups tended to dominate overall CASP11 FM and ROLL rankings. Comparison of top FM predictions with those from the previous CASP experiment revealed progress in the category, particularly reflected in high prediction accuracy for larger protein domains. FM prediction models for two cases were sufficient to provide functional insights that were otherwise not obtainable by traditional sequence analysis methods. Importantly, CASP11 abstracts revealed that alignment-based contact prediction methods brought about much of the CASP11 progress, producing both of the functionally relevant models as well as several of the other outstanding structure predictions. These methodological advances enabled de novo modeling of much larger domain structures than was previously possible and allowed prediction of functional sites. Proteins 2016; 84(Suppl 1):51-66. © 2015 Wiley Periodicals, Inc.
Vincenzi, Simone; Mangel, Marc; Crivelli, Alain J; Munch, Stephan; Skaug, Hans J
2014-09-01
The differences in demographic and life-history processes between organisms living in the same population have important consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy growth model with random effects. To illustrate the power and generality of the method, we consider two populations of marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as potential predictors of the von Bertalanffy growth function's parameters k (rate of growth) and L∞ (asymptotic size). Our results showed that size ranks were largely maintained throughout marble trout lifetime in both populations. According to the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both populations, models including density during the first year of life showed that growth tended to decrease with increasing population density early in life. Model validation showed that predictions of individual growth trajectories using the random-effects model were more accurate than predictions based on mean size-at-age of fish.
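The von Bertalanffy growth function at the core of the model can be sketched directly. The random-effects (Empirical Bayes) machinery is omitted here; the fish parameters, ages, and the crude grid-search fit below are illustrative assumptions, not the study's estimation method or data.

```python
import numpy as np

def vb_length(t, L_inf, k):
    # von Bertalanffy growth: length approaches asymptotic size L_inf at rate k
    return L_inf * (1.0 - np.exp(-k * t))

def fit_vb(ages, lengths, k_grid=np.linspace(0.05, 1.5, 200)):
    """Crude fit: for each candidate k, L_inf has a closed-form
    least-squares solution; keep the (k, L_inf) pair minimizing SSE."""
    best = None
    for k in k_grid:
        f = 1.0 - np.exp(-k * ages)
        L_inf = (lengths @ f) / (f @ f)
        sse = np.sum((lengths - L_inf * f) ** 2)
        if best is None or sse < best[0]:
            best = (sse, k, L_inf)
    return best[1], best[2]             # k, L_inf

# Two hypothetical fish with individual (k, L_inf), sampled at five ages
rng = np.random.default_rng(0)
ages = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
for k_true, Linf_true in [(0.4, 300.0), (0.7, 250.0)]:
    obs = vb_length(ages, Linf_true, k_true) + rng.normal(0.0, 2.0, ages.size)
    k_hat, Linf_hat = fit_vb(ages, obs)
```

In the study's framework, each individual's (k, L∞) pair is instead shrunk toward cohort-level means via random effects, which stabilizes estimates when individuals have few recaptures.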
Martinsen, Lene; Ottersen, Trygve; Dieleman, Joseph L; Hessel, Philipp; Kinge, Jonas Minet; Skirbekk, Vegard
2018-01-01
Per capita allocation of overall development assistance has been shown to be biased towards countries with lower population size, meaning funders tend to provide proportionally less development assistance to countries with large populations. Individuals that happen to be part of large populations therefore tend to receive less assistance. However, no study has investigated whether this is also true regarding development assistance for health. We examined whether this so-called 'small-country bias' exists in the health aid sector. We analysed the effect of a country's population size on the receipt of development assistance for health per capita (in 2015 US$) among 143 countries over the period 1990-2014. Explanatory variables shown to be associated with receipt of development assistance for health were included: gross domestic product per capita, burden of disease, under-5 mortality rate, maternal mortality ratio, vaccination coverage (diphtheria, tetanus and pertussis) and fertility rate. We used within-between regression analysis, popularised by Mundlak, as well as a number of robustness tests, including ordinary least squares, random-effects and fixed-effects regressions. Our results suggest a significant negative effect of population size on the amount of development assistance for health per capita that countries received. According to the within-between estimator, a 1% larger population size is associated with a 0.4% lower per capita development assistance for health between countries (-0.37, 95% CI -0.45 to -0.28), and 2.3% lower per capita development assistance for health within countries (-2.29, 95% CI -3.86 to -0.72). Our findings support the hypothesis that small-country bias exists within international health aid, as has been previously documented for aid in general.
In a rapidly changing landscape of global health and development, the inclusion of population size in allocation decisions should be challenged on the basis of equitable access to healthcare and health aid effectiveness.
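The within-between estimator mentioned above can be sketched in a few lines. This is a hedged illustration with simulated data (all numbers hypothetical, chosen to echo the reported -0.4 between and -2.3 within coefficients): each country's regressor is split into its group mean (between variation) and the deviation from that mean (within variation), and both enter a single OLS fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n_countries, n_years = 50, 20
country = np.repeat(np.arange(n_countries), n_years)

# Simulated log population: a stable country-level component plus yearly variation
x_level = rng.normal(10.0, 1.0, n_countries)
x = x_level[country] + rng.normal(0.0, 0.2, country.size)

# Hypothetical data-generating process: between slope -0.4, within slope -2.3
y = (-0.4 * x_level[country]
     - 2.3 * (x - x_level[country])
     + rng.normal(0.0, 0.1, country.size))

# Within-between (Mundlak) decomposition: group mean and within-group deviation
x_bar = np.array([x[country == c].mean() for c in range(n_countries)])[country]
x_dev = x - x_bar

X = np.column_stack([np.ones_like(x), x_bar, x_dev])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b_between, b_within = beta[1], beta[2]
```

The two coefficients answer different questions, as in the abstract: b_between compares countries with persistently larger populations, while b_within tracks what happens when a given country's population grows over time.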
Using Computers in Relation to Learning Climate in CLIL Method
ERIC Educational Resources Information Center
Binterová, Helena; Komínková, Olga
2013-01-01
The main purpose of the work is to present a successful implementation of CLIL method in Mathematics lessons in elementary schools. Nowadays at all types of schools (elementary schools, high schools and universities) all over the world every school subject tends to be taught in a foreign language. In 2003, a document called Action plan for…
Postfire mortality of ponderosa pine and Douglas-fir: a review of methods to predict tree death
James F. Fowler; Carolyn Hull Sieg
2004-01-01
This review focused on the primary literature that described, modeled, or predicted the probability of postfire mortality in ponderosa pine (Pinus ponderosa) and Douglas-fir (Pseudotsuga menziesii). The methods and measurements that were used to predict postfire tree death tended to fall into two general categories: those focusing...
Method to Identify Deep Cases Based on Relationships between Nouns, Verbs, and Particles
ERIC Educational Resources Information Center
Ide, Daisuke; Kimura, Masaomi
2016-01-01
Deep cases representing the significant meaning of nouns in sentences play a crucial role in semantic analysis. However, a case tends to be manually identified because it requires understanding the meaning and relationships of words. To address this problem, we propose a method to predict deep cases by analyzing the relationship between nouns,…
Psychological Evaluations of Patients Operated for Idiopathic Scoliosis by the Harrington Method.
ERIC Educational Resources Information Center
Orvomaa, E.
1998-01-01
A study of 204 patients operated on for idiopathic scoliosis by the Harrington method between 1970 and 1975 found that patients were content with their lives, tended to form families later in life, and had fewer sexual relationships. The patients felt their illness had mostly influenced their participation in work and in physical activities.…
Regeneration alternatives for upland white spruce after burning and logging in interior Alaska
R. V. Densmore; G. P. Juday; John C. Zasada
1999-01-01
Site-preparation and regeneration methods for white spruce (Picea glauca (Moench) Voss) were tested near Fairbanks, Alaska, on two upland sites which had been burned in a wildfire and salvage logged. After 5 and 10 years, white spruce regeneration did not differ among the four scarification methods but tended to be lower without scarification....
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P; Mao, T; Gong, S
2016-06-15
Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, due to the sparsifiable feature of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) using image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is used as the initial value for the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient dataset. The number of iterations is reduced by overall 40%. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
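A minimal sketch of the segmentation step described above: quantize the noisy first-pass image into a small number of tissue classes and use the resulting piecewise-constant image as the optimization starting point. The toy Python below is a hypothetical illustration (a two-class phantom and a simple intensity k-means standing in for a full segmentation pipeline); it shows only the template construction, not the TV iteration itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "first-pass" image: two tissue classes (0.0 and 1.0) plus noise,
# standing in for a noisy analytical (e.g. FDK) reconstruction
truth = np.zeros((64, 64))
truth[16:48, 16:48] = 1.0
first_pass = truth + rng.normal(0.0, 0.15, truth.shape)

def piecewise_template(img, n_classes=2, n_iter=20):
    """1-D k-means on pixel intensities -> piecewise-constant template."""
    centers = np.linspace(img.min(), img.max(), n_classes)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(img[..., None] - centers), axis=-1)
        for c in range(n_classes):
            if np.any(labels == c):
                centers[c] = img[labels == c].mean()
    return centers[labels]

template = piecewise_template(first_pass)

# The template is already much closer to piecewise-constant truth
# than the noisy first pass, so it is a better optimization start
err_template = np.abs(template - truth).mean()
err_first = np.abs(first_pass - truth).mean()
```

In the proposed method this template would seed the TV-regularized iteration, which is where the reported reduction in iteration count comes from.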
A simple method for simulating wind profiles in the boundary layer of tropical cyclones
Bryan, George H.; Worsnop, Rochelle P.; Lundquist, Julie K.; ...
2016-11-01
A method to simulate characteristics of wind speed in the boundary layer of tropical cyclones in an idealized manner is developed and evaluated. The method can be used in a single-column modelling set-up with a planetary boundary-layer parametrization, or within large-eddy simulations (LES). The key step is to include terms in the horizontal velocity equations representing advection and centrifugal acceleration in tropical cyclones that occurs on scales larger than the domain size. Compared to other recently developed methods, which require two input parameters (a reference wind speed, and radius from the centre of a tropical cyclone) this new method also requires a third input parameter: the radial gradient of reference wind speed. With the new method, simulated wind profiles are similar to composite profiles from dropsonde observations; in contrast, a classic Ekman-type method tends to overpredict inflow-layer depth and magnitude, and two recently developed methods for tropical cyclone environments tend to overpredict near-surface wind speed. When used in LES, the new technique produces vertical profiles of total turbulent stress and estimated eddy viscosity that are similar to values determined from low-level aircraft flights in tropical cyclones. Lastly, temporal spectra from LES produce an inertial subrange for frequencies ≳0.1 Hz, but only when the horizontal grid spacing ≲20 m.
A Simple Method for Simulating Wind Profiles in the Boundary Layer of Tropical Cyclones
NASA Astrophysics Data System (ADS)
Bryan, George H.; Worsnop, Rochelle P.; Lundquist, Julie K.; Zhang, Jun A.
2017-03-01
A method to simulate characteristics of wind speed in the boundary layer of tropical cyclones in an idealized manner is developed and evaluated. The method can be used in a single-column modelling set-up with a planetary boundary-layer parametrization, or within large-eddy simulations (LES). The key step is to include terms in the horizontal velocity equations representing advection and centrifugal acceleration in tropical cyclones that occurs on scales larger than the domain size. Compared to other recently developed methods, which require two input parameters (a reference wind speed, and radius from the centre of a tropical cyclone) this new method also requires a third input parameter: the radial gradient of reference wind speed. With the new method, simulated wind profiles are similar to composite profiles from dropsonde observations; in contrast, a classic Ekman-type method tends to overpredict inflow-layer depth and magnitude, and two recently developed methods for tropical cyclone environments tend to overpredict near-surface wind speed. When used in LES, the new technique produces vertical profiles of total turbulent stress and estimated eddy viscosity that are similar to values determined from low-level aircraft flights in tropical cyclones. Temporal spectra from LES produce an inertial subrange for frequencies ≳ 0.1 Hz, but only when the horizontal grid spacing ≲ 20 m.
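A single-column sketch of the idea: add tendency terms of the general form described above (gradient-wind imbalance plus a term involving the radial gradient of the reference wind) to a toy boundary-layer model with constant eddy viscosity and bulk surface drag. All parameter values and the discretization below are hypothetical illustrations, not the authors' formulation.

```python
import numpy as np

# All numerical values are hypothetical, chosen only for a stable toy run
f, K, Cd = 5e-5, 50.0, 1.5e-3        # Coriolis (1/s), eddy viscosity (m^2/s), drag coeff.
V_ref, R, dVdr = 40.0, 40e3, -5e-4   # reference wind (m/s), radius (m), radial gradient (1/s)
dz, nz, dt = 50.0, 40, 10.0          # 50 m layers up to 2 km, 10 s time steps

u = np.zeros(nz)                     # radial (inflow) wind
v = np.full(nz, V_ref)               # tangential wind

def diffuse(q, surf_flux):
    """Vertical mixing: interior diffusion, surface-flux bottom, fixed top."""
    d = np.zeros_like(q)
    d[1:-1] = K * (q[2:] - 2.0 * q[1:-1] + q[:-2]) / dz**2
    d[0] = (K * (q[1] - q[0]) / dz - surf_flux) / dz
    return d

for _ in range(17280):               # ~48 h, enough to approach a steady state
    spd = np.hypot(u[0], v[0])
    # Gradient-wind imbalance drives inflow; the dVdr term is the third input
    du = f * (v - V_ref) + (v**2 - V_ref**2) / R + diffuse(u, Cd * spd * u[0])
    dv = -u * (f + V_ref / R + dVdr) + diffuse(v, Cd * spd * v[0])
    u += dt * du
    v += dt * dv
    u[-1], v[-1] = 0.0, V_ref        # hold the mesoscale state aloft
```

The expected qualitative behaviour is an Ekman-like balance: surface drag makes the near-surface tangential wind subgradient, which in turn drives radial inflow in the lowest layers.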
Attitudes of neurology specialists toward older adults.
Seferoğlu, Meral; Yıldız, Demet; Pekel, Nilüfer Büyükkoyuncu; Güneş, Aygül; Yıldız, Abdülmecit; Tufan, Fatih
2017-08-01
Attitude of healthcare providers toward older people is very important in the aging world. Neurologists contact older adults very frequently. We aimed to investigate the attitudes of neurologists toward older adults. We recorded participants age; sex; duration of clinical practice in neurology; existence of older adult relatives; and history of geriatrics education, nursing home visits, older adult patient density in their clinical practice, and participation in voluntary public activities. UCLA Geriatrics Attitude Scale was used to evaluate participants' attitudes. A total of 100 neurologists participated in this study. Seventy-seven percent had positive, 3 % had neutral, and 20 % had negative attitudes. Twenty-seven percent of the participants had history of geriatrics education, and these participants tended to have a higher rate of positive attitudes. Neurologists with positive attitudes tended to be older than those with negative attitudes. Participants with history of living with older adult relatives had lower rates of positive attitudes. The most common diagnoses of the patients the participants encountered were stroke and dementia. Independent factors associated with positive attitudes were history of geriatrics education and older age. History of living with older relatives tended to have a negative effect. Most of the negative items of the attitude scale were associated with the natural course and behavior of the common diseases in neurology practice. Generalization of geriatrics education may translate into a better understanding and improved care for older patients. Development of instruments and implementation of qualitative studies to assess attitudes of neurologists toward older adults are needed.
Personas in Cross-Cultural Projects
NASA Astrophysics Data System (ADS)
Nielsen, Lene
Personas are a method to communicate data about users and to aid in the perception of users. The method is supposed to create a shared perception of the users that is not built on preconceived ideas, but on field data. The paper presents an experiment where the same persona description was sent to 16 participants in 9 countries. The participants were asked to return a photo that resembled the persona and explain their choice. Analysis of the photos and the explanations show that there is a difference between the participants with professional experiences and those without. The experienced tend to interpret the text and use people in their own immediate surroundings in the explanation for choosing the photo. The second group tends to find exact words and use these as explanation. The photos they choose are of stereotypical business-persons. The different strategies might hinder engagement in the persona.
Mg,Ce co-doped Lu2Gd1(Ga,Al)5O12 by micro-pulling down method and their luminescence properties
NASA Astrophysics Data System (ADS)
Kamada, Kei; Yamaguchi, Hiroaki; Yoshino, Masao; Kurosawa, Shunsuke; Shoji, Yasuhiro; Yokota, Yuui; Ohashi, Yuji; Pejchal, Jan; Nikl, Martin; Yoshikawa, Akira
2018-04-01
The effects of Mg co-doping on the scintillation properties of Ce:Lu2Gd1(Ga,Al)5O12 (LGGAG) single crystals with different Ga/Al ratios were investigated. Mg co-doped and non co-doped Ce:LGGAG single crystals were grown by the micro-pulling down (µ-PD) method and then cut, polished and annealed for each measurement. Absorption spectra, radioluminescence (RL) spectra, pulse height spectra, and scintillation decay were measured to reveal the effect of Mg co-doping. Ce4+ charge transfer (CT) absorption band peaking at ∼260 nm was observed in Mg co-doped samples, which is in good agreement with previous reports for the Ce4+ CT absorption band in other garnet-based crystals. The scintillation decay time tended to be accelerated and the light yield tended to be decreased by Mg co-doping at higher Ga concentrations.
Guizard, Sébastien; Piégu, Benoît; Arensburger, Peter; Guillou, Florian; Bigot, Yves
2016-08-19
The program RepeatMasker and the database Repbase-ISB are part of the most widely used strategy for annotating repeats in animal genomes. They have been used to show that avian genomes have a lower repeat content (8-12 %) than the sequenced genomes of many vertebrate species (30-55 %). However, the efficiency of such library-based strategies is dependent on the quality and completeness of the sequences in the database that is used. An alternative to these library-based methods is to identify repeats de novo. Such methods have existed for at least a decade and may be more powerful than the library-based approaches. We have used an annotation strategy involving several complementary de novo tools to determine the repeat content of the model genome galGal4 (1.04 Gbp), including identifying simple sequence repeats (SSRs), tandem repeats and transposable elements (TEs). We annotated over one Gbp of the galGal4 genome and showed that it is composed of approximately 19 % SSRs and TE repeats. Furthermore, we estimate that the actual genome of the red jungle fowl contains about 31-35 % repeats. We find that library-based methods tend to overestimate TE diversity. These results have a major impact on the current understanding of repeat distributions throughout chromosomes in the red jungle fowl. Our results are a proof of concept of the reliability of using de novo tools to annotate repeats in large animal genomes. They have also revealed issues that will need to be resolved in order to develop gold-standard methodologies for annotating repeats in eukaryote genomes.
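As a toy illustration of the de novo side of such a strategy, the sketch below scans a sequence for perfect short tandem repeats (SSRs) by brute force. It is a naive, hypothetical example (real de novo annotators handle imperfect repeats, overlap merging, and genome-scale data), but it shows the library-free principle: repeats are found from the sequence itself, with no reference database.

```python
def find_ssrs(seq, max_motif=3, min_repeats=4):
    """Naive scan for perfect tandem repeats with motif length 1..max_motif.

    Returns (start, motif, n_repeats) tuples; overlapping hits are reported.
    """
    hits = []
    for k in range(1, max_motif + 1):
        for i in range(len(seq) - k):
            motif = seq[i:i + k]
            # skip motifs that are themselves a run of one base (e.g. "AA"),
            # which would duplicate the k=1 homopolymer scan
            if k > 1 and len(set(motif)) == 1:
                continue
            n = 1
            while seq[i + n * k:i + (n + 1) * k] == motif:
                n += 1
            if n >= min_repeats:
                hits.append((i, motif, n))
    return hits

# Hypothetical toy sequence: (AT) repeated five times inside flanking bases
hits = find_ssrs("ACGATATATATATGC")
```

A production tool would additionally collapse the overlapping "AT"/"TA" calls into one locus and tolerate substitutions within the repeat tract.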
Boulesteix, Anne-Laure; Wilson, Rory; Hapfelmeier, Alexander
2017-09-09
The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly "evidence-based". Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of "evidence-based" statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. We suggest that benchmark studies-a method of assessment of statistical methods using real-world datasets-might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.
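The trial analogy above can be made concrete with a small sketch: treat simulated datasets as the "patients", two estimators as the "interventions", and pre-specify a simple sign test as the analysis. Everything below is hypothetical (the estimators, the data-generating process and the sample sizes are illustrative only, not from the paper).

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(4)

# "Datasets play the role of patients": 40 heavy-tailed datasets on which
# two hypothetical location estimators (mean vs. median) are compared
def err_mean(x):
    return abs(np.mean(x))

def err_median(x):
    return abs(np.median(x))

n_datasets = 40
wins_median = 0
for _ in range(n_datasets):
    x = rng.standard_cauchy(200)          # true centre is 0; mean is a poor estimator here
    wins_median += err_median(x) < err_mean(x)

# Pre-specified analysis: a sign test against a 50/50 split,
# analogous to a pre-registered primary endpoint in a trial
p = binomtest(wins_median, n_datasets, 0.5).pvalue
```

The point of the sketch mirrors the paper's argument: inclusion criteria for datasets and a pre-specified test are decided before looking at results, rather than after.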
Yin, Yuzhi; Bai, Yun; Olivera, Ana; Desai, Avanti; Metcalfe, Dean D
2017-09-01
The culture of mast cells from human tissues such as cord blood, peripheral blood or bone marrow aspirates has advanced our understanding of human mast cell (huMC) degranulation, mediator production and response to pharmacologic agents. However, existing methods for huMC culture tend to be laborious and expensive. Combining technical approaches from several of these protocols, we designed a simplified and more cost-effective approach to the culture of mast cells from human cell populations including peripheral blood and cryopreserved cells from lymphocytapheresis. On average, we reduced by 30-50 fold the amount of culture media compared to our previously reported method, while the total MC number generated by this method (2.46 ± 0.63 × 10⁶ vs. 2.4 ± 0.28 × 10⁶, respectively, from 1.0 × 10⁸ lymphocytapheresis cells or peripheral blood mononuclear cells [PBMCs]) was similar to our previous method (2.36 ± 0.70 × 10⁶), resulting in significant budgetary savings. In addition, we compared the yield of huMCs with or without IL-3 added to early cultures in the presence of stem cell factor (SCF) and interleukin-6 (IL-6) and found that the total MC number generated, while higher with IL-3 in the culture, did not reach statistical significance, suggesting that IL-3, often recommended in the culture of huMCs, is not absolutely required. We then performed a functional analysis by flow cytometry using standard methods that maximized the data we could obtain from cultured cells. We believe these approaches will allow more laboratories to culture and examine huMC behavior going forward. Published by Elsevier B.V.
Ng, Ding-Quan; Liu, Shu-Wei; Lin, Yi-Pin
2018-09-15
In this study, a sampling campaign comprising nine sampling events investigating lead in drinking water was conducted at seven sampling locations in an old building on the National Taiwan University campus that still has lead pipes in service in part of the building. This study aims to assess the effectiveness of four different sampling methods, namely first draw sampling, sequential sampling, random daytime sampling and flush sampling, in lead contamination detection. In 3 out of the 7 sampling locations, those without lead pipes, lead could not be detected (<1.1 μg/L) in most samples regardless of the sampling method. On the other hand, in the 4 sampling locations where lead pipes still existed, total lead concentrations >10 μg/L were consistently observed in 3 locations using any of the four sampling methods, while the remaining location was identified as contaminated using sequential sampling. High lead levels were consistently measured by the four sampling methods in the 3 locations in which particulate lead was either predominant or comparable to soluble lead. Compared to first draw and random daytime sampling, flush sampling tended to reduce total lead in samples at lead-contaminated sites, but the extent of lead reduction was location-dependent and did not depend on flush durations between 5 and 10 min. Overall, first draw sampling and random daytime sampling were reliable and effective in determining lead contamination in this study. Flush sampling could reveal the contamination if the extent is severe but tends to underestimate lead exposure risk. Copyright © 2018 Elsevier B.V. All rights reserved.
[Research on violence against women in Latin America: from blind empiricism to theory without data].
Castro, Roberto; Riquer, Florinda
2003-01-01
Research on violence against women in Latin America presents an interesting paradox: while the number of studies is quite small, there also appears to be a sense that research on this topic has been exhausted, despite the lack of any definitive responses to the nature and causes of the problem. This results from the boom in studies with a strong empirical focus, lacking any basis in more general sociological theory. On the other hand, research using social theory tends to ignore the existing mediations between structural arrangements and any individual specific behavior, as well as the interactive nature of domestic violence. Meanwhile, empirical research presents inconsistent results and tends to run into methodological problems such as operational confusion, contradictory findings, and results and recommendations that are too obvious. New research designs must be developed to enrich the field and which are solidly based on the body of conceptual knowledge in social sciences, abandoning designs without theory and those which are merely statistical. Only then will it be possible to imagine the new research questions that the problem of violence requires.
Representations of cosmetic surgery and emotional health in women's magazines in Canada.
Polonijo, Andrea N; Carpiano, Richard M
2008-01-01
This research examines how popular women's magazines portray cosmetic surgery and associated emotional health. Articles regarding cosmetic surgery were coded from the top five most circulated English-language women's magazines in Canada between 2002 and 2006 for type of procedure, patient demographics, risk information, and indicators of emotional health. Content analysis techniques were used to identify patterns of portraying the risks and benefits of cosmetic surgery. Content analyses show the articles tend to present readers with detailed physical health risk information. However, 48% of articles discuss the impact that cosmetic surgery has on emotional health, most often linking cosmetic surgery with enhanced emotional well-being regardless of the patient's pre-existing state of emotional health. The articles also tend to use accounts given by males to provide defining standards of female attractiveness. These findings are consistent with arguments in the research literature that women's magazines contribute to the medicalization of the female body. Cosmetic surgery is generally portrayed as a risky--but worthwhile--option for women to enhance both their physical appearance and emotional health. The implications for future research and public education strategies are discussed.
Young Children’s Sensitivity to Their Own Ignorance in Informing Others
Kim, Sunae; Paulus, Markus; Sodian, Beate; Proust, Joelle
2016-01-01
Prior research suggests that young children selectively inform others depending on others’ knowledge states. Yet, little is known whether children selectively inform others depending on their own knowledge states. To explore this issue, we manipulated 3- to 4-year-old children’s knowledge about the content of a box and assessed the impact on their decisions to inform another person. Moreover, we assessed the presence of uncertainty gestures while they inform another person in light of the suggestions that children's gestures reflect early developing, perhaps transient, epistemic sensitivity. Finally, we compared children’s performance in the informing context to their explicit verbal judgment of their knowledge states to further confirm the existence of a performance gap between the two tasks. In their decisions to inform, children tend to accurately assess their ignorance, whereas they tend to overestimate their own knowledge states when asked to explicitly report them. Moreover, children display different levels of uncertainty gestures depending on the varying degrees of their informational access. These findings suggest that children’s implicit awareness of their own ignorance may be facilitated in a social, communicative context. PMID:27023683
Examining the reaction of monetary policy to exchange rate changes: A nonlinear ARDL approach
NASA Astrophysics Data System (ADS)
Manogaran, Lavaneesvari; Sek, Siok Kun
2017-04-01
Previous studies have shown that exchange rate changes can have significant impacts on macroeconomic performance. Excessive fluctuation of the exchange rate may lead to economic instability. Hence, the monetary policy rule tends to react to exchange rate changes, especially in emerging economies where policy-makers tend to limit exchange rate movement through interventions. In this study, we investigate how the monetary policy rule reacts to exchange rate changes. The nonlinear autoregressive distributed lag (NARDL) model is applied to capture the asymmetric effect of exchange rate changes on the monetary policy reaction function (interest rate). We focus on the ASEAN5 countries (Indonesia, Malaysia, Philippines, Thailand and Singapore). The results indicate the existence of an asymmetric long-run effect of exchange rate changes on the monetary reaction function for all ASEAN5 countries. In the majority of cases, monetary policy reacts to both appreciation and depreciation of the exchange rate by raising the policy rate, affirming the intervention of policymakers consistent with 'fear of floating' behaviour.
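The core NARDL device is to split a regressor's changes into positive and negative partial sums, so that appreciations and depreciations can carry different coefficients. The Python sketch below (simulated data, hypothetical coefficients) shows the decomposition and a crude asymmetry check; a real NARDL analysis would add lag structure, an error-correction form, and a formal Wald test.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 400

# Simulated exchange-rate changes and their positive/negative partial sums:
# the partial-sum decomposition at the heart of NARDL
de = rng.normal(0.0, 1.0, T)
e_pos = np.cumsum(np.maximum(de, 0.0))   # cumulative appreciations
e_neg = np.cumsum(np.minimum(de, 0.0))   # cumulative depreciations

# Hypothetical long-run policy rule: the rate responds by 0.5 per unit of
# appreciation but only 0.2 per unit of depreciation (asymmetric response)
i = 1.0 + 0.5 * e_pos + 0.2 * e_neg + rng.normal(0.0, 0.3, T)

X = np.column_stack([np.ones(T), e_pos, e_neg])
beta, *_ = np.linalg.lstsq(X, i, rcond=None)
b_pos, b_neg = beta[1], beta[2]

asymmetric = abs(b_pos - b_neg) > 0.1    # crude check, not a Wald test
```

Because b_pos and b_neg are estimated separately, the fitted rule can, as in the abstract, raise the policy rate in response to movements in either direction while still responding with different strength.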
The male handicap: male-biased mortality explains skewed sex ratios in brown trout embryos.
Morán, P; Labbé, L; Garcia de Leaniz, C
2016-12-01
Juvenile sex ratios are often assumed to be equal for many species with genetic sex determination, but this has rarely been tested in fish embryos due to their small size and absence of sex-specific markers. We artificially crossed three populations of brown trout and used a recently developed genetic marker for sexing the offspring of both pure and hybrid crosses. Sex ratios (SR = proportion of males) varied widely one month after hatching ranging from 0.15 to 0.90 (mean = 0.39 ± 0.03). Families with high survival tended to produce balanced or male-biased sex ratios, but SR was significantly female-biased when survival was low, suggesting that males sustain higher mortality during development. No difference in SR was found between pure and hybrid families, but the existence of sire × dam interactions suggests that genetic incompatibility may play a role in determining sex ratios. Our findings have implications for animal breeding and conservation because skewed sex ratios will tend to reduce effective population size and bias selection estimates. © 2016 The Authors.
Dahl, Aaron; Sinha, Madhumita; Rosenberg, David I; Tran, Melissa; Valdez, André
2015-05-01
Effective physician-patient communication is critical to the clinical decision-making process. We studied parental recall of information provided during an informed consent discussion process before performance of emergency medical procedures in a pediatric emergency department of an inner-city hospital with a large bilingual population. Fifty-five parent/child dyads undergoing emergency medical procedures were surveyed prospectively in English/Spanish postprocedure for recall of informed consent information. Exact logistic regression was used to predict the ability to name a risk, benefit, and alternative to the procedure based on a parent's language, education, and acculturation. Among English-speaking parents, there tended to be higher proportions that could name a risk, benefit, or alternative. Our regression models showed overall that the parents with more than a high school education tended to have nearly 5 times higher odds of being able to name a risk. A gap in communication may exist between physicians and patients (or parents of patients) during the consent-taking process, and this gap may be impacted by socio-demographic factors such as language and education level.
Analysis of ramming settlement based on dissipative principle
NASA Astrophysics Data System (ADS)
Fu, Hao; Yu, Kaining; Chen, Changli; Li, Changrong; Wang, Xiuli
2018-03-01
The deformation of soil under dynamic compaction is a kind of dissipative structure. The macroscopic manifestation of the soil's evolution toward a steady state is the change in ramming settlement during dynamic compaction. Based on the existing solution of the dynamic compaction boundary problem, we calculate the ramming effectiveness (W) and the ramming efficiency coefficient (η). For the same soil, the ramming efficiency coefficient is related to the ramming factor λ = M/(ρr³). Using the dissipative principle to analyse the relationship between ramming settlement and the number of ramming blows under different ramming energies and soil densities, we reach the following conclusions. First, as the number of ramming blows increases, both the ramming settlement and the ramming effectiveness coefficient tend toward stable values. Second, for the same single-blow ramming energy, the soil density before ramming affects the effectiveness of the early blows but has almost no effect on subsequent blows. Third, for the same soil density, different ramming energies correspond to different steady states, and both the cumulative ramming settlement and the steady state increase with ramming energy.
Age-related differences in hair trace elements: a cross-sectional study in Orenburg, Russia.
Skalnaya, Margarita G; Tinkov, Alexey A; Demidov, Vasily A; Serebryansky, Eugeny P; Nikonorov, Alexandr A; Skalny, Anatoly V
2016-09-01
Age-related differences in the trace element content of hair have been reported; however, some discrepancies in the data exist. The primary objective of this study was to estimate the change in hair trace element content in relation to age. Six hundred and eighteen women and 438 men aged 10-59 years took part in the current cross-sectional study. Hair Cr, Mn, Ni, Si, Al, As, Be, Cd and Pb tended to decrease with age in the female sample, whereas hair Cu, Fe, I, Se, Li and Sn were characterised by an age-associated increase. Hair levels of Cr, Cu, I, Mn, Ni, Si and Al in men decreased with age, whereas hair Co, Fe, Se, Cd, Li and Pb content tended to increase. Hair mercury increased in association with age in both men and women, whereas hair vanadium was characterised by a significant decrease in both sexes. The difference in hair trace element content between men and women decreased with age. These data suggest that age-related differences in trace element status may have a direct implication in the ageing process.
Predictors of how often and when people fall in love.
Galperin, Andrew; Haselton, Martie
2010-01-19
A leading theory of romantic love is that it functions to make one feel committed to one's beloved, as well as to signal this commitment to the beloved (Frank, 1988). Because women tend to be skeptical of men's commitment, this view entails that men may have evolved to fall in love first, in order to show their commitment to women. Using a sample of online participants of a broad range of ages, this study tested this sex difference and several related individual difference hypotheses concerning the ease of falling in love. There was mixed evidence for sex differences: only some measures indicated that men are generally more love-prone than are women. We also found that men were more prone to falling in love if they tended to overestimate women's sexual interest and highly valued physical attractiveness in potential partners. Women were more prone to falling in love if they had a stronger sex drive. These results provide modest support for the existence of sex differences in falling in love, as well as initial evidence for links between several individual difference variables and the propensity to fall in love.
Experiential avoidance, self-compassion, self-judgment and coping styles in infertility.
Cunha, Marina; Galhardo, Ana; Pinto-Gouveia, José
2016-12-01
This study sought to explore differences in emotion regulation processes (psychological inflexibility/experiential avoidance, self-judgment and self-compassion) and coping styles (emotional/detached, avoidant and rational) in three different groups of couples: 120 fertile couples (FG), 147 couples with an infertility diagnosis who were pursuing medical treatment for their fertility problem(s) (IG), and 59 couples with infertility applying for adoption (AG). Cross-sectional survey, using the couple as the unit of analysis. Participants filled in paper-and-pencil questionnaires assessing coping styles, psychological inflexibility/experiential avoidance, self-judgment and self-compassion. IG couples, and particularly women, tended to use more experiential avoidance and self-judgment mechanisms and less of an emotional/detached coping style. When compared to FG couples, IG and AG couples tended to apply more avoidant coping strategies. AG couples showed higher self-compassion. Findings suggest that emotion regulation processes may be an important target in psychological interventions for patients dealing with infertility and with the demands of medical treatment. Copyright © 2016 Elsevier B.V. All rights reserved.
Quasi-Classical Asymptotics for the Pauli Operator
NASA Astrophysics Data System (ADS)
Sobolev, Alexander V.
We study the behaviour of the sums of the eigenvalues of the Pauli operator in a magnetic field and an electric field V(x) as the Planck constant ħ tends to zero and the magnetic field strength μ tends to infinity. We show that the sum obeys a natural Weyl-type formula.
Assessing Data Quality in Emergent Domains of Earth Sciences
NASA Astrophysics Data System (ADS)
Darch, P. T.; Borgman, C.
2016-12-01
As earth scientists seek to study known phenomena in new ways, and to study new phenomena, they often develop new technologies and new methods such as embedded network sensing, or reapply extant technologies, such as seafloor drilling. Emergent domains are often highly multidisciplinary as researchers from many backgrounds converge on new research questions. They may adapt existing methods, or develop methods de novo. As a result, emerging domains tend to be methodologically heterogeneous. As these domains mature, pressure to standardize methods increases. Standardization promotes trust, reliability, accuracy, and reproducibility, and simplifies data management. However, for standardization to occur, researchers must be able to assess which of the competing methods produces the highest quality data. The exploratory nature of emerging domains discourages standardization. Because competing methods originate in different disciplinary backgrounds, their scientific credibility is difficult to compare. Instead of direct comparison, researchers attempt to conduct meta-analyses. Scientists compare datasets produced by different methods to assess their consistency and efficiency. This paper presents findings from a long-term qualitative case study of research on the deep subseafloor biosphere, an emergent domain. A diverse community converged on the study of microbes in the seafloor and those microbes' interactions with the physical environments they inhabit. Data on this problem are scarce, leading to calls for standardization as a means to acquire and analyze greater volumes of data. Lacking consistent methods, scientists attempted to conduct meta-analyses to determine the most promising methods on which to standardize. Among the factors that inhibited meta-analyses were disparate approaches to metadata and to curating data. Datasets may be deposited in a variety of databases or kept on individual scientists' servers. 
Associated metadata may be inconsistent or hard to interpret. Incentive structures, including prospects for journal publication, often favor new data over reanalyzing extant datasets. Assessing data quality in emergent domains is extremely difficult and will require adaptations in infrastructure, culture, and incentives.
Yamanishi, Mamoru; Ito, Yoichiro; Kintaka, Reiko; Imamura, Chie; Katahira, Satoshi; Ikeuchi, Akinori; Moriya, Hisao; Matsuyama, Takashi
2013-06-21
The terminator regions of eukaryotes encode functional elements in the 3' untranslated region (3'-UTR) that influence the 3'-end processing of mRNA, mRNA stability, and translational efficiency, which can modulate protein production. However, the contribution of these terminator regions to gene expression remains unclear, and therefore their utilization in metabolic engineering or synthetic genetic circuits has been limited. Here, we comprehensively evaluated the activity of 5302 terminator regions from a total of 5880 genes in the budding yeast Saccharomyces cerevisiae by inserting each terminator region downstream of the P(TDH3)-driven green fluorescent protein (GFP) reporter gene and measuring the fluorescence intensity of GFP. Terminator region activities relative to that of the PGK1 standard terminator ranged from 0.036 to 2.52, with a mean of 0.87. We thus could isolate the most and least active terminator regions. The activities of the terminator regions showed a positive correlation with mRNA abundance, indicating that the terminator region is a determinant of mRNA abundance. The least active terminator regions tended to encode longer 3'-UTRs, suggesting the existence of active degradation mechanisms for those mRNAs. The terminator regions of ribosomal protein genes tended to be the most active, suggesting the existence of a common regulator of those genes. The "terminatome" (the genome-wide set of terminator regions) thus not only provides valuable information for understanding the modulatory roles of terminator regions in gene expression but also serves as a useful toolbox for the development of metabolically and genetically engineered yeast.
Alertness Modulates Conflict Adaptation and Feature Integration in an Opposite Way
Chen, Jia; Huang, Xiting; Chen, Antao
2013-01-01
Previous studies show that the congruency sequence effect can result from both the conflict adaptation effect (CAE) and the feature integration effect, which can be observed as the repetition priming effect (RPE) and the feature overlap effect (FOE) depending on the experimental conditions. Evidence from neuroimaging studies suggests a close correlation between the neural mechanisms of alertness-related modulations and the congruency sequence effect. However, little is known about whether and how alertness mediates the congruency sequence effect. In Experiment 1, the Attentional Networks Test (ANT) and a modified flanker task were used to evaluate whether the alertness component of the attentional functions correlates with the CAE and the RPE. In Experiment 2, the ANT and another modified flanker task were used to investigate whether alertness correlates with the CAE and the FOE. In Experiment 1, correlation analysis revealed a significant positive correlation between alertness and the CAE, and a negative correlation between alertness and the RPE. Moreover, a significant negative correlation existed between the CAE and the RPE. In Experiment 2, we found a marginally significant negative correlation between the CAE and the RPE, but the correlations between alertness and the FOE, and between the CAE and the FOE, were not significant. These results suggest that alertness modulates conflict adaptation and feature integration in opposite ways. Participants in the high-alertness group may tend to use a top-down cognitive processing strategy, whereas participants in the low-alertness group tend to use a bottom-up processing strategy. PMID:24250824
Street choice logit model for visitors in shopping districts.
Kawada, Ko; Yamada, Takashi; Kishimoto, Tatsuya
2014-09-01
In this study, we propose two models for predicting people's activity. The first is a pedestrian distribution prediction (or postdiction) model built by multiple regression analysis using space syntax indices of the urban fabric and people distribution data obtained from a field survey. The second is a street choice model for visitors using a multinomial logit model. We performed a questionnaire survey in the field to investigate the strolling routes of 46 visitors and obtained a total of 1211 street choices along their routes. We proposed a utility function, a sum of weighted space syntax indices and other indices, and estimated the weight parameters by maximum likelihood. These models consider street networks, distance from the destination, direction of the street choice, and other spatial compositions (numbers of pedestrians, cars, and shops, and elevation). The first model explains the characteristics of streets where many people tend to walk or stay. The second model explains the mechanism underlying visitors' street choices and clarifies the differences in the weights of street choice parameters among various attributes, such as gender, existence of destinations, number of people, etc. For all the attributes considered, the influences of DISTANCE and DIRECTION are strong. On the other hand, the influences of Int.V, SHOPS, CARS, ELEVATION, and WIDTH differ for each attribute. People with defined destinations tend to choose streets that "have more shops, and are wider and lower". In contrast, people with undefined destinations tend to choose streets of high Int.V. The choice of males is affected by Int.V, SHOPS, WIDTH (positive) and CARS (negative). Females prefer streets that have many shops, and couples tend to choose downhill streets. The behavior of individual persons is affected by all variables. The behavior of people visiting in groups is affected by SHOPS and WIDTH (positive).
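The multinomial logit formulation underlying the street choice model can be sketched in a few lines: the probability of choosing street i is P(i) = exp(U_i) / Σ_j exp(U_j), where U_i is a weighted sum of street attributes. This is a minimal illustration; the attribute values and weights below are hypothetical placeholders, not the maximum-likelihood estimates reported in the study.

```python
import math

def street_choice_probs(streets, weights):
    """Multinomial logit: P(i) = exp(U_i) / sum_j exp(U_j),
    with U_i a weighted sum of each candidate street's attributes."""
    utilities = [sum(weights[k] * s[k] for k in weights) for s in streets]
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate streets at one intersection (values illustrative)
streets = [
    {"distance": 0.2, "direction": 0.9, "shops": 5, "width": 8.0},
    {"distance": 0.5, "direction": 0.1, "shops": 2, "width": 4.0},
    {"distance": 0.1, "direction": 0.4, "shops": 8, "width": 6.0},
]
# Hypothetical weights (the paper estimates these by maximum likelihood)
weights = {"distance": -1.5, "direction": 2.0, "shops": 0.1, "width": 0.05}

probs = street_choice_probs(streets, weights)
```

With a negative weight on distance and a positive weight on direction, streets that are closer to and better aligned with the destination receive higher choice probabilities, mirroring the strong DISTANCE and DIRECTION effects reported above.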
Generating Sudoku puzzles and its applications in teaching mathematics
NASA Astrophysics Data System (ADS)
Evans, Ryan; Lindner, Brett; Shi, Yixun
2011-07-01
This article presents a few methods for generating Sudoku puzzles. These methods are developed from the concepts of matrices, permutations, and modular functions, and can therefore be used to form application examples or student projects when teaching various mathematics courses. Mathematical properties of these methods are studied, connections between the methods are investigated, and student projects are suggested. Since most students tend to enjoy games, studies like this may help raise students' interest and enhance their problem-solving skills.
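One well-known construction in the spirit described, combining a modular base pattern with row, column, band, stack, and digit permutations, can be sketched as follows. This illustrates the general idea only and is not necessarily one of the article's exact methods.

```python
import random

def generate_sudoku_solution(seed=None):
    """Generate a complete, valid 9x9 Sudoku grid from a modular pattern
    plus permutations of bands, stacks, rows, columns, and digits."""
    rng = random.Random(seed)
    base, side = 3, 9

    def pattern(r, c):
        # This modular shift pattern guarantees a valid baseline grid
        return (base * (r % base) + r // base + c) % side

    def shuffled(seq):
        return rng.sample(seq, len(seq))

    groups = range(base)
    # Permute rows within each band and the bands themselves (same for columns)
    rows = [g * base + r for g in shuffled(groups) for r in shuffled(groups)]
    cols = [g * base + c for g in shuffled(groups) for c in shuffled(groups)]
    nums = shuffled(range(1, side + 1))  # relabel the digits
    return [[nums[pattern(r, c)] for c in cols] for r in rows]

grid = generate_sudoku_solution(seed=0)
```

Removing cells from the completed grid (while checking that a unique solution remains) then yields a playable puzzle, which makes a natural follow-on student project.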
The Effect of Cluster Sampling Design in Survey Research on the Standard Error Statistic.
ERIC Educational Resources Information Center
Wang, Lin; Fan, Xitao
Standard statistical methods are used to analyze data that are assumed to have been collected through simple random sampling. These methods, however, tend to underestimate variance when the data are collected with a cluster design, which is often found in educational survey research. The purposes of this paper are to demonstrate how a cluster design…
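The variance underestimation can be illustrated with the Kish design effect, deff = 1 + (m - 1)ρ, which inflates a simple-random-sampling standard error to account for clustering. The class size and intraclass correlation below are hypothetical values chosen for illustration.

```python
import math

def design_effect(cluster_size, icc):
    """Kish design effect for equal-size clusters:
    deff = 1 + (m - 1) * rho, where m is the cluster size and
    rho is the intraclass correlation coefficient."""
    return 1 + (cluster_size - 1) * icc

def cluster_adjusted_se(srs_se, cluster_size, icc):
    """Inflate a simple-random-sampling SE to reflect the cluster design."""
    return srs_se * math.sqrt(design_effect(cluster_size, icc))

# Hypothetical survey: 25 students per sampled classroom, ICC of 0.10
naive_se = 0.50
adjusted = cluster_adjusted_se(naive_se, 25, 0.10)
```

Even a modest intraclass correlation of 0.10 with 25 students per classroom gives deff = 3.4, so the cluster-adjusted standard error is roughly 1.84 times the naive one, which is exactly the underestimation the paper warns about.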
Beef customer satisfaction: factors affecting consumer evaluations of clod steaks.
Goodson, K J; Morgan, W W; Reagan, J O; Gwartney, B L; Courington, S M; Wise, J W; Savell, J W
2002-02-01
An in-home beef study evaluated consumer ratings of clod steaks (n = 1,264) as influenced by USDA quality grade (Top Choice, Low Choice, High Select, and Low Select), city (Chicago and Philadelphia), consumer segment (Beef Loyals, who are heavy consumers of beef; Budget Rotators, who are cost-driven and split meat consumption between beef and chicken; and Variety Rotators, who have higher incomes and education and split their meat consumption among beef, poultry, and other foods), degree of doneness, and cooking method. Consumers evaluated each steak for Overall Like, Tenderness, Juiciness, Flavor Like, and Flavor Amount using 10-point scales. Grilling was the predominant cooking method used, and steaks were cooked to medium-well and greater degrees of doneness. Interactions existed involving the consumer-controlled factors of degree of doneness and(or) cooking method for all consumer-evaluated traits for the clod steak (P < 0.05). USDA grade did not affect any consumer evaluation traits or Warner-Bratzler shear force values (P > 0.05). One significant main effect, segment (P = 0.006), and one significant interaction, cooking method x city (P = 0.0407), existed for Overall Like ratings. Consumers in the Beef Loyals segment rated clod steaks higher in Overall Like than the other segments. Consumers in Chicago tended to give more uniform Overall Like ratings to clod steaks cooked by various methods; however, consumers in Philadelphia gave among the highest ratings to clod steaks that were fried and among the lowest to those that were grilled. Additionally, although clod steaks that were fried were given generally high ratings by consumers in Philadelphia, consumers in Chicago rated clod steaks cooked in this manner significantly lower than those in Philadelphia. Conversely, consumers in Chicago rated clod steaks that were grilled significantly higher than consumers in Philadelphia. 
Correlation and stepwise regression analyses indicated that Flavor Like was driving customer satisfaction of the clod steak. Flavor Like was the sensory trait most highly correlated to Overall Like, followed by Tenderness, Flavor Amount, and Juiciness. Flavor Like was the first variable to enter into the stepwise regression equation for predicting Overall Like, followed by Tenderness and Flavor Amount. For the clod steak, it is likely that preparation techniques that improve flavor without reducing tenderness positively affect customer satisfaction.
Robustness of fit indices to outliers and leverage observations in structural equation modeling.
Yuan, Ke-Hai; Zhong, Xiaoling
2013-06-01
Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Halabi, Gh; Bulanova, N; Aleksandrova, S; Ivanov, G; Aleksandrova, M
2018-05-01
Objective: to assess the seasonal variation of microvolt T-wave alternans in ECG dispersion mapping in patients with cardiovascular disease and in healthy subjects. ECG data from three groups of healthy subjects were compared: inhabitants of Beirut, Lebanon (n=51), inhabitants of Moscow, Russia (n=94), and healthy subjects (n=44) from the testing ECG database of the PTB (The National Metrology Institute of Germany), as well as a group of patients with cardiovascular disease (n=138), inhabitants of Beirut, Lebanon. Microvolt T-wave alternans in ECG dispersion mapping was evaluated at three points: Tbeginning, Tmaximum, and Tend. In healthy subjects, there was no seasonal variation in the microvolt T-wave alternans of ECG dispersion mapping. Myocardial lesion is characterized by an increase in Tbeg, Tmax and Tend relative to healthy individuals. Tbeg values are minimal in winter and summer and increase in spring and autumn. Tend values show the reverse pattern: they are maximal in winter and summer and decrease in the spring-autumn period. Seasonal variation of Tmax - Tbeg and Tmax - Tend was detected: Tmax - Tbeg increased in the winter-summer period and decreased in spring and autumn, whereas Tmax - Tend increased in the spring-autumn period relative to the winter-summer period. In patients with cardiovascular disease, in contrast to healthy subjects, there is seasonal variation in the microvolt T-wave alternans of ECG dispersion mapping, with maximum differences in the winter and spring seasons, which should be taken into account when applying the method in clinical practice.
Self-acceleration and matter content in bicosmology from Noether symmetries
NASA Astrophysics Data System (ADS)
Bouhmadi-López, Mariam; Capozziello, Salvatore; Martín-Moruno, Prado
2018-04-01
In bigravity, when taking into account the potential existence of matter fields minimally coupled to the second gravitational sector, the dynamics of our Universe depends on matter that cannot be observed directly. In this paper, we assume the existence of a Noether symmetry in bigravity cosmologies in order to constrain the dynamics of that matter. By imposing this assumption we obtain cosmological models with interesting phenomenology. In fact, considering that our Universe is filled with standard matter and radiation, we show that the existence of a Noether symmetry implies that either the dynamics of the second sector decouples, in which case the model is equivalent to general relativity (GR), or the cosmological evolution of our Universe tends to a de Sitter state whose vacuum energy is given by the conserved quantity associated with the symmetry. The physical consequences of the genuine bigravity models obtained are briefly discussed. We also point out that the first model, which is equivalent to GR, may be favored due to the potential appearance of instabilities in the second model.
Semantic Segmentation of Forest Stands of Pure Species as a Global Optimization Problem
NASA Astrophysics Data System (ADS)
Dechesne, C.; Mallet, C.; Le Bris, A.; Gouet-Brunet, V.
2017-05-01
Forest stand delineation is a fundamental task for forest management purposes that is still mainly performed manually, through visual inspection of (very) high spatial resolution geospatial images. Stand detection has been barely addressed in the literature, which, for forested environments, has mainly focused on individual tree extraction and tree species classification. From a methodological point of view, stand detection can be considered a semantic segmentation problem. This offers two advantages. First, one can retrieve the dominant tree species per segment. Secondly, one can benefit from existing low-level tree species label maps from the literature as a basis for high-level object extraction. The semantic segmentation issue thus becomes a regularization issue in a weakly structured environment and can be formulated in an energy-based framework. This paper aims at investigating which regularization strategies from the literature are best adapted to delineating and classifying forest stands of pure species. Both airborne lidar point clouds and multispectral very high spatial resolution images are integrated for that purpose. Local methods (such as filtering and probabilistic relaxation) are not well suited to this problem, since they increase the classification accuracy by less than 5%. Global methods, based on an energy model, tend to be more efficient, with accuracy gains of up to 15%. The segmentation results using such models have an accuracy ranging from 96% to 99%.
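The energy-based regularization idea can be sketched minimally with iterated conditional modes (ICM) on a Potts model, where the energy combines a per-pixel data term with a pairwise smoothness penalty. This is an illustrative stand-in only; the energy models and optimizers actually compared in the paper may differ.

```python
def icm_regularize(unary, n_labels, beta=1.0, iters=5):
    """Iterated Conditional Modes on a Potts model:
    E = sum_p unary[p][l_p] + beta * sum_{p~q} [l_p != l_q].
    unary: 2-D grid of per-pixel label cost lists (lower = better fit)."""
    h, w = len(unary), len(unary[0])
    # Initialize with the per-pixel minimum-cost label (no smoothing)
    labels = [[min(range(n_labels), key=lambda l: unary[r][c][l])
               for c in range(w)] for r in range(h)]
    for _ in range(iters):
        for r in range(h):
            for c in range(w):
                neigh = [labels[rr][cc]
                         for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                         if 0 <= rr < h and 0 <= cc < w]

                def energy(l):
                    # Data term plus Potts disagreement with 4-neighbours
                    return unary[r][c][l] + beta * sum(l != n for n in neigh)

                labels[r][c] = min(range(n_labels), key=energy)
    return labels

# Hypothetical 3x3 low-level species scores with one noisy centre pixel
unary = [[[0.1, 0.9]] * 3,
         [[0.1, 0.9], [0.8, 0.2], [0.1, 0.9]],  # centre pixel prefers label 1
         [[0.1, 0.9]] * 3]
smoothed = icm_regularize(unary, n_labels=2, beta=0.5)
```

The smoothness term flips the isolated centre pixel to agree with its neighbours, which is the regularizing behaviour that turns a noisy low-level species map into coherent stands.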
Banding of NMR-derived Methyl Order Parameters: Implications for Protein Dynamics
Sharp, Kim A.; Kasinath, Vignesh; Wand, A. Joshua
2014-01-01
Our understanding of protein folding, stability and function has begun to more explicitly incorporate dynamical aspects. Nuclear magnetic resonance has emerged as a powerful experimental method for obtaining comprehensive site-resolved insight into protein motion. It has been observed that methyl-group motion tends to cluster into three "classes" when expressed in terms of the popular Lipari-Szabo model-free squared generalized order parameter. Here the origins of the three classes, or bands, in the distribution of order parameters are examined. As a first step, a Bayesian-based approach, which makes no a priori assumption about the existence or number of bands, is developed to detect the banding of O²axis values derived either from NMR experiments or from molecular dynamics simulations. The analysis is applied to seven proteins, with extensive molecular dynamics simulations of these proteins in explicit water used to examine the relationship between O² and fine details of the motion of methyl-bearing side chains. All of the proteins studied display banding, with some subtle differences. We propose a very simple yet plausible physical mechanism for banding. Finally, our Bayesian method is used to analyze the measured distributions of methyl group motions in the catabolite activator protein and several of its mutants in various liganded states, and to discuss the functional implications of the observed banding for protein dynamics and function. PMID:24677353
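A simple stand-in for detecting bands without fixing their number in advance is to fit one-dimensional Gaussian mixtures to the order-parameter values and select the component count by BIC. This sketches the idea only; the paper's Bayesian procedure is its own method, and the synthetic data and band centres below are hypothetical.

```python
import math
import random

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gmm_em_1d(data, k, iters=200, seed=0):
    """Plain EM for a 1-D Gaussian mixture; returns the final log-likelihood."""
    rng = random.Random(seed)
    mus = rng.sample(data, k)
    sigmas = [max(1e-3, (max(data) - min(data)) / (2 * k))] * k
    pis = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            dens = [p * norm_pdf(x, m, s) for p, m, s in zip(pis, mus, sigmas)]
            tot = max(sum(dens), 1e-300)
            resp.append([d / tot for d in dens])
        # M-step: update weights, means and variances
        for j in range(k):
            nj = max(sum(r[j] for r in resp), 1e-12)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)
            pis[j] = nj / len(data)
    return sum(math.log(max(sum(p * norm_pdf(x, m, s)
                                for p, m, s in zip(pis, mus, sigmas)), 1e-300))
               for x in data)

def best_n_bands(data, max_k=3):
    """Choose the number of bands by BIC = -2*ll + (3k - 1)*ln(n)."""
    bics = {k: -2 * gmm_em_1d(data, k) + (3 * k - 1) * math.log(len(data))
            for k in range(1, max_k + 1)}
    return min(bics, key=bics.get)

# Synthetic order parameters clustered around three illustrative band centres
rng = random.Random(1)
o2 = [rng.gauss(centre, 0.03) for centre in (0.2, 0.55, 0.85) for _ in range(40)]
n_bands = best_n_bands(o2)
```

Because BIC penalizes extra components, this kind of model selection only reports multiple bands when the data genuinely support them, which is the spirit of making no a priori assumption about band count.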
A hybrid CNN feature model for pulmonary nodule malignancy risk differentiation.
Wang, Huafeng; Zhao, Tingting; Li, Lihong Connie; Pan, Haixia; Liu, Wanquan; Gao, Haoqi; Han, Fangfang; Wang, Yuehai; Qi, Yifan; Liang, Zhengrong
2018-01-01
The malignancy risk differentiation of pulmonary nodules is one of the most challenging tasks in computer-aided diagnosis (CADx). Most recently reported CADx methods or schemes based on texture and shape estimation have shown relatively satisfactory performance in differentiating the malignancy risk level of nodules detected in lung cancer screening. However, existing CADx schemes tend to detect and analyze characteristics of pulmonary nodules from a statistical perspective according to local features only. Enlightened by the currently prevailing learning ability of convolutional neural networks (CNNs), which simulate human neural networks for target recognition, and by our previous research on texture features, we present a hybrid model that takes both global and local features into consideration for pulmonary nodule differentiation, using the largest public database, founded by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). By comparing three types of CNN models, two of which were newly proposed by us, we observed that the multi-channel CNN model yielded the best discrimination capacity for differentiating the malignancy risk of nodules based on the projection of distributions of extracted features. Moreover, the CADx scheme using the new multi-channel CNN model outperformed our previously developed CADx scheme using the 3D texture feature analysis method, increasing the computed area under the receiver operating characteristic curve (AUC) from 0.9441 to 0.9702.
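The global-plus-local fusion idea can be illustrated structurally with a toy example: pooled responses of local convolution kernels concatenated with a global intensity summary of the patch. This is purely illustrative; the patch, kernels, and fusion below are hypothetical and are not the paper's trained multi-channel CNN.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation on nested lists (a local feature map)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def global_avg_pool(fmap):
    vals = [v for row in fmap for v in row]
    return sum(vals) / len(vals)

def fused_features(nodule_patch, kernels):
    """Hypothetical fusion: per-kernel pooled local responses concatenated
    with a global intensity summary of the whole patch."""
    local = [global_avg_pool(conv2d(nodule_patch, k)) for k in kernels]
    global_feat = global_avg_pool(nodule_patch)
    return local + [global_feat]

# Toy 4x4 "CT patch" and two hand-set kernels (illustrative only)
patch = [[0, 1, 1, 0],
         [1, 2, 2, 1],
         [1, 2, 2, 1],
         [0, 1, 1, 0]]
edge = [[1, -1], [1, -1]]   # responds to horizontal intensity change
blur = [[0.25, 0.25], [0.25, 0.25]]  # local average
features = fused_features(patch, [edge, blur])
```

In a real multi-channel network the local responses come from learned convolutional channels and the classifier is trained end to end, but the structural point is the same: local texture-like responses and a global summary enter the feature vector together.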