NASA Technical Reports Server (NTRS)
Tick, Evan
1987-01-01
This note describes an efficient software emulator for the Warren Abstract Machine (WAM) Prolog architecture. The version of the WAM implemented is called Lcode. The Lcode emulator, written in C, executes the 'naive reverse' benchmark at 3900 LIPS. The emulator is one of a set of tools used to measure the memory-referencing characteristics and performance of Prolog programs. These tools include a compiler, assembler, and memory simulators. An overview of the Lcode architecture is given here, followed by a description and listing of the emulator code implementing each Lcode instruction. This note will be of special interest to those studying the WAM and its performance characteristics. In general, this note will be of interest to those creating efficient software emulators for abstract machine architectures.
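For reference, the LIPS figure is logical inferences divided by elapsed time, conventionally measured with the 'naive reverse' benchmark on a 30-element list, which performs 496 logical inferences per call. A minimal Python sketch of the arithmetic (the iteration count and timing below are illustrative, not taken from the note):

```python
# Sketch of the LIPS (logical inferences per second) metric quoted above.
# 496 is the conventional inference count for nrev/2 on a 30-element list;
# the elapsed time and iteration count below are illustrative values.

NREV_INFERENCES = 496

def lips(elapsed_seconds: float, iterations: int = 1) -> float:
    """Logical inferences per second over repeated nrev benchmark runs."""
    return NREV_INFERENCES * iterations / elapsed_seconds

# ~3900 LIPS corresponds to, e.g., 10,000 nrev calls in about 1272 seconds.
print(round(lips(elapsed_seconds=1272.0, iterations=10_000)))  # -> 3899
```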
NASA Technical Reports Server (NTRS)
1997-01-01
Vern Wedeven, president of Wedeven Associates, developed the WAM4, a computer-aided "smart" test machine for simulating stress on equipment, based on his bearing lubrication expertise gained while working for Lewis Research Center. During his NASA years from the 1970s into the early 1980s, Wedeven initiated an "Interdisciplinary Collaboration in Tribology," an effort that involved NASA, six universities, and several university professors. The NASA-sponsored work provided the foundation for Wedeven to form his own company in 1983. Several versions of the smart test machine, the WAM1, WAM2, and WAM3, preceded the current version, WAM4. This computer-controlled device can provide detailed glimpses at gear and bearing points of contact. WAM4 can yield a three-dimensional view of machinery as an operator adds "what-if" thermal and lubrication conditions, contact stress, and surface motion. Along with NASA, a number of firms, including Pratt & Whitney, Caterpillar Tractor, Exxon, and Chevron, have approached Wedeven for help in resolving lubrication problems.
Feature reduction and payload location with WAM steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.; Lubenko, Ivans
2009-02-01
WAM steganalysis is a feature-based classifier for detecting LSB matching steganography, presented in 2006 by Goljan et al. and demonstrated to be sensitive even to small payloads. This paper makes three contributions to the development of the WAM method. First, we benchmark some variants of WAM on a number of sets of cover images, and we are able to quantify the significance of differences in results between different machine learning algorithms based on WAM features. It turns out that, like many of its competitors, WAM is not effective in certain types of cover, and furthermore it is hard to predict which types of cover are suitable for WAM steganalysis. Second, we demonstrate that only a few of the features used in WAM steganalysis do almost all of the work, so that a simplified WAM steganalyser can be constructed in exchange for a little less detection power. Finally, we demonstrate how the WAM method can be extended to provide forensic tools to identify the location (and potentially content) of LSB matching payload, given a number of stego images with payload placed in the same locations. Although easily evaded, this is a plausible situation if the same stego key is mistakenly re-used for embedding in multiple images.
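To give a flavor of the feature family WAM belongs to (absolute moments of a wavelet-domain noise residual), here is a rough Python sketch; the wavelet choice, the Wiener denoiser, and the moment orders are illustrative assumptions, not the exact construction of Goljan et al.:

```python
# Rough sketch of WAM-style features: absolute central moments of the noise
# residual in each first-level wavelet detail subband. The db8 wavelet, the
# 3x3 Wiener denoiser and moment orders 1..9 are assumptions for illustration.
import numpy as np
import pywt
from scipy.signal import wiener

def wam_like_features(gray: np.ndarray, orders=range(1, 10)) -> np.ndarray:
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "db8")
    feats = []
    for band in (cH, cV, cD):
        residual = band - wiener(band, mysize=3)   # crude denoising residual
        centered = residual - residual.mean()
        feats.extend(np.mean(np.abs(centered) ** k) for k in orders)
    return np.array(feats)                         # 9 moments x 3 subbands
```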
Choi, Sangjun; Kang, Dongmug; Park, Donguk; Lee, Hyunhee; Choi, Bongkyoo
2017-03-01
The goal of this study is to develop a general population job-exposure matrix (GPJEM) on asbestos to estimate occupational asbestos exposure levels in the Republic of Korea. Three Korean domestic quantitative exposure datasets collected from 1984 to 2008 were used to build the GPJEM. Exposure groups in the collected data were reclassified based on the current Korean Standard Industrial Classification (9th edition) and the Korean Standard Classification of Occupations code (6th edition), which accords with international standards. All exposure levels were expressed as weighted arithmetic means (WAMs) with minimum and maximum concentrations. Based on the established GPJEM, the 112 exposure groups could be reclassified into 86 industries and 74 occupations. In the 1980s, the highest exposure levels were estimated in "knitting and weaving machine operators" with a WAM concentration of 7.48 fibers/mL (f/mL); in the 1990s, "plastic products production machine operators" with 5.12 f/mL; and in the 2000s, "detergents production machine operators" handling talc containing asbestos with 2.45 f/mL. Of the 112 exposure groups, 44 had higher WAM concentrations than the Korean occupational exposure limit of 0.1 f/mL. The newly constructed GPJEM, generated from actual domestic quantitative exposure data, could be useful in evaluating historical exposure levels to asbestos and could contribute to improved prediction of asbestos-related diseases among Koreans.
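The WAM here is an ordinary weighted arithmetic mean, with each dataset's mean exposure weighted by its number of measurements; a minimal sketch with invented numbers:

```python
# Minimal sketch of the weighted arithmetic mean (WAM) used in the GPJEM:
# each source dataset's mean exposure is weighted by its measurement count.
# The numbers are invented for illustration.
import numpy as np

means = np.array([7.48, 0.85, 2.10])   # mean concentrations, f/mL
n_samples = np.array([12, 40, 8])      # measurement counts used as weights

wam = np.average(means, weights=n_samples)
print(f"WAM = {wam:.2f} f/mL")
```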
Autonomous berthing/unberthing of a Work Attachment Mechanism/Work Attachment Fixture (WAM/WAF)
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Antrazi, Sami S.
1992-01-01
Discussed here is the autonomous berthing of a Work Attachment Mechanism/Work Attachment Fixture (WAM/WAF) developed by NASA for berthing and docking applications in space. The WAM/WAF system enables fast and reliable berthing (unberthing) of space hardware. A successful operation of the WAM/WAF requires that the WAM motor velocity be precisely controlled. The operating principle and the design of the WAM/WAF are described, as well as the development of a control system used to regulate the WAM motor velocity. The results of an experiment in which the WAM/WAF is used to handle an orbital replacement unit are given.
Descriptive Model of Generic WAMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hauer, John F.; DeSteese, John G.
The Department of Energy’s (DOE) Transmission Reliability Program is supporting the research, deployment, and demonstration of various wide area measurement system (WAMS) technologies to enhance the reliability of the Nation’s electrical power grid. Pacific Northwest National Laboratory (PNNL) was tasked by the DOE National SCADA Test Bed Program to conduct a study of WAMS security. This report represents achievement of the milestone to develop a generic WAMS model description that will provide a basis for the security analysis planned in the next phase of this study.
Impact of dynamical regionalization on precipitation biases and teleconnections over West Africa
NASA Astrophysics Data System (ADS)
Gómara, Iñigo; Mohino, Elsa; Losada, Teresa; Domínguez, Marta; Suárez-Moreno, Roberto; Rodríguez-Fonseca, Belén
2018-06-01
West African societies are highly dependent on the West African Monsoon (WAM). Thus, a correct representation of the WAM in climate models is of paramount importance. In this article, the ability of 8 CMIP5 historical General Circulation Models (GCMs) and 4 CORDEX-Africa Regional Climate Models (RCMs) to characterize the WAM dynamics and variability is assessed for the period July-August-September 1979-2004. Simulations are compared with observations. Uncertainties in RCM performance and lateral boundary conditions are assessed individually. Results show that both GCMs and RCMs have trouble simulating the northward migration of the Intertropical Convergence Zone in boreal summer. The greatest bias improvements are obtained after regionalization of the most inaccurate GCM simulations. To assess WAM variability, a Maximum Covariance Analysis is performed between Sea Surface Temperature and precipitation anomalies in observations, GCM and RCM simulations. The assessed variability patterns are: El Niño-Southern Oscillation (ENSO); the eastern Mediterranean (MED); and the Atlantic Equatorial Mode (EM). Evidence is given that regionalization of the ENSO-WAM teleconnection does not provide any added value. Unlike GCMs, RCMs are unable to precisely represent the ENSO impact on air subsidence over West Africa. By contrast, the simulation of the MED-WAM teleconnection is improved after regionalization. Humidity advection and convergence over the Sahel area are better simulated by RCMs. Finally, no robust conclusions can be drawn for the EM-WAM teleconnection, which cannot be isolated for the 1979-2004 period. The novel results in this article will help to select the most appropriate RCM simulations for studying WAM teleconnections.
The Study on the Communication Network of Wide Area Measurement System in Electricity Grid
NASA Astrophysics Data System (ADS)
Xiaorong, Cheng; Ying, Wang; Yangdan, Ni
Wide area measurement system (WAMS) is a fundamental part of security defense in the Smart Grid, and the communication system of WAMS is an important part of the electric power communication network. Where a large regional network is concerned, the real-time data transferred in the communication network of WAMS directly affects the safe operation of the power grid. Therefore, WAMS places high requirements for real-time performance, reliability and security on its communication network. In this paper, the architecture of the WAMS communication network was studied according to the seven-layer model of the open systems interconnection (OSI), and the network architecture was examined at every level. We explored the media of the WAMS communication network, the network communication protocols and network technology. Finally, the delay of the network was analyzed.
Suppadit, Tawadchai; Jaturasitha, Sanchai; Sunthorn, Napassawan; Poungsuk, Pukkapong
2012-10-01
Wolffia arrhiza meal (WAM) was evaluated as a protein replacement for soybean meal (SBM) in the diet of laying Japanese quails. A total of 480 4-week-old laying quails were randomly allocated to form six groups in a completely randomized design. Each group contained four replicates, with 20 quails per replicate. WAM was incorporated into the diets at levels of 0, 4.00, 8.00, 12.0, 16.0 and 20.0%. The results showed that feed intake per bird per day, daily egg-laying rate, feed cost per 100 egg weight, egg width, egg length, egg weight, eggshell thickness, yolk height and shell quality characteristics in terms of breaking time, Young's modulus, work, maximum force, fracturability, breaking stress, stiffness and power showed no statistically significant differences (P > 0.05) among the 0 to 16.0% levels of WAM. However, these performance measures were significantly lower with 20.0% WAM in the formulated ration (P < 0.05). Mortality showed no significant differences among dietary treatments (P > 0.05). The color intensity of the yolk increased as SBM was replaced with increasing amounts of WAM (P < 0.05). In conclusion, WAM could be successfully used in place of SBM. However, the amount used should not exceed 16.0%.
Park, Donguk; Stewart, Patricia A.; Coble, Joseph B.
2009-01-01
An extensive literature review of published metalworking fluid (MWF) aerosol measurement data was conducted to identify the major determinants that may affect exposure to aerosol fractions (total or inhalable, thoracic and respirable) and mass median diameters (MMDs). The identification of determinants was conducted through published studies and analysis of published measurement levels. For the latter, weighted arithmetic means (WAMs), weighted by the number of measurements, were calculated and compared using analysis of variance and t-tests. The literature review found that the major factors affecting aerosol exposure levels were, primarily, decade, type of industry, operation and fluid and engineering control measures. Our analysis of total aerosol levels found a significant decline in measured levels from an average of 5.36 mg m−3 prior to the 1970s and 2.52 mg m−3 in the 1970s to 1.21 mg m−3 in the 1980s, 0.50 mg m−3 in the 1990s and 0.55 mg m−3 in the 2000s. Significant declines from the 1990s to the 2000s also were found in thoracic fraction levels (0.48 versus 0.40 mg m−3), but not for the respirable fraction. The WAMs for the auto (1.47 mg m−3) and auto parts manufacturing (1.83 mg m−3) industries were significantly higher than that for small-job machine shops (0.68 mg m−3). In addition, a significant difference in the thoracic WAM was found between the automotive industry (0.46 mg m−3) and small-job machine shops (0.32 mg m−3). Operation type, in particular grinding, was a significant factor affecting the total aerosol fraction [grinding operations (1.75 mg m−3) versus other machining (0.95 mg m−3)], but the levels associated with these operations were not statistically different for either the thoracic or the respirable fractions. Across all decades, the total aerosol fraction for straight oils (1.49 mg m−3) was higher than for other fluid types (soluble = 1.08 mg m−3, synthetic = 0.52 mg m−3 and semisynthetic = 0.50 mg m−3). Fluid type was also found to be partly associated with differences in the respirable fraction level. We found that total aerosols were measured with a variety of sampling media, devices and analytical methods; this diversity of approaches makes interpretation of the study results difficult. In conclusion, both the literature review and the analysis of the measurement data found that decade and type of industry, operation and fluid were important determinants of total aerosol exposure. Industry type and fluid type were associated with differences in exposure to the thoracic and respirable fraction levels, respectively. PMID:19329796
Normahani, Pasha; Kwasnicki, Richard; Bicknell, Colin; Allen, Louise; Jenkins, Mike P; Gibbs, Richard; Cheshire, Nicholas; Darzi, Ara; Riga, Celia
2017-05-11
To evaluate the effect of using wearable activity monitors (WAMs) in patients with intermittent claudication (IC) within a single-center randomized controlled trial. WAMs allow users to set daily activity targets and monitor their progress. They may offer an alternative treatment to supervised exercise programs (SEPs) for patients with IC. Thirty-seven patients with IC were recruited and randomized into an intervention or control group. The intervention consisted of a feedback-enabled, wrist-worn activity monitor (WAM) in addition to access to SEP. The control group was given access to SEP only. The outcome measures were maximum walking distance (MWD), claudication distance (CD), and quality of life as measured by the VascuQol questionnaire. Participants were assessed upon recruitment, and at 3, 6, and 12 months. Patients in the WAM group showed significant improvement in MWD at 3 and 6 months (from 80 m at baseline to 112 m and 178 m, respectively; P < 0.001), which was sustained at 12 months. The WAM group also increased CD (40 vs 110 m; P < 0.001) and VascuQol score (4.7 vs 5.8; P = 0.004). The control group saw a temporary increase in VascuQol score at 6 months (4.5 vs 4.7; P = 0.028), but no other improvements in MWD or CD were observed. Significantly higher improvements in MWD were seen in the WAM group compared with that in the control group at 6 months (82 vs -5 m; P = 0.009, r = 0.47) and 12 months (69 vs 7.5 m; P = 0.011, r = 0.52). The study demonstrates the significant, sustained benefit of WAM-led technologies for patients with IC. This potentially resource-sparing intervention is likely to provide a valuable adjunct or alternative to SEP.
Suzaku Wide-band All-sky Monitor measurements of duration distributions of gamma-ray bursts
NASA Astrophysics Data System (ADS)
Ohmori, Norisuke; Yamaoka, Kazutaka; Ohno, Masanori; Sugita, Satoshi; Kinoshita, Ryuuji; Nishioka, Yusuke; Hurley, Kevin; Hanabata, Yoshitaka; Tashiro, Makoto S.; Enomoto, Junichi; Fujinuma, Takeshi; Fukazawa, Yasushi; Iwakiri, Wataru; Kawano, Takafumi; Kokubun, Motohide; Makishima, Kazuo; Matsuoka, Shunsuke; Nagayoshi, Tsutomu; Nakagawa, Yujin E.; Nakaya, Souhei; Nakazawa, Kazuhiro; Takahashi, Tadayuki; Takeda, Sawako; Terada, Yukikatsu; Urata, Yuji; Yabe, Seiya; Yasuda, Tetsuya; Yamauchi, Makoto
2016-06-01
We report on the T90 and T50 duration distributions and their relations with spectral hardness using 1464 gamma-ray bursts (GRBs), which were observed by the Suzaku Wide-band All-sky Monitor (WAM) from 2005 August 4 to 2010 December 29. The duration distribution is clearly bimodal in three energy ranges (50-120, 120-250, and 250-550 keV), but is unclear in the 550-5000 keV range, probably because of the limited sample size. The WAM durations decrease with energy according to a power-law index of -0.058(-0.034, +0.033). The hardness-duration relation reveals the presence of short-hard and long-soft GRBs. The short:long event ratio tends to be higher with increasing energy. We compared the WAM distribution with those measured by eight other GRB instruments. The WAM T90 distribution is very similar to those of INTEGRAL/SPI-ACS and Granat/PHEBUS, and least likely to match the Swift/BAT distribution. The WAM short:long event ratio (0.25:0.75) is much different from that of Swift/BAT (0.08:0.92), but is almost the same as that of CGRO/BATSE (0.25:0.75). To explain this difference for BAT, we examined three effects: BAT trigger types, energy dependence of the duration, and detection sensitivity differences between BAT and WAM. As a result, we found that the ratio difference could be explained mainly by the energy dependence, including soft extended emission for short GRBs, and by BAT's much better sensitivity, which allows it to detect weak/long GRBs. The agreement between the BATSE and WAM short:long event ratios was confirmed by a calculation using the trigger efficiency curve.
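By convention, T90 is the interval over which the central 90% (from 5% to 95%) of a burst's background-subtracted counts accumulate, and T50 the analogous 50% interval. A minimal sketch of that definition on a binned light curve (uniform binning and clean background subtraction are simplifying assumptions):

```python
# Sketch of the conventional T90/T50 definition: the interval over which the
# central 90% (or 50%) of background-subtracted counts accumulate. Uniform
# binning and clean background subtraction are simplifying assumptions.
import numpy as np

def t_duration(t_bins: np.ndarray, counts: np.ndarray, frac: float = 0.90) -> float:
    cum = np.cumsum(counts) / counts.sum()
    lo, hi = (1 - frac) / 2, 1 - (1 - frac) / 2     # 0.05 and 0.95 for T90
    return t_bins[np.searchsorted(cum, hi)] - t_bins[np.searchsorted(cum, lo)]

# T50 is the same quantity with frac=0.50.
```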
Colorado WAM separations standards targets of opportunity and flight test analysis
DOT National Transportation Integrated Search
2009-10-25
The Federal Aviation Administration (FAA) : Surveillance and Broadcast Services (SBS) Program : Office and the Colorado Department of : Transportation are implementing Wide Area : Multilateration (WAM) in Non-Radar Airspace : (NRA) to improve air tra...
Schmidt, Matthew W; Chang, Ping; Parker, Andrew O; Ji, Link; He, Feng
2017-11-13
Multiple lines of evidence show that cold stadials in the North Atlantic were accompanied by both reductions in Atlantic Meridional Overturning Circulation (AMOC) and collapses of the West African Monsoon (WAM). Although records of terrestrial change identify abrupt WAM variability across the deglaciation, few studies show how ocean temperatures evolved across the deglaciation. To identify the mechanism linking AMOC to the WAM, we generated a new record of subsurface temperature variability over the last 21 kyr based on Mg/Ca ratios in a sub-thermocline dwelling planktonic foraminifera in an Eastern Equatorial Atlantic (EEA) sediment core from the Niger Delta. Our subsurface temperature record shows abrupt subsurface warming during both the Younger Dryas (YD) and Heinrich Event 1. We also conducted a new transient coupled ocean-atmosphere model simulation across the YD that better resolves the western boundary current dynamics and find a strong negative correlation between AMOC strength and EEA subsurface temperatures caused by changes in ocean circulation and rainfall responses that are consistent with the observed WAM change. Our combined proxy and modeling results provide the first evidence that an oceanic teleconnection between AMOC strength and subsurface temperature in the EEA impacted the intensity of the WAM on millennial time scales.
NASA Astrophysics Data System (ADS)
Matsui, Toshi; Zhang, Sara Q.; Lang, Stephen E.; Tao, Wei-Kuo; Ichoku, Charles; Peters-Lidard, Christa D.
2018-03-01
In this study, the impact of different configurations of the Goddard radiation scheme on convection-permitting simulations (CPSs) of the West African monsoon (WAM) is investigated using the NASA-Unified WRF (NU-WRF). These CPSs had 3 km grid spacing to explicitly simulate the evolution of mesoscale convective systems (MCSs) and their interaction with radiative processes across the WAM domain and were able to reproduce realistic precipitation and energy budget fields when compared with satellite data, although low clouds were overestimated. Sensitivity experiments reveal that (1) lowering the radiation update frequency (i.e., longer radiation update time) increases precipitation and cloudiness over the WAM region by enhancing the monsoon circulation, (2) deactivation of precipitation radiative forcing suppresses cloudiness over the WAM region, and (3) aggregating radiation columns reduces low clouds over ocean and tropical West Africa. The changes in radiation configuration immediately modulate the radiative heating and low clouds over ocean. On the 2nd day of the simulations, patterns of latitudinal air temperature profiles were already similar to the patterns of monthly composites for all radiation sensitivity experiments. Low cloud maintenance within the WAM system is tightly connected with radiation processes; thus, proper coupling between microphysics and radiation processes must be established for each modeling framework.
Suzaku Wide-band All-sky Monitor (WAM) observations of GRBs and SGRs
NASA Astrophysics Data System (ADS)
Yamaoka, Kazutaka; Ohno, Masanori; Tashiro, Makoto S.; Hurley, Kevin; Krimm, Hans A.; Lien, Amy Y.; Ohmori, Norisuke; Sugita, Satoshi; Urata, Yuji; Yasuda, Tetsuya; Enomoto, Junichi; Fujinuma, Takeshi; Fukazawa, Yasushi; Hanabata, Yoshitaka; Iwakiri, Wataru; Kawano, Takafumi; Kinoshita, Ryuuji; Kokubun, Motohide; Makishima, Kazuo; Matsuoka, Shunsuke; Nagayoshi, Tsutomu; Nakagawa, Yujin; Nakaya, Souhei; Nakazawa, Kazuhiro; Nishioka, Yusuke; Sakamoto, Takanori; Takahashi, Tadayuki; Takeda, Sawako; Terada, Yukikatsu; Yabe, Seiya; Yamauchi, Makoto; Yoshida, Hiraku
2017-06-01
We will review results for gamma-ray bursts (GRBs) and soft gamma repeaters (SGRs), obtained from the Suzaku Wide-band All-sky Monitor (WAM) which operated for about 10 years from 2005 to 2015. The WAM is a BGO (bismuth germanate: Bi4Ge3O12) lateral shield for the Hard X-ray Detector (HXD), used mainly for rejecting its detector background, but it also works as an all-sky monitor for soft gamma-ray transients in the 50-5000 keV range thanks to its large effective area (˜600 cm2 at 1 MeV for one detector) and wide field of view (about half of the entire sky). The WAM actually detected more than 1400 GRBs and 300 bursts from SGRs, and this detection number is comparable to that of other GRB-specific instruments. Based on the 10 years of operation, we describe timing and spectral performance for short GRBs, weak GRBs with high redshifts, and time-resolved pulses with good statistics.
Midnight Temperature Maximum (MTM) in Whole Atmosphere Model (WAM) Simulations
2016-04-14
naturally strongly dissipative medium, eliminating the need for "sponge layers" and the extra numerical dissipation often imposed in upper layers to stabilize atmospheric model codes. WAM employs no "sponge layers" and remains stable using a substantially reduced numerical Rayleigh friction coefficient.
Watchdog activity monitor (WAM) for use with high coverage processor self-test
NASA Technical Reports Server (NTRS)
Tulpule, Bhalchandra R. (Inventor); Crosset, III, Richard W. (Inventor); Versailles, Richard E. (Inventor)
1988-01-01
A high fault coverage, instruction modeled self-test for a signal processor in a user environment is disclosed. The self-test executes a sequence of sub-tests and issues a state transition signal upon the execution of each sub-test. The self-test may be combined with a watchdog activity monitor (WAM) which provides a test-failure signal in the presence of a counted number of state transitions not agreeing with an expected number. An independent measure of time may be provided in the WAM to increase fault coverage by checking the processor's clock. Additionally, redundant processor systems are protected from inadvertent unsevering of a severed processor using a unique unsever arming technique and apparatus.
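The core idea, a counter of state transitions compared against an expected total, can be sketched in a few lines; the class and method names below are hypothetical, not from the patent:

```python
# Conceptual sketch of the WAM idea: count state transitions emitted by the
# self-test and flag a failure when the count disagrees with the expected
# number. Class and method names are hypothetical, not from the patent.
class WatchdogActivityMonitor:
    def __init__(self, expected_transitions: int):
        self.expected = expected_transitions
        self.count = 0

    def on_state_transition(self) -> None:
        self.count += 1                 # one signal per executed sub-test

    def passed(self) -> bool:
        """False would raise the test-failure signal."""
        return self.count == self.expected

wam = WatchdogActivityMonitor(expected_transitions=3)
for _ in range(3):                      # three sub-tests complete normally
    wam.on_state_transition()
assert wam.passed()
```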
WAM: an improved algorithm for modelling antibodies on the WEB.
Whitelegg, N R; Rees, A R
2000-12-01
An improved antibody modelling algorithm has been developed which incorporates significant improvements to the earlier versions developed by Martin et al. (1989, 1991), Pedersen et al. (1992) and Rees et al. (1996) and known as AbM (Oxford Molecular). The new algorithm, WAM (for Web Antibody Modelling), has been launched as an online modelling service and is located at URL http://antibody.bath.ac.uk. Here we provide a summary only of the important features of WAM. Readers interested in further details are directed to the website, which gives extensive background information on the methods employed. A brief description of the rationale behind some of the newer methodology (specifically, the knowledge-based screens) is also given.
Modelling ionospheric scintillation under the crest of the equatorial anomaly
NASA Astrophysics Data System (ADS)
Alfonsi, L.; Wernik, A. W.; Materassi, M.; Spogli, L.
2017-10-01
WAM is built from plasma density data collected by the retarding potential analyser on board the Dynamics Explorer 2 spacecraft and is capable of modelling the scintillation climatology of the northern-hemisphere high-latitude ionosphere. More recently, WAM has been tuned to model ionospheric scintillations over the equatorial latitudes as well. This effort was undertaken to support the CIGALA (Concept for Ionospheric Scintillation Mitigation for Professional GNSS in Latin America) project in the assessment of the scintillation climatology over Latin America. The concept of the new release of WAM is the same as that already adopted for the high latitudes: the in situ measurements, supplemented with an ionospheric model and with an irregularity anisotropy model, are treated to describe the morphology of scintillation, provided a suitable propagation model is used. Significant differences have been included in the low-latitude release to account for the anisotropy of the irregularities and for the strong-scattering regime. The paper describes the new WAM formulation and presents comparisons of the model predictions with actual measurements collected in Brazil.
NASA Astrophysics Data System (ADS)
Martin, G. M.; Peyrillé, P.; Roehrig, R.; Rio, C.; Caian, M.; Bellon, G.; Codron, F.; Lafore, J.-P.; Poan, D. E.; Idelkadi, A.
2017-03-01
Vertical and horizontal distributions of diabatic heating in the West African monsoon (WAM) region as simulated by four model families are analyzed in order to assess the physical processes that affect the WAM circulation. For each model family, atmosphere-only runs of their CMIP5 configurations are compared with more recent configurations which are on the development path toward CMIP6. The various configurations of these models exhibit significant differences in their heating/moistening profiles, related to the different representation of physical processes such as boundary layer mixing, convection, large-scale condensation and radiative heating/cooling. There are also significant differences in the models' simulation of WAM rainfall patterns and circulations. The weaker the radiative cooling in the Saharan region, the larger the ascent in the rainband and the more intense the monsoon flow, while the latitude of the rainband is related to heating in the Gulf of Guinea region and on the northern side of the Saharan heat low. Overall, this work illustrates the difficulty experienced by current climate models in representing the characteristics of monsoon systems, but also that we can still use them to understand the interactions between local subgrid physical processes and the WAM circulation. Moreover, our conclusions regarding the relationship between errors in the large-scale circulation of the WAM and the structure of the heating by small-scale processes will motivate future studies and model development.
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.; Fulakeza, Matthew B.
2014-01-01
The Atlantic cold tongue (ACT) develops during spring and early summer near the Equator in the Eastern Atlantic Ocean and Gulf of Guinea. The hypothesis that the ACT accelerates the timing of West African monsoon (WAM) onset is tested by comparing two regional climate model (RM3) simulation ensembles. Observed sea surface temperatures (SST) that include the ACT are used to force a control ensemble. An idealized, warm SST perturbation is designed to represent lower boundary forcing without the ACT for the experiment ensemble. Summer simulations forced by observed SST and reanalysis boundary conditions for each of five consecutive years are compared to five parallel runs forced by SST with the warm perturbation. The article summarizes the sequence of events leading to the onset of the WAM in the Sahel region. The representation of WAM onset in RM3 simulations is examined and compared to Tropical Rainfall Measuring Mission (TRMM), Global Precipitation Climatology Project (GPCP) and reanalysis data. The study evaluates the sensitivity of WAM onset indicators to the presence of the ACT by analysing the differences between the two simulation ensembles. Results show that the timing of major rainfall events and therefore the WAM onset in the Sahel are not sensitive to the presence of the ACT. However, the warm SST perturbation does increase downstream rainfall rates over West Africa as a consequence of enhanced specific humidity and enhanced northward moisture flux in the lower troposphere.
Testing the E(sub peak)-E(sub iso) Relation for GRBs Detected by Swift and Suzaku-WAM
NASA Technical Reports Server (NTRS)
Krimm, H. A.; Yamaoka, K.; Sugita, S.; Ohno, M.; Sakamoto, T.; Barthelmy, S. D.; Gehrels, N.; Hara, R.; Onda, K.; Sato, G.;
2009-01-01
One of the most prominent, yet controversial associations derived from the ensemble of prompt-phase observations of gamma-ray bursts (GRBs) is the apparent correlation in the source frame between the peak energy (E(sub peak)) of the nuF(nu) spectrum and the isotropic radiated energy, E(sub iso). Since most GRBs have E(sub peak) above the energy range (15-150 keV) of the Burst Alert Telescope (BAT) on Swift, determining accurate E(sub peak) values for large numbers of Swift bursts has been difficult. However, by combining data from Swift/BAT and the Suzaku Wide-band All-sky Monitor (WAM), which covers the energy range from 50-5000 keV, for bursts that are simultaneously detected, one can accurately fit E(sub peak) and E(sub iso) and test the relationship between them for the Swift sample. Between the launch of Suzaku in July 2005 and the end of March 2009, there were 45 GRBs which triggered both Swift/BAT and the WAM, and an additional 47 bursts which triggered Swift and were detected by the WAM but did not trigger it. A BAT-WAM team has cross-calibrated the two instruments using GRBs, and we are now able to perform joint fits on these bursts to determine spectral parameters. For those bursts with spectroscopic redshifts, we can also calculate the isotropic energy. Here we present the results of joint Swift/BAT-Suzaku/WAM spectral fits for 86 of the bursts detected by the two instruments. We show that the distribution of spectral fit parameters is consistent with distributions from earlier missions and confirm that Swift bursts are consistent with earlier reported relationships between E(sub peak) and isotropic energy. We show through time-resolved spectroscopy that individual burst pulses are also consistent with this relationship.
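For reference, the isotropic-equivalent energy follows from the bolometric fluence S and redshift z as E_iso = 4 pi d_L^2 S / (1 + z); a minimal sketch using an assumed cosmology (the fluence value is illustrative):

```python
# Sketch of the isotropic-equivalent energy: E_iso = 4*pi*d_L^2 * S / (1+z),
# with bolometric fluence S and luminosity distance d_L. The cosmology and
# fluence value are illustrative assumptions.
import numpy as np
from astropy import units as u
from astropy.cosmology import Planck15

def e_iso(fluence_erg_cm2: float, z: float) -> u.Quantity:
    d_l = Planck15.luminosity_distance(z).to(u.cm)
    fluence = fluence_erg_cm2 * u.erg / u.cm**2
    return (4 * np.pi * d_l**2 * fluence / (1 + z)).to(u.erg)

print(e_iso(1e-5, z=1.0))   # a few times 10^52 erg for these values
```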
A Distribution Level Wide Area Monitoring System for the Electric Power Grid–FNET/GridEye
Liu, Yong; You, Shutang; Yao, Wenxuan; ...
2017-02-09
The wide area monitoring system (WAMS) is considered a pivotal component of future electric power grids. As a pilot WAMS that has been operated for more than a decade, the frequency monitoring network FNET/GridEye makes use of hundreds of global positioning system-synchronized phasor measurement sensors to capture the increasingly complicated grid behaviors across the interconnected power systems. In this paper, the FNET/GridEye system is overviewed and its operation experiences in electric power grid wide area monitoring are presented. In particular, the implementation of a number of data analytics applications is discussed in detail. FNET/GridEye lays a firm foundation for later WAMS operation in the electric power industry.
NASA Astrophysics Data System (ADS)
Divakar, L.; Babel, M. S.; Perret, S. R.; Gupta, A. Das
2011-04-01
The study develops a model for optimal bulk allocations of limited available water based on an economic criterion to competing use sectors such as agriculture, domestic, industry and hydropower. The model comprises a reservoir operation module (ROM) and a water allocation module (WAM). ROM determines the amount of water available for allocation, which is used as an input to WAM with an objective function to maximize the net economic benefits of bulk allocations to different use sectors. The total net benefit functions for the agriculture and hydropower sectors and the marginal net benefits for the domestic and industrial sectors are established and are taken as fixed in the present study. The developed model is applied to the Chao Phraya basin in Thailand. The case study results indicate that the WAM can improve net economic returns compared to current water allocation practices.
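The flavor of the WAM objective can be illustrated with a toy linear program that splits a fixed release among sectors at constant marginal benefits; the actual model uses nonlinear, sector-specific benefit functions, and all numbers below are invented:

```python
# Toy sketch of the WAM objective: allocate a fixed release among sectors to
# maximize economic benefit. Constant marginal benefits turn this into a
# linear program; the real model uses nonlinear, sector-specific benefit
# functions. All numbers are invented for illustration.
from scipy.optimize import linprog

benefit = [0.9, 1.5, 1.2, 0.4]    # $/m3: agriculture, domestic, industry, hydro
demand_cap = [60, 25, 20, 50]     # sector demand ceilings (million m3)
available = 100                   # release computed by the ROM (million m3)

res = linprog(c=[-b for b in benefit],          # maximize => minimize negative
              A_ub=[[1, 1, 1, 1]], b_ub=[available],
              bounds=list(zip([0, 0, 0, 0], demand_cap)))
print(res.x, -res.fun)            # optimal allocation and total net benefit
```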
The Smart Mine Simulator User’s Guide and Algorithm Description
1993-12-01
[Fragmentary parameter tables: control kill ranges (tank 2 m, APC 1.5 m, other ground vehicles 1 m); munition burst types (projectile 105APDS, projectile TOW) with detonator M739 155 mm; WAM sublet component parameters; sensor detection range 50 m; control firing angle -55 degrees.]
Learning New Basic Movements for Robotics
NASA Astrophysics Data System (ADS)
Kober, Jens; Peters, Jan
Obtaining novel skills is one of the most important problems in robotics. Machine learning techniques may be a promising approach for automatic and autonomous acquisition of movement policies. However, this requires both an appropriate policy representation and suitable learning algorithms. Employing the most recent form of the dynamical systems motor primitives originally introduced by Ijspeert et al. [1], we show how both discrete and rhythmic tasks can be learned using a concerted approach of both imitation and reinforcement learning, and present our current best performing learning algorithms. Finally, we show that it is possible to include a start-up phase in rhythmic primitives. We apply our approach to two elementary movements, i.e., Ball-in-a-Cup and Ball-Paddling, which can be learned on a real Barrett WAM robot arm at a pace similar to human learning.
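One common formulation of a discrete dynamical-systems motor primitive is a spring-damper system pulled toward a goal g and shaped by a learned forcing term f(s) that fades with a decaying phase variable s. A compressed one-dimensional sketch (the gains and the zero forcing default are simplifying assumptions; imitation or reinforcement learning would fit f):

```python
# Compressed sketch of a 1-D discrete motor primitive in one common
# formulation: a spring-damper pulled toward goal g, modulated by a learned
# forcing term f(s) that fades as the phase s decays. The gains and the zero
# forcing default are simplifying assumptions; imitation/RL would fit f.
import numpy as np

def dmp_rollout(x0, g, f=lambda s: 0.0, tau=1.0, dt=0.001, K=100.0, D=20.0):
    steps = int(tau / dt)
    x, v, s = x0, 0.0, 1.0
    traj = np.empty(steps)
    for i in range(steps):
        s += dt * (-2.0 / tau) * s                    # canonical phase decay
        v += dt * (K * (g - x) - D * v + (g - x0) * f(s)) / tau
        x += dt * v / tau
        traj[i] = x
    return traj

print(dmp_rollout(x0=0.0, g=1.0)[-1])   # converges to the goal, ~1.0
```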
Abstract quantum computing machines and quantum computational logics
NASA Astrophysics Data System (ADS)
Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto
2016-06-01
Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.
NASA Astrophysics Data System (ADS)
Lauer, Axel; Jones, Colin; Eyring, Veronika; Evaldsson, Martin; Hagemann, Stefan; Mäkelä, Jarmo; Martin, Gill; Roehrig, Romain; Wang, Shiyu
2018-01-01
The performance of updated versions of the four earth system models (ESMs) CNRM, EC-Earth, HadGEM, and MPI-ESM is assessed in comparison to their predecessor versions used in Phase 5 of the Coupled Model Intercomparison Project. The Earth System Model Evaluation Tool (ESMValTool) is applied to evaluate selected climate phenomena in the models against observations. This is the first systematic application of the ESMValTool to assess and document the progress made during an extensive model development and improvement project. This study focuses on the South Asian monsoon (SAM) and the West African monsoon (WAM), the coupled equatorial climate, and Southern Ocean clouds and radiation, which are known to exhibit systematic biases in present-day ESMs. The analysis shows that the tropical precipitation in three out of four models is clearly improved. Two of three updated coupled models show an improved representation of tropical sea surface temperatures with one coupled model not exhibiting a double Intertropical Convergence Zone (ITCZ). Simulated cloud amounts and cloud-radiation interactions are improved over the Southern Ocean. Improvements are also seen in the simulation of the SAM and WAM, although systematic biases remain in regional details and the timing of monsoon rainfall. Analysis of simulations with EC-Earth at different horizontal resolutions from T159 up to T1279 shows that the synoptic-scale variability in precipitation over the SAM and WAM regions improves with higher model resolution. The results suggest that the reasonably good agreement of modeled and observed mean WAM and SAM rainfall in lower-resolution models may be a result of unrealistic intensity distributions.
NASA Astrophysics Data System (ADS)
Stern, Rowena F.; Picard, Kathryn T.; Hamilton, Kristina M.; Walne, Antony; Tarran, Glen A.; Mills, David; McQuatters-Gollop, Abigail; Edwards, Martin
2015-09-01
There is a paucity of data on long-term, spatially resolved changes in microbial diversity and biogeography in marine systems, and yet these organisms underpin fundamental ecological processes in the oceans affecting socio-economic values of the marine environment. We report results from a new autonomous Water and Microplankton Sampler (WaMS) that is carried within the Continuous Plankton Recorder (CPR). Whilst the CPR, with its larger mesh size (270 μm), is designed to capture larger plankton, the WaMS was designed as an additional device to capture plankton below 50 μm and delicate larger species, often destroyed by net sampling methods. A 454 pyrosequencing and flow cytometric investigation of eukaryotic microbes using partial 18S rDNA from thirteen WaMS samples collected over three months in the English Channel revealed a wide diversity of organisms. Alveolates, Fungi, and picoplanktonic Chlorophytes were the most common lineages captured despite the small sample volumes (200-250 ml). The survey also identified Cercozoa and MAST heterotrophic Stramenopiles, normally missed in microscopy-based plankton surveys. The most common was the likely parasitic LKM11 Rozellomycota lineage, which comprised 43.2% of all reads and is rarely observed in marine pelagic surveys. An additional 9.5% of reads belonged to other parasitic lineages including marine Syndiniales and Ichthyosporea. Sample variation was considerable, indicating that microbial diversity is spatially or temporally patchy. Our study has shown that the WaMS sampling system is autonomous, versatile and robust, and, due to its deployment on the established CPR network, is a cost-effective monitoring tool for microbial diversity and for the detection of smaller and delicate taxa.
Suitability of open-field autorefractors as pupillometers and instrument design effects.
Otero, Carles; Aldaba, Mikel; Ferrer, Oriol; Gascón, Andrea; Ondategui-Parra, Juan C; Pujol, Jaume
2017-01-01
To determine the agreement and repeatability of pupil measurements obtained with the VIP-200 (Neuroptics), PowerRef II (Plusoptix) and WAM-5500 (Grand Seiko), and to study the effects of instrument design on pupillometry. Forty patients were measured twice in low, mid and high mesopic conditions. Repeatability was analyzed with the within-subject standard deviation (Sw) and paired t-tests. Agreement was studied with Bland-Altman plots and repeated measures ANOVA. The instrument design analysis consisted of measuring pupil size with the PowerRef II while simulating monocular and binocular conditions, as well as with and without proximity cues. The mean difference (±standard deviation) between test and retest for low, mid and high mesopic conditions was, respectively: -0.09 (±0.16), -0.05 (±0.18) and -0.08 (±0.23) mm for Neuroptics; -0.05 (±0.17), -0.12 (±0.23) and -0.17 (±0.34) mm for WAM-5500; and -0.04 (±0.27), -0.13 (±0.37) and -0.11 (±0.28) mm for PowerRef II. Regarding agreement with Neuroptics, the mean differences for low, mid and high mesopic conditions were, respectively: -0.48 (±0.35), -0.83 (±0.52) and -0.38 (±0.56) mm for WAM-5500, and -0.28 (±0.56), -0.70 (±0.55) and -0.61 (±0.54) mm for PowerRef II. The mean difference between binocular and monocular pupil measurements was -0.83 (±0.87) mm, and between measurements with and without proximity cues was -0.30 (±0.77) mm. All the instruments show similar repeatability. In all illumination conditions, agreement of Neuroptics with WAM-5500 and PowerRef II is not good enough, which may be partially induced by their open-field design.
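For reference, the two statistics named above can be computed as follows; a minimal sketch assuming two replicates per subject:

```python
# Sketch of the two statistics used above: within-subject standard deviation
# (Sw) for test-retest repeatability, and Bland-Altman bias with 95% limits
# of agreement between two instruments. Two replicates per subject assumed.
import numpy as np

def within_subject_sd(test: np.ndarray, retest: np.ndarray) -> float:
    return np.sqrt(np.mean((test - retest) ** 2) / 2)

def bland_altman(a: np.ndarray, b: np.ndarray):
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)   # bias, 95% LoA
```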
Wireless Acoustic Measurement System
NASA Technical Reports Server (NTRS)
Anderson, Paul D.; Dorland, Wade D.; Jolly, Ronald L.
2007-01-01
A prototype wireless acoustic measurement system (WAMS) is one of two main subsystems of the Acoustic Prediction/Measurement Tool, which comprises software, acoustic instrumentation, and electronic hardware combined to afford integrated capabilities for predicting and measuring noise emitted by rocket and jet engines. The other main subsystem is described in the article on page 8. The WAMS includes analog acoustic measurement instrumentation and analog and digital electronic circuitry combined with computer wireless local-area networking to enable (1) measurement of sound-pressure levels at multiple locations in the sound field of an engine under test and (2) recording and processing of the measurement data. At each field location, the measurements are taken by a portable unit, denoted a field station. There are ten field stations, each of which can take two channels of measurements. Each field station is equipped with two instrumentation microphones, a micro-ATX computer, a wireless network adapter, an environmental enclosure, a directional radio antenna, and a battery power supply. The environmental enclosure shields the computer from weather and from extreme acoustically induced vibrations. The power supply is based on a marine-service lead-acid storage battery that has enough capacity to support operation for as long as 10 hours. A desktop computer serves as a control server for the WAMS. The server is connected to a wireless router for communication with the field stations via a wireless local-area network that complies with wireless-network standard 802.11b of the Institute of Electrical and Electronics Engineers. The router and the wireless network adapters are controlled by use of Linux-compatible driver software. The server runs custom Linux software for synchronizing the recording of measurement data in the field stations. The software includes a module that provides an intuitive graphical user interface through which an operator at the control server can control the operations of the field stations for calibration and for recording of measurement data. A test engineer positions and activates the WAMS. The WAMS automatically establishes the wireless network. Next, the engineer performs pretest calibrations. Then the engineer executes the test and measurement procedures. After the test, the raw measurement files are copied and transferred, through the wireless network, to a hard disk in the control server. Subsequently, the data are processed into 1/3-octave spectrograms.
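The final processing step, 1/3-octave band levels, can be sketched as bandpass filtering around standard center frequencies followed by averaging of the squared signal; the filter order, band edges, and dB reference below are illustrative assumptions:

```python
# Sketch of the final processing step: 1/3-octave band levels via Butterworth
# bandpass filters around standard center frequencies. Filter order, band
# edges at fc/2^(1/6)..fc*2^(1/6) and the dB reference are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def third_octave_levels(x, fs, centers):
    levels = []
    for fc in centers:
        lo, hi = fc / 2**(1/6), fc * 2**(1/6)
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        y = sosfilt(sos, x)
        levels.append(10 * np.log10(np.mean(y**2) + 1e-30))  # dB re full scale
    return levels

# e.g. standard centers: 1000 * 2.0**(np.arange(-10, 11) / 3) Hz
```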
Wireless Acoustic Measurement System
NASA Technical Reports Server (NTRS)
Anderson, Paul D.; Dorland, Wade D.
2005-01-01
A prototype wireless acoustic measurement system (WAMS) is one of two main subsystems of the Acoustic Prediction/Measurement Tool, which comprises software, acoustic instrumentation, and electronic hardware combined to afford integrated capabilities for predicting and measuring noise emitted by rocket and jet engines. The other main subsystem is described in "Predicting Rocket or Jet Noise in Real Time" (SSC-00215-1), which appears elsewhere in this issue of NASA Tech Briefs. The WAMS includes analog acoustic measurement instrumentation and analog and digital electronic circuitry combined with computer wireless local-area networking to enable (1) measurement of sound-pressure levels at multiple locations in the sound field of an engine under test and (2) recording and processing of the measurement data. At each field location, the measurements are taken by a portable unit, denoted a field station. There are ten field stations, each of which can take two channels of measurements. Each field station is equipped with two instrumentation microphones, a micro-ATX computer, a wireless network adapter, an environmental enclosure, a directional radio antenna, and a battery power supply. The environmental enclosure shields the computer from weather and from extreme acoustically induced vibrations. The power supply is based on a marine-service lead-acid storage battery that has enough capacity to support operation for as long as 10 hours. A desktop computer serves as a control server for the WAMS. The server is connected to a wireless router for communication with the field stations via a wireless local-area network that complies with wireless-network standard 802.11b of the Institute of Electrical and Electronics Engineers. The router and the wireless network adapters are controlled by use of Linux-compatible driver software. The server runs custom Linux software for synchronizing the recording of measurement data in the field stations. The software includes a module that provides an intuitive graphical user interface through which an operator at the control server can control the operations of the field stations for calibration and for recording of measurement data. A test engineer positions and activates the WAMS. The WAMS automatically establishes the wireless network. Next, the engineer performs pretest calibrations. Then the engineer executes the test and measurement procedures. After the test, the raw measurement files are copied and transferred, through the wireless network, to a hard disk in the control server. Subsequently, the data are processed into 1/3-octave spectrograms.
West African Monsoon dynamics in idealized simulations: the competitive roles of SST warming and CO2
NASA Astrophysics Data System (ADS)
Gaetani, Marco; Flamant, Cyrille; Hourdin, Frederic; Bastin, Sophie; Braconnot, Pascale; Bony, Sandrine
2015-04-01
The West African Monsoon (WAM) is affected by large climate variability at different timescales, from interannual to multidecadal, with strong environmental and socio-economic impacts associated with climate-related rainfall variability, especially in the Sahelian belt. State-of-the-art coupled climate models still show poor ability in correctly simulating past WAM variability, and a large spread is observed in future climate projections. In this work, the July-to-September (JAS) WAM variability in the period 1979-2008 is studied in AMIP-like (SST-forced) simulations from CMIP5. The individual roles of global SST warming and increasing CO2 concentration are investigated through idealized experiments simulating a 4K warmer SST and a 4x CO2 concentration, respectively. Results show a dry response in the Sahel to SST warming, with drier conditions over the western Sahel. On the contrary, wet conditions are observed when CO2 is increased, with the strongest response over the central-eastern Sahel. The precipitation changes are associated with modifications in the regional atmospheric circulation: dry (wet) conditions are associated with reduced (increased) convergence in the lower troposphere, a southward (northward) shift of the African Easterly Jet, and a weaker (stronger) Tropical Easterly Jet. The co-variability between global SST and WAM precipitation is also investigated, highlighting a reorganization of the main co-variability modes. Namely, in the 4xCO2 simulation the influence of the tropical Pacific is dominant, while it is reduced in the 4K simulation, which also shows an increased coupling with the eastern Pacific and the Indian Ocean. The above results suggest competing actions of SST warming and CO2 increase on WAM climate variability, with opposite effects on precipitation. The combination of the observed positive and negative responses in precipitation, with wet conditions in the central-eastern Sahel and dry conditions in the western Sahel, is consistent with the future precipitation trends over West Africa resulting from CMIP5 coupled simulations. It is argued that the large spread in CMIP5 future projections may be related to the weight given to SST warming and the direct CO2 effect by individual models. The capability of climate models to reproduce the SST-precipitation relationship appears to be crucial in this respect.
NASA Astrophysics Data System (ADS)
Gingerich, P. D.
2012-12-01
Many important environmental events in the geological past were first recognized by their effects on the associated biota, and this is true for the Paleocene-Eocene Thermal Maximum or PETM global greenhouse warming event, which happened 55 million years before present. In the Southern Ocean, PETM carbon and oxygen isotope anomalies were found to coincide with a major terminal-Paleocene disappearance or extinction of benthic foraminiferans. In North America, the PETM carbon isotope excursion (CIE) was found to coincide with mammalian dwarfing and a major initial-Eocene appearance or origination event of continental mammals. Linking the two records, marine and continental, resolved a long-standing disagreement over competing definitions of the Paleocene-Eocene epoch boundary, and more importantly indicated that the PETM greenhouse warming event was global. Dwarfing of herbivorous mammals can be interpreted as a response to elevated atmospheric CO2. The origin of modern orders of mammals including Artiodactyla, Perissodactyla, and Primates ('APP' taxa) is more complicated and difficult to explain, but the origin of these orders may also be a response, directly or indirectly, to PETM warming. We now know from Polecat Bench and elsewhere in North America that the biotic response to PETM greenhouse warming involved the appearance of at least two new mammalian faunas distinct from previously known Clarkforkian mammals of the upper or late Paleocene and previously known Wasatchian mammals of the lower or early Eocene. Three stages and ages of the former are known (Cf-1 to Cf-3) and seven stages and ages of the latter are known (Wa-1 to Wa-7), each occupying about a hundred meters of strata representing a half-million years or so of time. Between the standard Clarkforkian and Wasatchian faunal zones is an initial 'Wa-M' faunal zone of only five or so meters in thickness and something on the order of 20 thousand years of geological time. The Wa-M fauna includes the first appearance of its namesake herbivorous condylarth, Meniscotherium, but Wa-M seemingly lacks APP taxa. Overlying Wa-M is the better known 'Wa-0' fauna in a zone spanning 30 meters of strata and about 120 thousand years of geological time. This has dwarfed mammals and APP taxa, and is overlain in turn by strata with a standard Wa-1 early Eocene fauna. Documentation is still in progress, but it appears that the change from a Cf-3 to a Wa-M fauna lagged behind the onset of the CIE, the Wa-M fauna coincided with maximum excursion of the CIE, and the Wa-0 fauna lagged behind this maximum excursion and filled the recovery phase of the CIE. It is possible that other short-lived faunas will be found in addition to those already known because the events of interest are so short in duration that they may not be preserved in every stratigraphic section. Biotic effects (e.g., dwarfing and other adaptive change, biotic extinction, and biotic origination) are compelling reasons to study global warming, and the PETM provides an opportunity to study warming and sustainability in an event free from human influence.
Clark, Edward B; Hickinbotham, Simon J; Stepney, Susan
2017-05-01
We present a novel stringmol-based artificial chemistry system modelled on the universal constructor architecture (UCA) first explored by von Neumann. In a UCA, machines interact with an abstract description of themselves to replicate by copying the abstract description and constructing the machines that the abstract description encodes. DNA-based replication follows this architecture, with DNA being the abstract description, the polymerase being the copier, and the ribosome being the principal machine in expressing what is encoded on the DNA. This architecture is semantically closed, as the machine that defines what the abstract description means is itself encoded on that abstract description. We present a series of experiments with the stringmol UCA that show the evolution of the meaning of genomic material, allowing the concept of semantic closure and transitions between semantically closed states to be elucidated in the light of concrete examples. We present results showing, for the first time in an in silico system, simultaneous evolution of the genomic material, copier and constructor of a UCA, giving rise to viable offspring. © 2017 The Author(s).
Tracking the Evolution of Smartphone Sensing for Monitoring Human Movement.
del Rosario, Michael B; Redmond, Stephen J; Lovell, Nigel H
2015-07-31
Advances in mobile technology have led to the emergence of the "smartphone", a new class of device with more advanced connectivity features that have quickly made it a constant presence in our lives. Smartphones are equipped with comparatively advanced computing capabilities, a global positioning system (GPS) receiver, and sensing capabilities (i.e., an inertial measurement unit (IMU) and, more recently, a magnetometer and barometer) which can be found in wearable ambulatory monitors (WAMs). As a result, algorithms initially developed for WAMs that "count" steps (i.e., pedometers), gauge physical activity levels, indirectly estimate energy expenditure, and monitor human movement can be utilised on the smartphone. These algorithms may enable clinicians to "close the loop" by prescribing timely interventions to improve or maintain wellbeing in populations who are at risk of falling or suffer from a chronic disease whose progression is linked to a reduction in movement and mobility. The ubiquitous nature of smartphone technology makes it the ideal platform from which human movement can be remotely monitored without the expense of purchasing, and inconvenience of using, a dedicated WAM. In this paper, an overview of the sensors that can be found in the smartphone is presented, followed by a summary of the developments in this field with an emphasis on the evolution of algorithms used to classify human movement. The limitations identified in the literature will be discussed, as well as suggestions about future research directions.
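A pedometer-style algorithm of the kind described typically reduces to peak detection on the smoothed magnitude of the three-axis accelerometer signal; a minimal sketch with illustrative thresholds:

```python
# Minimal sketch of a pedometer-style step counter: peak detection on the
# smoothed magnitude of the 3-axis accelerometer signal. The sampling rate,
# smoothing window and thresholds are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

def count_steps(accel_xyz: np.ndarray, fs: float = 50.0) -> int:
    """accel_xyz: (N, 3) accelerometer samples in m/s^2, sampled at fs Hz."""
    mag = np.linalg.norm(accel_xyz, axis=1)
    mag -= mag.mean()                              # remove the gravity offset
    smooth = np.convolve(mag, np.ones(5) / 5, mode="same")
    # steps as peaks at least 0.3 s apart with at least 1 m/s^2 swing
    peaks, _ = find_peaks(smooth, height=1.0, distance=int(0.3 * fs))
    return len(peaks)
```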
A multi-decadal wind-wave hindcast for the North Sea 1949-2014: coastDat2
NASA Astrophysics Data System (ADS)
Groll, Nikolaus; Weisse, Ralf
2017-12-01
Long and consistent wave data are important for analysing wave climate variability and change. Moreover, such wave data are also needed in coastal and offshore design and for addressing safety-related issues at sea. Using the third-generation spectral wave model WAM, a multi-decadal wind-wave hindcast for the North Sea covering the period 1949-2014 was produced. The hindcast is part of the coastDat database, representing a consistent and homogeneous met-ocean data set. It is shown that, despite not being perfect, data from the wave hindcast are generally suitable for wave climate analysis. In particular, comparisons of hindcast data with in situ and satellite observations show on average a reasonable agreement, while a tendency towards overestimation of the highest waves could be inferred. Despite these limitations, the wave hindcast still provides useful data for assessing wave climate variability and change as well as for risk analysis, in particular when conservative estimates are needed. Hindcast data are stored at the World Data Center for Climate (WDCC) and can be freely accessed via doi:10.1594/WDCC/coastDat-2_WAM-North_Sea (Groll and Weisse, 2016) or via the coastDat web page http://www.coastdat.de.
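Hindcast-observation agreement of this kind is commonly summarized with bias, RMSE and scatter index; a minimal sketch (the metric choice is an assumption, not the paper's exact validation list):

```python
# Sketch of typical hindcast-versus-observation skill metrics (bias, RMSE,
# scatter index) of the kind used to compare modeled and observed wave
# heights; the metric choice is an assumption, not the paper's exact list.
import numpy as np

def skill(model: np.ndarray, obs: np.ndarray):
    err = model - obs
    bias = err.mean()                               # positive = overestimation
    rmse = np.sqrt(np.mean(err ** 2))
    scatter_index = np.sqrt(np.mean((err - bias) ** 2)) / obs.mean()
    return bias, rmse, scatter_index
```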
2010-02-01
multi-agent reputation management. State abstraction is a technique used to allow machine learning technologies to cope with problems that have large...state abstrac- tion process to enable reinforcement learning in domains with large state spaces. State abstraction is vital to machine learning ...across a collective of independent platforms. These individual elements, often referred to as agents in the machine learning community, should exhibit both
Design Methodology for Automated Construction Machines
1987-12-11
along with the design of a pair of machines which automate framework installation.-,, 20. DISTRIBUTION IAVAILABILITY OF ABSTRACT 21. ABSTRACT SECURITY... Development Assistant Professor of Civil Engineering and Laura A . Demsetz, David H. Levy, Bruce Schena Graduate Research Assistants December 11, 1987 U.S...are discussed along with the design of a pair of machines which automate framework installation. Preliminary analysis and testing indicate that these
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matteson, A.; Morris, R.; Tate, R.
1993-12-31
The acoustic signal produced by the gas metal arc welding (GMAW) arc contains information about the behavior of the arc column, the molten pool and droplet transfer. It is possible to detect some defect producing conditions from the acoustic signal from the GMAW arc. An intelligent sensor, called the Weld Acoustic Monitor (WAM) has been developed to take advantage of this acoustic information in order to provide real-time quality assessment information for process control. The WAM makes use of an Artificial Neural Network (ANN) to classify the characteristic arc acoustic signals of acceptable and unacceptable welds. The ANN used inmore » the Weld Acoustic Monitor developed its own set of rules for this classification problem by learning a data base of known GMAW acoustic signals.« less
Atmospheric feedbacks in North Africa from an irrigated, afforested Sahara
NASA Astrophysics Data System (ADS)
Kemena, Tronje Peer; Matthes, Katja; Martin, Thomas; Wahl, Sebastian; Oschlies, Andreas
2017-09-01
Afforestation of the Sahara has been proposed as a climate engineering method to sequester a substantial amount of carbon dioxide, potentially effective to mitigate climate change. Earlier studies predicted changes in the atmospheric circulation system. These atmospheric feedbacks raise questions about the self-sustainability of such an intervention, but have not been investigated in detail. Here, we investigate changes in precipitation and circulation in response to Saharan large-scale afforestation and irrigation with NCAR's CESM-WACCM Earth system model. Our model results show a Saharan temperature reduction by 6 K and weak precipitation enhancement by 267 mm/year over the Sahara. Only 26% of the evapotranspirated water re-precipitates over the Saharan Desert, considerably large amounts are advected southward to the Sahel zone and enhance the West African monsoon (WAM). Different processes cause circulation and precipitation changes over North Africa. The increase in atmospheric moisture leads to radiative cooling above the Sahara and increased high-level cloud coverage as well as atmospheric warming above the Sahel zone. Both lead to a circulation anomaly with descending air over the Sahara and ascending air over the Sahel zone. Together with changes in the meridional temperature gradient, this results in a southward shift of the inner-tropical front. The strengthening of the Tropical easterly jet and the northward displacement of the African easterly jet is associated with a northward displacement and strengthening of the WAM precipitation. Our results suggest complex atmospheric circulation feedbacks, which reduce the precipitation potential over an afforested Sahara and enhance WAM precipitation.
Violante-Carvalho, Nelson
2005-12-01
Synthetic Aperture Radar (SAR) onboard satellites is the only source of directional wave spectra with continuous and global coverage. Millions of SAR Wave Mode (SWM) imagettes have been acquired since the launch in the early 1990's of the first European Remote Sensing Satellite ERS-1 and its successors ERS-2 and ENVISAT, which has opened up many possibilities specially for wave data assimilation purposes. The main aim of data assimilation is to improve the forecasting introducing available observations into the modeling procedures in order to minimize the differences between model estimates and measurements. However there are limitations in the retrieval of the directional spectrum from SAR images due to nonlinearities in the mapping mechanism. The Max-Planck Institut (MPI) scheme, the first proposed and most widely used algorithm to retrieve directional wave spectra from SAR images, is employed to compare significant wave heights retrieved from ERS-1 SAR against buoy measurements and against the WAM wave model. It is shown that for periods shorter than 12 seconds the WAM model performs better than the MPI, despite the fact that the model is used as first guess to the MPI method, that is the retrieval is deteriorating the first guess. For periods longer than 12 seconds, the part of the spectrum that is directly measured by SAR, the performance of the MPI scheme is at least as good as the WAM model.
Tracking the Evolution of Smartphone Sensing for Monitoring Human Movement
del Rosario, Michael B.; Redmond, Stephen J.; Lovell, Nigel H.
2015-01-01
Advances in mobile technology have led to the emergence of the “smartphone”, a new class of device with more advanced connectivity features that have quickly made it a constant presence in our lives. Smartphones are equipped with comparatively advanced computing capabilities, a global positioning system (GPS) receivers, and sensing capabilities (i.e., an inertial measurement unit (IMU) and more recently magnetometer and barometer) which can be found in wearable ambulatory monitors (WAMs). As a result, algorithms initially developed for WAMs that “count” steps (i.e., pedometers); gauge physical activity levels; indirectly estimate energy expenditure and monitor human movement can be utilised on the smartphone. These algorithms may enable clinicians to “close the loop” by prescribing timely interventions to improve or maintain wellbeing in populations who are at risk of falling or suffer from a chronic disease whose progression is linked to a reduction in movement and mobility. The ubiquitous nature of smartphone technology makes it the ideal platform from which human movement can be remotely monitored without the expense of purchasing, and inconvenience of using, a dedicated WAM. In this paper, an overview of the sensors that can be found in the smartphone are presented, followed by a summary of the developments in this field with an emphasis on the evolution of algorithms used to classify human movement. The limitations identified in the literature will be discussed, as well as suggestions about future research directions. PMID:26263998
Representation of the West African Monsoon System in the aerosol-climate model ECHAM6-HAM2
NASA Astrophysics Data System (ADS)
Stanelle, Tanja; Lohmann, Ulrike; Bey, Isabelle
2017-04-01
The West African Monsoon (WAM) is a major component of the global monsoon system. The temperature contrast between the Saharan land surface in the North and the sea surface temperature in the South dominates the WAM formation. The West African region receives most of its precipitation during the monsoon season between end of June and September. Therefore the existence of the monsoon is of major social and economic importance. We discuss the ability of the climate model ECHAM6 as well as the coupled aerosol climate model ECHAM6-HAM2 to simulate the major features of the WAM system. The north-south temperature gradient is reproduced by both model versions but all model versions fail in reproducing the precipitation amount south of 10° N. A special focus is on the representation of the nocturnal low level jet (NLLJ) and the corresponding enhancement of low level clouds (LLC) at the Guinea Coast, which are a crucial factor for the regional energy budget. Most global climate models have difficulties to represent these features. The pure climate model ECHAM6 is able to simulate the existence of the NLLJ and LLC, but the model does not represent the pronounced diurnal cycle. Overall, the representation of LLC is worse in the coupled model. We discuss the model behaviors on the basis of outputted temperature and humidity tendencies and try to identify potential processes responsible for the model deficiencies.
Atmospheric feedbacks in North Africa from an irrigated, afforested Sahara
NASA Astrophysics Data System (ADS)
Kemena, Tronje Peer; Matthes, Katja; Martin, Thomas; Wahl, Sebastian; Oschlies, Andreas
2018-06-01
Afforestation of the Sahara has been proposed as a climate engineering method to sequester a substantial amount of carbon dioxide, potentially effective to mitigate climate change. Earlier studies predicted changes in the atmospheric circulation system. These atmospheric feedbacks raise questions about the self-sustainability of such an intervention, but have not been investigated in detail. Here, we investigate changes in precipitation and circulation in response to Saharan large-scale afforestation and irrigation with NCAR's CESM-WACCM Earth system model. Our model results show a Saharan temperature reduction by 6 K and weak precipitation enhancement by 267 mm/year over the Sahara. Only 26% of the evapotranspirated water re-precipitates over the Saharan Desert, considerably large amounts are advected southward to the Sahel zone and enhance the West African monsoon (WAM). Different processes cause circulation and precipitation changes over North Africa. The increase in atmospheric moisture leads to radiative cooling above the Sahara and increased high-level cloud coverage as well as atmospheric warming above the Sahel zone. Both lead to a circulation anomaly with descending air over the Sahara and ascending air over the Sahel zone. Together with changes in the meridional temperature gradient, this results in a southward shift of the inner-tropical front. The strengthening of the Tropical easterly jet and the northward displacement of the African easterly jet is associated with a northward displacement and strengthening of the WAM precipitation. Our results suggest complex atmospheric circulation feedbacks, which reduce the precipitation potential over an afforested Sahara and enhance WAM precipitation.
Time of Flight Estimation in the Presence of Outliers: A Biosonar-Inspired Machine Learning Approach
2013-08-29
REPORT Time of Flight Estimation in the Presence of Outliers: A biosonar -inspired machine learning approach 14. ABSTRACT 16. SECURITY CLASSIFICATION OF...installations, biosonar , remote sensing, sonar resolution, sonar accuracy, sonar energy consumption Nathan Intrator, Leon N Cooper Brown University...Presence of Outliers: A biosonar -inspired machine learning approach Report Title ABSTRACT When the Signal-to-Noise Ratio (SNR) falls below a certain
Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm
Dura-Bernal, Salvador; Chadderdon, George L; Neymotin, Samuel A; Francis, Joseph T; Lytton, William W
2015-01-01
Brain-machine interfaces can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in brain machine interfaces (BMI) offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain’s use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and software platforms, so that the different components can communicate in real-time. We present the first steps in an ongoing effort to integrate a biomimetic spiking neuronal model of motor learning with a robotic arm. The biomimetic model (BMM) was used to drive a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model in real time to drive mirroring motion of a Barrett Technology WAM robotic arm through a user datagram protocol (UDP) interface. The robotic arm sent back information on its joint positions, which was then used by a visualization tool on the remote computer to display a realistic 3D virtual model of the moving robotic arm in real time. This work paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, to be used as a platform for developing biomimetic learning algorithms for controlling real-time devices. PMID:26709323
Multi-Sensor Fusion with Interacting Multiple Model Filter for Improved Aircraft Position Accuracy
Cho, Taehwan; Lee, Changho; Choi, Sangbang
2013-01-01
The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter. PMID:23535715
Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.
Cho, Taehwan; Lee, Changho; Choi, Sangbang
2013-03-27
The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.
Greening of the Sahara suppressed ENSO activity during the mid-Holocene
Pausata, Francesco S. R.; Zhang, Qiong; Muschitiello, Francesco; Lu, Zhengyao; Chafik, Léon; Niedermeyer, Eva M.; Stager, J. Curt; Cobb, Kim M.; Liu, Zhengyu
2017-01-01
The evolution of the El Niño-Southern Oscillation (ENSO) during the Holocene remains uncertain. In particular, a host of new paleoclimate records suggest that ENSO internal variability or other external forcings may have dwarfed the fairly modest ENSO response to precessional insolation changes simulated in climate models. Here, using fully coupled ocean-atmosphere model simulations, we show that accounting for a vegetated and less dusty Sahara during the mid-Holocene relative to preindustrial climate can reduce ENSO variability by 25%, more than twice the decrease obtained using orbital forcing alone. We identify changes in tropical Atlantic mean state and variability caused by the momentous strengthening of the West Africa Monsoon (WAM) as critical factors in amplifying ENSO’s response to insolation forcing through changes in the Walker circulation. Our results thus suggest that potential changes in the WAM due to anthropogenic warming may influence ENSO variability in the future as well. PMID:28685758
Greening of the Sahara suppressed ENSO activity during the mid-Holocene.
Pausata, Francesco S R; Zhang, Qiong; Muschitiello, Francesco; Lu, Zhengyao; Chafik, Léon; Niedermeyer, Eva M; Stager, J Curt; Cobb, Kim M; Liu, Zhengyu
2017-07-07
The evolution of the El Niño-Southern Oscillation (ENSO) during the Holocene remains uncertain. In particular, a host of new paleoclimate records suggest that ENSO internal variability or other external forcings may have dwarfed the fairly modest ENSO response to precessional insolation changes simulated in climate models. Here, using fully coupled ocean-atmosphere model simulations, we show that accounting for a vegetated and less dusty Sahara during the mid-Holocene relative to preindustrial climate can reduce ENSO variability by 25%, more than twice the decrease obtained using orbital forcing alone. We identify changes in tropical Atlantic mean state and variability caused by the momentous strengthening of the West Africa Monsoon (WAM) as critical factors in amplifying ENSO's response to insolation forcing through changes in the Walker circulation. Our results thus suggest that potential changes in the WAM due to anthropogenic warming may influence ENSO variability in the future as well.
WAMS measurements pre-processing for detecting low-frequency oscillations in power systems
NASA Astrophysics Data System (ADS)
Kovalenko, P. Y.
2017-07-01
Processing the data received from measurement systems implies the situation when one or more registered values stand apart from the sample collection. These values are referred to as “outliers”. The processing results may be influenced significantly by the presence of those in the data sample under consideration. In order to ensure the accuracy of low-frequency oscillations detection in power systems the corresponding algorithm has been developed for the outliers detection and elimination. The algorithm is based on the concept of the irregular component of measurement signal. This component comprises measurement errors and is assumed to be Gauss-distributed random. The median filtering is employed to detect the values lying outside the range of the normally distributed measurement error on the basis of a 3σ criterion. The algorithm has been validated involving simulated signals and WAMS data as well.
Assessing Wide Area Multilateration and ADS-B as alternative surveillance technology
DOT National Transportation Integrated Search
2005-09-26
The Helicopter In-Flight Tracking System (HITS) program evaluated both Wide Area Multilateration (WAM) and Automatic Dependent Surveillance Broadcast (ADS-B) as alternative surveillance technologies for both the terminal and en route domains in t...
On the Feasibility of Tracking the Monsoon History by Using Ancient Wind Direction Records
NASA Astrophysics Data System (ADS)
Gallego, D.; Ribera, P.; Peña-Ortiz, C.; Vega, I.; Gómez, F. D. P.; Ordoñez-Perez, P.; Garcia-Hererra, R.
2015-12-01
In this work, we use old wind direction records to reconstruct indices for the West African Monsoon (WAM) and the Indian Summer Monsoon (ISM). Since centuries ago, ships departing from the naval European powers circumnavigated Africa in their route to the Far East. Most of these ships took high-quality observations preserved in logbooks. We show that wind direction observations taken aboard ships can be used to track the seasonal wind reversal typical of monsoonal circulations. The persistence of the SW winds in the 20W-17W and 7N-13N region is highly correlated with the WAM strength and Sahel's precipitation. It has been possible to build a WAM index back to the 19th Century. Our results show that in the Sahel, the second half of the 19thCentury was significantly wetter than present day. The relation of the WAM with the ENSO cycle, and the Atlantic Multidecadal Oscillation was low and instable from the 1840s to the 1970s, when they abruptly suffered an unprecedented reinforcement which last up to the present day. The persistence of the SSW wind in the 60E-80E and 8N-12N area has been used to track the ISM onset since the 1880s. We found evidences of later than average onset dates during the 1900-1925 and 1970-1990 periods and earlier than average onset between 1940 and 1965. A significant relation between the ISM onset and the PDO restricted to shifts from negative to positive PDO phases has been found. The most significant contribution of our study is the fact that we have shown that it is possible to build consistent monsoon indices of instrumental character using solely direct observations of wind direction. Our indices have been generated by using data currently available in the ICOADS 2.5 database, but a large amount of wind observations for periods previous to the 20thcentury still remain not explored in thousands of logbooks preserved in British archives. The interest of unveil these data to track the monsoons for more than 200 -or even 300 years- it is difficult to exaggerate and will largely justify the time and economic costs of its digitation. This research was funded by the Spanish Ministerio de Economía y Competitividad through the project INCITE (CGL2013-44530-P).
NASA Astrophysics Data System (ADS)
Tolosana-Delgado, R.; Soret, A.; Jorba, O.; Baldasano, J. M.; Sánchez-Arcilla, A.
2012-04-01
Meteorological models, like WRF, usually describe the earth surface characteristics by tables that are function of land-use. The roughness length (z0) is an example of such approach. However, over sea z0 is modeled by the Charnock (1955) relation, linking the surface friction velocity u*2 with the roughness length z0 of turbulent air flow, z0 = α-u2* g The Charnock coefficient α may be considered a measure of roughness. For the sea surface, WRF considers a constant roughness α = 0.0185. However, there is evidence that sea surface roughness should depend on wave energy (Donelan, 1982). Spectral wave models like WAM, model the evolution and propagation of wave energy as a function of wind, and include a richer sea surface roughness description. Coupling WRF and WAM is thus a common way to improve the sea surface roughness description of WRF. WAM is a third generation wave model, solving the equation of advection of wave energy subject to input/output terms of: wind growth, energy dissipation and resonant non-linear wave-wave interactions. Third generation models work on the spectral domain. WAM considers the Charnock coefficient α a complex yet known function of the total wind input term, which depends on the wind velocity and on the Charnock coefficient again. This is solved iteratively (Janssen et al., 1990). Coupling of meteorological and wave models through a common Charnock coefficient is operationally done in medium-range met forecasting systems (e.g., at ECMWF) though the impact of coupling for smaller domains is not yet clearly assessed (Warner et al, 2010). It is unclear to which extent the additional effort of coupling improves the local wind and wave fields, in comparison to the effects of other factors, like e.g. a better bathymetry and relief resolution, or a better circulation information which might have its influence on local-scale meteorological processes (local wind jets, local convection, daily marine wind regimes, etc.). This work, within the scope of the 7th EU FP Project FIELD_AC, assesses the impact of coupling WAM and WRF on wind and wave forecasts on the Balearic Sea, and compares it with other possible improvements, like using available high-resolution circulation information from MyOcean GMES core services, or assimilating altimeter data on the Western Mediterranean. This is done in an ordered fashion following statistical design rules, which allows to extract main effects of each of the factors considered (coupling, better circulation information, data assimilation following Lionello et al., 1992) as well as two-factor interactions. Moreover, the statistical significance of these improvements can be tested in the future, though this requires maximum likelihood ratio tests with correlated data. Charnock, H. (1955) Wind stress on a water surface. Quart.J. Row. Met. Soc. 81: 639-640 Donelan, M. (1982) The dependence of aerodynamic drag coefficient on wave parameters. Proc. 1st Int. Conf. on Meteorology and Air-Sea Interactions of teh Coastal Zone. The Hague (Netherlands). AMS. 381-387 Janssen, P.A.E.M., Doyle, J., Bidlot, J., Hansen, B., Isaksen, L. and Viterbo, P. (1990) The impact of oean waves on the atmosphere. Seminars of the ECMWF. Lionello, P., Günther, H., and Janssen P.A.E.M. (1992) Assimilation of altimeter data in a global third-generation wave model. Journal of Geophysical Research 97 (C9): 453-474. Warner, J., Armstrong, B., He, R. and Zambon, J.B. (2010) Development of a Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System. 
Ocean Modelling 35: 230-244.
The sixth generation robot in space
NASA Technical Reports Server (NTRS)
Butcher, A.; Das, A.; Reddy, Y. V.; Singh, H.
1990-01-01
The knowledge based simulator developed in the artificial intelligence laboratory has become a working test bed for experimenting with intelligent reasoning architectures. With this simulator, recently, small experiments have been done with an aim to simulate robot behavior to avoid colliding paths. An automatic extension of such experiments to intelligently planning robots in space demands advanced reasoning architectures. One such architecture for general purpose problem solving is explored. The robot, seen as a knowledge base machine, goes via predesigned abstraction mechanism for problem understanding and response generation. The three phases in one such abstraction scheme are: abstraction for representation, abstraction for evaluation, and abstraction for resolution. Such abstractions require multimodality. This multimodality requires the use of intensional variables to deal with beliefs in the system. Abstraction mechanisms help in synthesizing possible propagating lattices for such beliefs. The machine controller enters into a sixth generation paradigm.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-07
....P. G WAM Acquisition GP, Inc. 27-NOV-09 20100154 G ACS Actividades de Construccion y Servicios, S.A. G William R. Pulice. G Pulice Construction, Inc. For Further Information Contact: Sandra M. Peay...
Spherical subjective refraction with a novel 3D virtual reality based system.
Pujol, Jaume; Ondategui-Parra, Juan Carlos; Badiella, Llorenç; Otero, Carles; Vilaseca, Meritxell; Aldaba, Mikel
To conduct a clinical validation of a virtual reality-based experimental system that is able to assess the spherical subjective refraction simplifying the methodology of ocular refraction. For the agreement assessment, spherical refraction measurements were obtained from 104 eyes of 52 subjects using three different methods: subjectively with the experimental prototype (Subj.E) and the classical subjective refraction (Subj.C); and objectively with the WAM-5500 autorefractor (WAM). To evaluate precision (intra- and inter-observer variability) of each refractive tool independently, 26 eyes were measured in four occasions. With regard to agreement, the mean difference (±SD) for the spherical equivalent (M) between the new experimental subjective method (Subj.E) and the classical subjective refraction (Subj.C) was -0.034D (±0.454D). The corresponding 95% Limits of Agreement (LoA) were (-0.856D, 0.924D). In relation to precision, intra-observer mean difference for the M component was 0.034±0.195D for the Subj.C, 0.015±0.177D for the WAM and 0.072±0.197D for the Subj.E. Inter-observer variability showed worse precision values, although still clinically valid (below 0.25D) in all instruments. The spherical equivalent obtained with the new experimental system was precise and in good agreement with the classical subjective routine. The algorithm implemented in this new system and its optical configuration has been shown to be a first valid step for spherical error correction in a semiautomated way. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
Multimodel comparison of the ionosphere variability during the 2009 sudden stratosphere warming
NASA Astrophysics Data System (ADS)
Pedatella, N. M.; Fang, T.-W.; Jin, H.; Sassi, F.; Schmidt, H.; Chau, J. L.; Siddiqui, T. A.; Goncharenko, L.
2016-07-01
A comparison of different model simulations of the ionosphere variability during the 2009 sudden stratosphere warming (SSW) is presented. The focus is on the equatorial and low-latitude ionosphere simulated by the Ground-to-topside model of the Atmosphere and Ionosphere for Aeronomy (GAIA), Whole Atmosphere Model plus Global Ionosphere Plasmasphere (WAM+GIP), and Whole Atmosphere Community Climate Model eXtended version plus Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model (WACCMX+TIMEGCM). The simulations are compared with observations of the equatorial vertical plasma drift in the American and Indian longitude sectors, zonal mean F region peak density (NmF2) from the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) satellites, and ground-based Global Positioning System (GPS) total electron content (TEC) at 75°W. The model simulations all reproduce the observed morning enhancement and afternoon decrease in the vertical plasma drift, as well as the progression of the anomalies toward later local times over the course of several days. However, notable discrepancies among the simulations are seen in terms of the magnitude of the drift perturbations, and rate of the local time shift. Comparison of the electron densities further reveals that although many of the broad features of the ionosphere variability are captured by the simulations, there are significant differences among the different model simulations, as well as between the simulations and observations. Additional simulations are performed where the neutral atmospheres from four different whole atmosphere models (GAIA, HAMMONIA (Hamburg Model of the Neutral and Ionized Atmosphere), WAM, and WACCMX) provide the lower atmospheric forcing in the TIME-GCM. These simulations demonstrate that different neutral atmospheres, in particular, differences in the solar migrating semidiurnal tide, are partly responsible for the differences in the simulated ionosphere variability in GAIA, WAM+GIP, and WACCMX+TIMEGCM.
A comprehensive review of arsenic levels in the semiconductor manufacturing industry.
Park, Donguk; Yang, Haengsun; Jeong, Jeeyeon; Ha, Kwonchul; Choi, Sangjun; Kim, Chinyon; Yoon, Chungsik; Park, Dooyong; Paek, Domyung
2010-11-01
This paper presents a summary of arsenic level statistics from air and wipe samples taken from studies conducted in fabrication operations. The main objectives of this study were not only to describe arsenic measurement data but also, through a literature review, to categorize fabrication workers in accordance with observed arsenic levels. All airborne arsenic measurements reported were included in the summary statistics for analysis of the measurement data. The arithmetic mean was estimated assuming a lognormal distribution from the geometric mean and the geometric standard deviation or the range. In addition, weighted arithmetic means (WAMs) were calculated based on the number of measurements reported for each mean. Analysis of variance (ANOVA) was employed to compare arsenic levels classified according to several categories such as the year, sampling type, location sampled, operation type, and cleaning technique. Nine papers were found reporting airborne arsenic measurement data from maintenance workers or maintenance areas in semiconductor chip-making plants. A total of 40 statistical summaries from seven articles were identified that represented a total of 423 airborne arsenic measurements. Arsenic exposure levels taken during normal operating activities in implantation operations (WAM = 1.6 μg m⁻³, no. of samples = 77, no. of statistical summaries = 2) were found to be lower than exposure levels of engineers who were involved in maintenance works (7.7 μg m⁻³, no. of samples = 181, no. of statistical summaries = 19). The highest level (WAM = 218.6 μg m⁻³) was associated with various maintenance works performed inside an ion implantation chamber. ANOVA revealed no significant differences in the WAM arsenic levels among the categorizations based on operation and sampling characteristics. Arsenic levels (56.4 μg m⁻³) recorded during maintenance works performed in dry conditions were found to be much higher than those from maintenance works in wet conditions (0.6 μg m⁻³). Arsenic levels from wipe samples in process areas after maintenance activities ranged from non-detectable to 146 μg cm⁻², indicating the potential for dispersion into the air and hence inhalation. We conclude that workers who are regularly or occasionally involved in maintenance work have higher potential for occupational exposure than other employees who are in charge of routine production work. In addition, fabrication workers can be classified into two groups based on the reviewed arsenic exposure levels: operators with potential for low levels of exposure and maintenance engineers with high levels of exposure. These classifications could be used as a basis for a qualitative ordinal ranking of exposure in an epidemiological study.
Research on computer systems benchmarking
NASA Technical Reports Server (NTRS)
Smith, Alan Jay (Principal Investigator)
1996-01-01
This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
ERIC Educational Resources Information Center
Byrne, Jerry R.
1975-01-01
Investigated the relative merits of searching on titles, subject headings, abstracts, free-language terms, and combinations of these elements. The combination of titles and abstracts came the closest to 100 percent retrieval. (Author/PF)
Abstracts of AF Materials Laboratory Reports
1975-09-01
NO: TITLE: AUTHOR(S): CONTRACT NO; CONTRACTOR: AFML-TR-73-307 200,397 IMPROVED AUTOMATED TAPE LAYING MACHINE M. Poullos, W. J. Murray, D.L...AUTOMATED IMPROVED AUTOMATED TAPE LAYING MACHINE AUTOMATION AUTOMATION OF COATING PROCESSES FOR GAS TURBINE DLADcS AND VANES 203222/111 203072...IMP90VE0 TAPE LAYING MACHINE IMPP)VED AUTOMATED TAPE LAYING MACHINE A STUDY O^ THE STRESS-STRAIN TEHAVIOR OF GRAPHITE
Preparing for the Downsizing and Closure of Letterman Army Medical Center: A Case Study
1991-06-17
and closure of Lieutenant Colonel F. William Brown believed in the value of this project, encouraged , and guided me during conceptualization , design...issues dirocled Sn the RW docnent repository were coded within this framwork . The muiaion category was coded 1 if primary or secmonay care waM affected
ERIC Educational Resources Information Center
Tucker, Richard
2013-01-01
This paper considers the relationship between architecture and construction management students' overall academic abilities (as measured by Weighted Average Marks [WAMs]), their peer ratings for contributions to team design assignments (as measured by an online Self-and-Peer-Assessment [SAPA] tool), and their specific abilities as building…
High Speed Computing, LANs, and WAMs
NASA Technical Reports Server (NTRS)
Bergman, Larry A.; Monacos, Steve
1994-01-01
Optical fiber networks may one day offer potential capacities exceeding 10 terabits/sec. This paper describes present gigabit network techniques for distributed computing as illustrated by the CASA gigabit testbed, and then explores future all-optic network architectures that offer increased capacity, more optimized level of service for a given application, high fault tolerance, and dynamic reconfigurability.
Wind Tunnel to Atmospheric Mapping for Static Aeroelastic Scaling
NASA Technical Reports Server (NTRS)
Heeg, Jennifer; Spain, Charles V.; Rivera, J. A.
2004-01-01
Wind tunnel to Atmospheric Mapping (WAM) is a methodology for scaling and testing a static aeroelastic wind tunnel model. The WAM procedure employs scaling laws to define a wind tunnel model and wind tunnel test points such that the static aeroelastic flight test data and wind tunnel data will be correlated throughout the test envelopes. This methodology extends the notion that a single test condition - combination of Mach number and dynamic pressure - can be matched by wind tunnel data. The primary requirements for affecting this extension are matching flight Mach numbers, maintaining a constant dynamic pressure scale factor and setting the dynamic pressure scale factor in accordance with the stiffness scale factor. The scaling is enabled by capabilities of the NASA Langley Transonic Dynamics Tunnel (TDT) and by relaxation of scaling requirements present in the dynamic problem that are not critical to the static aeroelastic problem. The methodology is exercised in two example scaling problems: an arbitrarily scaled wing and a practical application to the scaling of the Active Aeroelastic Wing flight vehicle for testing in the TDT.
Power System Observation by using Synchronized Phasor Measurements as a Smart Device
NASA Astrophysics Data System (ADS)
Mitani, Yasunori
Phasor Measurement Unit (PMU) is an apparatus which detects the absolute value of phase angle in sinusoidal signal. When more than two units are located distantly apart from each other, and they are synchronized with GPS signal which tells us the information on exact time, it becomes ready to get phase differences between two distant places. Thus, PMU with GPS receiver is applied to the monitoring of AC power system dynamics and usually installed at substations of transmission lines. The states of power network are uniquely determined by the active and reactive power and the magnitude and phase angle of voltage in each node. Among these values the phase angle had not been easily obtained until the scheme of time synchronism with GPS appeared. In this report, the history of GPS and PMU, and the current status of the applications in power systems in the world are presented. In Japan we are developing a power system monitoring system with PMUs installed at University's campuses with 100V outlets, which is called Campus WAMS. This report also introduces some results from the Campus WAMS briefly.
Empowerment of women for health promotion: a meta-analysis.
Kar, S B; Pascual, C A; Chickering, K L
1999-12-01
The objective of this paper is to identify conditions, factors and methods, which empower women and mothers (WAM) for social action and health promotion movements. WAM are the primary caregivers in almost all cultures; they have demonstrated bold leadership under extreme adversity. Consequently, when empowered and involved, WAM can be effective partners in health promotion programs. The methodology includes a meta-analysis of 40 exemplary case studies from across the world, which meet predetermined criteria, to draw implications for social action and health promotion. Cases were selected from industrialized and less-industrialized nations and from four problem domains affecting quality of life and health: (1) human rights, (2) women's equal rights, (3) economic enhancement and (4) health promotion. Content analysis extracted data from all cases on six dimensions: (1) problem, (2) impetus/leadership, (3) macro-environment, (4) methods used, (5) partners/opponents and (6) impact. Analysis identified seven methods frequently used to EMPOWER (acronym): empowerment education and training, media use and advocacy, public education and participation, organizing associations and unions, work training and micro-enterprise, enabling services and support, and rights protection and promotion. Cochran's Q test confirmed significant differences in the frequencies of methods used. The seven EMPOWER methods were used in this order: enabling services, rights protection/promotion, public education, media use/advocacy, and organizing associations/unions, empowerment education, and work training and micro-enterprise. Media and public education were more frequently used by industrialized than non-industrialized societies (X2 tests). While frequencies of methods used varied in all other comparisons, these differences were not statistically significant, suggesting the importance of these methods across problem domains and levels of industrialization. The paper integrates key findings into an empowerment model consisting of five stages: motivation for action, empowerment support, initial individual action, empowerment program, and institutionalization and replication. Implications for policy and health promotion programs are discussed.
Observed Oceanic and Terrestrial Drivers of North African Climate
NASA Astrophysics Data System (ADS)
Yu, Y.; Notaro, M.; Wang, F.; Mao, J.; Shi, X.; Wei, Y.
2015-12-01
Hydrologic variability can pose a serious threat to the poverty-stricken regions of North Africa. Yet, the current understanding of oceanic versus terrestrial drivers of North African droughts/pluvials is largely model-based, with vast disagreement among models. In order to identify the observed drivers of North African climate and develop a benchmark for model evaluations, the multivariate Generalized Equilibrium Feedback Assessment (GEFA) is applied to observations, remotely sensed data, and reanalysis products. The identified primary oceanic drivers of North African rainfall variability are the Atlantic, tropical Indian, and tropical Pacific Oceans and Mediterranean Sea. During the summer monsoon, positive tropical eastern Atlantic sea-surface temperature (SST) anomalies are associated with a southward shift of the Inter-Tropical Convergence Zone, enhanced ocean evaporation, and greater precipitable water across coastal West Africa, leading to increased West African monsoon (WAM) rainfall and decreased Sahel rainfall. During the short rains, positive SST anomalies in the western tropical Indian Ocean and negative anomalies in the eastern tropical Indian Ocean support greater easterly oceanic flow, evaporation over the western ocean, and moisture advection to East Africa, thereby enhancing rainfall. The sign, magnitude, and timing of observed vegetation forcing on rainfall vary across North Africa. The positive feedback of leaf area index (LAI) on rainfall is greatest during DJF for the Horn of Africa, while it peaks in autumn and is weakest during the summer monsoon for the Sahel. Across the WAM region, a positive LAI anomaly supports an earlier monsoon onset, increased rainfall during the pre-monsoon, and decreased rainfall during the wet season. Through unique mechanisms, positive LAI anomalies favor enhanced transpiration, precipitable water, and rainfall across the Sahel and Horn of Africa, and increased roughness, ascent, and rainfall across the WAM region. The current study represents the first attempt to separate the observed roles of oceanic and vegetation feedbacks across North Africa, and provides observational benchmark for model evaluation.
NASA Astrophysics Data System (ADS)
Carrasco, Ana; Semedo, Alvaro; Behrens, Arno; Weisse, Ralf; Breivik, Øyvind; Saetra, Øyvind; Håkon Christensen, Kai
2016-04-01
The global wave-induced current (the Stokes Drift - SD) is an important feature of the ocean surface, with mean values close to 10 cm/s along the extra-tropical storm tracks in both hemispheres. Besides the horizontal displacement of large volumes of water the SD also plays an important role in the ocean mix-layer turbulence structure, particularly in stormy or high wind speed areas. The role of the wave-induced currents in the ocean mix-layer and in the sea surface temperature (SST) is currently a hot topic of air-sea interaction research, from forecast to climate ranges. The SD is mostly driven by wind sea waves and highly sensitive to changes in the overlaying wind speed and direction. The impact of climate change in the global wave-induced current climate will be presented. The wave model WAM has been forced by the global climate model (GCM) ECHAM5 wind speed (at 10 m height) and ice, for present-day and potential future climate conditions towards the end of the end of the twenty-first century, represented by the Intergovernmental Panel for Climate Change (IPCC) CMIP3 (Coupled Model Inter-comparison Project phase 3) A1B greenhouse gas emission scenario (usually referred to as a ''medium-high emissions'' scenario). Several wave parameters were stored as output in the WAM model simulations, including the wave spectra. The 6 hourly and 0.5°×0.5°, temporal and space resolution, wave spectra were used to compute the SD global climate of two 32-yr periods, representative of the end of the twentieth (1959-1990) and twenty-first (1969-2100) centuries. Comparisons of the present climate run with the ECMWF (European Centre for Medium-Range Weather Forecasts) ERA-40 reanalysis are used to assess the capability of the WAM-ECHAM5 runs to produce realistic SD results. This study is part of the WRCP-JCOMM COWCLIP (Coordinated Ocean Wave Climate Project) effort.
Model Simulations of Waves in Hurricane Juan
NASA Astrophysics Data System (ADS)
Perrie, W.; Toulany, B.; Padilla-Hernandez, R.; Hu, Y.; Smith, P.; Zhang, W.; Zou, Q.; Ren, X.
2004-05-01
Hurricane Juan made landfall at 0300 UTC near Halifax Nova Scotia. This was a category 2 hurricane with winds of 44 m/s, the largest storm to pass over these coastal areas in several decades. Associated high ocean waves were experienced in coastal waters, from Peggy's Cove to Sheet Harbour, growing to epic proportions on the Scotian Shelf, and exceeding the 100-year return wave based on the present climatology. As part of the GoMOOS program (Gulf of Maine Ocean Observing System, www.gomoos.org), winds from the USA Navy COAMPS (Coupled Ocean Atmosphere Model Prediction System) were used to evaluate and compare three widely-used third generation numerical wave models, SWAN, WAM and WaveWatch-III (hereafter WW3) for accuracy, with in situ measurements. Model comparisons consist of a set of composite model systems, respectively nesting WAM, WW3 and SWAN in WAM and WW3. We report results from the intermediate-resolution grid for Hurricane Juan. Wave measurements were made using four operational deep-water buoys (C44258, C44142, C44137, 44005), by a conventional directional wave rider (DWR) moored offshore from Lunenburg Bay, and also by two acoustic Doppler current profiler (ADCP) located (1) near an oil rig on Sable Island Bank, in relatively shallow water, and (2) near the outer boundary of Lunenburg Bay. We discuss the reliability of DWR wave data compared to ADCP wave data. We show that all models provide reliable hindcasts for significant wave height (Hs) and for peak period (Tp) for Juan, although a clear under-estimation of Hs at the peak of the storm is evident, compared to observations. A feature in the COAMPS storm simulation is that the storm track appears to be slightly to the east of that of Quikscat scatterometer data. Comparisons between models and 2-dimensional wave spectra are presented. Preliminary results suggest that the recently released upgrade to the WW3 model shows slightly enhanced skill compared to the other models.
NASA Astrophysics Data System (ADS)
Wang, F.; Notaro, M.; Yu, Y.; Mao, J.; Shi, X.; Wei, Y.
2016-12-01
North (N.) African rainfall is characterized by dramatic interannual to decadal variability with serious socio-economic ramifications. The Sahel and West African Monsoon (WAM) region experienced a dramatic shift to persistent drought by the late 1960s, while the Horn of Africa (HOA) underwent drying since the 1990s. Large disagreementregarding the dominant oceanic drivers of N. African hydrologic variability exists among modeling studies, leading to notable spread in Sahel summer rainfall projections for this century among Coupled Model Intercomparison Project models. In order to gain a deeper understanding of the oceanic drivers of N. African rainfall and establish a benchmark for model evaluation, a statistical method, the multivariate Generalized Equilibrium Feedback Assessment, is validated and applied to observations and a control run from the Community Earth System Model (CESM). This study represents the first time that the dominant oceanic drivers of N. African rainfall were evaluated and systematically compared between observations and model simulations. CESM and the observations consistently agree that tropical oceanic modes are the dominant controls of N. African rainfall. During the monsoon season, CESM and observations agree that an anomalously warm eastern tropical Pacific shifts the Walker Circulation eastward, with its descending branch supporting Sahel drying. CESM and the observations concur that a warmer tropical eastern Atlantic favors a southward-shifted Intertropical Convergence Zone, which intensifies WAM monsoonal rainfall. An observed reduction in Sahel rainfall accompanies this enhanced WAM rainfall, yet is confined to the Atlantic in CESM. During the short rains, both observations and CESM indicate that a positive phase of tropical Indian Ocean dipole (IOD) mode [anomalously warm (cold) in western (eastern) Indian] enhances HOA rainfall. The observed IOD impacts are limited to the short rains, while the simulated impacts are year-round.
Hard X-ray spectral investigations of gamma-ray bursts 120521C and 130606A at high-redshift z ˜ 6
NASA Astrophysics Data System (ADS)
Yasuda, T.; Urata, Y.; Enomoto, J.; Tashiro, M. S.
2017-04-01
This study presents a temporal and spectral analysis of the prompt emission of two high-redshift gamma-ray bursts (GRBs), 120521C at z ˜ 6 and 130606A at z ˜ 5.91, using data obtained from the Swift-XRT/BAT and the Suzaku-WAM simultaneously. Based on follow-up XRT observations, the longest durations of the prompt emissions were approximately 80 s (120521C) and 360 s (130606A) in the rest-frames of the two GRBs. These objects are thus categorized as long-duration GRBs; however, the durations are short compared with the predicted duration of GRBs originating from first-generation stars. Because of the wide bandpass of the instruments, covering the ranges 15 keV-5 MeV (BAT-WAM) and 0.3 keV-5.0 MeV (XRT-BAT-WAM), we could successfully determine the νFν peak energies E_peak^src in the rest-frame and isotropic-equivalent radiated energies Eiso; E^src_peak = 682^{+845}_{-207} keV and E_iso = (8. 25^{+2.24}_{-1.96}) × 10^{52} erg for 120521C, and E^src_peak = 1209^{+553}_{-304} keV and E_iso = (2.82^{+0.17}_{-0.71}) × 10^{53} erg for 130606A. These obtained characteristic parameters are in accordance with the well-known relationship between E_peak^src and Eiso (Amati relationship). In addition, we examined the relationships between E_peak^src and the 1-s peak luminosity, Lp, and between E_peak^src and the geometrical corrected radiated energy, Eγ, and confirmed the E_peak^src-Lp (Yonetoku) and E_peak^src-Eγ (Ghirlanda) relationships. The results imply that these high-redshift GRBs at z ˜ 6, which are expected to have radiated during the reionization epoch, have properties similar to those of low-redshift GRBs regarding X-ray prompt emission.
Imaging the West Bohemia Seismic Zone
NASA Astrophysics Data System (ADS)
Alexandrakis, C.; Calo, M.; Bouchaala, F.; Vavrycuk, V.
2013-12-01
West Bohemia is located at the suture of three mantle lithosphere plates, the Eger Rift, the Cheb basin and is the site of Quaternary volcanism. This complex tectonic setting results in localized, periodic earthquake swarms throughout the region and many CO2 springs and gas exhalation sites. Nový Kostel, the most active swarm area, experiences frequent swarms of several hundreds to thousands of earthquakes over a period of weeks to several months. It is a unique study area, since the swarm region is surrounded by the West Bohemia Seismic Network (WEBNET), providing observations in all directions. Larger swarms, such as those in 1985/1986, 1997, 2000, 2007 and 2008, have been studied in terms of source mechanisms and swarm characteristics (Fischer and Michálek, 2003; Fischer et al., 2010; Vavryčuk, 2011). The seismicity is always located in the same area and depth range (6-15 km), however the active fault planes differ. This indicates changes to the local stress field, and may relate to the complicated tectonic situation and/or migrating fluids. Many studies have examined individual swarms and compared the earthquake episodes, however the mechanisms behind the phenomenon are still not understood. This has motivated many studies, including recent proposals for a reflection seismic profile directly over the swarm area and multidisciplinary monitoring through ICDP. In this study, we image the velocity structure within and around the swarm area using double-difference tomography (Zhang and Thurber, 2003) and Weighted Average Model (WAM) post-processing analysis (Calò et al., 2011). The WAM analysis averages together velocity models calculated with a variety of reasonable starting parameters. The velocities are weighted by the raypath proximity and density at an inversion node. This reduces starting model bias and artifacts, and yields a weighted standard deviation at each grid point. Earthquake locations and WEBNET P and S arrival times for the two most recent large swarms, 2008 and 2011, are used in this study. P-wave, S-wave and P-to-S ratio WAMs (P-to-S ratios are calculated directly from the P and S WAMs) reveal interesting features which correlate with the shallowest earthquakes. These features are interpreted in relation to the role of fluids in Nový Kostel. References: Calò, M., C. Dorbath, F. Cornet, & N. Cuenot, 2011. Geophys. J. Int., doi: 10.1111/j.1365-246X.2011.05108.x. Fischer, T., J. Horálek, J. Michálek & A. Boušková, 2010. J. Seismol., 14: 665-682. Fischer, T. & J. Michálek, 2008. Stud. Geophys. Geod., 52: 493-511. Vavryčuk, V., 2011. Earth Planet. Sci. Lett., 305: 290-296. Zhang, H. & C.H. Thurber, 2003. Bull. Seism. Soc. Am., 93: 1175-1189.
ERIC Educational Resources Information Center
Berryman-Fink, Cynthia; Wheeless, Virginia Eman
A study examined the relationship among attitudes toward women in general, attitudes toward women as managers, and perceptions of the communication competencies of women managers. Subjects, 178 employees from various types of organizations, completed the Positive Regard Scale (PRS), the Women as Managers Scale (WAMS), and the Communication…
Study on the Feasibility of Mass Area Ordnance Decontamination
1974-08-15
and Armor Plate Protection 11-50 11-20 Bucket Modifications 11-53 Appendices Figure Page III-1 The Relationship Between Rippability and Seismic...FK S. .. . ... L, i ! ,, ,,i , , W~AM0RPHIC ROK MINERALS 9 ORESm n, imI I’ 1.. I 1. 1 .t’ I.* [ FIGURE III-l: The Relationship Between Rippability
Machine characterization based on an abstract high-level language machine
NASA Technical Reports Server (NTRS)
Saavedra-Barrera, Rafael H.; Smith, Alan Jay; Miya, Eugene
1989-01-01
Measurements are presented for a large number of machines ranging from small workstations to supercomputers. The authors combine these measurements into groups of parameters which relate to specific aspects of the machine implementation, and use these groups to provide overall machine characterizations. The authors also define the concept of pershapes, which represent the level of performance of a machine for different types of computation. A metric based on pershapes is introduced that provides a quantitative way of measuring how similar two machines are in terms of their performance distributions. The metric is related to the extent to which pairs of machines have varying relative performance levels depending on which benchmark is used.
A Technique for Machine-Aided Indexing
ERIC Educational Resources Information Center
Klingbiel, Paul H.
1973-01-01
The technique for machine-aided indexing developed at the Defense Documentation Center (DDC) is illustrated on a randomly chosen abstract. Additional text is provided in coded form so that the reader can more fully explore this technique. (2 references) (Author)
Automated Verification of Specifications with Typestates and Access Permissions
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.; Catano, Nestor
2011-01-01
We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).
Wave Energy Potential in the Eastern Mediterranean Levantine Basin. An Integrated 10-year Study
2014-01-01
SUBTITLE Wave energy potential in the Eastern Mediterranean Levantine Basin. An integrated 10-year study 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c... Cardone CV, Ewing JA, et al. The WAM model e a third generation ocean wave prediction model. J Phys Oceanogr 1988;18(12):1775e810. [70] Varinou M
NASA Astrophysics Data System (ADS)
Sugita, Satoshi; Yamaoka, Kazutaka; Ohno, Masanori; Tashiro, Makoto S.; Nakagawa, Yujin E.; Urata, Yuji; Pal'Shin, Valentin; Golenetskii, Sergei; Sakamoto, Takanori; Cummings, Jay; Krimm, Hans; Stamatikos, Michael; Parsons, Ann; Barthelmy, Scott; Gehrels, Neil
2009-06-01
We present the results of the high-redshift GRB 050904 at z = 6.295 from joint spectral analysis among Swift-BAT, Konus-Wind, and Suzaku-WAM, covering a wide energy range of 15--5000keV. The νFu spectrum peak energy, Epeak, was measured at 314+173-89 keV, corresponding to 2291+1263-634 keV in the source frame, and the isotropic equivalent radiated energy, Eiso, was estimated to be 1.04+0.25-0.17 × 1054erg. Both are among the highest values that have ever been measured. GRBs with such a high Eiso (˜1054erg) might be associated with prompt optical emission. The derived spectral and energetic parameters are consistent with the correlation between the rest-frame Ep,i and the Eiso (Amati relation), but not with the correlation between the intrinsic peak energy Ep,i and the collimation-corrected energy Eγ (Ghirlanda relation), unless the density of the circumburst environment of this burst is much larger than the nominal value, as suggested by other wavelength observations. We also discuss the possibility that this burst is an outlier in the correlation between Ep,i and the peak luminosity Lp (Yonetoku relation).
NASA Astrophysics Data System (ADS)
Alari, Victor; Staneva, Joanna; Breivik, Øyvind; Bidlot, Jean-Raymond; Mogensen, Kristian; Janssen, Peter
2016-04-01
The effects of wind waves on the Baltic Sea water temperature has been studied by coupling the hydrodynamical model NEMO with the wave model WAM. The wave forcing terms that have been taken into consideration are: Stokes-Coriolis force, seastate dependent energy flux and sea-state dependent momentum flux. The combined role of these processes as well as their individual contributions on simulated temperature is analysed. The results indicate a pronounced effect of waves on surface temperature, on the distribution of vertical temperature and on upwellinǵs. In northern parts of the Baltic Sea a warming of the surface layer occurs in the wave included simulations. This in turn reduces the cold bias between simulated and measured data. The warming is primarily caused by sea-state dependent energy flux. Wave induced cooling is mostly observed in near coastal areas and is mainly due to Stokes-Coriolis forcing. The latter triggers effect of intensifying upwellings near the coasts, depending on the direction of the wind. The effect of sea-state dependent momentum flux is predominantly to warm the surface layer. During the summer the wave induced water temperature changes were up to 1 °C.
1993-01-01
engineering has led to many AI systems that are now regularly used in industry and elsewhere. The ultimate test of machine learning , the subfield of Al that...applications of machine learning suggest the time was ripe for a meeting on this topic. For this reason, Pat Langley (Siemens Corporate Research) and Yves...Kodratoff (Universite de Paris, Sud) organized an invited workshop on applications of machine learning . The goal of the gathering was to familiarize
FFATA: Mechine Augmented Composites for Structures with High Damping with High Stiffness
2012-12-05
applied , the inner channel will be the same width. The best LHG machines have the Z...Instron5567 screw controlled machine is suited to experiments up to 0.2Hz and a bit higher if operators are careful. These experiments applied ...REPORT FFATA: MACHINE AUGMENTED COMPOSITES FOR STRUCTURES WITH HIGH DAMPING WITH HIGH STIFFNESS 14. ABSTRACT 16. SECURITY CLASSIFICATION OF:
Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines
1989-09-01
Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines Srinivas Devadas and Kurt Keutzer F ( Abstract In this...Projects Agency under contract number N00014-87-K-0825. Author Information Devadas : Department of Electrical Engineering and Computer Science, Room 36...MA 02139; (617) 253-0292. 0 * Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines Siivas Devadas
Sensitivity of The High-resolution Wam Model With Respect To Time Step
NASA Astrophysics Data System (ADS)
Kasemets, K.; Soomere, T.
The northern part of the Baltic Proper and its subbasins (Bothnian Sea, the Gulf of Finland, Moonsund) serve as a challenge for wave modellers. In difference from the southern and the eastern parts of the Baltic Sea, their coasts are highly irregular and contain many peculiarities with the characteristic horizontal scale of the order of a few kilometres. For example, the northern coast of the Gulf of Finland is extremely ragged and contains a huge number of small islands. Its southern coast is more or less regular but has up to 50m high cliff that is frequently covered by high forests. The area also contains numerous banks that have water depth a couple of meters and that may essentially modify wave properties near the banks owing to topographical effects. This feature suggests that a high-resolution wave model should be applied for the region in question, with a horizontal resolution of an order of 1 km or even less. According to the Courant-Friedrich-Lewy criterion, the integration time step for such models must be of the order of a few tens of seconds. A high-resolution WAM model turns out to be fairly sensitive with respect to the particular choice of the time step. In our experiments, a medium-resolution model for the whole Baltic Sea was used, with the horizontal resolution 3 miles (3' along latitudes and 6' along longitudes) and the angular resolution 12 directions. The model was run with steady wind blowing 20 m/s from different directions and with two time steps (1 and 3 minutes). For most of the wind directions, the rms. difference of significant wave heights calculated with differ- ent time steps did not exceed 10 cm and typically was of the order of a few per cents. The difference arose within a few tens of minutes and generally did not increase in further computations. However, in the case of the north wind, the difference increased nearly monotonously and reached 25-35 cm (10-15%) within three hours of integra- tion whereas mean of significant wave heights over the whole Baltic Sea was 2.4 m (1 minute) and 2.04 m (3 minutes), respectively. The most probable reason of such difference is that the WAM model with a relatively large time step poorly describes wave field evolution in the Aland area with extremely ragged bottom topography and coastal line. In earlier studies, it has been reported that the WAM model frequently underestimates wave heights in the northern Baltic Proper by 20-30% in the case of strong north storms (Tuomi et al, Report series of the Finnish Institute of Marine Re- search, 1999). The described results suggest that a part of this underestimation may be removed through a proper choice of the time step.
Goldstein, Benjamin A.; Navar, Ann Marie; Carter, Rickey E.
2017-01-01
Abstract Risk prediction plays an important role in clinical cardiology research. Traditionally, most risk models have been based on regression models. While useful and robust, these statistical methods are limited to using a small number of predictors which operate in the same way on everyone, and uniformly throughout their range. The purpose of this review is to illustrate the use of machine-learning methods for development of risk prediction models. Typically presented as black box approaches, most machine-learning methods are aimed at solving particular challenges that arise in data analysis that are not well addressed by typical regression approaches. To illustrate these challenges, as well as how different methods can address them, we consider trying to predicting mortality after diagnosis of acute myocardial infarction. We use data derived from our institution's electronic health record and abstract data on 13 regularly measured laboratory markers. We walk through different challenges that arise in modelling these data and then introduce different machine-learning approaches. Finally, we discuss general issues in the application of machine-learning methods including tuning parameters, loss functions, variable importance, and missing data. Overall, this review serves as an introduction for those working on risk modelling to approach the diffuse field of machine learning. PMID:27436868
NASA Astrophysics Data System (ADS)
Calo', M. C.; Dorbath, C.
2009-12-01
One major goal of monitoring seismicity accompanying hydraulic fracturing of a reservoir is to recover the seismic velocity field in and around the geothermal site. In many cases the seismicity induced by the hydraulic stimulations allows us to roughly describe the velocity anomalies close to the hypocentral location, but only during the time period of the stimulation. Several studies have shown that the 4D (time dependent) seismic tomographies are very useful to illustrate and study the temporal variation of the seismic velocities conditioned by injected fluids. Nevertheless in geothermal fields local earthquake tomography (LET) is often inadequate to study the seismic velocities during the inter-injection periods, due to the lack of seismicity. In July 2000 an injection test that lasted 15 days performed at the Enhanced Geothermal System (EGS) site of Soultz-sous-Forêts (Alsace, France) produced about 7200 micro-earthquakes with Duration Magnitude ranging from -0.9 to 2.5. the earthquakes were located by down hole and surface seismic stations. We present here a comparison between three tomographic studies, 1) the “traditional” seismic tomography of Cuneot et al., 2008, 2) a Double Difference tomography using the TomoDD code of Zhang and Thurber (2003) and, 3) the models obtained by applying the Weighted Average Model method (WAM, Calo’ et al., 2009). the velocity models were obtained using the same dataset recorded during the stimulation. The WAM technique produces a more reliable reconstruction of the structures around and above the cluster of earthquakes, as demonstrated by the distribution of the velocity standard deviations. Although the velocity distributions obtained by the three tomographic approaches are qualitatively similar, the WAM results correlate better with independent data such the fracturing directions measured in the down-holes, the location of the clustered seimsicity) than those of the traditional and DD tomographies. To overcome the limits of LET during the inter-injection periods we plan to perform a seismic noise tomography study. In geothermal sites, the elastic characteristics of the volume at rest, i.e. during the inter-injection periods, are often poorly known.
NASA Astrophysics Data System (ADS)
Yudin, V. A.; England, S.; Matsuo, T.; Wang, H.; Immel, T. J.; Eastes, R.; Akmaev, R. A.; Goncharenko, L. P.; Fuller-Rowell, T. J.; Liu, H.; Solomon, S. C.; Wu, Q.
2014-12-01
We review and discuss the capability of novel configurations of global community (WACCM-X and TIME-GCM) and planned-operational (WAM) models to support current and forthcoming space-borne missions to monitor the dynamics and composition of the Ionosphere-Thermosphere-Mesosphere (ITM) system. In the specified meteorology model configuration of WACCM-X, the lower atmosphere is constrained by operational analyses and/or short-term forecasts provided by the Goddard Earth Observing System (GEOS-5) of GMAO/NASA/GSFC. With the terrestrial weather of GEOS-5 and updated model physics, WACCM-X simulations are capable to reproduce the observed signatures of the perturbed wave dynamics and ion-neutral coupling during recent (2006-2013) stratospheric warming events, short-term, annual and year-to-year variability of prevailing flows, planetary waves, tides, and composition. With assimilation of the NWP data in the troposphere and stratosphere the planned-operational configuration of WAM can also recreate the observed features of the ITM day-to-day variability. These "terrestrial-weather" driven whole atmosphere simulations, with day-to-day variable solar and geomagnetic inputs, can provide specification of the background state (first guess) and errors for the inverse algorithms of forthcoming NASA ITM missions, such as ICON and GOLD. With two different viewing geometries (sun-synchronous, for ICON and geostationary for GOLD) these missions promise to perform complimentary global observations of temperature, winds and constituents to constrain the first-principle space weather forecast models. The paper will discuss initial designs of Observing System Simulation Experiments (OSSE) in the coupled simulations of TIME-GCM/WACCM-X/GEOS5 and WAM/GIP. As recognized, OSSE represent an excellent learning tool for designing and evaluating observing capabilities of novel sensors. The choice of assimilation schemes, forecast and observational errors will be discussed along with challenges and perspectives to constrain fast-varying dynamics of tides and planetary waves by observations made from sun-synchronous and geostationary space-borne platforms. We will also discuss how correlative space-borne and ground-based observations can evaluate OSSE results.
Static and Dynamic Measurement of Accommodation Using the Grand Seiko WAM-5500 Autorefractor
Win-Hall, Dorothy M.; Houser, Jaime; Glasser, Adrian
2013-01-01
Purpose The Grand Seiko WR-5500 (WAM) is an open field autorefractor capable of measuring accommodation and pupil diameter dynamically. This study was undertaken to compare static and dynamic accommodation measurements with this instrument in young, phakic subjects. Methods Fifteen subjects, aged 20–28 years (23.8±0.58; mean±SD) participated. Accommodation was stimulated with text printed on a transparent sheet presented at various distances. In static mode, subjects focused on the near text and three measurements were taken for each stimulus amplitude. In dynamic mode, the 5 Hz recording was started and subjects alternately looked through the transparent near chart and focused on a letter chart at 6 m for 5 seconds and then focused on the near letter chart for 5 seconds for a total of 30 seconds. After smoothing the raw data, the highest three individual values recorded in each 5 second interval of focusing at near were averaged for each stimulus amplitude. ANOVA and Bland-Altman analysis were used to compare the static and dynamic measurements. A calibration was performed with +3.00 to -10.00 D trial lenses behind an IR filter, in 1.00 D steps in 5 of the 15 subjects. Results Stimulus-response graphs from static and dynamic modes were not significantly different in the lower stimulus range (< 5.00 D, p = 0.93), but differed significantly for the higher stimulus amplitudes (p = 0.0027). One of 15 subjects showed a significant difference between the static and dynamic modes. Corresponding pupil diameter could be recorded along with the accommodation responses for the subjects and pupil diameter decreased with increasing stimulus demand. Calibration curves for static and dynamic measurements were not significantly different from the 1:1 line or from each other (p = 0.32). Conclusion Although slight differences between the dynamically and statically recorded responses were identified, the Grand-Seiko WAM autorefractor provides the ability to measure both. Dynamic measurement of accommodation and pupil constriction potentially provides additional useful information on the accommodative response other than simply the response amplitude. PMID:20852450
1988-03-28
International Business Machines Corporation IBM Development System for the Ada Language, Version 2.1.0 IBM 4381 under MVS/XA, host and target Completion...Joint Program Office, AJPO 20. ABSTRACT (Continue on reverse side if necessary and identify by block number) International Business Machines Corporation...in the compiler listed in this declaration. I declare that International Business Machines Corporation is the owner of record of the object code of
ERGONOMICS ABSTRACTS 48347-48982.
ERIC Educational Resources Information Center
Ministry of Technology, London (England). Warren Spring Lab.
IN THIS COLLECTION OF ERGONOMICS ABSTRACTS AND ANNOTATIONS THE FOLLOWING AREAS OF CONCERN ARE REPRESENTED--GENERAL REFERENCES, METHODS, FACILITIES, AND EQUIPMENT RELATING TO ERGONOMICS, SYSTEMS OF MAN AND MACHINES, VISUAL, AUDITORY, AND OTHER SENSORY INPUTS AND PROCESSES (INCLUDING SPEECH AND INTELLIGIBILITY), INPUT CHANNELS, BODY MEASUREMENTS,…
Comparison of Document Data Bases
ERIC Educational Resources Information Center
Schipma, Peter B.; And Others
This paper presents a detailed analysis of the content and format of seven machine-readable bibliographic data bases: Chemical Abstracts Service Condensates, Chemical and Biological Activities, and Polymer Science and Technology, Biosciences Information Service's BA Previews including Biological Abstracts and BioResearch Index, Institute for…
STATISTICAL EVALUATION OF CONFOCAL MICROSCOPY IMAGES
Abstract
In this study the CV is defined as the Mean/SD of the population of beads or pixels. Flow cytometry uses the CV of beads to determine if the machine is aligned correctly and performing properly. This CV concept to determine machine performance has been adapted to...
1988-09-01
Group Subgroup Command and control; Computational linguistics; expert system voice recognition; man- machine interface; U.S. Government 19 Abstract...simulates the characteristics of FRESH on a smaller scale. This study assisted NOSC in developing a voice-recognition, man- machine interface that could...scale. This study assisted NOSC in developing a voice-recogni- tion, man- machine interface that could be used with TONE and upgraded at a later date
Acute T-2 Intoxication: Physiologic Consequences and New Therapeutic Approaches
1983-08-01
Trichothecene mycotoxins have been implicated in both naturally oc- curring diseases and chemical attacks on civilian and military personnel. Yet...have been implicated in severe, naturally occurring, potentially fatal diseases of both man and animals following ingestion of contaminated grains...underscoring the nature of the TRM-Induced cArdioexcltation. In contrpst to the beneficial effects of TR14, nalo-one wam withotit effect on either blood
Joint Eglin Acoustics Week 2013 Data Report
2017-10-01
during this test. The M-model HH-60 (Tail Number 04-27001), with the new wide-chord blade that is principally characterized by its unique tapered...cards located within each remote unit. Upon termination of each run , sufficient data metrics and system health information are transmitted back to the...command computer to assure that good data were acquired at each microphone station during the run . A typical WAMS microphone station deployment is
Analysis of an Unusual Mirror in a 16th-Century Painting: A Museum Exercise for Physics Students
ERIC Educational Resources Information Center
Swaminathan, Sudha; Lamelas, Frank
2017-01-01
Physics students at Worcester State University visit the Worcester Art Museum (WAM) at the end of a special 100- level course called Physics in Art. The students have studied geometrical optics, and they have been introduced to concepts in atomic physics. The purpose of the museum tour is to show how physics-based techniques can be used in a…
A new model for biological effects of radiation and the driven force of molecular evolution
NASA Astrophysics Data System (ADS)
Wada, Takahiro; Manabe, Yuichiro; Nakajima, Hiroo; Tsunoyama, Yuichi; Bando, Masako
We proposed a new mathematical model to estimate biological effects of radiation, which we call Whack-A-Mole (WAM) model. A special feature of WAM model is that it involves the dose rate of radiation as a key ingredient. We succeeded to reproduce the experimental data of various species concerning the radiation induced mutation frequencies. From the analysis of the mega-mouse experiments, we obtained the mutation rate per base-pair per year for mice which is consistent with the so-called molecular clock in evolution genetics, 10-9 mutation/base-pair/year. Another important quantity is the equivalent dose rate for the whole spontaneous mutation, deff. The value of deff for mice is 1.1*10-3 Gy/hour which is much larger than the dose rate of natural radiation (10- (6 - 7) Gy/hour) by several orders of magnitude. We also analyzed Drosophila data and obtained essentially the same numbers. This clearly indicates that the natural radiation is not the dominant driving force of the molecular evolution, but we should look for other factors, such as miscopy of DNA in duplication process. We believe this is the first quantitative proof of the small contribution of the natural radiation in the molecular evolution.
Runtime Verification of C Programs
NASA Technical Reports Server (NTRS)
Havelund, Klaus
2008-01-01
We present in this paper a framework, RMOR, for monitoring the execution of C programs against state machines, expressed in a textual (nongraphical) format in files separate from the program. The state machine language has been inspired by a graphical state machine language RCAT recently developed at the Jet Propulsion Laboratory, as an alternative to using Linear Temporal Logic (LTL) for requirements capture. Transitions between states are labeled with abstract event names and Boolean expressions over such. The abstract events are connected to code fragments using an aspect-oriented pointcut language similar to ASPECTJ's or ASPECTC's pointcut language. The system is implemented in the C analysis and transformation package CIL, and is programmed in OCAML, the implementation language of CIL. The work is closely related to the notion of stateful aspects within aspect-oriented programming, where pointcut languages are extended with temporal assertions over the execution trace.
1988-03-28
International Business Machines Corporation IBM Development System for the Ada Language, Version 2.1.0 IBM 4381 under VM/HPO, host IBM 4381 under MVS/XA, target...Program Office, AJPO 20. ABSTRACT (Continue on reverse side if necessary and identify by block number) International Business Machines Corporation, IBM...Standard ANSI/MIL-STD-1815A in the compiler listed in this declaration. I declare that International Business Machines Corporation is the owner of record
Report on the formal specification and partial verification of the VIPER microprocessor
NASA Technical Reports Server (NTRS)
Brock, Bishop; Hunt, Warren A., Jr.
1991-01-01
The VIPER microprocessor chip is partitioned into four levels of abstractions. At the highest level, VIPER is described with decreasingly abstract sets of functions in LCF-LSM. At the lowest level are the gate-level models in proprietary CAD languages. The block-level and gate-level specifications are also given in the ELLA simulation language. Among VIPER's deficiencies are the fact that there is no notion of external events in the top-level specification, and it is impossible to use the top-level specifications to prove abstract properties of programs running on VIPER computers. There is no complete proof that the gate-level specifications implement the top-level specifications. Cohn's proof that the major-state machine correctly implements the top-level specifications has no formal connection with any of the other proof attempts. None of the latter address resetting the machine, memory timeout, forced error, or single step modes.
An Immanent Machine: Reconsidering Grades, Historical and Present
ERIC Educational Resources Information Center
Tocci, Charles
2010-01-01
At some point the mechanics of schooling begin running of their own accord. Such has become the case with grades (A's, B's, C's, etc.). This article reconsiders the history of grades through the concepts of immanence and abstract machines from the oeuvre of Deleuze and Guattari. In the first section, the history of grades as presently written…
Worldwide Buoy Technology Survey. Volume 2. Appendix B. Buoy Records. Book 2. Germany - USA
1991-02-01
arrangement and effectiveness of the complete system could not be addressed in detail within the constraints of this project. In an overall evaluation of...the SRA system , such considerations should also be addressed. The USCG’s Waterway Analysis and Management System (WAMS) is considering this matter as...project. The task includes the screening of worldwide engineering and technical information on buoy systems , approaches to problem solving (particularly
Inventory of U.S. Health Care Data Bases, 1976-1987.
ERIC Educational Resources Information Center
Kralovec, Peter D.; Andes, Steven M.
This inventory contains summary abstracts of 305 current (1976-1987) non-bibliographic machine-readable databases and national health care data that have been created by public and private organizations throughout the United States. Each of the abstracts contains pertinent information on the sponsor or database, a description of the purpose and…
Kaplan, Jonas T.; Man, Kingson; Greening, Steven G.
2015-01-01
Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application. PMID:25859202
NASA Astrophysics Data System (ADS)
Pohl, Benjamin; Douville, Hervé
2011-10-01
The CNRM atmospheric general circulation model Arpege-Climat is relaxed towards atmospheric reanalyses outside the 10°S-32°N 30°W-50°E domain in order to disentangle the regional versus large-scale sources of climatological biases and interannual variability of the West African monsoon (WAM). On the one hand, the main climatological features of the monsoon, including the spatial distribution of summer precipitation, are only weakly improved by the nudging, thereby suggesting the regional origin of the Arpege-Climat biases. On the other hand, the nudging technique is relatively efficient to control the interannual variability of the WAM dynamics, though the impact on rainfall variability is less clear. Additional sensitivity experiments focusing on the strong 1994 summer monsoon suggest that the weak sensitivity of the model biases is not an artifact of the nudging design, but the evidence that regional physical processes are the main limiting factors for a realistic simulation of monsoon circulation and precipitation in the Arpege-Climat model. Sensitivity experiments to soil moisture boundary conditions are also conducted and highlight the relevance of land-atmosphere coupling for the amplification of precipitation biases. Nevertheless, the land surface hydrology is not the main explanation for the model errors that are rather due to deficiencies in the atmospheric physics. The intraseasonal timescale and the model internal variability are discussed in a companion paper.
A mathematical model for the effects of radiation to the induced cancer in mice
NASA Astrophysics Data System (ADS)
Wada, Takahiro; Manabe, Yuichiro; Bando, Masako
We have been studying biological effects of radiation in terms of mathematical models. There are two main objects that we need to study: mutation and cancer. We proposed the Whack-A-Mole (WAM) model which takes account of the repair effects to study radiation induced mutations. We applied it to the mutation of several species including Drosophila and mice, and succeeded to reproduce the dose and dose-rate dependence of the mutation rates. Here, as a next step, we study the effects of low dose-rate radiation to an induced cancer in mice. In the experiment, they divided their mice in four groups and kept them under constant gamma-ray radiations with different dose rate for each group since the birth. On the 35th day, chemical carcinogen was given to each mouse and they observed the occurrence and the growth of cancer for one year. Our mathematical model consists of two stages. The first stage describes a multiple-step carcinogenesis and the second stage describes its growth. We assume that the carcinogenesis starts with the chemical carcinogen and that the rate of the following processes depends on the dose rate as it does in the WAM model. We found some irregularities in the data, however, the overall fit is satisfactory. This work was supported by JSPS KAKENHI Grant Number JP16H04637.
NASA Astrophysics Data System (ADS)
Akmaev, R. A.; Fuller-Rowell, T. J.; Wu, F.; Wang, H.; Juang, H.; Moorthi, S.; Iredell, M.
2009-12-01
The upper atmosphere and ionosphere exhibit variability and phenomena that have been associated with planetary and tidal waves originating in the lower atmosphere. To study and be able to predict the effects of these global-scale dynamical perturbations on the coupled thermosphere-ionosphere-electrodynamics system a new coupled model is being developed under the IDEA project. To efficiently cross the infamous R2O “death valley”, from the outset the IDEA project leverages the natural synergy between NOAA’s National Weather Service’s (NWS) Space Weather Prediction and Environmental Modeling Centers and a NOAA-University of Colorado cooperative institute (CIRES). IDEA interactively couples a Whole Atmosphere Model (WAM) with ionosphere-plasmasphere and electrodynamics models. WAM is a 150-layer general circulation model (GCM) based on NWS’s operational weather prediction Global Forecast System (GFS) extended from its nominal top altitude of 62 km to over 600 km. It incorporates relevant physical processes including those responsible for the generation of tidal and planetary waves in the troposphere and stratosphere. Long-term simulations reveal realistic seasonal variability of tidal waves with a substantial contribution from non-migrating tidal modes, recently implicated in the observed morphology of the ionosphere. Such phenomena as the thermospheric Midnight Temperature Maximum (MTM), previously associated with the tides, are also realistically simulated for the first time.
NASA Astrophysics Data System (ADS)
Yasuda, Tetsuya; Iwakiri, Wataru B.; Tashiro, Makoto S.; Terada, Yukikatsu; Kouzu, Tomomi; Enoto, Teruaki; Nakagawa, Yujin E.; Bamba, Aya; Urata, Yuji; Yamaoka, Kazutaka; Ohno, Masanori; Shibata, Shinpei; Makishima, Kazuo
2015-06-01
The 2.1-s anomalous X-ray pulsar 1E 1547.0-5408 exhibited an X-ray outburst on 2009 January 22, emitting a large number of short bursts. The wide-band all-sky monitor (WAM) on-board Suzaku detected at least 254 bursts in the 160 keV-6.2 MeV band over the period of January 22 00:57-17:02 UT from the direction of 1E 1547.0-5408. One of these bursts, which occurred at 06:45:13, produced the brightest fluence in the 0.5-6.2 MeV range, with an averaged 0.16-6.2 MeV flux and extrapolated 25 keV-2 MeV fluence of about 1 × 10-5 erg cm-2 s-1 and about 3 × 10-4 erg cm-2, respectively. After pile-up corrections, the time-resolved WAM spectra of this burst were well-fitted in the 0.16-6.2 MeV range by two-component models; specifically, a blackbody plus an optically thin thermal bremsstrahlung or a combination of a blackbody and a power-law component with an exponential cut-off. These results are compared with previous works reporting the persistent emission and weaker short bursts followed by the same outburst.
Advanced light source: Compendium of user abstracts and technical reports,1993-1996
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
1997-04-01
This compendium contains abstracts written by users summarizing research completed or in progress from 1993-1996, ALS technical reports describing ongoing efforts related to improvement in machine operations and research and development projects, and information on ALS beamlines planned through 1998. Two tables of contents organize the user abstracts by beamline and by area of research, and an author index makes abstracts accessible by author and by principal investigator. Technical details for each beamline including whom to contact for additional information can be found in the beamline information section. Separate abstracts have been indexed into the database for contributions to thismore » compendium.« less
Worldwide Buoy Technology Survey. Volume 1. Report
1991-02-01
1522.2.9.3 The Remearch Instituite Netherlands (3tARIN) 155 2.2.9.4 Marine Analytics .. .. .. .. L.2.9.5 D&"a Sipyards . 157 2.2.10 Norway 2 .2-1.1 ~Ticn Plat...Technologies, Inc. (U.S. Manufacturer) VTS Vessel Traffic Services WAMS Waterway Analysis and Management Systems WATG Wave Activated Turbine Generator...this project. In an overall evaluation of the SRA system, such considerations should also be addressed. The USCG’s Waterway Analysis and Management
1987-07-01
and Fatiguing flandgrip,, 45 LiST OF TABLES NUMBER P AG F. I Anthroponetric Data of Subjects ...................... 29 2 (:harges In Peso , MBP and...Pressure ( Peso ): Intraesophageal pressut) wam measured from inflated esophageal balloons attached to a pressure transducer and taken to be a...during inspiration and less negative (upward) deflection during expiration. Peso was recorded foe the entire duration of the experimental period
2011-11-20
Breivik and Reistad 1994; Lionello et al. 1992, 1995; Abdalla et al. 2005; Emmanouil et al. 2007) and optimization of the direct model outputs by using...neutral winds and new stress tables in WAM. ECMWF Research Department Memo R60.9/JB/0400 Breivik LA, Reistad M (1994) Assimilation of ERS-1...geometry graduate texts in mathematics, vol 120, 2nd edn. Springer-Verlag, Berlin Emmanouil G, Galanis G, Kallos G, Breivik LA, Heilberg H, Reistad M
NASA Astrophysics Data System (ADS)
Rückwardt, M.; Göpfert, A.; Correns, M.; Schellhorn, M.; Linß, G.
2010-07-01
Coordinate measuring machines are high precession all-rounder in three dimensional measuring. Therefore the versatility of parameters and expandability of additionally hardware is very comprehensive. Consequently you need much expert knowledge of the user and mostly a lot of advanced information about the measuring object. In this paper a coordinate measuring machine and a specialized measuring machine are compared at the example of the measuring of eyeglass frames. For this case of three dimensional measuring challenges the main focus is divided into metrological and economical aspects. At first there is shown a fully automated method for tactile measuring of this abstract form. At second there is shown a comparison of the metrological characteristics of a coordinate measuring machine and a tracer for eyeglass frames. The result is in favour to the coordinate measuring machine. It was not surprising in these aspects. At last there is shown a comparison of the machine in front of the economical aspects.
STELAR: An experiment in the electronic distribution of astronomical literature
NASA Technical Reports Server (NTRS)
Warnock, A.; Vansteenburg, M. E.; Brotzman, L. E.; Gass, J.; Kovalsky, D.
1992-01-01
STELAR (Study of Electronic Literature for Astronomical Research) is a Goddard-based project designed to test methods of delivering technical literature in machine readable form. To that end, we have scanned a five year span of the ApJ, ApJ Supp, AJ and PASP, and have obtained abstracts for eight leading academic journals from NASA/STI CASI, which also makes these abstracts available through the NASA RECON system. We have also obtained machine readable versions of some journal volumes from the publishers, although in many instances, the final typeset versions are no longer available. The fundamental data object for the STELAR database is the article, a collection of items associated with a scientific paper - abstract, scanned pages (in a variety of formats), figures, OCR extractions, forward and backward references, errata and versions of the paper in various formats (e.g., TEX, SGML, PostScript, DVI). Articles are uniquely referenced in the database by journal name, volume number and page number. The selection and delivery of articles is accomplished through the WAIS (Wide Area Information Server) client/server models requiring only an Internet connection. Modest modifications to the server code have made it capable of delivering the multiple data types required by STELAR. WAIS is a platform independent and fully open multi-disciplinary delivery system, originally developed by Thinking Machines Corp. and made available free of charge. It is based on the ISO Z39.50 standard communications protocol. WAIS servers run under both UNIX and VMS. WAIS clients run on a wide variety of machines, from UNIX-based Xwindows systems to MS-DOS and macintosh microcomputers. The WAIS system includes full-test indexing and searching of documents, network interface and easy access to a variety of document viewers. ASCII versions of the CASI abstracts have been formatted for display and the full test of the abstracts has been indexed. The entire WAIS database of abstracts is now available for use by the astronomical community. Enhancements of the search and retrieval system are under investigation to include specialized searches (by reference, author or keyword, as opposed to full test searches), improved handling of word stems, improvements in relevancy criteria and other retrieval techniques, such as factor spaces. The STELAR project has been assisted by the full cooperation of the AAS, the ASP, the publishers of the academic journals, librarians from GSFC, NRAO and STScI, the Library of Congress, and the University of North Carolina at Chapel Hill.
A Unified Approach to the Synthesis of Fully Testable Sequential Machines
1989-10-01
N A Unified Approach to the Synthesis of Fully Testable Sequential Machines Srinivas Devadas and Kurt Keutzer Abstract • In this paper we attempt to...research was supported in part by the Defense Advanced Research Projects Agency under contract N00014-87-K-0825. Author Information Devadas : Department...Fully Testable Sequential Maine(S P Sritiivas Devadas Departinent of Electrical Engineerinig anid Comivi Sciec Massachusetts Institute of Technology
An Analysis of Hardware-Assisted Virtual Machine Based Rootkits
2014-06-01
certain aspects of TPM implementation just to name a few. HyperWall is an architecture proposed by Szefer and Lee to protect guest VMs from...DISTRIBUTION CODE 13. ABSTRACT (maximum 200 words) The use of virtual machine (VM) technology has expanded rapidly since AMD and Intel implemented ...Intel VT-x implementations of Blue Pill to identify commonalities in the respective versions’ attack methodologies from both a functional and technical
Automatic Review of Abstract State Machines by Meta Property Verification
NASA Technical Reports Server (NTRS)
Arcaini, Paolo; Gargantini, Angelo; Riccobene, Elvinia
2010-01-01
A model review is a validation technique aimed at determining if a model is of sufficient quality and allows defects to be identified early in the system development, reducing the cost of fixing them. In this paper we propose a technique to perform automatic review of Abstract State Machine (ASM) formal specifications. We first detect a family of typical vulnerabilities and defects a developer can introduce during the modeling activity using the ASMs and we express such faults as the violation of meta-properties that guarantee certain quality attributes of the specification. These meta-properties are then mapped to temporal logic formulas and model checked for their violation. As a proof of concept, we also report the result of applying this ASM review process to several specifications.
Automated annotation of functional imaging experiments via multi-label classification
Turner, Matthew D.; Chakrabarti, Chayan; Jones, Thomas B.; Xu, Jiawei F.; Fox, Peter T.; Luger, George F.; Laird, Angela R.; Turner, Jessica A.
2013-01-01
Identifying the experimental methods in human neuroimaging papers is important for grouping meaningfully similar experiments for meta-analyses. Currently, this can only be done by human readers. We present the performance of common machine learning (text mining) methods applied to the problem of automatically classifying or labeling this literature. Labeling terms are from the Cognitive Paradigm Ontology (CogPO), the text corpora are abstracts of published functional neuroimaging papers, and the methods use the performance of a human expert as training data. We aim to replicate the expert's annotation of multiple labels per abstract identifying the experimental stimuli, cognitive paradigms, response types, and other relevant dimensions of the experiments. We use several standard machine learning methods: naive Bayes (NB), k-nearest neighbor, and support vector machines (specifically SMO or sequential minimal optimization). Exact match performance ranged from only 15% in the worst cases to 78% in the best cases. NB methods combined with binary relevance transformations performed strongly and were robust to overfitting. This collection of results demonstrates what can be achieved with off-the-shelf software components and little to no pre-processing of raw text. PMID:24409112
Gajare, Swaroop; Rao, J Ganeswara; Naidu, O D; Pradhan, Ashok Kumar
2017-08-13
Cascade tripping of power lines triggered by maloperation of zone-3 relays during stressed system conditions, such as load encroachment, power swing and voltage instability, has led to many catastrophic power failures worldwide, including Indian blackouts in 2012. With the introduction of wide-area measurement systems (WAMS) into the grids, real-time monitoring of transmission network condition is possible. A phasor measurement unit (PMU) sends time-synchronized data to a phasor data concentrator, which can provide a control signal to substation devices. The latency associated with the communication system makes WAMS suitable for a slower form of protection. In this work, a method to identify the faulted line using synchronized data from strategic PMU locations is proposed. Subsequently, a supervisory signal is generated for specific relays in the system for any disturbance or stressed condition. For a given system, an approach to decide the strategic locations for PMU placement is developed, which can be used for determining the minimum number of PMUs required for application of the method. The accuracy of the scheme is tested for faults during normal and stressed conditions in a New England 39-bus system simulated using EMTDC/PSCAD software. With such a strategy, maloperation of relays can be averted in many situations and thereby blackouts/large-scale disturbances can be prevented.This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
ERIC Educational Resources Information Center
Palmer, Crescentia
A comparison of costs for computer-based searching of Psychological Abstracts and Educational Resources Information Center (ERIC) systems by the New York State Library at Albany was produced by combining data available from search request forms and from bills from the contract subscription service, the State University of New…
1980-05-31
34 International Journal of Man- Machine Studies , Vol. 9, No. 1, 1977, pp. 1-68. [16] Zimmermann, H. J., Theory and Applications of Fuzzy Sets, Institut...Boston, Inc., Hingham, MA, 1978. [18] Yager, R. R., "Multiple Objective Decision-Making Using Fuzzy Sets," International Journal of Man- Machine Studies ...Professor of Industria Engineering ... iv t TABLE OF CONTENTS page ABSTRACT .. .. . ...... . .... ...... ........ iii LIST OF TABLES
RI: Rheology as a Tool for Understanding the Mechanics of Live Ant Aggregations, Part 1
2016-11-04
measure rheological properties of biological fluids. Using this machine, we were able to characterize non -Newtonian fluids such as frog saliva...GA 30332 -0420 ABSTRACT Number of Papers published in peer-reviewed journals: Number of Papers published in non peer-reviewed journals: Final Report...order to measure rheological properties of biological fluids. Using this machine, we were able to characterize non -Newtonian fluids such as frog
NASA Astrophysics Data System (ADS)
Parker, A. O.; Schmidt, M. W.; Slowey, N. C.; Jobe, Z. R.; Marcantonio, F.
2014-12-01
Abrupt droughts in West Africa impart significant socio-economic impacts on the developing countries of this region, and yet a comprehensive understanding of the causes and duration of such droughts remains elusive. Much of the summertime rainfall associated with the West African Monsoon (WAM) falls within the Niger River basin and eventually drains into the eastern Gulf of Guinea, contributing to the low sea-surface salinity of this region. Of the limited number of studies that reconstruct Gulf of Guinea salinity through the deglacial, the most comprehensive of those is located ~ 400 km east of the Niger delta and may not be solely influenced by WAM runoff. Here, we present XRF and foraminiferal trace metal data from two new cores located less than 100 km from the Western Niger Delta. Radiocarbon dating of cores Grand 21 (4.72oN, 4.48oE) and Fan 17 (4.81oN, 4.41oE) produced near linear sedimentation rates of 20 cm/kyr and 15 cm/kyr respectively. Elemental sediment compositions from XRF core scanning reveal an abrupt 50% increase in SiO2 between 17-15 ka during Heinrich Event 1. This increase, coeval with increases of CaCO3 (+12%) content and Ba/Ti ratios suggests a large increase in primary productivity during H1. Values then decrease at the onset of the Bolling-Allerod (~14.6 kyr) until a similar, albeit smaller increase is recorded during the Younger Dryas beginning at 12.7 kyr. In contrast, FeO2 and TiO2 are thought to be a proxies of Niger River discharge strength and suggest a more gradual change in riverine discharge across the deglacial that is most likely driven by precession. These proxies suggest Niger River runoff was low from the LGM through Heinrich 1, gradually increasing around 13 ka. FeO2 and TiO2 values then peak between 11.5-7.5 kyr, consistent with the African Humid Period, before gradually decreasing through the mid-late Holocene. This deglacial pattern of riverine input is markedly different from previous reconstructions of WAM variability and does not appear to explain the large increases in primary production during H1 or the YD. To further investigate Niger River runoff and water column hydrography change in the Niger Delta across the deglacial, we will also present data from three planktonic foraminifera: Globigerinoides ruber, Neogloboquadrina dutertrei and Globorotalia crassaformis.
Simulation of the West African monsoon onset using the HadGEM3-RA regional climate model
NASA Astrophysics Data System (ADS)
Diallo, Ismaïla; Bain, Caroline L.; Gaye, Amadou T.; Moufouma-Okia, Wilfran; Niang, Coumba; Dieng, Mame D. B.; Graham, Richard
2014-08-01
The performance of the Hadley Centre Global Environmental Model version 3 regional climate model (HadGEM3-RA) in simulating the West African monsoon (WAM) is investigated. We focus on performance for monsoon onset timing and for rainfall totals over the June-July-August (JJA) season and on the model's representation of the underlying dynamical processes. Experiments are driven by the ERA-Interim reanalysis and follow the CORDEX experimental protocol. Simulations with the HadGEM3 global model, which shares a common physical formulation with HadGEM3-RA, are used to gain insight into the causes of HadGEM3-RA simulation errors. It is found that HadGEM3-RA simulations of monsoon onset timing are realistic, with an error in mean onset date of two pentads. However, the model has a dry bias over the Sahel during JJA of 15-20 %. Analysis suggests that this is related to errors in the positioning of the Saharan heat low, which is too far south in HadGEM3-RA and associated with an insufficient northward reach of the south-westerly low-level monsoon flow and weaker moisture convergence over the Sahel. Despite these biases HadGEM3-RA's representation of the general rainfall distribution during the WAM appears superior to that of ERA-Interim when using Global Precipitation Climatology Project or Tropical Rain Measurement Mission data as reference. This suggests that the associated dynamical features seen in HadGEM3-RA can complement the physical picture available from ERA-Interim. This approach is supported by the fact that the global HadGEM3 model generates realistic simulations of the WAM without the benefit of pseudo-observational forcing at the lateral boundaries; suggesting that the physical formulation shared with HadGEM3-RA, is able to represent the driving processes. HadGEM3-RA simulations confirm previous findings that the main rainfall peak near 10°N during June-August is maintained by a region of mid-tropospheric ascent located, latitudinally, between the cores of the African Easterly Jet and Tropical Easterly Jet that intensifies around the time of onset. This region of ascent is weaker and located further south near 5°N in the driving ERA-Interim reanalysis, for reasons that may be related to the coarser resolution or the physics of the underlying model, and this is consistent with a less realistic latitudinal rainfall profile than found in the HadGEM3-RA simulations.
A rule-based approach to model checking of UML state machines
NASA Astrophysics Data System (ADS)
Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz
2016-12-01
In the paper a new approach to formal verification of control process specification expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use the abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker), but also for logical synthesis in form of rapid prototyping. Hence, a prototype implementation in hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in form of UML state machines. Presented approach allows to increase the assurance that implemented system meets the user-defined requirements.
The evolution and practical application of machine translation system (1)
NASA Astrophysics Data System (ADS)
Tominaga, Isao; Sato, Masayuki
This paper describes a development, practical applicatioin, problem of a system, evaluation of practical system, and development trend of machine translation. Most recent system contains next four problems. 1) the vagueness of a text, 2) a difference of the definition of the terminology between different language, 3) the preparing of a large-scale translation dictionary, 4) the development of a software for the logical inference. Machine translation system is already used practically in many industry fields. However, many problems are not solved. The implementation of an ideal system will be after 15 years. Also, this paper described seven evaluation items detailedly. This English abstract was made by Mu system.
A Critical Review of Options for Tool and Workpiece Sensing
1989-06-02
Tool Temperature Control ." International Machine Tool Design Res., Vol. 7, pp. 465-75, 1967. 5. Cook, N. H., Subramanian, K., and Basile, S. A...if necessury and identify by block riumber) FIELD GROUP SUB-GROUP 1. Detectors 3. Control Equipment 1 08 2. Sensor Characteristics 4. Process Control ...will provide conceptual designs and recommend a system (Continued) 20. DISTRIBUTION/AVAILABILITY OF ABSTRACT 21 ABSTRACT SECURITY CLASSIFICATION 0
Un-Building Blocks: A Model of Reverse Engineering and Applicable Heuristics
2015-12-01
CONCLUSIONS The machine does not isolate man from the great problems of nature but plunges him more deeply into them. Antoine de Saint-Exupery— Wind ...DISTRIBUTION CODE 13. ABSTRACT (maximum 200 words) Reverse engineering is the problem -solving activity that ensues when one takes a...Douglas Moses, Vice Provost for Academic Affairs iv THIS PAGE INTENTIONALLY LEFT BLANK v ABSTRACT Reverse engineering is the problem -solving
Kawano, Tomonori; Bouteau, François; Mancuso, Stefano
2012-11-01
The automata theory is the mathematical study of abstract machines commonly studied in the theoretical computer science and highly interdisciplinary fields that combine the natural sciences and the theoretical computer science. In the present review article, as the chemical and biological basis for natural computing or informatics, some plants, plant cells or plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, the Mealy machines or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between the computational data processing and plant decision-making processes became obvious. Finally, their putative roles as the parts for plant-based computing or robotic systems are discussed.
Kawano, Tomonori; Bouteau, François; Mancuso, Stefano
2012-01-01
The automata theory is the mathematical study of abstract machines commonly studied in the theoretical computer science and highly interdisciplinary fields that combine the natural sciences and the theoretical computer science. In the present review article, as the chemical and biological basis for natural computing or informatics, some plants, plant cells or plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, the Mealy machines or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between the computational data processing and plant decision-making processes became obvious. Finally, their putative roles as the parts for plant-based computing or robotic systems are discussed. PMID:23336016
ERIC Educational Resources Information Center
Chowdhury, Gobinda G.
2003-01-01
Discusses issues related to natural language processing, including theoretical developments; natural language understanding; tools and techniques; natural language text processing systems; abstracting; information extraction; information retrieval; interfaces; software; Internet, Web, and digital library applications; machine translation for…
A grounded theory of abstraction in artificial intelligence.
Zucker, Jean-Daniel
2003-07-29
In artificial intelligence, abstraction is commonly used to account for the use of various levels of details in a given representation language or the ability to change from one level to another while preserving useful properties. Abstraction has been mainly studied in problem solving, theorem proving, knowledge representation (in particular for spatial and temporal reasoning) and machine learning. In such contexts, abstraction is defined as a mapping between formalisms that reduces the computational complexity of the task at stake. By analysing the notion of abstraction from an information quantity point of view, we pinpoint the differences and the complementary role of reformulation and abstraction in any representation change. We contribute to extending the existing semantic theories of abstraction to be grounded on perception, where the notion of information quantity is easier to characterize formally. In the author's view, abstraction is best represented using abstraction operators, as they provide semantics for classifying different abstractions and support the automation of representation changes. The usefulness of a grounded theory of abstraction in the cartography domain is illustrated. Finally, the importance of explicitly representing abstraction for designing more autonomous and adaptive systems is discussed.
A Unified Access Model for Interconnecting Heterogeneous Wireless Networks
2015-05-01
Keywords: …Defined Networking, OpenFlow, WiFi, LTE. Contents fragments mention two- and three-virtual-machine configurations with WiFi and LTE. Abstract fragment: "…WiFi and long-term evolution (LTE), and created a communication pathway between them via a central controller node. Our simulation serves as a…"
1983-10-01
Keywords: Naval Ship Structures; Composites; Glass Reinforced Plastics; Filament Winding; Minesweepers. Abstract fragments: "…associated with this method of manufacturing a ship hull out of Glass Reinforced Plastic (GRP). Winding machine and mandrel concepts were reviewed, as well as the structural requirements and possible materials. A design of a 1/5th scale (30 ft) model…"
Performance evaluation of WAVEWATCH III model in the Persian Gulf using different wind resources
NASA Astrophysics Data System (ADS)
Kazeminezhad, Mohammad Hossein; Siadatmousavi, Seyed Mostafa
2017-07-01
The third-generation wave model, WAVEWATCH III, was employed to simulate bulk wave parameters in the Persian Gulf using three different wind sources: ERA-Interim, CCMP, and GFS-Analysis. Different formulations for the whitecapping term and the energy transfer from wind to wave were used, namely the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996), WAM cycle 4 (BJA and WAM4), and Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) (TEST405 and TEST451 parameterizations) source term packages. The results from the numerical simulations were compared to altimeter-derived significant wave heights and measured wave parameters at two stations in the northern part of the Persian Gulf through statistical indicators and the Taylor diagram. Comparison of the bulk wave parameters with measured values showed underestimation of wave height using all wind sources. However, the performance of the model was best when GFS-Analysis wind data were used. In general, when wind veering from southeast to northwest occurred, and wind speed was high during the rotation, the model underestimation of wave height was severe. Except for the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996) source term package, which severely underestimated the bulk wave parameters during stormy conditions, the performances of the other formulations were practically similar. However, in terms of statistics, the Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) source terms with TEST405 parameterization were the most successful formulation in the Persian Gulf when compared to in situ and altimeter-derived observations.
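As an illustration only (not code from the paper), the statistical indicators typically used in such evaluations, namely bias, root-mean-square error, scatter index, and correlation between modelled and observed significant wave height, can be sketched as follows; all variable names and numbers here are hypothetical.

```python
import numpy as np

def validation_stats(model_swh, obs_swh):
    """Bulk validation statistics commonly used for wave model evaluation."""
    m, o = np.asarray(model_swh), np.asarray(obs_swh)
    bias = np.mean(m - o)                   # mean error
    rmse = np.sqrt(np.mean((m - o) ** 2))   # root-mean-square error
    si = rmse / np.mean(o)                  # scatter index
    r = np.corrcoef(m, o)[0, 1]             # linear correlation
    return {"bias": bias, "rmse": rmse, "scatter_index": si, "corr": r}

# Example with made-up significant wave heights (metres)
model = [0.8, 1.2, 2.1, 1.7, 0.9]
buoy = [0.9, 1.4, 2.4, 1.9, 1.0]
print(validation_stats(model, buoy))
```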
The crucial role of the Green Sahara in damping ENSO variability during the Holocene
NASA Astrophysics Data System (ADS)
Pausata, Francesco S. R.; Zhang, Qiong; Muschitiello, Francesco; Stager, Curt
2016-04-01
Several paleoclimate records show that the ENSO variability may have been remarkably smaller during the mid Holocene (MH) relative to today; however, MH model simulations in which only the orbital forcing is taken into account are not able to fully capture the magnitude of this change. We use a fully coupled simulation for 6000 yr BP (MH) in which we prescribed not only the MH orbital forcing but also Saharan vegetation and reduced dust concentrations. By performing a set of idealized experiments in which each forcing is changed in turn, we show that when accounting for both a vegetated Sahara and reduced dust concentrations, the amplitude of the ENSO cycle and its variability are remarkably reduced (~25%) compared to the case when only the orbital forcing is prescribed (only 7%). The changes in ENSO behavior are accompanied by damping of the Atlantic El Niño variability (almost 50%). The simulated changes in equatorial variability are connected to the momentous strengthening of the West African monsoon (WAM), which extends all the way to the northernmost part of the Sahara desert. Such changes in the WAM and in the atmospheric circulation over the equatorial Atlantic led to a reduction of the Atlantic El Niño variability and affected ENSO behavior through the atmospheric circulation bridge between the Atlantic and the Pacific. Hence, our results suggest orbital forcing is likely not the only forcing at play behind the changes in ENSO behavior and point to the changes over the equatorial Atlantic connected to the Sahara greening as a crucial factor in altering the ENSO spatiotemporal characteristics during the MH.
Recent variations in geopotential height associated with West African monsoon variability
NASA Astrophysics Data System (ADS)
Okoro, Ugochukwu K.; Chen, Wen; Nath, Debashis
2018-02-01
In the present study, the atmospheric circulation patterns associated with the seasonal West Africa (WA) monsoon (WAM) rainfall variability have been investigated. The observational rainfall data from the Climatic Research Unit (CRU) and atmospheric fields from the National Center for Environmental Prediction (NCEP) reanalysis 2, from 1979 to 2014, have been used. The rainfall variability extremes, classified as wet or dry years, are the outcomes of simultaneous 6-month SPI at the three rainfall zones, which show increasing trends [Guinea Coast (GC = 0.012 year-1), Eastern Sudano Sahel (ESS = 0.045 year-1) and Western Sudano Sahel (WSS = 0.056 year-1) from Sen's slope]; however, the trend is significant only in the Sahel region (α = 0.05 and α = 0.001 at ESS and WSS, respectively, from the Mann-Kendall test). The vertical profile of the geopotential height (GpH) during the wet and dry years reveals that the 700 hPa anomalies show a remarkable pattern at about 8°N to 13°N. This shows varying correlation with the zonally averaged vertically integrated moisture flux convergence and rainfall anomalies, respectively, as well as with the oceanic pulsation indexes [Oceanic Niño Index (ONI) and South Atlantic Ocean dipole index (SAODI), significant from t test], identified as precursors to the Sahel and GC rainfall variability, respectively. The role of GpH anomalies at 700 hPa has been identified as the facilitator of the West African Westerly Jet's input to the moisture flux transported over WA. This is a new perspective on the circulation processes associated with the WAM and serves as a basis for modeling investigations.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-24
...: http://www.epa.gov/dockets . Abstract: The sources subject to this rule (i.e., extraction plants, ceramic plants, foundries, incinerators, propellant plants, and machine shops which process beryllium and...
Active semi-supervised learning method with hybrid deep belief networks.
Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong
2014-01-01
In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can reduce the dimension and abstract the information of the reviews quickly. Second, we construct the subsequent hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We conducted several experiments on five sentiment classification datasets, and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments were also conducted to verify the effectiveness of the proposed method with different numbers of labeled and unlabeled reviews.
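For readers unfamiliar with the RBM building block stacked in this architecture, here is a minimal NumPy sketch of one contrastive-divergence (CD-1) update for a binary RBM; it is illustrative only and omits the convolutional variant, the exponential-loss fine-tuning, and the active learning stage described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.1):
    """One CD-1 step for a binary RBM; v0 is a batch of visible vectors."""
    # positive phase: hidden activations given data
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # negative phase: one step of Gibbs sampling
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # gradient approximation and parameter update
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# toy data: 8 visible units, 4 hidden units
v = rng.integers(0, 2, size=(16, 8)).astype(float)
W = 0.01 * rng.standard_normal((8, 4))
b, c = np.zeros(8), np.zeros(4)
for _ in range(100):
    W, b, c = cd1_update(v, W, b, c)
```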
Toward Millions of File System IOPS on Low-Cost, Commodity Hardware
Zheng, Da; Burns, Randal; Szalay, Alexander S.
2013-01-01
We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32 core NUMA machine with four, eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads. PMID:24402052
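The set-associative idea described here can be sketched in a few lines: each page hashes to one small set, and eviction is LRU within that set only, so each set can be protected by its own lock rather than a global one. This toy sketch is not the authors' code; all names are hypothetical.

```python
from collections import OrderedDict

class SetAssociativeCache:
    """Toy set-associative page cache: a page maps to one small set,
    and LRU eviction happens within that set only, which bounds the
    scope of any locking to a single set."""
    def __init__(self, num_sets=256, ways=8):
        self.num_sets, self.ways = num_sets, ways
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def _set_for(self, page_id):
        return self.sets[hash(page_id) % self.num_sets]

    def get(self, page_id):
        s = self._set_for(page_id)
        if page_id in s:
            s.move_to_end(page_id)   # refresh LRU position
            return s[page_id]
        return None                  # cache miss

    def put(self, page_id, data):
        s = self._set_for(page_id)
        if page_id in s:
            s.move_to_end(page_id)
        elif len(s) >= self.ways:
            s.popitem(last=False)    # evict LRU page of this set
        s[page_id] = data

cache = SetAssociativeCache()
cache.put(42, b"page data")
assert cache.get(42) == b"page data"
```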
The impact of machine learning techniques in the study of bipolar disorder: A systematic review.
Librenza-Garcia, Diego; Kotzian, Bruno Jaskulski; Yang, Jessica; Mwangi, Benson; Cao, Bo; Pereira Lima, Luiza Nunes; Bermudez, Mariane Bagatin; Boeira, Manuela Vianna; Kapczinski, Flávio; Passos, Ives Cavalcante
2017-09-01
Machine learning techniques provide new methods to predict diagnosis and clinical outcomes at an individual level. We aim to review the existing literature on the use of machine learning techniques in the assessment of subjects with bipolar disorder. We systematically searched PubMed, Embase and Web of Science for articles published in any language up to January 2017. We found 757 abstracts and included 51 studies in our review. Most of the included studies used multiple levels of biological data to distinguish the diagnosis of bipolar disorder from other psychiatric disorders or healthy controls. We also found studies that assessed the prediction of clinical outcomes and studies using unsupervised machine learning to build more consistent clinical phenotypes of bipolar disorder. We concluded that given the clinical heterogeneity of samples of patients with BD, machine learning techniques may provide clinicians and researchers with important insights in fields such as diagnosis, personalized treatment and prognosis orientation. Copyright © 2017 Elsevier Ltd. All rights reserved.
USSR Space Life Sciences Digest. Index to issues 15-20
NASA Technical Reports Server (NTRS)
Hooke, Lydia Razran (Editor)
1989-01-01
This bibliography provides an index to issues 15 through 20 of the USSR Space Life Sciences Digest. There are two sections. The first section lists bibliographic citations of abstracts in these issues, grouped by topic area categories. The second section provides a key word index for the same abstracts. The topic categories include exobiology, space medicine and psychology, human performance and man-machine systems, various life/body systems, human behavior and adaptation, biospherics, and others.
USSR Space Life Sciences Digest. Index to issues 21-25
NASA Technical Reports Server (NTRS)
Hooke, Lydia Razran (Editor)
1990-01-01
This bibliography provides an index to issues 21 through 25 of the USSR Space Life Sciences Digest. There are two sections. The first section lists bibliographic citations of abstracts in these issues, grouped by topic area categories. The second section provides a key word index for the same abstracts. The topic categories include exobiology, space medicine and psychology, human performance and man-machine systems, various life/body systems, human behavior and adaptation, biospherics, and others.
USSR Space Life Sciences Digest. Index to issues 26-29
NASA Technical Reports Server (NTRS)
Stone, Lydia Razran (Editor)
1991-01-01
This bibliography provides an index to issues 26 through 29 of the USSR Space Life Sciences Digest. There are two sections. The first section lists bibliographic citations of abstracts in these issues, grouped by topic area categories. The second section provides a key word index for the same abstracts. The topic categories include exobiology, space medicine and psychology, human performance and man-machine systems, various life/body systems, human behavior and adaptation, biospherics, and others.
Absorption of language concepts in the machine mind
NASA Astrophysics Data System (ADS)
Kollár, Ján
2016-06-01
In our approach, the machine mind is an applicative dynamic system represented by its algorithmically evolvable internal language. In other words, the mind and the language of mind are synonyms. Starting from Shaumyan's semiotic theory of languages, we present the representation of language concepts in the machine mind as a result of our experiment, to show the non-redundancy of the language of mind. To provide a useful restriction for further research, we also introduce the hypothesis of semantic saturation in Computer-Computer communication, which indicates that a set of machines is not self-evolvable. The goal of our research is to increase the abstraction of Human-Computer and Computer-Computer communication. If we want humans and machines to communicate as a parent does with a child, using different symbols and media, we must find a language of mind commonly usable by both machines and humans. In our opinion, there exists a kind of calm language of thinking, which we propose for machines in this paper. We separate the layers of a machine mind, present the structure of the evolved mind, and discuss selected properties. We concentrate on the representation of symbolized concepts in the mind, which are languages, not just grammars, since they have meaning.
The universal numbers. From Biology to Physics.
Marchal, Bruno
2015-12-01
I will explain how the mathematicians have discovered the universal numbers, or abstract computers, and I will explain some abstract biology, mainly self-reproduction and embryogenesis. Then I will explain how and why, and in which sense, some of those numbers can dream and why their dreams can glue together and must, when we assume computationalism in cognitive science, generate a phenomenological physics, as part of a larger phenomenological theology (in the sense of the Greek theologians). The title should have been "From Biology to Physics, through the Phenomenological Theology of the Universal Numbers", if that were not too long for a title. The theology will consist mainly, as in some (neo)platonist Greek-Indian-Chinese traditions, in the truth about the numbers' relative relations, with each other and with themselves. The main difference between Aristotle and Plato is that Aristotle (especially in his common and modern Christian interpretation) makes reality WYSIWYG (what you see is what you get: reality is what we observe and measure, i.e. the natural, material, physical science), whereas for Plato and the (rational) mystics, what we see might be only the shadow or the border of something else, which might be non-physical (mathematical, arithmetical, theological, …). Since Gödel, we know that Truth, even just Arithmetical Truth, is vastly bigger than what a machine can rationally justify. Yet, with Church's thesis, and the mechanizability of the diagonalizations involved, machines can apprehend this and can justify their limitations, and get some sense of what might be true beyond what they can prove or justify rationally. Indeed, the incompleteness phenomenon introduces a gap between what is provable by some machine and what is true about that machine, and, as Gödel saw already in 1931, the existence of that gap is accessible to the machine itself, once it has enough provability abilities. Incompleteness separates truth from provability, and machines can justify this in some way. More importantly, incompleteness entails the distinction between many intensional variants of provability. For example, the absence of reflection (beweisbar(⌜A⌝) → A, with beweisbar being Gödel's provability predicate) makes it impossible for the machine's provability to obey the axioms usually taken for a theory of knowledge. The most important consequence of this for the machine's possible phenomenology is that it provides sense, indeed arithmetical sense, to intensional variants of provability, like the logics of provability-and-truth, which at the propositional level can be mirrored by the logic of provable-and-true statements (beweisbar(⌜A⌝) ∧ A). It is incompleteness which makes this logic different from the logic of provability. Other variants, like provable-and-consistent, or provable-and-consistent-and-true, appear in the same way, and inherit the incompleteness splitting, unlike beweisbar(⌜A⌝) ∧ A. I will recall the thought experiment which motivates the use of those intensional variants to associate a knower and an observer in some canonical way to the machines or the numbers. We will in this way get an abstract and phenomenological theology of a machine M through the true logics of its true self-referential abilities (even if not provable, or knowable, by the machine itself), in those different intensional senses.
Cognitive science and theoretical physics motivate the study of those logics with the arithmetical interpretation of the atomic sentences restricted to the "verifiable" (Σ1) sentences, which is the way to study the theology of the computationalist machine. This provides a logic of the observable, as expected by the Universal Dovetailer Argument, which will be recalled briefly, and which can lead to a comparison of the machine's logic of physics with the empirical logic of the physicists (like quantum logic). This leads also to a series of open problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
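For readers unfamiliar with the provability logic behind these remarks, the propositional behaviour of Gödel's beweisbar predicate is standardly axiomatized by the Gödel-Löb system GL, sketched below in LaTeX; the contrast drawn in the abstract between provability and provability-and-truth corresponds to reading □A versus □A ∧ A.

```latex
% Gödel-Löb provability logic GL (standard axiomatization)
\begin{align*}
\textbf{K:}      &\quad \Box(A \to B) \to (\Box A \to \Box B) \\
\textbf{L\"ob:}  &\quad \Box(\Box A \to A) \to \Box A \\
\textbf{Nec:}    &\quad \text{from } \vdash A \text{ infer } \vdash \Box A
\end{align*}
% Reflection, \Box A \to A, is NOT a theorem of GL (incompleteness),
% which is why the "knower" is modelled by \Box A \land A
% rather than by \Box A alone.
```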
USSR Space Life Sciences Digest, issue 25
NASA Technical Reports Server (NTRS)
Hooke, Lydia Razran (Editor); Teeter, Ronald (Editor); Garshnek, Victoria (Editor); Rowe, Joseph (Editor)
1990-01-01
This is the twenty-fifth issue of NASA's Space Life Sciences Digest. It contains abstracts of 42 journal papers or book chapters published in Russian and of 3 Soviet monographs. Selected abstracts are illustrated with figures and tables from the original. The abstracts in this issue have been identified as relevant to 26 areas of space biology and medicine. These areas include: adaptation, body fluids, botany, cardiovascular and respiratory systems, developmental biology, endocrinology, enzymology, equipment and instrumentation, exobiology, gravitational biology, habitability and environmental effects, human performance, immunology, life support systems, man-machine systems, mathematical modeling, metabolism, microbiology, musculoskeletal system, neurophysiology, nutrition, operational medicine, psychology, radiobiology, reproductive system, and space biology and medicine.
An Analysis of the Need for a Whole-Body CT Scanner at US Darnall Army Community Hospital
1980-05-01
Report fragments: "…a whole-body CT scanner at Darnall Army Community Hospital… computerized axial tomography, or CT. Computerized tomography experiments were conducted by Godfrey Hounsfield at Central Research Laboratories, EMI, Ltd. …remained the same, with clinical and nursing unit facilities to support a one-division post. Presently, Fort Hood is the home of the III US Army Corps, the…"
Analysis of an Unusual Mirror in a 16th-Century Painting: A Museum Exercise for Physics Students
NASA Astrophysics Data System (ADS)
Swaminathan, Sudha; Lamelas, Frank
2017-04-01
Physics students at Worcester State University visit the Worcester Art Museum (WAM) at the end of a special 100-level course called Physics in Art. The students have studied geometrical optics, and they have been introduced to concepts in atomic physics. The purpose of the museum tour is to show how physics-based techniques can be used in a nontraditional lab setting. Other examples of the use of museum-based art in physics instruction include analyses of Pointillism and image resolution, and of reflections in soap bubbles in 17th- and 18th-century paintings.
Evaluation of rainfall retrievals from SEVIRI reflectances over West Africa using TRMM-PR and CMORPH
NASA Astrophysics Data System (ADS)
Wolters, E. L. A.; van den Hurk, B. J. J. M.; Roebeling, R. A.
2011-02-01
This paper describes the evaluation of the KNMI Cloud Physical Properties - Precipitation Properties (CPP-PP) algorithm over West Africa. The algorithm combines condensed water path (CWP), cloud phase (CPH), cloud particle effective radius (re), and cloud-top temperature (CTT) retrievals from visible, near-infrared and thermal infrared observations of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard the Meteosat Second Generation (MSG) satellites to estimate rain occurrence frequency and rain rate. For the 2005 and 2006 monsoon seasons, it is investigated whether the CPP-PP algorithm is capable of retrieving rain occurrence frequency and rain rate over West Africa with sufficient accuracy, using the Tropical Rainfall Measuring Mission Precipitation Radar (TRMM-PR) as reference. As a second goal, it is assessed whether SEVIRI is capable of monitoring the seasonal and daytime evolution of rainfall during the West African monsoon (WAM), using Climate Prediction Center Morphing Technique (CMORPH) rainfall observations. The SEVIRI-detected rainfall area agrees well with TRMM-PR, with the areal extent of rainfall by SEVIRI being ~10% larger than from TRMM-PR. The mean retrieved rain rate from CPP-PP is about 8% higher than from TRMM-PR. Examination of the TRMM-PR and CPP-PP cumulative frequency distributions revealed that differences between CPP-PP and TRMM-PR are generally within ±10%. Relative to the AMMA rain gauge observations, CPP-PP shows very good agreement up to 5 mm h-1. However, at higher rain rates (5-16 mm h-1) CPP-PP overestimates compared to the rain gauges. With respect to the second goal of this paper, it was shown that both the accumulated precipitation and the seasonal progression of rainfall throughout the WAM are in good agreement with CMORPH, although CPP-PP retrieves higher amounts in the coastal region of West Africa. Using latitudinal Hovmüller diagrams, a fair correspondence between CPP-PP and CMORPH was found, which is reflected by high correlation coefficients (~0.7) for both rain rate and rain occurrence frequency. The daytime cycle of rainfall from CPP-PP shows distinctly different patterns for three different regions in West Africa throughout the WAM, with a decrease in the dynamical range of rainfall near the Inter-Tropical Convergence Zone (ITCZ). The dynamical range as retrieved from CPP-PP is larger than that from CMORPH. It is suggested that this results both from the better spatio-temporal resolution of SEVIRI and from thermal infrared radiances being partly used by CMORPH, which likely smoothes the daytime precipitation signal, especially in the case of cold anvils from convective systems. The promising results show that the CPP-PP algorithm, taking advantage of the high spatio-temporal resolution of SEVIRI, is of added value for monitoring daytime precipitation patterns in tropical areas.
78 FR 20101 - Access to Confidential Business Information by Chemical Abstract Services
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-03
... identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will be provided an EPA/DC badge that must be...
Methods for Effective Virtual Screening and Scaffold-Hopping in Chemical Compounds
2007-04-04
Report fragments: "…Opterons with 4 GB of memory. We used the descriptor spaces GF, ECZ3, and ErG (described in Section 4) for evaluating the methods introduced in…" Cited: "…screening: Use of data fusion and machine learning to enhance the effectiveness of similarity searching," J. Chem. Inf. Model., 46:462-470, 2006.
MetaJC++: A flexible and automatic program transformation technique using meta framework
NASA Astrophysics Data System (ADS)
Beevi, Nadera S.; Reghu, M.; Chitraprasad, D.; Vinodchandra, S. S.
2014-09-01
A compiler is a tool that translates abstract code containing natural-language terms into machine code. Meta compilers can compile more than one language. We have developed a meta framework that combines two dissimilar programming languages, namely C++ and Java, to provide a flexible object-oriented programming platform for the user. Suitable constructs from both languages have been combined, thereby forming a new and stronger meta-language. The framework is developed using the compiler-writing tools Flex and Yacc to design the front end of the compiler. The lexer and parser have been developed to accommodate the complete keyword set and syntax set of both languages. Two intermediate representations are used between the translation of the source program to machine code. An abstract syntax tree is used as the high-level intermediate representation, preserving the hierarchical properties of the source program. A new machine-independent, stack-based byte-code has also been devised to act as the low-level intermediate representation. The byte-code is essentially organised into an output class file that can be used to produce an interpreted output. The results, especially in the sphere of providing C++ concepts in Java, give an insight into the potentially strong features of the resultant meta-language.
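As a sketch of the two intermediate representations described above (not the MetaJC++ implementation itself), the following toy compiler lowers a small arithmetic abstract syntax tree to a stack-based byte-code and interprets it; the node encoding and opcode names are hypothetical.

```python
# Toy illustration of the two IRs described above: an AST is lowered
# to a stack-based byte-code, which an interpreter then executes.
def compile_ast(node, code):
    if isinstance(node, (int, float)):   # literal leaf
        code.append(("PUSH", node))
    else:                                # ("+", left, right) or ("*", left, right)
        op, left, right = node
        compile_ast(left, code)
        compile_ast(right, code)
        code.append(("ADD" if op == "+" else "MUL", None))
    return code

def run(code):
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "ADD" else a * b)
    return stack.pop()

ast = ("+", 2, ("*", 3, 4))   # 2 + 3 * 4
bytecode = compile_ast(ast, [])
assert run(bytecode) == 14
```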
CONFOCAL MICROSCOPY SYSTEM PERFORMANCE: LASER POWER MEASUREMENTS
The reliability of the confocal laser-scanning microscope (CLSM) for obtaining intensity measurements and quantifying fluorescence data depends on using a correctly aligned machine with a stable laser power. The laser power test appears to be one ...
Woody species susceptibility to forest herbicides applied by ground machines
James H. Miller; M. Boyd Edwards
1996-01-01
Abstract. This study used a simple approach of post-treatment observations to collect data on herbicide effectiveness for common southeastern hardwoods and shrub species, and for loblolly pine. Both site preparation and release herbicides labeled for loblolly pine were examined.
Rosen's (M,R) system as an X-machine.
Palmer, Michael L; Williams, Richard A; Gatherer, Derek
2016-11-07
Robert Rosen's (M,R) system is an abstract biological network architecture that is allegedly both irreducible to sub-models of its component states and non-computable on a Turing machine. (M,R) stands as an obstacle to both reductionist and mechanistic presentations of systems biology, principally due to its self-referential structure. If (M,R) has the properties claimed for it, computational systems biology will not be possible, or at best will be a science of approximate simulations rather than accurate models. Several attempts have been made, at both empirical and theoretical levels, to disprove this assertion by instantiating (M,R) in software architectures. So far, these efforts have been inconclusive. In this paper, we attempt to demonstrate why - by showing how both finite state machine and stream X-machine formal architectures fail to capture the self-referential requirements of (M,R). We then show that a solution may be found in communicating X-machines, which remove self-reference using parallel computation, and then synthesise such machine architectures with object-orientation to create a formal basis for future software instantiations of (M,R) systems. Copyright © 2016 Elsevier Ltd. All rights reserved.
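A stream X-machine can be sketched, very schematically, as a finite state machine whose transitions carry partial functions acting on an internal memory and an input stream; the toy Python below illustrates that idea only and makes no attempt to capture the communicating, parallel variant the paper proposes for (M,R).

```python
# Minimal stream X-machine: an FSM whose transitions are partial
# functions on (memory, input) producing (output, new memory).
class StreamXMachine:
    def __init__(self, states, transitions, start, memory):
        self.states, self.transitions = states, transitions
        self.state, self.memory = start, memory

    def step(self, symbol):
        for (src, fn, dst) in self.transitions:
            if src == self.state:
                result = fn(self.memory, symbol)
                if result is not None:   # fn applicable to this input
                    output, self.memory = result
                    self.state = dst
                    return output
        raise ValueError(f"no transition from {self.state} on {symbol!r}")

# Toy example: a counter that accepts 'inc' until memory reaches 3.
def inc(mem, sym):
    if sym == "inc" and mem < 3:
        return (f"count={mem + 1}", mem + 1)
    return None

m = StreamXMachine({"S"}, [("S", inc, "S")], "S", 0)
for _ in range(3):
    print(m.step("inc"))
```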
Yu, Wei; Clyne, Melinda; Dolan, Siobhan M; Yesupriya, Ajay; Wulf, Anja; Liu, Tiebin; Khoury, Muin J; Gwinn, Marta
2008-04-22
Synthesis of data from published human genetic association studies is a critical step in the translation of human genome discoveries into health applications. Although genetic association studies account for a substantial proportion of the abstracts in PubMed, identifying them with standard queries is not always accurate or efficient. Further automating the literature-screening process can reduce the burden of a labor-intensive and time-consuming traditional literature search. The Support Vector Machine (SVM), a well-established machine learning technique, has been successful in classifying text, including biomedical literature. The GAPscreener, a free SVM-based software tool, can be used to assist in screening PubMed abstracts for human genetic association studies. The data source for this research was the HuGE Navigator, formerly known as the HuGE Pub Lit database. Weighted SVM feature selection based on a keyword list obtained by the two-way z score method demonstrated the best screening performance, achieving 97.5% recall, 98.3% specificity and 31.9% precision in performance testing. Compared with the traditional screening process based on a complex PubMed query, the SVM tool reduced by about 90% the number of abstracts requiring individual review by the database curator. The tool also ascertained 47 articles that were missed by the traditional literature screening process during the 4-week test period. We examined the literature on genetic associations with preterm birth as an example. Compared with the traditional, manual process, the GAPscreener both reduced effort and improved accuracy. GAPscreener is the first free SVM-based application available for screening the human genetic association literature in PubMed with high recall and specificity. The user-friendly graphical user interface makes this a practical, stand-alone application. The software can be downloaded at no charge.
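A miniature of the screening approach (not the GAPscreener code; the keyword-based weighted feature selection described above is omitted) can be sketched with off-the-shelf tools: bag-of-words features feeding a linear SVM that labels abstracts as genetic-association studies or not. All example abstracts and labels are fabricated.

```python
# Hypothetical miniature of SVM-based abstract screening.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

abstracts = [
    "polymorphism rs123 associated with increased risk of preterm birth",
    "we measured wave heights in the baltic sea during four cruises",
    "genotype frequencies of the MTHFR variant differed between cases and controls",
    "a new milling machine control strategy is evaluated",
]
labels = [1, 0, 1, 0]   # 1 = human genetic association study

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(abstracts, labels)
print(clf.predict(["association between APOE alleles and disease risk"]))
```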
Inverse Regional Modeling with Adjoint-Free Technique
NASA Astrophysics Data System (ADS)
Yaremchuk, M.; Martin, P.; Panteleev, G.; Beattie, C.
2016-02-01
The ongoing parallelization trend in computer technologies facilitates the use of ensemble methods in geophysical data assimilation. Of particular interest are ensemble techniques that do not require the development of tangent-linear numerical models and their adjoints for optimization. These "adjoint-free" methods minimize the cost function within a sequence of subspaces spanned by carefully chosen sets of perturbations of the control variables. In this presentation, an adjoint-free variational technique (a4dVar) is demonstrated in an application estimating initial conditions of two numerical models: the Navy Coastal Ocean Model (NCOM) and the surface wave model (WAM). With the NCOM, the performance of both adjoint and adjoint-free 4dVar data assimilation techniques is compared in application to the hydrographic surveys and velocity observations collected in the Adriatic Sea in 2006. Numerical experiments have shown that a4dVar is capable of providing forecast skill similar to that of conventional 4dVar at comparable computational expense while being less susceptible to excitation of ageostrophic modes that are not supported by observations. The adjoint-free technique constrained by the WAM model is tested in a series of data assimilation experiments with synthetic observations in the southern Chukchi Sea. The types of observations considered are directional spectra estimated from point measurements by stationary buoys, significant wave height (SWH) observations by coastal high-frequency radars, and along-track SWH observations by satellite altimeters. The a4dVar forecast skill is shown to be 30-40% better than the skill of the sequential assimilation method based on optimal interpolation which is currently used in operations. Prospects for further development of a4dVar methods in regional applications are discussed.
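The core adjoint-free idea, minimizing a cost function within subspaces spanned by perturbations of the control vector using only forward model runs, can be caricatured as follows; this is a toy sketch under strong simplifying assumptions, not the a4dVar algorithm of the presentation.

```python
import numpy as np

# Toy adjoint-free minimization: search for the best correction to a
# control vector within the subspace spanned by a few perturbations,
# using only forward model runs (no tangent-linear/adjoint code).
rng = np.random.default_rng(1)

def forward_model(x):   # stand-in nonlinear model
    return np.tanh(x) + 0.1 * x**2

def cost(x, obs):
    return 0.5 * np.sum((forward_model(x) - obs) ** 2)

n, k = 20, 5            # state size, subspace size
truth = rng.standard_normal(n)
obs = forward_model(truth)
x = np.zeros(n)         # first guess

for outer in range(10):
    P = rng.standard_normal((n, k)) * 0.1   # perturbation directions
    # finite-difference sensitivities of the cost along each direction
    J0 = cost(x, obs)
    g = np.array([cost(x + P[:, j], obs) - J0 for j in range(k)])
    # crude line search along the steepest subspace direction
    d = -P @ g
    steps = [0.0, 0.25, 0.5, 1.0]
    x = min((x + s * d for s in steps), key=lambda xc: cost(xc, obs))

print("final cost:", cost(x, obs))
```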
NASA Astrophysics Data System (ADS)
Fawole, Olusegun G.; Cai, Xiaoming; Levine, James G.; Pinker, Rachel T.; MacKenzie, A. R.
2016-12-01
The West African region, with its peculiar climate and atmospheric dynamics, is a prominent source of aerosols. Reliable and long-term in situ measurements of aerosol properties are not readily available across the region. In this study, Version 2 Level 1.5 Aerosol Robotic Network (AERONET) data were used to study the absorption and size distribution properties of aerosols from dominant sources identified by trajectory analysis. The trajectory analysis was used to define four sources of aerosols over a 10-year period. Sorting the AERONET aerosol retrievals by these putative sources, the hypothesis that there exists an optically distinct gas flaring signal was tested. The dominance of each source cluster varies with season: desert-dust (DD) and biomass burning (BB) aerosols are dominant in the months prior to the West African Monsoon (WAM); urban (UB) and gas flaring (GF) aerosols are dominant during the WAM months. BB aerosol, with a single scattering albedo (SSA) at 675 nm of 0.86 ± 0.03, and GF aerosol, with an SSA (675 nm) of 0.9 ± 0.07, are the most absorbing of the aerosol categories. The ranges of the Absorption Ångström Exponent (AAE) for the DD, BB, UB and GF classes are 1.99 ± 0.35, 1.45 ± 0.26, 1.21 ± 0.38 and 0.98 ± 0.25, respectively, indicating a different aerosol composition for each source. The AAE (440-870 nm) and Ångström Exponent (AE) (440-870 nm) relationships further show the spread and overlap of these optical and microphysical properties, presumably due in part to similarity in the sources of aerosols and in part to mixing of air parcels from different sources en route to the measurement site.
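The Ångström exponents reported above follow from a standard two-wavelength formula, AE = -ln(τ1/τ2)/ln(λ1/λ2), applied to total optical depth (giving AE) or absorption optical depth (giving AAE); a minimal sketch with made-up values:

```python
import numpy as np

def angstrom_exponent(od1, od2, wl1, wl2):
    """Angstrom exponent between two wavelengths (nm), for either total
    AOD (giving AE) or absorption AOD (giving AAE)."""
    return -np.log(od1 / od2) / np.log(wl1 / wl2)

# Made-up absorption optical depths at 440 and 870 nm
aaod_440, aaod_870 = 0.040, 0.020
print("AAE(440-870) =", round(angstrom_exponent(aaod_440, aaod_870, 440, 870), 2))
```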
NASA Astrophysics Data System (ADS)
Alari, Victor; Staneva, Joanna; Breivik, Øyvind; Bidlot, Jean-Raymond; Mogensen, Kristian; Janssen, Peter
2016-08-01
A coupled circulation (NEMO) and wave (WAM) model system was used to study the effects of surface ocean waves on water temperature distribution and heat exchange at regional scale (the Baltic Sea). Four scenarios (Stokes-Coriolis force; sea-state dependent energy flux, i.e. additional turbulent kinetic energy due to breaking waves; sea-state dependent momentum flux; and the combination of these forcings) were simulated to test the impact of the different terms on the simulated temperature distribution. The scenario simulations were compared to a control simulation, which included a constant wave-breaking coefficient but otherwise was without any wave effects. The results indicate a pronounced effect of waves on surface temperature, on the vertical temperature distribution, and on upwellings. Overall, when all three wave effects were accounted for, the temperature estimates improved compared to the control simulation. During the summer, the wave-induced water temperature changes were up to 1 °C. In the northern parts of the Baltic Sea, a warming of the surface layer occurs in the wave-included simulations in summer months. This in turn reduces the cold bias between simulated and measured data (the control simulation was too cold compared to measurements). The warming is related to the sea-state dependent energy flux. This implies that a spatio-temporally varying wave-breaking coefficient is necessary, because it depends on the actual sea state. Wave-induced cooling is mostly observed in near-coastal areas and is the result of intensified upwelling in the scenario in which Stokes-Coriolis forcing is accounted for. Accounting for sea-state dependent momentum flux results in modified heat exchange at the water-air boundary, which consequently leads to warming of surface water compared to the control simulation.
Giner, Anna; Aldaba, Mikel; Arjona, Montserrat; Vilaseca, Meritxell; Pujol, Jaume
2015-10-01
To evaluate the usefulness of an infrared open-field autorefractor as a predictor of the refractive error when fitting multifocal contact lenses (MCL). Objective and subjective measurements of the non-cycloplegic distance refractive error were compared in patients wearing MCL. We used the Grand Seiko WAM-5500 autorefractor for the objective measurements. Three commercially available MCL were tested. Twenty-one eyes of sixteen healthy adults were included in the study. Over-refraction was evaluated in terms of spherical equivalent (SE) and astigmatic vectors (J0 and J45). The mean difference±SD of each parameter was calculated. The Kolmogorov-Smirnov test was used to verify the normal distribution. Pearson's correlation, Bland and Altman plot and paired sample t test were used to compare the results obtained with both methods. The mean difference between objective and subjective results of the SE over-refraction was 0.13±0.42D; for astigmatic vectors J0 and J45 were 0.03±0.32D and -0.00±0.17D, respectively. The Kolmogorov-Smirnov test showed a normal distribution for all parameters. The highest Pearson's correlation coefficients were obtained for the SE with values of 0.98 without MCL and 0.97 with MCL. The lowest were obtained for J45 with values of 0.65 without MCL and 0.75 with MCL. Significant correlations were obtained for each parameter. The paired sample t test failed to show significant differences in analyzed parameters except for J0 without MCL. The Grand Seiko WAM-5500 can be used as a screening method of over-refraction in the clinical fitting of MCL. Copyright © 2015 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
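The power-vector quantities compared above follow from the standard conversion of a sphero-cylindrical refraction (after Thibos and colleagues): M = S + C/2, J0 = -(C/2)cos 2α, J45 = -(C/2)sin 2α. A minimal sketch, with a made-up prescription as input:

```python
import math

def power_vectors(sphere, cyl, axis_deg):
    """Convert a sphero-cylindrical refraction to power-vector form
    (spherical equivalent M and astigmatic components J0, J45),
    following the standard Thibos notation."""
    a = math.radians(axis_deg)
    M = sphere + cyl / 2.0
    J0 = -(cyl / 2.0) * math.cos(2 * a)
    J45 = -(cyl / 2.0) * math.sin(2 * a)
    return M, J0, J45

# Example: -2.00 / -0.75 x 180
print(power_vectors(-2.00, -0.75, 180))
```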
Sea spray aerosol fluxes in the Baltic Sea region: Comparison of the WAM model with measurements
NASA Astrophysics Data System (ADS)
Markuszewski, Piotr; Kosecki, Szymon; Petelski, Tomasz
2017-08-01
Sea spray aerosol flux is an important element of sub-regional climate modeling. The majority of works on this topic concentrate on open ocean research rather than on smaller, inland seas such as the Baltic Sea. The Baltic Sea is one of the largest brackish inland seas by area, where major inflows of oceanic waters are rare. Furthermore, surface waves in the Baltic Sea have a relatively short lifespan in comparison with oceanic waves. Therefore, emission of sea spray aerosol may differ greatly from what is known from oceanic research and should be investigated. This article presents a comparison of sea spray aerosol measurements carried out on board the s/y Oceania research ship with data calculated according to the WAM model. The measurements were conducted in the southern region of the Baltic Sea during four scientific cruises. The gradient method was used to determine aerosol fluxes. The fluxes were calculated for particles with diameters in the range 0.5-47 μm. Measured and simulated wind speeds agree well (correlation around 0.8). The comparison encompasses three different sea spray generation models: first, the function proposed by Massel (2006), which is based only on wave parameters such as significant wave height and peak frequency; second, Callaghan (2013), which is based on the Gong (2003) model (a wind speed relation) and a thorough experimental analysis of whitecaps; third, Petelski et al. (2014), which is based on in situ gradient measurements with a function dependent on wind speed. The first two models, which are based on whitecap analysis, prove insufficient. Moreover, the research shows a strong relation between aerosol emission and wind speed history.
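The gradient method mentioned above rests on surface-layer similarity theory: in a neutral surface layer the vertical flux of a scalar is proportional to its gradient against ln z, F = -κ u* dN/d(ln z). The sketch below is schematic only; the heights, concentrations and friction velocity are made up, and stability corrections are omitted.

```python
import numpy as np

KAPPA = 0.4   # von Karman constant

def gradient_flux(heights_m, concentrations, u_star):
    """Schematic gradient-method flux estimate: fit concentration
    against ln(z) and apply F = -kappa * u_star * dN/d(ln z), which
    holds for a scalar in a neutral surface layer."""
    ln_z = np.log(heights_m)
    slope, _ = np.polyfit(ln_z, concentrations, 1)   # dN/d(ln z)
    return -KAPPA * u_star * slope

# Made-up aerosol number concentrations (particles/m^3) at 4 levels
z = [8.0, 11.0, 14.0, 17.0]
n = [2.1e6, 2.0e6, 1.95e6, 1.9e6]
print("upward flux:", gradient_flux(z, n, u_star=0.3), "particles m^-2 s^-1")
```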
NASA Astrophysics Data System (ADS)
Lebel, T.; Janicot, S.; Redelsperger, J. L.; Parker, D. J.; Thorncroft, C. D.
2015-12-01
The AMMA international project aims at improving our knowledge and understanding of the West African monsoon and its variability with an emphasis on daily-to-interannual timescales. AMMA is motivated by an interest in fundamental scientific issues and by the societal need for improved prediction of the WAM and its impacts on water resources, health and food security for West African nations. The West African monsoon (WAM) has a distinctive annual cycle in rainfall that remains a challenge to understand and predict. The location of peak rainfall, which resides in the Northern Hemisphere throughout the year, moves from the ocean to the land in boreal spring. Around the end of June there is a rapid shift in the location of peak rainfall between the coast and around 10°N where it remains until about the end of August. In September the peak rainfall returns equatorward at a relatively steady pace and is located over the ocean again by November. The fact that the peak rainfall migrates irregularly compared to the peak solar heating is due to the interactions that occur between the land, the atmosphere and the ocean. To gain a better understanding of this complex climate system, a large international research programme was launched in 2002, the biggest of its kind into environment and climate ever attempted in Africa. AMMA has involved a comprehensive field experiment bringing together ocean, land and atmospheric measurements, on timescales ranging from hourly and daily variability up to the changes in seasonal activity over a number of years. This presentation will focus on the description of the field programme and its accomplishments, and address some key questions that have been recently identified to form the core of AMMA-Phase 2.
Wind energy utilization: A bibliography
NASA Technical Reports Server (NTRS)
1975-01-01
Bibliography cites documents published to and including 1974 with abstracts and references, and is indexed by topic, author, organization, title, and keywords. Topics include: Wind Energy Potential and Economic Feasibility, Utilization, Wind Power Plants and Generators, Wind Machines, Wind Data and Properties, Energy Storage, and related topics.
HiVy automated translation of stateflow designs for model checking verification
NASA Technical Reports Server (NTRS)
Pingree, Paula
2003-01-01
The HiVy tool set enables model checking of finite state machine designs. This is achieved by translating state-chart specifications into the input language of the Spin model checker. An abstract syntax of hierarchical sequential automata (HSA) is provided as an intermediate format within the tool set.
ERGONOMICS ABSTRACTS 48983-49619.
ERIC Educational Resources Information Center
Ministry of Technology, London (England). Warren Spring Lab.
THE LITERATURE OF ERGONOMICS, OR BIOTECHNOLOGY, IS CLASSIFIED INTO 15 AREAS--METHODS, SYSTEMS OF MEN AND MACHINES, VISUAL AND AUDITORY AND OTHER INPUTS AND PROCESSES, INPUT CHANNELS, BODY MEASUREMENTS, DESIGN OF CONTROLS AND INTEGRATION WITH DISPLAYS, LAYOUT OF PANELS AND CONSOLES, DESIGN OF WORK SPACE, CLOTHING AND PERSONAL EQUIPMENT, SPECIAL…
Towards a Better Distributed Framework for Learning Big Data
2017-06-14
Abstract fragment: "This work aimed at solving issues in distributed machine learning. The PI's team proposed …communication load. Finally, the team proposed the parallel least-squares policy iteration (parallel LSPI) to parallelize a reinforcement policy learning."
A two-way nesting procedure for the WAM model: Application to the Spanish coast
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lahoz, M.G.; Albiach, J.C.C.
1997-02-01
The performance of the standard one-way nesting procedure for a regional application of a third-generation wave model is investigated. It is found that this nesting procedure is not applicable when the resolution has to be enhanced drastically, unless intermediate grids are placed between the coarse and the fine grid areas. This solution, in turn, requires an excess of computing resources. A two-way nesting procedure is developed and implemented in the model. Advantages and disadvantages of both systems are discussed. The model output for a test case is compared with observed data and the results are discussed in the paper.
A Computational Model of Human Table Tennis for Robot Application
NASA Astrophysics Data System (ADS)
Mülling, Katharina; Peters, Jan
Table tennis is a difficult motor skill which requires all basic components of a general motor skill learning system. In order to get a step closer to such a generic approach to the automatic acquisition and refinement of table tennis, we study table tennis from a human motor control point of view. We make use of the basic models of discrete human movement phases, virtual hitting points, and the operational timing hypothesis. Using these components, we create a computational model which is aimed at reproducing human-like behavior. We verify the functionality of this model in a physically realistic simulation of a Barrett WAM.
Cang, Zixuan; Wei, Guo-Wei
2018-02-01
Protein-ligand binding is a fundamental biological process that is paramount to many other biological processes, such as signal transduction, metabolic pathways, enzyme construction, cell secretion, and gene expression. Accurate prediction of protein-ligand binding affinities is vital to rational drug design and the understanding of protein-ligand binding and binding-induced function. Existing binding affinity prediction methods are inundated with geometric detail and involve excessively high dimensions, which undermines their predictive power for massive binding data. Topology provides the ultimate level of abstraction and thus incurs too much reduction in geometric information. Persistent homology embeds geometric information into topological invariants and bridges the gap between complex geometry and abstract topology. However, it oversimplifies biological information. This work introduces element specific persistent homology (ESPH) or multicomponent persistent homology to retain crucial biological information during topological simplification. The combination of ESPH and machine learning gives rise to a powerful paradigm for macromolecular analysis. Tests on 2 large data sets indicate that the proposed topology-based machine-learning paradigm outperforms other existing methods in protein-ligand binding affinity predictions. ESPH reveals protein-ligand binding mechanisms that cannot be attained from other conventional techniques. The present approach reveals that protein-ligand hydrophobic interactions are extended to 40 Å away from the binding site, which has significant ramifications for drug and protein design. Copyright © 2017 John Wiley & Sons, Ltd.
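The element-specific idea, building separate feature blocks per element pair before learning, can be caricatured without any persistent-homology machinery as follows; this sketch replaces ESPH with plain distance histograms and uses fabricated data, so it illustrates the feature organization only, not the paper's method.

```python
import numpy as np
from itertools import product
from sklearn.ensemble import GradientBoostingRegressor

# Schematic of the element-specific idea (not ESPH itself): one
# feature block per element pair from protein-ligand atom distances,
# then learn binding affinity with a boosted-tree regressor.
ELEMENTS = ["C", "N", "O"]
BINS = np.linspace(0.0, 12.0, 7)   # distance bins in Angstroms

def pair_features(protein_atoms, ligand_atoms):
    """protein_atoms/ligand_atoms: lists of (element, xyz ndarray)."""
    feats = []
    for ep, el in product(ELEMENTS, ELEMENTS):
        d = [np.linalg.norm(p - q)
             for e1, p in protein_atoms if e1 == ep
             for e2, q in ligand_atoms if e2 == el]
        hist, _ = np.histogram(d, bins=BINS)
        feats.extend(hist)
    return np.array(feats, dtype=float)

# Tiny fabricated complexes just to show the shapes involved
rng = np.random.default_rng(7)
def fake_complex():
    prot = [(rng.choice(ELEMENTS), rng.uniform(0, 10, 3)) for _ in range(30)]
    lig = [(rng.choice(ELEMENTS), rng.uniform(0, 10, 3)) for _ in range(8)]
    return prot, lig

X = np.stack([pair_features(*fake_complex()) for _ in range(40)])
y = rng.normal(6.0, 1.5, size=40)   # fake pKd labels
model = GradientBoostingRegressor().fit(X, y)
```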
Cylindrical Vector Beams for Rapid Polarization-Dependent Measurements in Atomic Systems
2011-12-05
Cited: S. Tripathi and K. C. Toussaint, Jr., "Rapid Mueller matrix polarimetry based on parallelized…", www.opticsinfobase.org/abstract.cfm?URI=oe-18-24-25035. Abstract fragment: "…optical trapping [11], atom guiding [12], laser machining [13], charged particle acceleration [14,15], and polarimetry [16]. Yet despite numerous…"
Ontology-Based Learner Categorization through Case Based Reasoning and Fuzzy Logic
ERIC Educational Resources Information Center
Sarwar, Sohail; García-Castro, Raul; Qayyum, Zia Ul; Safyan, Muhammad; Munir, Rana Faisal
2017-01-01
Learner categorization has a pivotal role in making e-learning systems a success. However, learner characteristics exploited at abstract level of granularity by contemporary techniques cannot categorize the learners effectively. In this paper, an architecture of e-learning framework has been presented that exploits the machine learning based…
Destruction of Knowledge: A Study of Journal Mutilation at a Large University Library.
ERIC Educational Resources Information Center
Constantinou, Constantia
1995-01-01
A study of 1264 incidents of journal mutilation at New York University indicates no relationship between the availability of indexing and abstracting services on CD-ROM databases and mutilation. Recommends posting warnings; raising awareness; providing adequate photocopiers, change, and vendor card machines; announcing closing time; encouraging…
Types for Correct Concurrent API Usage
2010-12-01
Report fragments: "…unique, full. Here g is the state guarantee and A is the current abstract state of the object referenced by r. The ⊗ symbol is called the 'tensor'…" "…to discover resources on a heterogeneous network. Votebox is an open-source implementation of software for voting machines. The blocking queue method…"
Visualization of Learning Scenarios with UML4LD
ERIC Educational Resources Information Center
Laforcade, Pierre
2007-01-01
Present Educational Modelling Languages are used to formally specify abstract learning scenarios in a machine-interpretable format. Current tooling does not provide teachers/designers with graphical facilities to help them in reusing existing scenarios. They need human-readable representations. This paper discusses the UML4LD experimental…
Swan, Anna Louise; Mobasheri, Ali; Allaway, David; Liddell, Susan
2013-01-01
Abstract Mass spectrometry is an analytical technique for the characterization of biological samples and is increasingly used in omics studies because of its targeted, nontargeted, and high throughput abilities. However, due to the large datasets generated, it requires informatics approaches such as machine learning techniques to analyze and interpret relevant data. Machine learning can be applied to MS-derived proteomics data in two ways. First, directly to mass spectral peaks and second, to proteins identified by sequence database searching, although relative protein quantification is required for the latter. Machine learning has been applied to mass spectrometry data from different biological disciplines, particularly for various cancers. The aims of such investigations have been to identify biomarkers and to aid in diagnosis, prognosis, and treatment of specific diseases. This review describes how machine learning has been applied to proteomics tandem mass spectrometry data. This includes how it can be used to identify proteins suitable for use as biomarkers of disease and for classification of samples into disease or treatment groups, which may be applicable for diagnostics. It also includes the challenges faced by such investigations, such as prediction of proteins present, protein quantification, planning for the use of machine learning, and small sample sizes. PMID:24116388
NASA Astrophysics Data System (ADS)
Knippertz, Peter; Hannak, Lisa; Fink, Andreas H.; Kniffka, Anke; Pante, Gregor
2017-04-01
Climate models struggle to realistically represent the West African monsoon (WAM), which hinders reliable future projections and the development of adequate adaptation measures. Low-level clouds over southern West Africa (5-10°N, 8°W-8°E) during July-September are an integral part of the WAM through their effect on the surface energy balance and precipitation, but their representation in climate models has so far received little attention. These clouds usually form during the night near the level of the nocturnal low-level jet (~950 hPa), thicken and spread until the mid-morning (~09 UTC), and then break up and rise in the course of the day, typically to about 850 hPa. The low thermal contrast to the surface and the frequent presence of obscuring higher-level clouds make detection of the low-level clouds from space rather challenging. Here we use 30 years of output from 18 models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) as well as 20 years of output from 8 models participating in the Year of Tropical Convection (YoTC) experiments to identify cloud biases and their causes. A great advantage of the YoTC dataset is the 6-hourly output frequency, which allows an analysis of the diurnal cycle, and the availability of temperature and moisture tendencies from parameterized processes such as convection, radiation and boundary-layer turbulence. A comparison to earlier analyses based on CMIP3 output reveals rather limited improvements with regard to the representation of low-level clouds and winds. Compared to ERA-Interim re-analyses, which show satisfactory agreement with surface observations, many of the CMIP5 and YoTC models still have large biases in low-level cloudiness of both signs and a tendency to too high elevation and too weak diurnal cycles. At the same time, these models tend to have too strong low-level jets, the impact of which is unclear due to concomitant effects on temperature and moisture advection as well as turbulent mixing. Part of the differences between the models and ERA-Interim appears to be related to the different subgrid cloud schemes used. While nighttime tendencies in temperature and humidity are broadly realistic in most models, daytime tendencies show large variation in the vertical transport of heat and moisture. Many models simulate too low near-surface relative humidities, leading to insufficient low cloud cover, abundant solar radiation, and thus a too large diurnal cycle in temperature and relative humidity. Currently, targeted model sensitivity experiments are being conducted to test possible feedback mechanisms between low clouds, radiation, boundary-layer dynamics, precipitation and the WAM circulation in the framework of the EU-funded DACCIWA (Dynamics-Aerosol-Chemistry-Cloud Interactions in West Africa) project (http://www.dacciwa.eu).
NASA Astrophysics Data System (ADS)
Mugan, Jonathan; Khalili, Aram E.
2014-05-01
Current computer systems are dumb automatons, and their blind execution of instructions makes them open to attack. Their inability to reason means that they do not consider the larger, constantly changing context outside their immediate inputs. Their nearsightedness is particularly dangerous because, in our complex systems, it is difficult to prevent all exploitable situations. Additionally, the lack of autonomous oversight of our systems means they are unable to fight through attacks. Keeping adversaries completely out of systems may be an unreasonable expectation, and our systems need to adapt to attacks and other disruptions to achieve their objectives. What is needed is an autonomous controller within the computer system that can sense the state of the system and reason about that state. In this paper, we present Self-Awareness Through Predictive Abstraction Modeling (SATPAM). SATPAM uses prediction to learn abstractions that allow it to recognize the right events at the right level of detail. These abstractions allow SATPAM to break the world into small, relatively independent pieces that permit the application of existing reasoning methods. SATPAM goes beyond classification-based machine learning and statistical anomaly detection to reason about the system, and its knowledge representation and reasoning are more like those of a human. For example, humans intuitively know that the color of a car is not relevant to any mechanical problem, and SATPAM provides a plausible method whereby a machine can acquire such reasoning patterns. In this paper, we present the initial experimental results using SATPAM.
Ferrigno, C.F.
1986-01-01
Machine-readable files developed for the High Plains Regional Aquifer-System Analysis project are stored on two magnetic tapes available from the U.S. Geological Survey. The first tape contains computer programs that were used to prepare, store, retrieve, organize, and preserve the areal interpretive data collected by the project staff. The second tape contains 134 data files that can be divided into five general classes: (1) aquifer geometry data, (2) aquifer and water characteristics, (3) water levels, (4) climatological data, and (5) land use and water use data. (Author's abstract)
Wind energy utilization: A bibliography with abstracts - Cumulative volume 1944/1974
NASA Technical Reports Server (NTRS)
1975-01-01
Bibliography, up to 1974 inclusive, of articles and books on the utilization of wind power in energy generation. Worldwide literature is surveyed, and short abstracts are provided in many cases. The citations are grouped by subject: (1) general; (2) utilization; (3) wind power plants; (4) wind power generators (rural, synchronous, remote station); (5) wind machines (motors, pumps, turbines, windmills, home-built); (6) wind data and properties; (7) energy storage; and (8) related topics (control and regulation devices, wind measuring devices, blade design and rotors, wind tunnel simulation, aerodynamics). Cross-referencing is aided by indexes of authors, corporate sources, titles, and keywords.
ABSTRACT: There are thousands of environmental chemicals subject to regulatory decisions for endocrine disrupting potential. A promising approach to manage this large universe of untested chemicals is to use a prioritization filter that combines in vitro assays with in silico QSA...
Whet Students' Appetites with Food-Related Drafting Project
ERIC Educational Resources Information Center
Pucillo, John M.
2010-01-01
Students sometimes find introductory drafting and design a boring subject. They must learn the basic skills necessary for drafting and architecture and this may require repetition in order to reinforce those skills. One way to keep students interested is to have them draw objects they encounter in their own lives instead of abstract machine parts…
PASCAL Data Base: File Description and On Line Access on ESA/IRS.
ERIC Educational Resources Information Center
Pelissier, Denise
This report describes the PASCAL database, a machine readable version of the French abstract journal Bulletin Signaletique, which allows use of the file for (1) batch and online retrieval of information, (2) selective dissemination of information, and (3) publishing of the 50 sections of Bulletin Signaletique. The system, which covers nine…
Short guide to SDI profiling at ORNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pomerance, H.S.
1976-06-01
ORNL has machine-searchable data bases that correspond to printed indexes and abstracts. This guide describes the peculiarities of those several data bases and the conventions of the ORNL search system so that users can write their own queries or search profiles and can interpret the part of the output that is encoded.
2017-11-01
[Table-of-contents and figure-caption residue from a report on multi-agent UAV simulation; recoverable entries include: Finite State Machine; Main Ontological Concepts for Representing the Structure of a Multi-Agent System; NetLogo Simulation of Persistent Surveillance of a Circular Plume by 4 UAVs; Flocking Emergent Behaviors in Multi-UAV Systems; Undesirable Group Formation; Two UAVs Moving in a Region.]
Vieira, Sandra; Pinaya, Walter H L; Mechelli, Andrea
2017-03-01
Deep learning (DL) is a family of machine learning methods that has gained considerable attention in the scientific community, breaking benchmark records in areas such as speech and visual recognition. DL differs from conventional machine learning methods by virtue of its ability to learn the optimal representation from the raw data through consecutive nonlinear transformations, achieving increasingly higher levels of abstraction and complexity. Given its ability to detect abstract and complex patterns, DL has been applied in neuroimaging studies of psychiatric and neurological disorders, which are characterised by subtle and diffuse alterations. Here we introduce the underlying concepts of DL and review studies that have used this approach to classify brain-based disorders. The results of these studies indicate that DL could be a powerful tool in the current search for biomarkers of psychiatric and neurologic disease. We conclude our review by discussing the main promises and challenges of using DL to elucidate brain-based disorders, as well as possible directions for future research. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification
Pang, Shan; Yang, Xinyi
2016-01-01
In recent years, deep learning methods such as the convolutional neural network (CNN) and the deep belief network (DBN) have been developed and applied to image classification. However, they suffer from problems such as local minima, slow convergence rates, and intensive human intervention. In this paper, we propose a rapid learning method, namely the deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN with the fast training of ELM. It uses multiple alternating convolution and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, thus saving much training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128
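The ELM readout at the heart of DC-ELM is simple enough to sketch. Assuming the convolution/pooling front end has already produced flattened feature vectors X and a one-hot label matrix Y (both assumptions here), the input weights stay random and only the output weights are solved in closed form:

```python
# Minimal ELM classifier sketch: random, untrained hidden layer plus a
# least-squares output layer. DC-ELM prepends convolution/pooling stages,
# omitted here for brevity.
import numpy as np

def elm_fit(X, Y, n_hidden=256, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights in closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

The closed-form solve is what gives ELM its speed advantage over backpropagation-trained networks.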
A review of supervised machine learning applied to ageing research.
Fabris, Fabio; Magalhães, João Pedro de; Freitas, Alex A
2017-04-01
Broadly speaking, supervised machine learning is the computational task of learning correlations between variables in annotated data (the training set), and using this information to create a predictive model capable of inferring annotations for new data, whose annotations are not known. Ageing is a complex process that affects nearly all animal species. This process can be studied at several levels of abstraction, in different organisms and with different objectives in mind. Not surprisingly, the diversity of the supervised machine learning algorithms applied to answer biological questions reflects the complexities of the underlying ageing processes being studied. Many works using supervised machine learning to study the ageing process have been published recently, so it is timely to review these works and to discuss their main findings and weaknesses. In summary, the main findings of the reviewed papers are: the link between specific types of DNA repair and ageing; ageing-related proteins tend to be highly connected and seem to play a central role in molecular pathways; ageing/longevity is linked with autophagy and apoptosis, nutrient receptor genes, and copper and iron ion transport. Additionally, several biomarkers of ageing were found by machine learning. Despite some interesting machine learning results, we also identified a weakness of current works on this topic: only one of the reviewed papers has corroborated the computational results of machine learning algorithms through wet-lab experiments. In conclusion, supervised machine learning has contributed to advancing our knowledge and has provided novel insights on ageing, yet future work should place greater emphasis on validating the predictions.
Kant, Vivek
2017-03-01
Jens Rasmussen's contribution to the field of human factors and ergonomics has had a lasting impact. Six prominent interrelated themes can be extracted from his research between 1961 and 1986. These themes form the basis of an engineering epistemology which is best manifested by his abstraction hierarchy. Further, Rasmussen reformulated technical reliability using systems language to enable a proper human-machine fit. To understand the concept of human-machine fit, he included the operator as a central component in the system to enhance system safety. This change resulted in the application of a qualitative and categorical approach for human-machine interaction design. Finally, Rasmussen's insistence on a working philosophy of systems design as being a joint responsibility of operators and designers provided the basis for averting errors and ensuring safe and correct system functioning. Copyright © 2016 Elsevier Ltd. All rights reserved.
Evaluation of Learning Strategies Training Program 94B10 Fort Lee, Virginia.
1981-11-30
[Garbled OCR residue of the scanned training materials; the recoverable fragment is a food-service recipe card for baking powder biscuits (yield: 100 portions, 4 pans; each portion: 2 biscuits).]
Yu, Wei; Clyne, Melinda; Dolan, Siobhan M; Yesupriya, Ajay; Wulf, Anja; Liu, Tiebin; Khoury, Muin J; Gwinn, Marta
2008-01-01
Background: Synthesis of data from published human genetic association studies is a critical step in the translation of human genome discoveries into health applications. Although genetic association studies account for a substantial proportion of the abstracts in PubMed, identifying them with standard queries is not always accurate or efficient. Further automating the literature-screening process can reduce the burden of a labor-intensive and time-consuming traditional literature search. The Support Vector Machine (SVM), a well-established machine learning technique, has been successful in classifying text, including biomedical literature. The GAPscreener, a free SVM-based software tool, can be used to assist in screening PubMed abstracts for human genetic association studies. Results: The data source for this research was the HuGE Navigator, formerly known as the HuGE Pub Lit database. Weighted SVM feature selection based on a keyword list obtained by the two-way z score method demonstrated the best screening performance, achieving 97.5% recall, 98.3% specificity and 31.9% precision in performance testing. Compared with the traditional screening process based on a complex PubMed query, the SVM tool reduced by about 90% the number of abstracts requiring individual review by the database curator. The tool also ascertained 47 articles that were missed by the traditional literature screening process during the 4-week test period. We examined the literature on genetic associations with preterm birth as an example. Compared with the traditional, manual process, the GAPscreener both reduced effort and improved accuracy. Conclusion: GAPscreener is the first free SVM-based application available for screening the human genetic association literature in PubMed with high recall and specificity. The user-friendly graphical user interface makes this a practical, stand-alone application. The software can be downloaded at no charge. PMID:18430222
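A simplified stand-in for this kind of SVM-based abstract screening is sketched below; note that the published tool used a weighted keyword feature set selected by a two-way z score, whereas this sketch substitutes generic TF-IDF features, and the example abstracts and labels are invented:

```python
# Sketch: linear SVM over TF-IDF features for relevant/irrelevant triage of
# PubMed abstracts. Training texts and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

abstracts = ["polymorphism rs123 associated with preterm birth risk",
             "crystal structure of a bacterial enzyme at 2.1 angstroms"]
labels = [1, 0]          # 1 = human genetic association study, 0 = irrelevant

screen = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
screen.fit(abstracts, labels)
print(screen.predict(["genome-wide association study of preterm delivery"]))
```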
Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H
2012-01-05
Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought. Copyright © 2010 Elsevier B.V. All rights reserved.
Architectures for intelligent machines
NASA Technical Reports Server (NTRS)
Saridis, George N.
1991-01-01
The theory of intelligent machines has been recently reformulated to incorporate new architectures that use neural and Petri nets. The analytic functions of an intelligent machine are implemented by intelligent controls, using entropy as a measure. The resulting hierarchical control structure is based on the principle of increasing precision with decreasing intelligence. Each of the three levels of the intelligent control uses a different architecture, in order to satisfy the requirements of the principle: the organization level is modeled after a Boltzmann machine for abstract reasoning, task planning, and decision making; the coordination level is composed of a number of Petri net transducers supervised, for command exchange, by a dispatcher, which also serves as an interface to the organization level; the execution level includes the sensory, navigation-planning, and control hardware, which interacts one-to-one with the appropriate coordinators, while a VME bus provides a channel for database exchange among the several devices. This system is currently implemented on a robotic transporter, designed for space construction at the CIRSSE laboratories at the Rensselaer Polytechnic Institute. The progress of its development is reported.
Modeling Medical Ethics through Intelligent Agents
NASA Astrophysics Data System (ADS)
Machado, José; Miranda, Miguel; Abelha, António; Neves, José; Neves, João
The amount of research using health information has increased dramatically over the past years. Indeed, a significant number of healthcare institutions have extensive Electronic Health Records (EHR), collected over several years for clinical and teaching purposes, but are uncertain as to the proper circumstances in which to use them to improve the delivery of care to those in need. Research Ethics Boards in Portugal and elsewhere in the world are grappling with these issues, but lack clear guidance regarding their role in the creation of and access to EHRs. However, we feel we have an effective way to handle Medical Ethics if we look at the problem in a structured and more rational way. Indeed, we found that physicians were not aware of the relevance of the subject in their pre-clinical years, but that their interest increased once they were exposed to patients. On the other hand, once EHRs are stored in machines, we also felt that we had to find a way to ensure that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. Therefore, in this article we discuss the importance of machine ethics and the need for machines that represent ethical principles explicitly. It is also shown how a machine may abstract an ethical principle from a logical representation of ethical judgments and use that principle to guide its own behavior.
Goldstein, Benjamin A; Navar, Ann Marie; Carter, Rickey E
2017-06-14
Risk prediction plays an important role in clinical cardiology research. Traditionally, most risk models have been based on regression models. While useful and robust, these statistical methods are limited to using a small number of predictors, which operate in the same way on everyone and uniformly throughout their range. The purpose of this review is to illustrate the use of machine-learning methods for the development of risk prediction models. Typically presented as black-box approaches, most machine-learning methods are aimed at solving particular challenges that arise in data analysis and are not well addressed by typical regression approaches. To illustrate these challenges, as well as how different methods can address them, we consider trying to predict mortality after diagnosis of acute myocardial infarction. We use data derived from our institution's electronic health record and abstract data on 13 regularly measured laboratory markers. We walk through different challenges that arise in modelling these data and then introduce different machine-learning approaches. Finally, we discuss general issues in the application of machine-learning methods including tuning parameters, loss functions, variable importance, and missing data. Overall, this review serves as an introduction for those working on risk modelling to approach the diffuse field of machine learning. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Cardiology.
ERIC Educational Resources Information Center
International Business Machines Corp., Milford, CT. Academic Information Systems.
This agenda lists activities scheduled for the second IBM (International Business Machines) Academic Information Systems University AEP (Advanced Education Projects) Conference, which was designed to afford the universities participating in the IBM-sponsored AEPs an opportunity to demonstrate their AEP experiments in educational computing. In…
Technical Reliability Studies. EOS/ESD Technology Abstracts
1981-01-01
[Keyword-index residue from the abstracts volume omitted (entries on EOS-induced secondary failure mechanisms, melamine work surfaces for ESD potential bleed-off, MESFETs, and automatic machine precautions). Recoverable abstract fragment:] ...microwave devices, optoelectronics, and selected nonelectronic parts employed in military, space and commercial applications. In addition, a System…
Evans, Lyn
2018-05-23
Abstract: From the civil engineering to the manufacturing of the various magnet types, each building block of this extraordinary machine required ambitious leaps in innovation. This lecture will review the history of the LHC project, focusing on the many challenges -- scientific, technological, managerial -- that had to be met during the various phases of R&D, industrialization, construction, installation and commissioning.
Implications of Gendered Technology for Art Education: The Case Study of a Male Drawing Machine.
ERIC Educational Resources Information Center
Morbey, Mary Leigh
Opening with a discussion of AARON, an artificial intelligence symbol system that is used to generate computer drawings, this document makes the argument that AARON is based upon a way of knowing that is abstract, analytical, rationalist and thus representative of the dominant, western, male philosophical tradition. Male bias permeates the field…
Using the global positioning system to map disturbance patterns of forest harvesting machinery
T.P. McDonald; E.A. Carter; S.E. Taylor
2002-01-01
Abstract: A method is presented to transform sampled machine positional data obtained from a global positioning system (GPS) receiver into a two-dimensional raster map of the number of passes as a function of location. The effects of three sources of error in the transformation process were investigated: path sampling rate (receiver sampling frequency);...
Characterizing diurnal and seasonal cycles in monsoon systems from TRMM and CEOP observations
NASA Technical Reports Server (NTRS)
Lau, William K. M.
2006-01-01
The CEOP Inter-Monsoon Study (CIMS) is one of the two main science drivers of CEOP that aims to (a) provide better understanding of fundamental physical processes in monsoon regions around the world, and (b) demonstrate the synergy and utility of CEOP data in providing a pathway for model physics evaluation and improvement. As the data collection phase for EOP-3 and EOP-4 is being completed, two full annual cycles (2003-2004) of research-quality data sets from satellites, reference sites, and model output location time series (MOLTS) have been processed and made available for data analyses and model validation studies. This article presents preliminary results of a CIMS study aimed at the characterization and intercomparison of all major monsoon systems. The CEOP reference site data proved their value in such exercises as a powerful tool to cross-validate the TRMM data and to intercompare with multi-model results in ongoing work. We use 6 years (1998-2003) of pentad CEOP/TRMM data on a 2° x 2.5° latitude-longitude grid, over the domains of interest, to define the monsoon climatological diurnal and annual cycles for the East Asian Monsoon (EAM), the South Asian Monsoon (SAM), the West Africa Monsoon (WAM), the North America/Mexican Monsoon (NAM), the South American Summer Monsoon (SASM) and the Australian Monsoon (AUM). As noted, the TRMM data used in the study were cross-validated using CEOP reference site data, where applicable. Results show that the observed diurnal cycle of rain peaks around late afternoon over monsoon land, and early morning over the oceans. The diurnal cycles in models tend to peak 2-3 hours earlier than observed. The seasonal cycles of the EAM and SAM show the strongest continentality, i.e., strong control by continental processes away from the ITCZ. The WAM and the AUM show the least continentality, i.e., strong control by the oceanic ITCZ.
Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acharya, Naresh; Baone, Chaitanya; Veda, Santosh
2014-12-31
Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable assessment of dynamic stability margins, enable proactive real-time control, and improve grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst-case conditions such as summer peak or winter peak days. With widespread deployment of renewable generation, controllable loads, energy storage devices, and plug-in hybrid electric vehicles expected in the near future, and greater integration of cyber infrastructure (communications, computation and control), monitoring and controlling the dynamic performance of the grid in real time will become increasingly important. State-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance on single-processor computers, but the simulation is still several times slower than real time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, expectations have risen for more efficient and faster techniques to be implemented in power system simulators. This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed decades ago, when High Performance Computing (HPC) resources were not commonly available.
Cape Blanco wind farm feasibility study
NASA Astrophysics Data System (ADS)
1987-11-01
The Cape Blanco Wind Farm (CBWF) Feasibility Study was undertaken as a prototype for determining the feasibility of proposals for wind energy projects at Northwest sites. It was intended to test for conditions under which wind generation of electricity could be commercially feasible, not by another abstract survey of alternative technologies, but rather through a site-specific, machine-specific analysis of one proposal. Some of the study findings would be most pertinent to the Cape Blanco site - local problems require local solutions. Other findings would be readily applicable to other sites and other machines, and study methodologies would be designed to be modified for appraisal of other proposals. This volume discusses environmental, economic, and technical issues of the Wind Farm.
USSR Space Life Sciences Digest, Issue 18
NASA Technical Reports Server (NTRS)
Hooke, Lydia Razran (Editor); Donaldson, P. Lynn (Editor); Teeter, Ronald (Editor); Garshnek, Victoria (Editor); Rowe, Joseph (Editor)
1988-01-01
This is the 18th issue of NASA's USSR Life Sciences Digest. It contains abstracts of 50 papers published in Russian language periodicals or presented at conferences and of 8 new Soviet monographs. Selected abstracts are illustrated with figures and tables from the original. A review of a recent Aviation Medicine Handbook is also included. The abstracts in this issue have been identified as relevant to 37 areas of space biology and medicine. These areas are: adaptation, aviation medicine, biological rhythms, biospherics, body fluids, cardiovascular and respiratory systems, cytology, developmental biology, endocrinology, enzymology, equipment and instrumentation, exobiology, gastrointestinal system, genetics, gravitational biology, group dynamics, habitability and environmental effects, hematology, human performance, immunology, life support systems, man-machine systems, mathematical modeling, metabolism, microbiology, musculoskeletal system, neurophysiology, nutrition, operational medicine, perception, personnel selection, psychology, radiobiology, reproductive biology, space biology and medicine, and space industrialization.
USSR Space Life Sciences Digest, issue 16
NASA Technical Reports Server (NTRS)
Hooke, Lydia Razran (Editor); Teeter, Ronald (Editor); Siegel, Bette (Editor); Donaldson, P. Lynn (Editor); Leveton, Lauren B. (Editor); Rowe, Joseph (Editor)
1988-01-01
This is the sixteenth issue of NASA's USSR Life Sciences Digest. It contains abstracts of 57 papers published in Russian language periodicals or presented at conferences and of 2 new Soviet monographs. Selected abstracts are illustrated with figures and tables from the original. An additional feature is the review of a book concerned with metabolic response to the stress of space flight. The abstracts included in this issue are relevant to 33 areas of space biology and medicine. These areas are: adaptation, biological rhythms, bionics, biospherics, body fluids, botany, cardiovascular and respiratory systems, developmental biology, endocrinology, enzymology, exobiology, gastrointestinal system, genetics, gravitational biology, habitability and environmental effects, hematology, human performance, immunology, life support systems, man-machine systems, mathematical modeling, metabolism, microbiology, musculoskeletal system, neurophysiology, nutrition, operational medicine, perception, personnel selection, psychology, radiobiology, reproductive biology, and space biology.
NASA Astrophysics Data System (ADS)
Zhang, J. Y.; Jiang, Y.
2017-10-01
To ensure satisfactory dynamic performance of controllers in time-delayed power systems, a WAMS-based control strategy is investigated in the presence of output feedback delay. An integrated approach based on Pade approximation and particle swarm optimization (PSO) is employed for parameter configuration of the power system stabilizer (PSS). The coordinated configuration scheme for power system controllers is achieved through a series of stability constraints, with the aim of maximizing the minimum damping ratio of the inter-area modes of the power system. The validity of the derived PSS is verified on a prototype power system. The findings demonstrate that the proposed design approach can damp the inter-area oscillation and enhance the small-signal stability.
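The optimization step is conventional PSO; a minimal sketch follows, in which particles encode candidate stabilizer parameters and the fitness function is a placeholder for the minimum damping ratio returned by a small-signal model (the paper's actual model, delay handling, and parameter bounds are not reproduced here):

```python
# PSO sketch for PSS tuning: maximize the minimum damping ratio over inter-area
# modes. `min_damping_ratio` is a stand-in for the Pade-approximated small-signal
# model; here a toy surrogate with optimum at 0.5 keeps the example runnable.
import numpy as np

def min_damping_ratio(params):
    # In practice: build the delay-including state matrix for these PSS gains and
    # time constants, then return min(-Re(lam)/|lam|) over inter-area modes lam.
    return -np.sum((params - 0.5) ** 2)   # toy surrogate objective

rng = np.random.default_rng(0)
n_particles, dim = 20, 4
x = rng.uniform(0, 1, (n_particles, dim))        # normalized PSS parameters
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([min_damping_ratio(p) for p in x])
gbest = pbest[pbest_f.argmax()]
for _ in range(100):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)                     # respect parameter bounds
    f = np.array([min_damping_ratio(p) for p in x])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmax()]
print(gbest)
```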
MAP Fault Localization Based on Wide Area Synchronous Phasor Measurement Information
NASA Astrophysics Data System (ADS)
Zhang, Yagang; Wang, Zengping
2015-02-01
In research on complicated electrical engineering, the emergence of phasor measurement units (PMUs) is a landmark event. The establishment and application of wide area measurement systems (WAMS) in power systems has had a widespread and profound influence on the safe and stable operation of complicated power systems. In this paper, taking full advantage of the wide-area synchronous phasor measurement information provided by PMUs, we carry out precise fault localization based on the principle of maximum a posteriori (MAP) probability. A large number of simulation experiments have confirmed that the results of MAP fault localization are accurate and reliable. Even with interference from white Gaussian noise, the results of the MAP classification match the actual situation.
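The decision rule underlying this localization is standard Bayesian classification; as a sketch (the paper's exact likelihood model is not given here), the MAP estimate of the fault hypothesis k from the PMU measurement vector z is:

```latex
% MAP fault localization: choose the fault hypothesis k maximizing the posterior.
\hat{k} \;=\; \arg\max_{k}\; P(k \mid \mathbf{z})
        \;=\; \arg\max_{k}\; p(\mathbf{z} \mid k)\, P(k)
```

Under the white-Gaussian-noise assumption mentioned in the abstract, $p(\mathbf{z}\mid k)=\mathcal{N}(\mathbf{z};\boldsymbol{\mu}_k,\sigma^2\mathbf{I})$, so the rule reduces to a prior-weighted minimum-distance classifier in the measurement space.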
Cloud Fingerprinting: Using Clock Skews To Determine Co Location Of Virtual Machines
2016-09-01
[Report-form and table-of-contents residue omitted. Recoverable abstract fragment:] Cloud computing has quickly revolutionized the computing practices of organizations, including the Department of Defense. However, security concerns…
Speech Processing and Recognition (SPaRe)
2011-01-01
[Report-form residue omitted; recoverable fragments:] ...results in the areas of automatic speech recognition (ASR), speech processing, machine translation (MT), natural language processing (NLP), and information retrieval (IR). ... the IOC was only expected to provide document submission and search; automatic speech recognition (ASR) for English, Spanish, Arabic, and…
Fighting Through a Logistics Cyber Attack
2015-06-19
[Figure residue omitted: a timeline of military technologies (chariot, gunpowder, machine gun, tanks, aircraft, radar, nuclear weapons, satellites, GPS, cyber weapons). Recoverable fragments:] ...primarily remained in the scientific and academic communities for the next 22 years (Griffiths, 2002). The Internet as we recognize it today... Griffiths (2002) defines the Web as an abstract information space containing hyperlinked documents and other resources, identified by their Uniform…
Literature classification for semi-automated updating of biological knowledgebases
2013-01-01
Background: As the output of biological assays increases in resolution and volume, the body of specialized biological data, such as functional annotations of gene and protein sequences, enables extraction of the higher-level knowledge needed for practical application in bioinformatics. Whereas common types of biological data, such as sequence data, are extensively stored in biological databases, functional annotations, such as immunological epitopes, are found primarily in semi-structured formats or free text embedded in primary scientific literature. Results: We defined and applied a machine learning approach for literature classification to support updating of TANTIGEN, a knowledgebase of tumor T-cell antigens. Abstracts from PubMed were downloaded and classified as either "relevant" or "irrelevant" for database update. Training and five-fold cross-validation of a k-NN classifier on 310 abstracts yielded a classification accuracy of 0.95, thus showing significant value in support of data extraction from the literature. Conclusion: We propose a conceptual framework for semi-automated extraction of epitope data embedded in scientific literature using principles from text mining and machine learning. The addition of such data will aid in the transition of biological databases to knowledgebases. PMID:24564403
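A minimal sketch of such a screening classifier, k-NN over bag-of-words features with five-fold cross-validation, is shown below; the feature representation, neighbour count, and placeholder abstracts are illustrative assumptions rather than the paper's exact setup:

```python
# Sketch: relevant/irrelevant triage of abstracts with a k-NN classifier over
# TF-IDF features, evaluated by five-fold cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Placeholder abstracts: 1 = relevant to the knowledgebase update, 0 = irrelevant.
abstracts = (["tumor antigen epitope recognized by cytotoxic T lymphocytes"] * 5
             + ["soil nutrient effects on crop yield in field trials"] * 5)
labels = [1] * 5 + [0] * 5

knn = make_pipeline(TfidfVectorizer(),
                    KNeighborsClassifier(n_neighbors=3, metric="cosine"))
print(cross_val_score(knn, abstracts, labels, cv=5).mean())
```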
Exploiting the Dynamics of Soft Materials for Machine Learning
Hauser, Helmut; Li, Tao; Pfeifer, Rolf
2018-01-01
Soft materials are increasingly utilized for various purposes in many engineering applications. These materials have been shown to perform a number of functions that were previously difficult to implement using rigid materials. Here, we argue that the diverse dynamics generated by actuating soft materials can be effectively used for machine learning purposes. This is demonstrated using a soft silicone arm through a technique of multiplexing, which enables the rich transient dynamics of the soft material to be fully exploited as a computational resource. The computational performance of the soft silicone arm is examined through two standard benchmark tasks. Results show that the soft arm compares well to or even outperforms conventional machine learning techniques under multiple conditions. We then demonstrate that this system can be used for the sensory time-series prediction problem for the soft arm itself, which suggests its immediate applicability to a real-world machine learning problem. Our approach, on the one hand, represents a radical departure from traditional computational methods, whereas on the other hand, it fits nicely into a more general perspective of computation by way of exploiting the properties of physical materials in the real world. PMID:29708857
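The computational scheme is essentially reservoir computing with a trained linear readout; in the sketch below, a random recurrent network stands in for the physical soft-body dynamics (an illustrative substitution), sensor time series form the state, and only the ridge-regression readout is trained:

```python
# Reservoir-computing sketch: the "body" is a fixed nonlinear dynamical system;
# learning happens only in a linear readout fitted by ridge regression.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)
T, n_sensors = 2000, 10
u = np.sin(np.linspace(0, 60, T))                       # input stream driving the arm
states = np.zeros((T, n_sensors))
W = rng.normal(scale=0.4, size=(n_sensors, n_sensors))  # stand-in for body dynamics
w_in = rng.normal(size=n_sensors)
for t in range(1, T):
    states[t] = np.tanh(states[t - 1] @ W + u[t] * w_in)

target = np.roll(u, -1)                                 # task: one-step prediction
X, y, lam = states[:-1], target[:-1], 1e-4
w_out = solve(X.T @ X + lam * np.eye(n_sensors), X.T @ y)  # ridge-regression readout
print(np.mean((X @ w_out - y) ** 2))                    # training error of the readout
```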
Wood, Lisa A
2016-06-01
Attending to the material discursive constructions of the patient body within cone beam computed tomography (CBCT) imaging in radiotherapy treatments, in this paper I describe how bodies and machines co-create images. Using an analytical framework inspired by Science and Technology Studies and Feminist Technoscience, I describe the interplay between machines and bodies and the implications of materialities and agency. I argue that patients' bodies play a part in producing scans within acceptable limits of machines as set out through organisational arrangements. In doing so I argue that bodies are fabricated into the order of work prescribed and embedded within and around the CBCT system, becoming, not only the subject of resulting images, but part of that image. The scan is not therefore a representation of a passive subject (a body) but co-produced by the work of practitioners and patients who actively control (and contort) and discipline their body according to protocols and instructions and the CBCT system. In this way I suggest they are 'con-forming' the CBCT image. A Virtual Abstract of this paper can be found at: https://youtu.be/qysCcBGuNSM. © 2015 Foundation for the Sociology of Health & Illness.
Boosting compound-protein interaction prediction by deep learning.
Tian, Kai; Shao, Mingyu; Wang, Yang; Guan, Jihong; Zhou, Shuigeng
2016-11-01
The identification of interactions between compounds and proteins plays an important role in network pharmacology and drug discovery. However, experimentally identifying compound-protein interactions (CPIs) is generally expensive and time-consuming, so computational approaches have been introduced. Among these, machine-learning-based methods have achieved considerable success. However, due to the nonlinear and imbalanced nature of biological data, many machine learning approaches have their own limitations. Recently, deep learning techniques have shown advantages over many state-of-the-art machine learning methods in some applications. In this study, we aim at improving the performance of CPI prediction based on deep learning, and propose a method called DL-CPI (the abbreviation of Deep Learning for Compound-Protein Interactions prediction), which employs a deep neural network (DNN) to effectively learn the representations of compound-protein pairs. Extensive experiments show that DL-CPI can learn useful features of compound-protein pairs by layerwise abstraction, and thus achieves better prediction performance than existing methods on both balanced and imbalanced datasets. Copyright © 2016 Elsevier Inc. All rights reserved.
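As an illustration of the general architecture (not the published network, whose inputs and layer sizes are not reproduced here), a feed-forward DNN over concatenated compound and protein descriptor vectors can be sketched as:

```python
# Sketch: DNN over concatenated compound/protein features for CPI prediction.
# All features and labels below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
compound_fp = rng.integers(0, 2, size=(200, 128))   # e.g., binary structural fingerprints
protein_feat = rng.random(size=(200, 64))           # e.g., sequence-derived descriptors
X = np.hstack([compound_fp, protein_feat])          # one row per compound-protein pair
y = rng.integers(0, 2, size=200)                    # 1 = interacting pair (synthetic)

dnn = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=300, random_state=0)
dnn.fit(X, y)                                       # layerwise nonlinear abstraction
print(dnn.predict_proba(X[:3]))
```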
Nofre, David; Priestley, Mark; Alberts, Gerard
2014-01-01
Language is one of the central metaphors around which the discipline of computer science has been built. The language metaphor entered modern computing as part of a cybernetic discourse, but during the second half of the 1950s it acquired a more abstract meaning, closely related to the formal languages of logic and linguistics. The article argues that this transformation was related to the appearance of the commercial computer in the mid-1950s. Managers of computing installations and specialists on computer programming in academic computer centers, confronted with an increasing variety of machines, called for the creation of "common" or "universal languages" to enable the migration of computer code from machine to machine. Finally, the article shows how the idea of a universal language was a decisive step in the emergence of programming languages, in the recognition of computer programming as a proper field of knowledge, and eventually in the way we think of the computer.
Machine-checked proofs of the design and implementation of a fault-tolerant circuit
NASA Technical Reports Server (NTRS)
Bevier, William R.; Young, William D.
1990-01-01
A formally verified implementation of the 'oral messages' algorithm of Pease, Shostak, and Lamport is described. An abstract implementation of the algorithm is verified to achieve interactive consistency in the presence of faults. This abstract characterization is then mapped down to a hardware-level implementation which inherits the fault-tolerant characteristics of the abstract version. All steps in the proof were checked with the Boyer-Moore theorem prover. A significant result is the demonstration of a fault-tolerant device that is formally specified and whose implementation is proved correct with respect to this specification. A significant simplifying assumption is that the redundant processors behave synchronously. A mechanically checked proof that the oral messages algorithm is 'optimal', in the sense that no algorithm which achieves agreement via similar message passing can tolerate a larger proportion of faulty processors, is also described.
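The verified algorithm itself is compact. A minimal executable sketch of the OM(m) recursion follows, under the same synchrony idealization; the `send` model, in which a traitor simply inverts the bit it relays, is an illustrative simplification (a Byzantine processor may misbehave arbitrarily):

```python
# Sketch of the recursive oral-messages algorithm OM(m) of Pease, Shostak, and
# Lamport, with idealized synchronous message passing.
from collections import Counter

def send(src, value, faulty):
    """Message model: a loyal processor relays its value; a traitor flips the bit."""
    return (value + 1) % 2 if src in faulty else value

def om(m, commander, lieutenants, value, faulty):
    """Return {lieutenant: decided value} after OM(m)."""
    received = {l: send(commander, value, faulty) for l in lieutenants}
    if m == 0:
        return received
    # Each lieutenant j re-broadcasts what it received, acting as sub-commander.
    relayed = {j: om(m - 1, j, [k for k in lieutenants if k != j],
                     received[j], faulty)
               for j in lieutenants}
    # Each lieutenant decides by majority over its own copy and the relayed copies.
    return {i: Counter([received[i]] +
                       [relayed[j][i] for j in lieutenants if j != i]
                       ).most_common(1)[0][0]
            for i in lieutenants}

# n = 4, m = 1, one traitor (processor 2): loyal lieutenants 1 and 3 agree on 1,
# consistent with the classic n > 3m bound.
print(om(1, 0, [1, 2, 3], 1, faulty={2}))
```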
USSR Space Life Sciences Digest, issue 19
NASA Technical Reports Server (NTRS)
Hooke, Lydia Razran (Editor); Donaldson, P. Lynn (Editor); Teeter, Ronald (Editor); Garshnek, Victoria (Editor); Rowe, Joseph (Editor)
1988-01-01
This is the 19th issue of NASA's USSR Space Life Sciences Digest. It contains abstracts of 47 papers published in Russian language periodicals or presented at conferences and of 5 new Soviet monographs. Selected abstracts are illustrated with figures and tables from the original. Reports on two conferences, one on adaptation to high altitudes, and one on space and ecology are presented. A book review of a recent work on high altitude physiology is also included. The abstracts in this issue have been identified as relevant to 33 areas of space biology and medicine. These areas are: adaptation, biological rhythms, biospherics, body fluids, botany, cardiovascular and respiratory systems, cytology, developmental biology, endocrinology, enzymology, biology, group dynamics, habitability and environmental effects, hematology, human performance, immunology, life support systems, man-machine systems, mathematical modeling, metabolism, microbiology, musculoskeletal system, neurophysiology, nutrition, operational medicine, perception, personnel selection, psychology, radiobiology, and space biology and medicine.
Using container orchestration to improve service management at the RAL Tier-1
NASA Astrophysics Data System (ADS)
Lahiff, Andrew; Collier, Ian
2017-10-01
In recent years container orchestration has been emerging as a means of gaining many potential benefits compared to a traditional static infrastructure, such as increased utilisation through multi-tenancy, improved availability due to self-healing, and the ability to handle changing loads due to elasticity and auto-scaling. To this end we have been investigating migrating services at the RAL Tier-1 to an Apache Mesos cluster. In this model the concept of individual machines is abstracted away and services are run in containers on a cluster of machines, managed by schedulers, enabling a high degree of automation. Here we describe Mesos, the infrastructure deployed at RAL, and describe in detail the explicit example of running a batch farm on Mesos.
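The abstract does not say which schedulers were used; as one concrete illustration, a long-running service can be registered with Marathon, a widely used Mesos framework, via its REST API. The endpoint, service id, image name, and resource figures below are all hypothetical:

```python
# Hedged illustration: registering a containerized batch-farm worker with a
# (hypothetical) Marathon endpoint. Marathon then keeps the requested number of
# instances running on the Mesos cluster.
import requests

app = {
    "id": "/tier1/batch-worker",          # hypothetical service id
    "instances": 10,                      # elasticity: rescale by updating this
    "cpus": 2.0,
    "mem": 4096,
    "container": {"type": "DOCKER",
                  "docker": {"image": "example.org/batch-worker:latest"}},
}

r = requests.post("http://marathon.example.org:8080/v2/apps", json=app)
r.raise_for_status()     # failed instances are restarted automatically (self-healing)
```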
JIGSAW: Preference-directed, co-operative scheduling
NASA Technical Reports Server (NTRS)
Linden, Theodore A.; Gaw, David
1992-01-01
Techniques that enable humans and machines to cooperate in the solution of complex scheduling problems have evolved out of work on the daily allocation and scheduling of Tactical Air Force resources. A generalized, formal model of these applied techniques is being developed. It is called JIGSAW by analogy with the multi-agent, constructive process used when solving jigsaw puzzles. JIGSAW begins from this analogy and extends it by propagating local preferences into global statistics that dynamically influence the value and variable ordering decisions. The statistical projections also apply to abstract resources and time periods--allowing more opportunities to find a successful variable ordering by reserving abstract resources and deferring the choice of a specific resource or time period.
The scheme machine: A case study in progress in design derivation at system levels
NASA Technical Reports Server (NTRS)
Johnson, Steven D.
1995-01-01
The Scheme Machine is one of several design projects of the Digital Design Derivation group at Indiana University. It differs from the other projects in its focus on issues of system design and its connection to surrounding research in programming language semantics, compiler construction, and programming methodology underway at Indiana and elsewhere. The genesis of the project dates to the early 1980's, when digital design derivation research branched from the surrounding research effort in programming languages. Both branches have continued to develop in parallel, with this particular project serving as a bridge. However, by 1990 there remained little real interaction between the branches and recently we have undertaken to reintegrate them. On the software side, researchers have refined a mathematically rigorous (but not mechanized) treatment starting with the fully abstract semantic definition of Scheme and resulting in an efficient implementation consisting of a compiler and virtual machine model, the latter typically realized with a general purpose microprocessor. The derivation includes a number of sophisticated factorizations and representations and is also a deep example of the underlying engineering methodology. The hardware research has created a mechanized algebra supporting the tedious and massive transformations often seen at lower levels of design. This work has progressed to the point that large scale devices, such as processors, can be derived from first-order finite state machine specifications. This is roughly where the language-oriented research stops; thus, together, the two efforts establish a thread from the highest levels of abstract specification to detailed digital implementation. The Scheme Machine project challenges hardware derivation research in several ways, although the individual components of the system are of a similar scale to those we have worked with before. The machine has a custom dual-ported memory to support garbage collection. It consists of four tightly coupled processes--processor, collector, allocator, memory--with a very non-trivial synchronization relationship. Finally, there are deep issues of representation for the run-time objects of a symbolic processing language. The research centers on verification through integrated formal reasoning systems, but is also involved with modeling and prototyping environments. Since the derivation algebra is based on an executable modeling language, there is opportunity to incorporate design animation in the design process. We are looking for ways to move smoothly and incrementally from executable specifications into hardware realization. For example, we can run the garbage collector specification, a Scheme program, directly against the physical memory prototype, and similarly, the instruction processor model against the heap implementation.
Index to NASA Tech Briefs, 1974
NASA Technical Reports Server (NTRS)
1975-01-01
The following information was given for 1974: (1) abstracts of reports dealing with new technology derived from the research and development activities of NASA or the U.S. Atomic Energy Commission, arranged by subjects: electronics/electrical, electronics/electrical systems, physical sciences, materials/chemistry, life sciences, mechanics, machines, equipment and tools, fabrication technology, and computer programs, (2) indexes for the above documents: subject, personal author, originating center.
Aerospace Medicine and Biology: A continuing bibliography with indexes (supplement 259)
NASA Technical Reports Server (NTRS)
1984-01-01
A bibliography containing 476 documents introduced into the NASA scientific and technical information system in May 1984 is presented. The primary subject categories included are: life sciences, aerospace medicine, behavioral sciences, man/system technology, life support, and planetary biology. Topics extensively represented were space flight stress, man machine systems, weightlessness, human performance, mental performance, and spacecraft environments. Abstracts for each citation are given.
Abstract Machines for Polymorphous Computing
2007-12-01
s/ /s/ MARK NOVAK WARREN H. DEBANY, Jr. Work Unit Manager Technical Advisor, Information Grid Division Information...models and LLCs have been developed for Raw, MONARCH [18][19], TRIPS [20][21], and Smart Memories [22][23]. These research projects were conducted...used here. In our approach on Raw, two key concepts are used to fully leverage the Raw architecture [34]. First, the tile grid is viewed as a
Model A: High-Temperature Tribometer
1992-02-01
spring loaded collet which grips the pin. In previous machines Inconel 625 collets and sleeves with 450 contact angles were used without collet...Triboeter, high temperature, friction, wear 11 1 08__ 19 ABSTRACT (Continue on revere if necewry and identify by blck number) A high temperature...tribometer has been specifically designed and fabricated to accurately measure, in real time, friction and wear characteristics of materials at temperatures
Gobeill, Julien; Pasche, Emilie; Vishnyakova, Dina; Ruch, Patrick
2013-01-01
The available curated data lag behind current biological knowledge contained in the literature. Text mining can assist biologists and curators to locate and access this knowledge, for instance by characterizing the functional profile of publications. Gene Ontology (GO) category assignment in free text already supports various applications, such as powering ontology-based search engines, finding curation-relevant articles (triage) or helping the curator to identify and encode functions. Popular text mining tools for GO classification are based on so-called thesaurus-based -- or dictionary-based -- approaches, which exploit similarities between the input text and GO terms themselves. But their effectiveness remains limited owing to the complex nature of GO terms, which rarely occur in text. In contrast, machine learning approaches exploit similarities between the input text and already curated instances contained in a knowledge base to infer a functional profile. GO Annotations (GOA) and MEDLINE make it possible to exploit a growing amount of curated abstracts (97 000 in November 2012) for populating this knowledge base. Our study compares a state-of-the-art thesaurus-based system with a machine learning system (based on a k-Nearest Neighbours algorithm) for the task of proposing a functional profile for unseen MEDLINE abstracts, and shows how resources and performances have evolved. Systems are evaluated on their ability to propose for a given abstract the GO terms (2.8 on average) used for curation in GOA. We show that since 2006, although a massive effort was put into adding synonyms in GO (+300%), our thesaurus-based system's effectiveness has remained rather constant, ranging from 0.28 to 0.31 for recall at 20 (R20). In contrast, thanks to the growth of its knowledge base, our machine learning system has steadily improved, rising from 0.38 in 2006 to 0.56 in 2012 for R20. Integrated into semi-automatic workflows or fully automatic pipelines, such systems are increasingly effective in providing assistance to biologists. DATABASE URL: http://eagl.unige.ch/GOCat/
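The headline metric, recall at 20 (R20), is the fraction of an abstract's curated GO terms recovered among the system's top 20 proposals, averaged over abstracts; a minimal sketch (function and example values are illustrative):

```python
# Sketch of the R20 evaluation metric used above.
def recall_at_k(proposed, curated, k=20):
    """proposed: ranked list of GO ids; curated: set of GO ids used by curators."""
    if not curated:
        return 0.0
    hits = sum(1 for term in proposed[:k] if term in curated)
    return hits / len(curated)

# One curated term of two recovered in the top-k proposals -> 0.5.
print(recall_at_k(["GO:0005515", "GO:0006915"], {"GO:0006915", "GO:0008270"}))
```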
Diagnosing Mechanisms of Oceanic Influence on Sahel Precipitation Variability
NASA Astrophysics Data System (ADS)
Pomposi, Catherine A.
The West African Monsoon (WAM) is a significant component of the global monsoon system and plays a key role in the annual cycle of precipitation in the Sahel region of Africa (10°N to 20°N) during the summer months (July to September). Rainfall in the Sahel varies on timescales ranging from seasons to millennia as a result of changes in the WAM. In the last century, the Sahel experienced a relatively wet period (prior to the 1960s) followed by a period of severe drought (1970s-1980s) with higher-frequency variability superimposed on this low-frequency background signal. Understanding precipitation variability like that which occurred over the 20th Century and its impact on Sahel precipitation is critically important for skillful hydroclimate predictions and disaster preparedness in the region. Previous work has shown that the WAM responds to both internal atmospheric variability and external oceanic forcing. A large fraction of 20th Century Sahel rainfall variability has been linked to nearby and remote oceanic forcing from the Atlantic, Pacific, and Indian Oceans, suggesting that the ocean is the primary driver of variability. However, the mechanisms underlying the influence of sea surface temperature (SST) forcing on land-based precipitation and the relative importance of the roles of different basins are not as well understood. To this end, the work completed in this thesis examines the physical mechanisms linking oceanic forcing to recent precipitation variability in the Sahel and identifies them alongside large-scale environmental conditions. A series of moisture budget decomposition studies are performed for the Sahel in order to understand the processes that govern regional hydroclimate variability on decadal and interannual time scales. The results show that the oceanic forcing of atmospheric mass convergence and divergence explains the moisture balance patterns in the region to first order on the timescales considered. On decadal timescales, forcing by the Indian and Atlantic Oceans correlates strongly with precipitation variability. The combination of a warm Indian Ocean and a negative gradient across the Atlantic forces anomalous circulation patterns that result in net moisture divergence by the mean and transient flow. Together with negative moisture advection, these processes result in a strong drying of the Sahel during the later part of the 20th Century. Diagnosis of moisture budget and circulation components within the main rainbelt and along the monsoon margins shows that changes to the mass convergence are related to the magnitude of precipitation that falls in the region, while the advection of dry air is associated with the maximum latitudinal extent of precipitation. On interannual timescales, results show that warm conditions in the Eastern Tropical Pacific remotely force anomalously dry conditions primarily through affecting the low-troposphere mass divergence field. This behavior is related to increased subsidence over the tropical Atlantic and into the Sahel and an anomalous westward flow of moisture from the continent, both resulting in a coherent drying pattern. The interannual signal is then further explored, particularly in light of the expected link between the El Niño-Southern Oscillation and dry conditions in the Sahel, notably unseen during the historic El Niño event of 2015. Motivated by this, recent El Niño years and their precipitation signatures in the Sahel, along with the associated large-scale environmental conditions, are examined.
Two different outcomes for the Sahel summer season are defined: an anomalously wet or an anomalously dry season coincident with El Niño conditions. The different precipitation patterns are distinguished by increased moisture supply in the wet years, which can be driven both by regional oceanic conditions that favor increased moisture convergence over the continent and by weaker El Niño forcing. Finally, a series of new idealized SST-forced experiments that explore the causal link between oceanic forcing and the response of convection in the region at daily time resolution are discussed and preliminary results are shown. These experiments aim to understand how convection in the Sahel responds to SST forcing using transient model simulations that track the evolving response of the WAM through time, day by day, under different oceanic conditions. Preliminary results show the stark differences in seasonal precipitation that occur when anomalies of opposite sign are applied in parts of the Atlantic and Pacific basins. There is also a suggestion of a difference in the timing of the rainy season when the model is run with different SST configurations.
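As a rough illustration of the moisture budget decomposition described above (a sketch, not code from the thesis), the time mean of the moisture flux q*u can be split into a contribution from the mean flow and one from transient eddies; the array names and shapes here are assumptions:

    import numpy as np

    def moisture_budget_terms(q, u, axis=0):
        # q, u: specific humidity and wind, with time along `axis`.
        # Returns (mean_flux, transient_flux); their sum equals the
        # time mean of the total flux q*u.
        q_bar = q.mean(axis=axis)                     # time-mean humidity
        u_bar = u.mean(axis=axis)                     # time-mean wind
        q_p = q - np.expand_dims(q_bar, axis)         # humidity anomalies
        u_p = u - np.expand_dims(u_bar, axis)         # wind anomalies
        mean_flux = q_bar * u_bar                     # flux by the mean flow
        transient_flux = (q_p * u_p).mean(axis=axis)  # flux by transient eddies
        return mean_flux, transient_flux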
Feel, imagine and learn! - Haptic augmented simulation and embodied instruction in physics learning
NASA Astrophysics Data System (ADS)
Han, In Sook
The purpose of this study was to investigate the potential and effects of an embodied instructional model in abstract concept learning. This embodied instructional process included haptic augmented educational simulation as an instructional tool to provide perceptual experiences, as well as further instruction to activate those previous experiences with perceptual simulation. In order to verify the effectiveness of this instructional model, haptic augmented simulations with three different haptic levels (force and kinesthetic, kinesthetic, and non-haptic) and instructional materials (narrative and expository) were developed and their effectiveness tested. A total of 220 fifth-grade students were recruited to participate in the study from three elementary schools located in lower-SES neighborhoods in the Bronx, New York. The study was conducted for three consecutive weeks in regular class periods. The data were analyzed using ANCOVA, ANOVA, and MANOVA. The results indicate that the haptic augmented simulations, both the force and kinesthetic and the kinesthetic simulations, were more effective than the non-haptic simulation in providing perceptual experiences and helping elementary students to create multimodal representations of machines' movements. However, in most cases, force feedback was needed to construct a fully loaded multimodal representation that could be activated when instruction with fewer sensory modalities was given. In addition, the force and kinesthetic simulation was effective in providing cognitive grounding for comprehending new learning content based on the multimodal representation created with enhanced force feedback. Regarding the instruction type, it was found that the narrative and the expository instructions did not make any difference in activating previous perceptual experiences. These findings suggest that it is important to help students build solid cognitive grounding with a perceptual anchor. Also, a sequential abstraction process would deepen students' understanding by removing the sensory modalities used one by one, giving students an opportunity to practice mental simulation and gradually reach an abstract level of understanding at which they can imagine a machine's movements and working mechanisms from abstract language alone, without any perceptual supports.
Analyzing Array Manipulating Programs by Program Transformation
NASA Technical Reports Server (NTRS)
Cornish, J. Robert M.; Gange, Graeme; Navas, Jorge A.; Schachte, Peter; Sondergaard, Harald; Stuckey, Peter J.
2014-01-01
We explore a transformational approach to the problem of verifying simple array-manipulating programs. Traditionally, verification of such programs requires intricate analysis machinery to reason with universally quantified statements about symbolic array segments, such as "every data item stored in the segment A[i] to A[j] is equal to the corresponding item stored in the segment B[i] to B[j]." We define a simple abstract machine which allows for set-valued variables and we show how to translate programs with array operations to array-free code for this machine. For the purpose of program analysis, the translated program remains faithful to the semantics of array manipulation. Based on our implementation in LLVM, we evaluate the approach with respect to its ability to extract useful invariants and the cost in terms of code size.
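The flavor of the transformation can be sketched with a toy example (illustrative only: the actual implementation operates on LLVM code, and the real translation also tracks segment boundaries). An array-initializing loop is rewritten so that the array is replaced by a set-valued summary variable that an off-the-shelf invariant generator can reason about:

    # Original array program: set every element of A to 0.
    def init_array(n):
        A = [None] * n
        for i in range(n):
            A[i] = 0
        return A

    # Array-free counterpart for an abstract machine with set-valued
    # variables: A_vals summarizes the values stored so far in the
    # segment A[0..i-1].
    def init_array_abstract(n):
        A_vals = set()               # set-valued summary variable
        i = 0
        while i < n:
            A_vals = A_vals | {0}    # each store contributes its value
            i += 1
        return A_vals                # invariant: A_vals == {0} when n > 0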
Improving the Automated Detection and Analysis of Secure Coding Violations
2014-06-01
eliminating software vulnerabilities and other flaws. The CERT Division produces books and courses that foster a security mindset in developers, and ... website also provides a virtual machine containing a complete build of the Rosecheckers project on Linux. The Rosecheckers project leverages the ... Compass/ROSE project developed at Lawrence Livermore National Laboratory. This project provides a high-level API for accessing the abstract syntax tree.
Dome: Distributed Object Migration Environment
1994-05-01
AD-A281 134. Dome: Distributed Object Migration Environment. Adam Beguelin, Erik Seligman, Michael Starkey. May 1994. CMU-CS-94-153, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. Abstract: Dome ... Linda [4], Isis [2], and Express [6] allow a programmer to treat a heterogeneous network of computers as a parallel machine. These tools allow the ...
Machine Visual Motion Detection Modeled on Vertebrate Retina
1988-01-01
... mechanism of direction selectivity: (a) shows the use of persistent lateral inhibition to block conduction in the null direction; (b) shows the use of ... Bipolar (B) elements compare the inputs from local receptor and horizontal elements, passing on the positive value of the difference.
Learning by Reading for Robust Reasoning in Intelligent Agents
2018-04-24
Our hypotheses are that analogical processing plays multiple roles in enabling machines to learn by reading, and that qualitative representations provide ... from reading this text? Narrative function can be seen as a kind of communication act, but the idea goes a bit beyond that. Communication acts are ...
Literature Mining of Pathogenesis-Related Proteins in Human Pathogens for Database Annotation
2009-10-01
... submission and for literature mining result display with automatically tagged abstracts. I. Literature data sets for machine learning algorithm training ... (mass spectrometry) proteomics data from Burkholderia strains. Task 1 (M13-15): preliminary analysis of the Burkholderia proteomic space.
New Abstractions for Mobile Connectivity and Resource Management
2016-05-01
networked systems, consisting of replicated backend services and mobile, multi-homed clients. We derive a state machine for ECCP supporting migration ... makes ECCP useful not only for mobility of client devices, but also for backend services, which are increasingly run in VMs or containers on platforms ... layers of the network stack, instead of the traditional IP/port, improve mobility for clients and backend services and reduce unnecessary coupling of ...
Experimental Evaluation of Cold-Sprayed Copper Rotating Bands for Large-Caliber Projectiles
2015-05-01
A copper rotating band is the munition component responsible for both obturation and transfer of torque from the gun barrel's rifling to the munition, thereby causing the projectile to spin. Pure copper, copper alloy, and brass rotating bands are typically fabricated to steel munitions using ... Machine Shop for fabrication; and the Transonic Experimental Facility for facilitating the gun-launch experiments.
USSR and Eastern Europe Scientific Abstracts, Biomedical and Behavioral Sciences, Number 81.
1977-11-28
Topics covered include hydrobiology, industrial microbiology, industrial toxicology, marine mammals, microbiology, molecular biology, and neurosciences ... in progress. Factors involved in increasing productivity were calculated and presented in 4 tables: duration of use of equipment in 1 day (hours) ... machines no longer in production but omits materials on some new equipment and some new forms of organization of the work of the agrochemical ...
Computer Generation of Natural Language from a Deep Conceptual Base
1974-01-01
It would be useful to have machines which could read scientific documents, newspaper articles, novels, etc., and translate them into other ... preparing abstracts for articles and in headline writing (at least in those cases in which headlines are used as an indication of article content ... above), a definite or indefinite article is attached to the noun phrase. The selection of color and size adjectives is made in ... fashion.
Building a protein name dictionary from full text: a machine learning term extraction approach.
Shi, Lei; Campagne, Fabien
2005-04-07
The majority of information in the biological literature resides in full text articles, instead of abstracts. Yet, abstracts remain the focus of many publicly available literature data mining tools. Most literature mining tools rely on pre-existing lexicons of biological names, often extracted from curated gene or protein databases. This is a limitation, because such databases have low coverage of the many name variants which are used to refer to biological entities in the literature. We present an approach to recognize named entities in full text. The approach collects high frequency terms in an article, and uses support vector machines (SVM) to identify biological entity names. It is also computationally efficient and robust to noise commonly found in full text material. We use the method to create a protein name dictionary from a set of 80,528 full text articles. Only 8.3% of the names in this dictionary match SwissProt description lines. We assess the quality of the dictionary by studying its protein name recognition performance in full text. This dictionary term lookup method compares favourably to other published methods, supporting the significance of our direct extraction approach. The method is strong in recognizing name variants not found in SwissProt.
Athanasopoulos, Panagiotis G.; Hadjittofi, Christopher; Dharmapala, Arinda Dinesh; Orti-Rodriguez, Rafael Jose; Ferro, Alessandra; Nasralla, David; Konstantinidou, Sofia K.; Malagó, Massimo
2016-01-01
Abstract Donor organ shortage continues to limit the availability of liver transplantation, a successful and established therapy for end-stage liver diseases. Strategies to mitigate graft shortage include the utilization of marginal livers and, recently, ex-situ normothermic machine perfusion devices. A 59-year-old woman with cirrhosis due to primary sclerosing cholangitis was offered an ex-situ machine-perfused graft with unnoticed severe injury of the suprahepatic vasculature due to a road traffic accident. Following a complex avulsion, repair, and reconstruction of all donor hepatic veins as well as the suprahepatic inferior vena cava, the patient underwent a face-to-face piggy-back orthotopic liver transplantation and was discharged on the 11th postoperative day after an uncomplicated recovery. This report illustrates the operative technique used to utilize an otherwise unusable organ in the current environment of donor shortage and declining graft quality. Normothermic machine perfusion can play a role in increasing the graft pool without compromising the quality of livers that had vascular or other damage before being perfused ex situ. Furthermore, it emphasizes the importance of promptly and thoroughly communicating organ injuries, as well as considering all reconstructive options within the level of expertise at the recipient center. PMID:27082550
Clark, Alex M; Williams, Antony J; Ekins, Sean
2015-01-01
The current rise in the use of open lab notebook techniques means that there are an increasing number of scientists who make chemical information freely and openly available to the entire community as a series of micropublications that are released shortly after the conclusion of each experiment. We propose that this trend be accompanied by a thorough examination of data sharing priorities. We argue that the most significant immediate beneficiary of open data is in fact chemical algorithms, which are capable of absorbing vast quantities of data and using them to present concise insights to working chemists, on a scale that could not be achieved by traditional publication methods. Making this goal practically achievable will require a paradigm shift in the way individual scientists translate their data into digital form, since most contemporary methods of data entry are designed for presentation to humans rather than consumption by machine learning algorithms. We discuss some of the complex issues involved in fixing current methods, as well as some of the immediate benefits that can be gained when open data is published correctly using unambiguous machine-readable formats. Graphical Abstract: Lab notebook entries must target both visualisation by scientists and use by machine learning algorithms.
Efficient Checkpointing of Virtual Machines using Virtual Machine Introspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Han, Fang; Scott, Stephen L
Cloud Computing environments rely heavily on system-level virtualization. This is due to the inherent benefits of virtualization, including fault tolerance through checkpoint/restart (C/R) mechanisms. Because clouds are the abstraction of large data centers and large data centers have a higher potential for failure, it is imperative that a C/R mechanism for such an environment provide minimal latency as well as a small checkpoint file size. Recently, there has been much research into C/R with respect to virtual machines (VM), providing excellent solutions to reduce either checkpoint latency or checkpoint file size. However, these approaches do not provide both. This paper presents a method of checkpointing VMs by utilizing virtual machine introspection (VMI). Through the usage of VMI, we are able to determine which pages of memory within the guest are used or free and are better able to reduce the amount of pages written to disk during a checkpoint. We have validated this work by using various benchmarks to measure the latency along with the checkpoint size. With respect to checkpoint file size, our approach results in file sizes within 24% or less of the actual used memory within the guest. Additionally, the checkpoint latency of our approach is up to 52% faster than KVM's default method.
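The core page-filtering idea can be sketched as follows, assuming introspection has already recovered the list of used page frame numbers (in practice this would go through a VMI library and the guest kernel's memory-management structures; all names here are placeholders):

    PAGE_SIZE = 4096

    def checkpoint_used_pages(guest_memory, used_pages, out_path):
        # Write only the pages that introspection reports as used,
        # instead of the guest's entire address space.
        with open(out_path, "wb") as f:
            for pfn in sorted(used_pages):
                offset = pfn * PAGE_SIZE
                f.write(pfn.to_bytes(8, "little"))                # page index
                f.write(guest_memory[offset:offset + PAGE_SIZE])  # page data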
Programming languages and compiler design for realistic quantum hardware.
Chong, Frederic T; Franklin, Diana; Martonosi, Margaret
2017-09-13
Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
Is searching full text more effective than searching abstracts?
Lin, Jimmy
2009-01-01
Background With the growing availability of full-text articles online, scientists and other consumers of the life sciences literature now have the ability to go beyond searching bibliographic records (title, abstract, metadata) to directly access full-text content. Motivated by this emerging trend, I posed the following question: is searching full text more effective than searching abstracts? This question is answered by comparing text retrieval algorithms on MEDLINE® abstracts, full-text articles, and spans (paragraphs) within full-text articles using data from the TREC 2007 genomics track evaluation. Two retrieval models are examined: bm25 and the ranking algorithm implemented in the open-source Lucene search engine. Results Experiments show that treating an entire article as an indexing unit does not consistently yield higher effectiveness compared to abstract-only search. However, retrieval based on spans, or paragraph-sized segments of full-text articles, consistently outperforms abstract-only search. Results suggest that highest overall effectiveness may be achieved by combining evidence from spans and full articles. Conclusion Users searching full text are more likely to find relevant articles than those searching only abstracts. This finding affirms the value of full-text collections for text retrieval and provides a starting point for future work in exploring algorithms that take advantage of rapidly growing digital archives. Experimental results also highlight the need to develop distributed text retrieval algorithms, since full-text articles are significantly longer than abstracts and may require the computational resources of multiple machines in a cluster. The MapReduce programming model provides a convenient framework for organizing such computations. PMID:19192280
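For reference, the bm25 model mentioned above can be implemented in a few lines (standard Okapi BM25 with common default parameters; this is a generic sketch, not the evaluation code used in the study):

    import math
    from collections import Counter

    def bm25_score(query_terms, doc_terms, df, n_docs, avg_len, k1=1.2, b=0.75):
        # df: document frequency per term; avg_len: mean document length.
        tf = Counter(doc_terms)
        score = 0.0
        for t in query_terms:
            if tf[t] == 0 or t not in df:
                continue
            idf = math.log((n_docs - df[t] + 0.5) / (df[t] + 0.5) + 1)
            num = tf[t] * (k1 + 1)
            den = tf[t] + k1 * (1 - b + b * len(doc_terms) / avg_len)
            score += idf * num / den
        return score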
Machine learning for epigenetics and future medical applications
Holder, Lawrence B.; Haque, M. Muksitul; Skinner, Michael K.
2017-01-01
ABSTRACT Understanding epigenetic processes holds immense promise for medical applications. Advances in Machine Learning (ML) are critical to realize this promise. Previous studies used epigenetic data sets associated with the germline transmission of epigenetic transgenerational inheritance of disease and novel ML approaches to predict genome-wide locations of critical epimutations. A combination of Active Learning (ACL) and Imbalanced Class Learning (ICL) was used to address past problems with ML, to develop a more efficient feature selection process, and to address the imbalance problem in all genomic data sets. These results suggest the power of this novel ML approach and the ability to predict epigenetic phenomena and associated disease. The current approach requires extensive computation of features over the genome. A promising new approach is to introduce Deep Learning (DL) for the generation and simultaneous computation of novel genomic features tuned to the classification task. This approach can be used with any genomic or biological data set applied to medicine. The application of molecular epigenetic data in advanced machine learning analysis to medicine is the focus of this review. PMID:28524769
NASA Astrophysics Data System (ADS)
Calo, M.; Tramelli, A.; Troise, C.; de Natale, G.
2015-12-01
Campi Flegrei (southern Italy) is one of the most studied calderas in the world because of its geothermal potential, exploited since Roman times, and its eruptive and seismic risk, affecting a densely populated region. The caldera is marked by strong vertical deformations of the soil, called bradyseisms, which are often accompanied by seismic crises. In particular, the bradyseismic crises of 1982-84 are remembered for the large number of earthquakes, with more than 16,000 events recorded. Seismicity has been used to model the distribution of the elastic parameters with the aim of studying the volcano's behavior. However, until now, seismic velocity models calculated with standard tomography have failed to resolve small structures (<1.5-2 km), including at shallow depth, which could be responsible for small eruptions such as the one that formed the Monte Nuovo monogenic cone in 1538. Here we show Vp and Vp/Vs models carried out by applying an enhanced seismic tomography that uses the Double Difference method (DD, Zhang and Thurber, 2003) complemented with the Weighted Average Model post-processing (WAM, Calò et al., 2009, 2011, 2013). The 3D models obtained with this procedure benefit from the high resolving power of the DD method, which uses both absolute and differential data, and from the improved reliability offered by WAM, which overcomes the drawbacks of standard inversion methods. Our approach allowed us to image structures with linear dimensions of 0.5-1.2 km, at least a twofold improvement in resolving power over other published models (e.g. Priolo et al., 2012). Results show small bodies of high Vp and Vp/Vs at shallow depth (2.5-3.5 km) that could be associated either with magmatic intrusions or with fluid-saturated rocks, probably responsible for unrest episodes. At shallower depth (0.5-2.0 km), the Vp/Vs model is able to discern between water- and gas-bearing regions, giving insight into the assessment of the potential of the geothermal reservoir.
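The WAM step amounts to averaging many plausible inversions of the same volume with per-model quality weights; a minimal numpy sketch of that idea (schematic only, not the authors' implementation, and the weighting scheme here is an assumption):

    import numpy as np

    def weighted_average_model(models, weights):
        # models:  array of shape (n_models, nz, ny, nx) of velocity values
        # weights: per-model quality weights, e.g. derived from data misfit
        models = np.asarray(models, dtype=float)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                          # normalize the weights
        return np.tensordot(w, models, axes=1)   # weighted mean over models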
Gortais, Bernard
2003-01-01
In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music. PMID:12903659
Understanding of anesthesia machine function is enhanced with a transparent reality simulation.
Fischler, Ira S; Kaschub, Cynthia E; Lizdas, David E; Lampotang, Samsun
2008-01-01
Photorealistic simulations may provide efficient transfer of certain skills to the real system, but, by being opaque, may fail to encourage deeper learning of the structure and function of the system. Schematic simulations that are more abstract, with less visual fidelity but with transparent system structure and function, may enhance deeper learning and optimize retention and transfer of learning. We compared the learning effectiveness of these 2 modes of externalizing the output of a common simulation engine (the Virtual Anesthesia Machine, VAM) that models machine function and dynamics and responds in real time to user interventions such as changes in gas flow or ventilation. Undergraduate students (n = 39) and medical students (n = 35) were given a single, 1-hour guided learning session with either a Transparent or an Opaque version of the VAM simulation. The following day, the learners' knowledge of machine components, function, and dynamics was tested. The Transparent-VAM groups scored higher than the Opaque-VAM groups on a set of multiple-choice questions concerning conceptual knowledge about anesthesia machines (P = 0.009), provided better and more complete explanations of component function (P = 0.003), and were more accurate in remembering and inferring cause-and-effect dynamics of the machine and relations among components (P = 0.003). Although the medical students outperformed undergraduates on all measures, a similar pattern of benefits for the Transparent VAM was observed for both groups. Schematic simulations that transparently allow learners to visualize, and explore, underlying system dynamics and relations among components may provide a more effective mental model for certain systems. This may lead to a deeper understanding of how the system works and therefore, we believe, of how to detect and respond to potentially adverse situations.
Park, Eunjeong; Chang, Hyuk-Jae; Nam, Hyo Suk
2017-04-18
The pronator drift test (PDT), a neurological examination, is widely used in clinics to measure motor weakness of stroke patients. The aim of this study was to develop a PDT tool with machine learning classifiers to detect stroke symptoms based on quantification of proximal arm weakness using inertial sensors and signal processing. We extracted features of drift and pronation from accelerometer signals of wearable devices on the inner wrists of 16 stroke patients and 10 healthy controls. A signal processing and feature selection approach was applied to discriminate PDT features used to classify stroke patients. A series of machine learning techniques, namely support vector machine (SVM), radial basis function network (RBFN), and random forest (RF), were implemented to discriminate stroke patients from controls with leave-one-out cross-validation. Signal processing by the PDT tool extracted a total of 12 PDT features from the sensors. Feature selection abstracted the major attributes from the 12 PDT features to elucidate the dominant characteristics of proximal weakness of stroke patients using machine learning classification. Our proposed PDT classifiers had an area under the receiver operating characteristic curve (AUC) of .806 (SVM), .769 (RBFN), and .900 (RF) without feature selection; feature selection improved the AUCs to .913 (SVM), .956 (RBFN), and .975 (RF), representing an average performance enhancement of 15.3%. Sensors and machine learning methods can reliably detect stroke signs and quantify proximal arm weakness. Our proposed solution will facilitate pervasive monitoring of stroke patients. ©Eunjeong Park, Hyuk-Jae Chang, Hyo Suk Nam. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.04.2017.
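A minimal scikit-learn sketch of the reported protocol (leave-one-out cross-validation over SVM and RF classifiers, scored by AUC); the feature files are hypothetical stand-ins for the sensor-derived PDT features:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import roc_auc_score

    X = np.load("pdt_features.npy")   # hypothetical (n_subjects, 12) matrix
    y = np.load("labels.npy")         # 1 = stroke patient, 0 = control

    for name, clf in [("SVM", SVC(probability=True)),
                      ("RF", RandomForestClassifier(n_estimators=100))]:
        proba = cross_val_predict(clf, X, y, cv=LeaveOneOut(),
                                  method="predict_proba")[:, 1]
        print(name, "AUC:", roc_auc_score(y, proba))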
Sing, David C; Metz, Lionel N; Dudli, Stefan
2017-06-01
Retrospective review, aiming to identify the top 100 spine research topics. Recent advances in "machine learning," or computers learning without explicit instructions, have yielded broad technological advances. Topic modeling algorithms can be applied to large volumes of text to discover quantifiable themes and trends. Abstracts were extracted from the National Library of Medicine PubMed database from five prominent peer-reviewed spine journals (European Spine Journal [ESJ], The Spine Journal [SpineJ], Spine, Journal of Spinal Disorders and Techniques [JSDT], Journal of Neurosurgery: Spine [JNS]). Each abstract was entered into a latent Dirichlet allocation model specified to discover 100 topics, resulting in each abstract being assigned a probability of belonging to a topic. Topics were named using the five most frequently appearing terms within that topic. The significance of increasing ("hot") or decreasing ("cold") topic popularity over time was evaluated with simple linear regression. From 1978 to 2015, 25,805 spine-related research articles were extracted and classified into 100 topics. The two most published topics were "clinical, surgeons, guidelines, information, care" (n = 496 articles) and "pain, back, low, treatment, chronic" (424). The top two hot trends were "disc, cervical, replacement, level, arthroplasty" (+0.05%/yr, P < 0.001) and "minimally, invasive, approach, technique" (+0.05%/yr, P < 0.001). By journal, the most published topics were ESJ, "operative, surgery, postoperative, underwent, preoperative"; SpineJ, "clinical, surgeons, guidelines, information, care"; Spine, "pain, back, low, treatment, chronic"; JNS, "tumor, lesions, rare, present, diagnosis"; and JSDT, "cervical, anterior, plate, fusion, ACDF." Topics discovered through latent Dirichlet allocation modeling represent unbiased, meaningful themes relevant to spine care. Topic dynamics can provide historical context and direction for future research for aspiring investigators and trainees interested in spine careers. Please explore https://singdc.shinyapps.io/spinetopics.
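The topic-discovery pipeline described above can be sketched with scikit-learn (LDA over a bag-of-words matrix, each topic named by its five most probable terms); the corpus loader is a hypothetical placeholder:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    abstracts = load_pubmed_abstracts()   # hypothetical corpus loader
    vec = CountVectorizer(stop_words="english", max_features=20000)
    X = vec.fit_transform(abstracts)

    lda = LatentDirichletAllocation(n_components=100, random_state=0)
    doc_topic = lda.fit_transform(X)      # per-abstract topic probabilities

    terms = vec.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top5 = [terms[i] for i in weights.argsort()[-5:][::-1]]
        print(f"topic {k}:", ", ".join(top5))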
Analysis of a Distributed Pulse Power System Using a Circuit Analysis Code
1979-06-01
dose rate was then integrated to give a number that could be compared with measurements made using thermoluminescent dosimeters (TLDs). Since ... NM 87117, and the BDM Corporation, Albuquerque, NM 87106. Abstract: A sophisticated computer code (SCEPTRE), used to analyze electronic circuits, was used to evaluate the performance of a large flash X-ray machine. This device was ...
Composing Data and Process Descriptions in the Design of Software Systems.
1988-05-01
accompanying 'data' specification. So, for example, the bank account of Section 2.2.3 became ACC = open?d → ACC_init(d); ACC_A = payin?p → ACC_deposit(A,p) | wdraw?w → ACC_withdraw(A,w) | bal!balance(A) → ACC_A | close → STOP, where A has abstract type Account, with operators (that is, side-effect-free functions) ... n accounts; 3.5 Non-deterministic merge; 4.1 Specification of a ticket machine system.
2013-12-01
study of nature, just as they have in mathematics. Hence, even in our day of hyper-abstract thinking, mathematics continues to be the language of ... way of thinking. 2. Those successfully completing education and apprenticeship have professed a self-sacrificing commitment to serving society ... overreaches. Pinker points out that the contextual school ignores the predictive reality of science and mathematics. This does not mean that metaphors ...
2008-09-01
In Stanton and Chu (2004), forward scattering and backscattering from a sand dollar test, a bivalve shell, and a machined aluminum disk of similar size were measured over a ... Benthic shells can contribute greatly to the scattering variability of the ocean bottom, particularly at low grazing angles. Among the effects of shell aggregates are increased scattering strength and potential subcritical-angle penetration of the seafloor. Sand dollars (Dendraster ...
1978-09-12
the population. Only a socialist, planned economy can cope with such problems. However, the increasing complexity of the tasks faced by ... the development of systems allowing man-machine dialogue does not decrease, but rather increases, the complexity of the systems involved, simply ... shifting the complexity to another sphere, where it is invisible to the human utilizing the system. Figures 5; references 3: 2 Russian, 1 Western.
Proceedings of the international meeting on thermal nuclear reactor safety. Vol. 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Separate abstracts are included for each of the papers presented concerning current issues in nuclear power plant safety; national programs in nuclear power plant safety; radiological source terms; probabilistic risk assessment methods and techniques; non-LOCA and small-break LOCA transients; safety goals; pressurized thermal shock; applications of reliability and risk methods to probabilistic risk assessment; human factors and the man-machine interface; and data bases and special applications.
Multi-Entity Bayesian Networks Learning in Predictive Situation Awareness
2013-06-01
... algorithm for MEBN. The methods are evaluated on a case study from PROGNOS. Over the past two decades, machine learning has ... the MFrag of the child node. Lastly, in the third for-loop, for all resident nodes in the MTheory, LPDs are generated by MLE.
PUP: An Architecture to Exploit Parallel Unification in Prolog
1988-03-01
environment stacking model similar to the Warren Abstract Machine [23], since it has been shown to be superior to other known models (see [21]). The storage ... execute in groups of independent operations. Unifications belonging to different groups may not overlap. Also, unification operations belonging to the ... since all parallel operations on the unification units must complete before any of the units can start executing the next group of parallel ...
Trends and developments in industrial machine vision: 2013
NASA Astrophysics Data System (ADS)
Niel, Kurt; Heinzl, Christoph
2014-03-01
When following current advancements and implementations in the field of machine vision, there seem to be no borders for future developments: computing power constantly increases, new ideas are spreading, and previously challenging approaches are introduced into the mass market. Within the past decades these advances have had dramatic impacts on our lives. Consumer electronics, e.g. computers or telephones, which once occupied large volumes, now fit in the palm of a hand. To note just a few examples: face recognition was adopted by the consumer market, 3D capturing became cheap, and, thanks to the huge community, software coding got easier using sophisticated development platforms. However, there is still a remaining gap between consumer and industrial applications. While the former have to be entertaining, the latter have to be reliable. Recent studies (e.g. VDMA [1], Germany) show a moderately increasing market for machine vision in industry. When industry is asked about its needs, the main challenges named for industrial machine vision are simple usage, reliability for the process, quick support, full automation, self/easy adjustment to changing process parameters, "forget it in the line". A further big challenge is supporting quality control: nowadays the operator has to accurately define the tested features for checking the probes. There is also an upcoming development to let automated machine vision applications find out essential parameters at a more abstract level (top down). In this work we focus on three current and future topics for industrial machine vision: metrology supporting automation, quality control (inline/atline/offline), and visualization and analysis of datasets with steadily growing sizes. Finally, the general trend from pixel-oriented toward object-oriented evaluation is addressed. We do not directly address the field of robotics, which takes advantage of machine vision; that is a fast-changing area worth its own contribution.
Programming the Navier-Stokes computer: An abstract machine model and a visual editor
NASA Technical Reports Server (NTRS)
Middleton, David; Crockett, Tom; Tomboulian, Sherry
1988-01-01
The Navier-Stokes computer is a parallel computer designed to solve Computational Fluid Dynamics problems. Each processor contains several floating point units which can be configured under program control to implement a vector pipeline with several inputs and outputs. Since the development of an effective compiler for this computer appears to be very difficult, machine-level programming seems necessary and support tools for this process have been studied. These support tools are organized into a graphical program editor. A programming process is described by which appropriate computations may be efficiently implemented on the Navier-Stokes computer. The graphical editor would support this programming process, verifying various programmer choices for correctness and deducing values such as pipeline delays and network configurations. Step-by-step details are provided and demonstrated with two example programs.
Asquith, William H.; Bumgarner, Johnathan R.
2014-01-01
The mean monthly offset storages of Canyon Lake during the Drought Quartile were 110 acre-ft (20 percent); 448 acre-ft (40 percent); 754 acre-ft (60 percent); 1,080 acre-ft (80 percent); and 1,090 acre-ft (100 percent). A particular mean was interpreted as follows: the value of 754 acre-ft for the 60-percent brush-management scenario implies that, on average, this scenario indicates an additional 754 acre-ft per month of storage in Canyon Lake relative to the baseline during the Drought Quartile. All of the five scenarios resulted in an increase on average to water supply relative to the baseline scenario during the Drought Quartile through the SWAT-WAM linkage.
NASA Astrophysics Data System (ADS)
Niswatin, C.; Latief, M. A.; Suharyadi, S.
2018-02-01
This research aims to uncover how engineering students deal with composing abstracts for their final projects. The research applies a descriptive qualitative-quantitative design. The data were collected through questionnaires involving 104 engineering students, including alumni, at Politeknik Kota Malang, Indonesia. Furthermore, interviews were carried out to explain the details where necessary to support the primary data. It is found that the common problems faced by engineering students include 1) combining words into sentences, 2) identifying the most appropriate technical terms in engineering, and 3) applying grammar in context. To cope with these difficulties they wanted machine translation applications, supported by peer proofreaders. In addition, they considerably engaged in personal tutoring with the lecturers more than three times.
Software architecture for time-constrained machine vision applications
NASA Astrophysics Data System (ADS)
Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.
2013-01-01
Real-time image and video processing applications require skilled architects, and recent trends in hardware platforms make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
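The messaging layer's topic-based publish/subscribe routing can be sketched as follows (an illustrative toy, not the authors' API; a real implementation would add thread-safe queues and priorities):

    from collections import defaultdict
    from typing import Any, Callable

    class Broker:
        def __init__(self):
            self._subs = defaultdict(list)   # topic -> subscriber callbacks

        def subscribe(self, topic: str, callback: Callable[[Any], None]):
            self._subs[topic].append(callback)

        def publish(self, topic: str, message: Any):
            # Deliver only to subscribers registered for this topic.
            for cb in self._subs[topic]:
                cb(message)

    broker = Broker()
    broker.subscribe("frames", lambda img: print("processing frame"))
    broker.publish("frames", object())   # an acquisition module publishes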
Probabilistic and machine learning-based retrieval approaches for biomedical dataset retrieval
Karisani, Payam; Qin, Zhaohui S; Agichtein, Eugene
2018-01-01
Abstract The bioCADDIE dataset retrieval challenge brought together different approaches to retrieval of biomedical datasets relevant to a user’s query, expressed as a text description of a needed dataset. We describe experiments in applying a data-driven, machine learning-based approach to biomedical dataset retrieval as part of this challenge. We report on a series of experiments carried out to evaluate the performance of both probabilistic and machine learning-driven techniques from information retrieval, as applied to this challenge. Our experiments with probabilistic information retrieval methods, such as query term weight optimization, automatic query expansion and simulated user relevance feedback, demonstrate that automatically boosting the weights of important keywords in a verbose query is more effective than other methods. We also show that although there is a rich space of potential representations and features available in this domain, machine learning-based re-ranking models are not able to improve on probabilistic information retrieval techniques with the currently available training data. The models and algorithms presented in this paper can serve as a viable implementation of a search engine to provide access to biomedical datasets. The retrieval performance is expected to be further improved by using additional training data that is created by expert annotation, or gathered through usage logs, clicks and other processes during natural operation of the system. Database URL: https://github.com/emory-irlab/biocaddie PMID:29688379
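The keyword-boosting idea found most effective above can be sketched as a reweighting of verbose-query terms by rarity before retrieval; the IDF table, the cutoff, and the boost factor are assumptions:

    def boost_query(query_terms, idf, top_k=5, boost=2.0):
        # Up-weight the top_k rarest (highest-IDF) terms of a verbose query;
        # the returned (term, weight) pairs feed a weighted retrieval model.
        ranked = sorted(query_terms, key=lambda t: idf.get(t, 0.0), reverse=True)
        keywords = set(ranked[:top_k])
        return [(t, boost if t in keywords else 1.0) for t in query_terms]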
Producibility and Serviceability of Kevlar-49 Structures Made on Hot Layup Tools
1975-05-01
changes for a typical airframe composite part and established improved machining practices for Kevlar-49. Some of the more significant conclusions ... Keywords: composite materials; inlet fairing; helicopters; Hot Layup Tools (HLT); Kevlar-49. ... Demonstrate the low-cost aspects of using Hot Layup Tools (HLT) to fabricate composite structures.
1991-05-01
was received as bar stock in the work-hardened condition. Before machining, the copper rods were annealed at 400 °C in argon for one hour. This ... Large-deformation uniaxial compression and fixed-end torsion (simple shear) experiments were conducted on annealed OFHC copper to obtain its ... The annealing treatment produced an average grain diameter of 45 μm. Experimental Procedure - Compression Tests: All the compression tests were conducted with ...
Flow Instability Tests for a Particle Bed Reactor Nuclear Thermal Rocket Fuel Element
1993-05-01
2.0 with GWBASIC or higher (DOS 5.0 was installed on the machine). Since the source code was written in BASIC, it was easy to make modifications ...
Modeling Large-Scale Networks Using Virtual Machines and Physical Appliances
2014-01-27
downloaded and run locally. The lab solution couldn't be based on ActiveX because the military disallowed ActiveX support on its systems, which made running an RDP client over ActiveX not possible. The challenges the SEI encountered in delivering the instruction were ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draeger, E. W.
The Advanced Architecture and Portability Specialists team (AAPS) worked with a select set of LLNL application teams to develop and/or implement a portability strategy for next-generation architectures. The team also investigated new and updated programming models and helped develop programming abstractions targeting maintainability and performance portability. Significant progress was made on both fronts in FY17, resulting in multiple applications being significantly better prepared for next-generation machines than before.
Automated Virtual Machine Introspection for Host-Based Intrusion Detection
2009-03-01
boxes represent the code and data sections of each process in memory, with arrows representing hooks planted by malware to jump to the malware code ... a useful indication of intrusion, it is also susceptible to mimicry and concurrency attacks [Pro03, Wat07]. Additionally, most research abstracts away ... sequence of system calls that accomplishes his or her intent [WS02]. This "mimicry attack" takes advantage of the fact that many HIDS discard the pa ...
NASA Astrophysics Data System (ADS)
Brereton, Margot Felicity
A series of short engineering exercises and design projects was created to help students learn to apply abstract knowledge to physical experiences with hardware. The exercises involved designing machines from kits of materials and dissecting and analyzing familiar household products. Students worked in teams. During the activities students brought their knowledge of engineering fundamentals to bear. Videotape analysis was used to identify and characterize the ways in which hardware contributed to learning fundamental concepts. Structural and qualitative analyses of videotaped activities were undertaken. Structural analysis involved counting the references to theory and hardware and the extent of interleaving of references in activity. The analysis found that there was much more discussion linking fundamental concepts to hardware in some activities than in others. The analysis showed that the interleaving of references to theory and hardware in activity is observable and quantifiable. Qualitative analysis was used to investigate the dialog linking concepts and hardware. Students were found to advance their designs and their understanding of engineering fundamentals through a negotiation process in which they pitted abstract concepts against hardware behavior. Through this process students sorted out theoretical assumptions and causal relations. In addition they discovered design assumptions, functional connections, and physical embodiments of abstract concepts in hardware, developing a repertoire of familiar hardware components and machines. Hardware was found to be integral to learning, affecting the course of inquiry and the dynamics of group interaction. Several case studies are presented to illustrate the processes at work. The research illustrates the importance of working across the boundary between abstractions and experiences with hardware in order to learn engineering and physical sciences. The research findings are: (a) the negotiation process by which students discover fundamental concepts in hardware (and three central causes of negotiation breakdown); (b) a characterization of the ways that material systems contribute to learning activities (the seven roles of hardware in learning); (c) the characteristics of activities that support discovering fundamental concepts in hardware (plus several engineering exercises); (d) a research methodology to examine how students learn in practice.
Dynamic state estimation assisted power system monitoring and protection
NASA Astrophysics Data System (ADS)
Cui, Yinan
The advent of phasor measurement units (PMUs) has unlocked several novel methods to monitor, control, and protect bulk electric power systems. This thesis introduces the concept of "Dynamic State Estimation" (DSE), aided by PMUs, for wide-area monitoring and protection of power systems. Unlike traditional State Estimation where algebraic variables are estimated from system measurements, DSE refers to a process to estimate the dynamic states associated with synchronous generators. This thesis first establishes the viability of using particle filtering as a technique to perform DSE in power systems. The utility of DSE for protection and wide-area monitoring are then shown as potential novel applications. The work is presented as a collection of several journal and conference papers. In the first paper, we present a particle filtering approach to dynamically estimate the states of a synchronous generator in a multi-machine setting considering the excitation and prime mover control systems. The second paper proposes an improved out-of-step detection method for generators by means of angular difference. The generator's rotor angle is estimated with a particle filter-based dynamic state estimator and the angular separation is then calculated by combining the raw local phasor measurements with this estimate. The third paper introduces a particle filter-based dual estimation method for tracking the dynamic states of a synchronous generator. It considers the situation where the field voltage measurements are not readily available. The particle filter is modified to treat the field voltage as an unknown input which is sequentially estimated along with the other dynamic states. The fourth paper proposes a novel framework for event detection based on energy functions. The key idea is that any event in the system will leave a signature in WAMS data-sets. It is shown that signatures for four broad classes of disturbance events are buried in the components that constitute the energy function for the system. This establishes a direct correspondence (or mapping) between an event and certain component(s) of the energy function. The last paper considers the dynamic latency effect when the measurements and estimated dynamics are transmitted from remote ends to a centralized location through the networks.
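A generic bootstrap particle filter step of the kind such estimators build on (schematic; the process and likelihood models standing in for the generator swing dynamics and the PMU measurement model are placeholders):

    import numpy as np

    def particle_filter_step(particles, z, process, likelihood, rng):
        # particles:  (N, d) state samples, e.g. rotor angle and speed
        # z:          new PMU measurement vector
        # process:    f(particles, rng) -> propagated particles (with noise)
        # likelihood: g(z, particles) -> unnormalized importance weights
        particles = process(particles, rng)       # predict
        w = likelihood(z, particles)              # weight by measurement fit
        w = w / w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx]                     # resampled posterior cloud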
Memarian, Negar; Torre, Jared B.; Haltom, Kate E.; Stanton, Annette L.
2017-01-01
Abstract Affect labeling (putting feelings into words) is a form of incidental emotion regulation that could underpin some benefits of expressive writing (i.e. writing about negative experiences). Here, we show that neural responses during affect labeling predicted changes in psychological and physical well-being outcome measures 3 months later. Furthermore, neural activity of specific frontal regions and amygdala predicted those outcomes as a function of expressive writing. Using supervised learning (support vector machines regression), improvements in four measures of psychological and physical health (physical symptoms, depression, anxiety and life satisfaction) after an expressive writing intervention were predicted with an average of 0.85% prediction error [root mean square error (RMSE) %]. The predictions were significantly more accurate with machine learning than with the conventional generalized linear model method (average RMSE: 1.3%). Consistent with affect labeling research, right ventrolateral prefrontal cortex (RVLPFC) and amygdalae were top predictors of improvement in the four outcomes. Moreover, RVLPFC and left amygdala predicted benefits due to expressive writing in satisfaction with life and depression outcome measures, respectively. This study demonstrates the substantial merit of supervised machine learning for real-world outcome prediction in social and affective neuroscience. PMID:28992270
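A minimal scikit-learn sketch of the prediction setup described (support vector regression evaluated with leave-one-out cross-validation and RMSE); the input files are hypothetical stand-ins for the neural features and outcome changes:

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import mean_squared_error

    X = np.load("roi_activations.npy")   # hypothetical (n_subjects, n_rois)
    y = np.load("outcome_change.npy")    # e.g. change in depression score

    pred = cross_val_predict(SVR(kernel="linear"), X, y, cv=LeaveOneOut())
    print("RMSE:", np.sqrt(mean_squared_error(y, pred)))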
Workshop on Algorithms for Time-Series Analysis
NASA Astrophysics Data System (ADS)
Protopapas, Pavlos
2012-04-01
abstract-type="normal">SummaryThis Workshop covered the four major subjects listed below in two 90-minute sessions. Each talk or tutorial allowed questions, and concluded with a discussion. Classification: Automatic classification using machine-learning methods is becoming a standard in surveys that generate large datasets. Ashish Mahabal (Caltech) reviewed various methods, and presented examples of several applications. Time-Series Modelling: Suzanne Aigrain (Oxford University) discussed autoregressive models and multivariate approaches such as Gaussian Processes. Meta-classification/mixture of expert models: Karim Pichara (Pontificia Universidad Católica, Chile) described the substantial promise which machine-learning classification methods are now showing in automatic classification, and discussed how the various methods can be combined together. Event Detection: Pavlos Protopapas (Harvard) addressed methods of fast identification of events with low signal-to-noise ratios, enlarging on the characterization and statistical issues of low signal-to-noise ratios and rare events.
Mutual information, neural networks and the renormalization group
NASA Astrophysics Data System (ADS)
Koch-Janusz, Maciej; Ringel, Zohar
2018-06-01
Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at macroscopic scales. Those universal properties, largely determining their physical characteristics, are revealed by the powerful renormalization group (RG) procedure, which systematically retains `slow' degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine-learning algorithm capable of identifying the relevant degrees of freedom and executing RG steps iteratively without any prior knowledge about the system. We introduce an artificial neural network based on a model-independent, information-theoretic characterization of a real-space RG procedure, which performs this task. We apply the algorithm to classical statistical physics problems in one and two dimensions. We demonstrate RG flow and extract the Ising critical exponent. Our results demonstrate that machine-learning techniques can extract abstract physical concepts and consequently become an integral part of theory- and model-building.
Anesthesiology, automation, and artificial intelligence
Alexander, John C.; Joshi, Girish P.
2018-01-01
ABSTRACT There have been many attempts to incorporate automation into the practice of anesthesiology, though none have been successful. Fundamentally, these failures are due to the underlying complexity of anesthesia practice and the inability of rule-based feedback loops to fully master it. Recent innovations in artificial intelligence, especially machine learning, may usher in a new era of automation across many industries, including anesthesiology. It would be wise to consider the implications of such potential changes before they have been fully realized. PMID:29686578
Control of Surface Attack by Gallium Alloys in Electrical Contacts.
1986-03-28
[OCR-damaged fragment of a scanned report; only partial text is recoverable.] The test machine is a small homopolar motor built from mild steel; it permits atmospheric control but does not allow visual observation of the contact brushes. Recoverable keywords: collectors, gallium, homopolar devices, liquid metals. Recoverable figure titles: test rig with felt metal brushes; homopolar test apparatus; rewetting of alloy track; alloy track after running with finger brushes.
The Use of the MASCOT Philosophy for the Construction of Ada Programs,
1983-10-01
[OCR-damaged fragment of a scanned report; only partial text is recoverable.] Dependent units must be recompiled. Because of Ada's commitment to abstract data types, tasks are treated as data types with certain restrictions. The scheduling algorithm determines, for each type of Slice termination, how the Scheduler treats Activities whose Slice has ended. The MASCOT Machine treats Channels and Pools as constructionally equivalent (refer 3.3.1.1.1).
Report of the 22nd Annual Congress of the International Liver Transplantation Society.
Diaz, Geraldine C; Zerillo, Jeron; Singhal, Ashish; Hibi, Taizo; Vitale, Alessandro; Levitsky, Josh; Renz, John F
2017-02-01
The 2016 Annual Congress of the International Liver Transplantation Society was held in Seoul, South Korea in May. The 22nd Congress marked the largest multidisciplinary liver transplantation meeting in Asia since 2010. The principal themes were living donation, allocation, immunosuppression, machine preservation, novel treatment of hepatitis C, and expansion of the deceased-donor allograft pool. This report presents select abstracts from the scientific sessions within the context of the published literature to serve as a quick reference.
2009-05-04
[Report fragment; only partial text is recoverable.] William Gibson, in his novel Neuromancer, calls cyberspace a "consensual hallucination ... a graphic representation of data abstracted from ..." Cyberspace is a domain in which someone can perform activities and create effects; however, the operational implications of the word "domain" deserve amplification. To act and create certain effects, humans sometimes need machines, electronics, or other technology. In general, technology allows an expanded, but not [text truncated].
1980-03-01
[OCR-damaged fragment of a scanned wind-tunnel report; only partial text is recoverable.] The three semispan wing models were each machined from a solid billet of 17-4PH stainless steel. The remainder of the fragment is report-form boilerplate (distribution statement, supplementary notes, key words) and contents-list residue covering results for force data, pressure data, and fuselage data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hauer, John F.; Mittelstadt, William; Martin, Kenneth E.
During 2005 and 2006 the Western Electricity Coordinating Council (WECC) performed three major tests of western system dynamics. These tests used a Wide Area Measurement System (WAMS) based primarily on Phasor Measurement Units (PMUs) to determine response to events including the insertion of the 1400-MW Chief Joseph braking resistor, probing signals, and ambient events. Test security was reinforced through real-time analysis of wide area effects, and high-quality data provided dynamic profiles for interarea modes across the entire western interconnection. The tests established that low-level optimized pseudo-random ±20-MW probing with the Pacific DC Intertie (PDCI) roughly doubles the apparent noise that is natural to the power system, providing sharp dynamic information with negligible interference to system operations. Such probing is an effective alternative to use of the 1400-MW Chief Joseph dynamic brake, and it is under consideration as a standard means for assessing dynamic security.
Operating systems. [of computers
NASA Technical Reports Server (NTRS)
Denning, P. J.; Brown, R. L.
1984-01-01
A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. A software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
NASA Technical Reports Server (NTRS)
Caines, P. E.
1999-01-01
The work in this research project has been focused on the construction of a hierarchical hybrid control theory which is applicable to flight management systems. The motivation and underlying philosophical position for this work has been that the scale, inherent complexity and the large number of agents (aircraft) involved in an air traffic system imply that a hierarchical modelling and control methodology is required for its management and real time control. In the current work the complex discrete or continuous state space of a system with a small number of agents is aggregated in such a way that discrete (finite state machine or supervisory automaton) controlled dynamics are abstracted from the system's behaviour. High level control may then be either directly applied at this abstracted level, or, if this is in itself of significant complexity, further layers of abstractions may be created to produce a system with an acceptable degree of complexity at each level. By the nature of this construction, high level commands are necessarily realizable at lower levels in the system.
The need to approximate the use-case in clinical machine learning
Saeb, Sohrab; Jayaraman, Arun; Mohr, David C.; Kording, Konrad P.
2017-01-01
Abstract The availability of smartphone and wearable sensor technology is leading to a rapid accumulation of human subject data, and machine learning is emerging as a technique to map those data into clinical predictions. As machine learning algorithms are increasingly used to support clinical decision making, it is vital to reliably quantify their prediction accuracy. Cross-validation (CV) is the standard approach where the accuracy of such algorithms is evaluated on part of the data the algorithm has not seen during training. However, for this procedure to be meaningful, the relationship between the training and the validation set should mimic the relationship between the training set and the dataset expected for the clinical use. Here we compared two popular CV methods: record-wise and subject-wise. While the subject-wise method mirrors the clinically relevant use-case scenario of diagnosis in newly recruited subjects, the record-wise strategy has no such interpretation. Using both a publicly available dataset and a simulation, we found that record-wise CV often massively overestimates the prediction accuracy of the algorithms. We also conducted a systematic review of the relevant literature, and found that this overly optimistic method was used by almost half of the retrieved studies that used accelerometers, wearable sensors, or smartphones to predict clinical outcomes. As we move towards an era of machine learning-based diagnosis and treatment, using proper methods to evaluate their accuracy is crucial, as inaccurate results can mislead both clinicians and data scientists. PMID:28327985
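The paper's central contrast can be reproduced in a short sketch: record-wise cross-validation lets records from the same subject fall into both training and validation folds, while subject-wise cross-validation (scikit-learn's GroupKFold) keeps each subject's records together. The data below are synthetic, with a per-subject "fingerprint" deliberately injected so that record-wise accuracy is inflated.

# Record-wise vs subject-wise cross-validation on synthetic data.
# Subject-specific feature offsets let a classifier "recognize" subjects,
# which inflates record-wise accuracy relative to subject-wise accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, KFold, GroupKFold

rng = np.random.default_rng(2)
n_subj, rec_per_subj = 20, 30
subjects = np.repeat(np.arange(n_subj), rec_per_subj)
labels = np.repeat(rng.integers(0, 2, n_subj), rec_per_subj)  # one label per subject
# Features carry a subject "fingerprint" plus a weak label signal:
X = rng.normal(size=(len(labels), 5)) + subjects[:, None] * 0.3 + labels[:, None] * 0.2

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_record = cross_val_score(clf, X, labels, cv=KFold(5, shuffle=True, random_state=0))
acc_subject = cross_val_score(clf, X, labels, groups=subjects, cv=GroupKFold(5))
print("record-wise:", acc_record.mean(), " subject-wise:", acc_subject.mean())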
A future of living machines?: International trends and prospects in biomimetic and biohybrid systems
NASA Astrophysics Data System (ADS)
Prescott, Tony J.; Lepora, Nathan; Verschure, Paul F. M. J.
2014-03-01
Research in the fields of biomimetic and biohybrid systems is developing at an accelerating rate. Biomimetics can be understood as the development of new technologies using principles abstracted from the study of biological systems, however, biomimetics can also be viewed from an alternate perspective as an important methodology for improving our understanding of the world we live in and of ourselves as biological organisms. A biohybrid entity comprises at least one artificial (engineered) component combined with a biological one. With technologies such as microscale mobile computing, prosthetics and implants, humankind is moving towards a more biohybrid future in which biomimetics helps us to engineer biocompatible technologies. This paper reviews recent progress in the development of biomimetic and biohybrid systems focusing particularly on technologies that emulate living organisms—living machines. Based on our recent bibliographic analysis [1] we examine how biomimetics is already creating life-like robots and identify some key unresolved challenges that constitute bottlenecks for the field. Drawing on our recent research in biomimetic mammalian robots, including humanoids, we review the future prospects for such machines and consider some of their likely impacts on society, including the existential risk of creating artifacts with significant autonomy that could come to match or exceed humankind in intelligence. We conclude that living machines are more likely to be a benefit than a threat but that we should also ensure that progress in biomimetics and biohybrid systems is made with broad societal consent.
NASA Astrophysics Data System (ADS)
Boone, A. A.; Xue, Y.; Ruth, C.; De Sales, F.; Hagos, S.; Mahanama, S. P. P.; Schiro, K.; Song, G.; Wang, G.; Koster, R. D.; Mechoso, C. R.
2014-12-01
There is increasing evidence from numerical studies that anthropogenic land-use and land-cover changes (LULCC) can potentially induce significant variations in regional-scale climate. However, the magnitude of these variations likely depends on the local strength of the coupling between the surface and the atmosphere, the magnitude of the surface biophysical changes, and how the key processes linking the surface with the atmosphere are parameterized within a particular model framework. One key hot-spot which has received considerable attention is the Sahelian region of West Africa, for which numerous studies have reported a significant increase in anthropogenic pressure on the already limited natural resources of the region, notably in terms of land-use conversion and degradation. Thus, there is a pressing need to better understand the impacts of potential land degradation on the West African Monsoon (WAM) system. One of the main goals of the West African Monsoon Modeling and Evaluation project phase 2 (WAMME II) is to provide basic understanding of LULCC effects on the regional climate over West Africa, and to evaluate the sensitivity of the seasonal variability of the WAM to LULCC. The prescribed LULCC is based on a recent 50-year period which represents a maximum feasible degradation scenario. In the current study, the LULCC is applied to five state-of-the-art global climate models over a five-year period. The imposed LULCC results in a model-average 5-7% increase in surface albedo; the corresponding lower surface net radiation mainly results in a significant reduction in surface evaporation (upwards of 1 mm per day over a large part of the Sahel), which leads to less convective heating of the atmosphere, lower moisture convergence, increased subsidence, and reduced cloud cover over the LULCC zone. The overall impact can be characterized as a substantial drought effect, resulting in a reduction in annual rainfall of 20-40% in the Sahel and a southward shift of the monsoon. In broad agreement with previous studies, the impact of degradation on the regional climate is found to be variable among the different coupled models; however, the signal is stronger and more consistent between the models here, which is likely related to our emphasis on prioritizing a consistent impact on the biophysical properties of the surface.
Exploiting the systematic review protocol for classification of medical abstracts.
Frunza, Oana; Inkpen, Diana; Matwin, Stan; Klement, William; O'Blenis, Peter
2011-01-01
To determine whether the automatic classification of documents can be useful in systematic reviews on medical topics, and specifically if the performance of the automatic classification can be enhanced by using the particular protocol of questions employed by the human reviewers to create multiple classifiers. The test collection is the data used in large-scale systematic review on the topic of the dissemination strategy of health care services for elderly people. From a group of 47,274 abstracts marked by human reviewers to be included in or excluded from further screening, we randomly selected 20,000 as a training set, with the remaining 27,274 becoming a separate test set. As a machine learning algorithm we used complement naïve Bayes. We tested both a global classification method, where a single classifier is trained on instances of abstracts and their classification (i.e., included or excluded), and a novel per-question classification method that trains multiple classifiers for each abstract, exploiting the specific protocol (questions) of the systematic review. For the per-question method we tested four ways of combining the results of the classifiers trained for the individual questions. As evaluation measures, we calculated precision and recall for several settings of the two methods. It is most important not to exclude any relevant documents (i.e., to attain high recall for the class of interest) but also desirable to exclude most of the non-relevant documents (i.e., to attain high precision on the class of interest) in order to reduce human workload. For the global method, the highest recall was 67.8% and the highest precision was 37.9%. For the per-question method, the highest recall was 99.2%, and the highest precision was 63%. The human-machine workflow proposed in this paper achieved a recall value of 99.6%, and a precision value of 17.8%. The per-question method that combines classifiers following the specific protocol of the review leads to better results than the global method in terms of recall. Because neither method is efficient enough to classify abstracts reliably by itself, the technology should be applied in a semi-automatic way, with a human expert still involved. When the workflow includes one human expert and the trained automatic classifier, recall improves to an acceptable level, showing that automatic classification techniques can reduce the human workload in the process of building a systematic review. Copyright © 2010 Elsevier B.V. All rights reserved.
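A minimal sketch of the global method, using scikit-learn's complement naive Bayes over bag-of-words features (the texts and labels are placeholders, not the review's data); the per-question variant would train one such classifier per protocol question and combine their predictions.

# Hedged sketch of the global classification method: complement naive Bayes
# over bag-of-words abstract features. Texts and labels are placeholders.
from sklearn.naive_bayes import ComplementNB
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

abstracts = ["elderly care dissemination trial ...",
             "unrelated laboratory assay study ..."]
included = [1, 0]   # human reviewer screening decisions

clf = make_pipeline(CountVectorizer(), ComplementNB())
clf.fit(abstracts, included)
# Lowering the decision threshold on predict_proba trades precision for the
# high recall a systematic review requires:
print(clf.predict_proba(["community health services for elderly people"]))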
NASA Astrophysics Data System (ADS)
Boilard, Patrick
Even though powder metallurgy (P/M) is a near net shape process, a large number of parts still require one or more machining operations during the course of their elaboration and/or their finishing. The main objectives of the work presented in this thesis are centered on the elaboration of blends with enhanced machinability, as well as helping with the definition and in the characterization of the machinability of P/M parts. Enhancing machinability can be done in various ways, through the use of machinability additives and by decreasing the amount of porosity of the parts. These different ways of enhancing machinability have been investigated thoroughly, by systematically planning and preparing series of samples in order to obtain valid and repeatable results leading to meaningful conclusions relevant to the P/M domain. Results obtained during the course of the work are divided into three main chapters: (1) the effect of machining parameters on machinability, (2) the effect of additives on machinability, and (3) the development and the characterization of high density parts obtained by liquid phase sintering. Regarding the effect of machining parameters on machinability, studies were performed on parameters such as rotating speed, feed, tool position and diameter of the tool. Optimal cutting parameters are found for drilling operations performed on a standard FC-0208 blend, for different machinability criteria. Moreover, study of material removal rates shows the sensitivity of the machinability criteria for different machining parameters and indicates that thrust force is more regular than tool wear and slope of the drillability curve in the characterization of machinability. The chapter discussing the effect of various additives on machinability reveals many interesting results. First, work carried out on MoS2 additions reveals the dissociation of this additive and the creation of metallic sulphides (namely CuxS sulphides) when copper is present. Results also show that it is possible to reduce the amount of MoS2 in the blend so as to lower the dimensional change and the cost (blend Mo8A), while enhancing machinability and keeping hardness values within the same range (70 HRB). Second, adding enstatite (MgO·SiO2) permits the observation of the mechanisms occurring with the use of this additive. It is found that the stability of enstatite limits the diffusion of graphite during sintering, leading to the presence of free graphite in the pores, thus enhancing machinability. Furthermore, a lower amount of graphite in the matrix leads to a lower hardness, which is also beneficial to machinability. It is also found that the presence of copper enhances the diffusion of graphite, through the formation of a liquid phase during sintering. With the objective of improving machinability by reaching higher densities, blends were developed for densification through liquid phase sintering. High density samples are obtained (>7.5 g/cm3) for blends prepared with Fe-C-P constituents, namely with 0.5%P and 2.4%C. By systematically studying the effect of different parameters, the importance of the chemical composition (mainly the carbon content) and the importance of the sintering cycle (particularly the cooling rate) are demonstrated. Moreover, various heat treatments studied illustrate the different microstructures achievable for this system, showing various amounts of cementite, pearlite and free graphite. 
Although machinability is limited for samples containing large amounts of cementite, it can be greatly improved with very slow cooling, leading to graphitization of the carbon in the presence of phosphorus. Adequate control of the sintering cycle for samples made from FGS1625 powder leads to high-density (≥7.0 g/cm3) microstructures containing various amounts of pearlite, ferrite, and free graphite. Obtaining ferritic microstructures with free graphite designed for very high machinability (tool wear <1.0%), or fine pearlitic microstructures with excellent mechanical properties (transverse rupture strength >1600 MPa), is therefore possible. These results show that improvement of machinability through higher densities is limited by microstructure. Indeed, for the studied samples, microstructure is dominant in determining machinability, far more important than density, judging, for example, by the influence of cementite or of the volume fraction of free graphite on machinability. (Abstract shortened by UMI.)
Simulation of an array-based neural net model
NASA Technical Reports Server (NTRS)
Barnden, John A.
1987-01-01
Research in cognitive science suggests that much of cognition involves the rapid manipulation of complex data structures. However, it is very unclear how this could be realized in neural networks or connectionist systems. A core question is: how could the interconnectivity of items in an abstract-level data structure be neurally encoded? The answer appeals mainly to positional relationships between activity patterns within neural arrays, rather than directly to neural connections in the traditional way. The new method was initially devised to account for abstract symbolic data structures, but it also supports cognitively useful spatial analogue, image-like representations. As the neural model is based on massive, uniform, parallel computations over 2D arrays, the massively parallel processor is a convenient tool for simulation work, although there are complications in using the machine to the fullest advantage. An MPP Pascal simulation program for a small pilot version of the model is running.
Graduate student theses supported by DOE's Environmental Sciences Division
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cushman, Robert M.; Parra, Bobbi M.
1995-07-01
This report provides complete bibliographic citations, abstracts, and keywords for 212 doctoral and master's theses supported fully or partly by the U.S. Department of Energy's Environmental Sciences Division (and its predecessors) in the following areas: Atmospheric Sciences; Marine Transport; Terrestrial Transport; Ecosystems Function and Response; Carbon, Climate, and Vegetation; Information; Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP); Atmospheric Radiation Measurement (ARM); Oceans; National Institute for Global Environmental Change (NIGEC); Unmanned Aerial Vehicles (UAV); Integrated Assessment; Graduate Fellowships for Global Change; and Quantitative Links. Information on the major professor, department, principal investigator, and program area is given for each abstract. Indexes are provided for major professor, university, principal investigator, program area, and keywords. This bibliography is also available in various machine-readable formats (ASCII text file, WordPerfect® files, and PAPYRUS™ files).
Experience with abstract notation one
NASA Technical Reports Server (NTRS)
Harvey, James D.; Weaver, Alfred C.
1990-01-01
The development of computer science has produced a vast number of machine architectures, programming languages, and compiler technologies. The cross product of these three characteristics defines the spectrum of previous and present data representation methodologies. With regard to computer networks, the uniqueness of these methodologies presents an obstacle when disparate host environments are to be interconnected. Interoperability within a heterogeneous network relies upon the establishment of data representation commonality. The International Standards Organization (ISO) is currently developing the abstract syntax notation one standard (ASN.1) and the basic encoding rules standard (BER) that collectively address this problem. When used within the presentation layer of the open systems interconnection reference model, these two standards provide the data representation commonality required to facilitate interoperability. The details of a compiler that was built to automate the use of ASN.1 and BER are described. From this experience, insights into both standards are given and potential problems relating to this development effort are discussed.
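As a flavour of what such a compiler automates, here is a hand-written Python sketch of BER transfer encoding for two ASN.1 types, INTEGER and SEQUENCE, as tag-length-value triples (short-form lengths only; an illustration, not the compiler described above).

# Hand-rolled BER encoding of INTEGER and SEQUENCE as tag-length-value.
# Only short-form (definite, < 128 byte) lengths are handled in this sketch.
def ber_length(n: int) -> bytes:
    assert n < 128, "long-form lengths omitted in this sketch"
    return bytes([n])

def ber_integer(value: int) -> bytes:
    # minimal two's-complement content octets
    bl = value.bit_length() if value >= 0 else (value + 1).bit_length()
    body = value.to_bytes(bl // 8 + 1, "big", signed=True)
    return b"\x02" + ber_length(len(body)) + body        # tag 0x02 = INTEGER

def ber_sequence(*encoded_members: bytes) -> bytes:
    body = b"".join(encoded_members)
    return b"\x30" + ber_length(len(body)) + body        # tag 0x30 = SEQUENCE

# SEQUENCE { 5, 300 }  ->  30 07 02 01 05 02 02 01 2c
print(ber_sequence(ber_integer(5), ber_integer(300)).hex())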
Automatic processing of spoken dialogue in the home hemodialysis domain.
Lacson, Ronilda; Barzilay, Regina
2005-01-01
Spoken medical dialogue is a valuable source of information, and it forms a foundation for diagnosis, prevention and therapeutic management. However, understanding even a perfect transcript of spoken dialogue is challenging for humans because of the lack of structure and the verbosity of dialogues. This work presents a first step towards automatic analysis of spoken medical dialogue. The backbone of our approach is an abstraction of a dialogue into a sequence of semantic categories. This abstraction uncovers structure in informal, verbose conversation between a caregiver and a patient, thereby facilitating automatic processing of dialogue content. Our method induces this structure based on a range of linguistic and contextual features that are integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). This work demonstrates the feasibility of automatically processing spoken medical dialogue.
High Level Analysis, Design and Validation of Distributed Mobile Systems with CoreASM
NASA Astrophysics Data System (ADS)
Farahbod, R.; Glässer, U.; Jackson, P. J.; Vajihollahi, M.
System design is a creative activity calling for abstract models that facilitate reasoning about the key system attributes (desired requirements and resulting properties) so as to ensure these attributes are properly established prior to actually building a system. We explore here the practical side of using the abstract state machine (ASM) formalism in combination with the CoreASM open source tool environment for high-level design and experimental validation of complex distributed systems. Emphasizing the early phases of the design process, a guiding principle is to support freedom of experimentation by minimizing the need for encoding. CoreASM has been developed and tested building on a broad scope of applications, spanning computational criminology, maritime surveillance and situation analysis. We critically reexamine here the CoreASM project in light of three different application scenarios.
Offline detection of broken rotor bars in AC induction motors
NASA Astrophysics Data System (ADS)
Powers, Craig Stephen
The detection of the broken rotor bar defect in medium- and large-sized AC induction machines is currently one of the most difficult tasks for the motor condition and monitoring industry. If a broken rotor bar defect goes undetected, it can cause a catastrophic failure of an expensive machine. If a broken rotor bar defect is falsely diagnosed, time and money are wasted physically tearing down and inspecting the machine, only to find the diagnosis was incorrect. Previous work in 2009 at Baker/SKF-USA, in collaboration with Korea University, developed a prototype instrument that has been highly successful in correctly detecting the broken rotor bar defect in ACIMs where other methods have failed. Dr. Sang Bin and his students at Korea University have been using this prototype instrument to help the industry save money through successful detection of the BRB defect. A review of the current state of motor condition and monitoring technology for detecting the broken rotor bar defect in ACIMs shows that improved detection of this fault is still needed. An analysis of previous work in the creation of this prototype instrument leads into the refactoring of the software and hardware into something more deployable, cost-effective, and commercially viable.
Obtaining the Thermal Efficiency of a Steam Railroad Machine Toy According to Dale's Cone of Learning
NASA Astrophysics Data System (ADS)
Bautista-Hernandez, Omar Tomas; Ruiz-Chavarria, Gregorio
2011-03-01
Physics is crucial to understanding the world around us, the world inside us, and the world beyond us. It is the most basic and fundamental science; hence our interest in developing innovative strategies, supported by imagination and knowledge, to make the learning process fun, attractive, and interesting, so that we can help to change the general idea that Physics is an abstract and complicated science. We all know this instinctively; however, turn-of-the-century educationist Edgar Dale illustrated it with research when he developed the Cone of Learning, which states that after two weeks we remember only 10% of what we read, but 90% of what we do. Based on that theory, we obtain the thermal efficiency of a steam railroad machine (a toy train that can be bought at any department store), and show the large percentage of energy lost when moving this railroad machine, just as in real life. While doing this practice we do not focus on the results themselves; instead, we try to demonstrate that physics is fun and not difficult to learn. We must stress that this practice was done with pre-university and university students, but it can also be shown to the community in general.
The Fifth Generation. An annotated bibliography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bramer, M.; Bramer, D.
The Japanese Fifth Generation Computer System project constitutes a radical reappraisal of the functions which an advanced computer system should be able to perform, the programming languages needed to implement such functions, and the machine architectures suitable for supporting the chosen languages. The book guides the reader through the ever-growing literature on the project, and the international responses, including the United Kingdom Government's Alvey Program and the MCC Program in the United States. Evaluative abstracts are given, including books, journal articles, unpublished reports and material at both overview and technical levels.
Machine‐Assisted Organic Synthesis
Fitzpatrick, Daniel E.; Myers, Rebecca M.; Battilocchio, Claudio; Ingham, Richard J.
2015-01-01
Abstract In this Review we describe how the advent of machines is impacting on organic synthesis programs, with particular emphasis on the practical issues associated with the design of chemical reactors. In the rapidly changing, multivariant environment of the research laboratory, equipment needs to be modular to accommodate high and low temperatures and pressures, enzymes, multiphase systems, slurries, gases, and organometallic compounds. Additional technologies have been developed to facilitate more specialized reaction techniques such as electrochemical and photochemical methods. All of these areas create both opportunities and challenges during adoption as enabling technologies. PMID:26193360
AceCloud: Molecular Dynamics Simulations in the Cloud.
Harvey, M J; De Fabritiis, G
2015-05-26
We present AceCloud, an on-demand service for molecular dynamics simulations. AceCloud is designed to facilitate the secure execution of large ensembles of simulations on an external cloud computing service (currently Amazon Web Services). The AceCloud client, integrated into the ACEMD molecular dynamics package, provides an easy-to-use interface that abstracts all aspects of interaction with the cloud services. This gives the user the experience that all simulations are running on their local machine, minimizing the learning curve typically associated with the transition to using high performance computing services.
IEEE 1982. Proceedings of the international conference on cybernetics and society
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-01-01
The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
NASA Astrophysics Data System (ADS)
Pekedis, Mahmut; Mascerañas, David; Turan, Gursoy; Ercan, Emre; Farrar, Charles R.; Yildiz, Hasan
2015-08-01
For the last two decades, developments in damage detection algorithms have greatly increased the potential for autonomous decisions about structural health. However, we are still struggling to build autonomous tools that can match the ability of a human to detect, localize, and quantify damage in structures. Therefore, there is a growing interest in merging computational and cognitive concepts to improve the solution of structural health monitoring (SHM). The main objective of this research is to apply the human-machine cooperative approach to a tower structure to detect damage. The cooperation approach includes haptic tools to create an appropriate collaboration between SHM sensor networks, statistical compression techniques, and humans. Damage is simulated in the structure by releasing some of the bolt loads. Accelerometers are bonded to various locations on the tower members to acquire the dynamic response of the structure. The accelerometer results are encoded in three different ways to represent them as haptic stimuli for the human subjects. Each participant is then exposed to these stimuli to detect the loosened-bolt damage in the tower. Results obtained from the human-machine cooperation demonstrate that the human subjects were able to recognize the damage with an accuracy of 88 ± 20.21% and a response time of 5.87 ± 2.33 s. It is concluded that the human-machine cooperative SHM developed here may provide a useful framework for interacting with abstract entities such as data from a sensor network.
Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia
2012-06-01
Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Do Doppler color flow algorithms for mapping disturbed flow make sense?
Gardin, J M; Lobodzinski, S M
1990-01-01
It has been suggested that a major advantage of Doppler color flow mapping is its ability to visualize areas of disturbed ("turbulent") flow, for example, in valvular stenosis or regurgitation and in shunts. To investigate how various color flow mapping instruments display disturbed flow information, color image processing was used to evaluate the most common velocity-variance color encoding algorithms of seven commercially available ultrasound machines. In six of seven machines, green was reportedly added by the variance display algorithms to map areas of disturbed flow. The amount of green intensity added to each pixel along the red and blue portions of the velocity reference color bar was calculated for each machine. In this study, velocities displayed on the reference color bar ranged from +/- 46 to +/- 64 cm/sec, depending on the Nyquist limit. Of note, changing the Nyquist limits depicted on the color reference bars did not change the distribution of the intensities of red, blue, or green within the contour of the reference map, but merely assigned different velocities to the pixels. Most color flow mapping algorithms in our study added increasing intensities of green to increasing positive (red) or negative (blue) velocities along their color reference bars. Most of these machines also added increasing green to red and blue color intensities horizontally across their reference bars as a marker of increased variance (spectral broadening). However, at any given velocity, marked variations were noted between different color flow mapping instruments in the amount of green added to their color velocity reference bars.(ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
During the last years, several Grid computing centres chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning on and off virtual machines. Some of those private clouds use, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need: however, resource starvation occurs frequently, as expansion has to compete with other virtual machines running long-living batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain them and shut off, and making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental "Docked" Analysis Facility for ALICE, which leverages containers instead of virtual machines for providing performance and security isolation. We will present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature a more fine-grained sizing, down to single-job node containers: we will show how this approach will positively impact automatic cluster resizing by deploying lightweight pilot containers instead of replacing central queue polls.
Lee, JuneHyuck; Noh, Sang Do; Kim, Hyun-Jung; Kang, Yong-Shin
2018-05-04
The prediction of internal defects of metal casting immediately after the casting process saves unnecessary time and money by reducing the amount of inputs into the next stage, such as the machining process, and enables flexible scheduling. Cyber-physical production systems (CPPS) perfectly fulfill the aforementioned requirements. This study deals with the implementation of CPPS in a real factory to predict the quality of metal casting and operation control. First, a CPPS architecture framework for quality prediction and operation control in metal-casting production was designed. The framework describes collaboration among internet of things (IoT), artificial intelligence, simulations, manufacturing execution systems, and advanced planning and scheduling systems. Subsequently, the implementation of the CPPS in actual plants is described. Temperature is a major factor that affects casting quality, and thus, temperature sensors and IoT communication devices were attached to casting machines. The well-known NoSQL database, HBase and the high-speed processing/analysis tool, Spark, are used for IoT repository and data pre-processing, respectively. Many machine learning algorithms such as decision tree, random forest, artificial neural network, and support vector machine were used for quality prediction and compared with R software. Finally, the operation of the entire system is demonstrated through a CPPS dashboard. In an era in which most CPPS-related studies are conducted on high-level abstract models, this study describes more specific architectural frameworks, use cases, usable software, and analytical methodologies. In addition, this study verifies the usefulness of CPPS by estimating quantitative effects. This is expected to contribute to the proliferation of CPPS in the industry.
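A hedged sketch of the quality-prediction step: a random forest classifying castings as defective or sound from temperature-derived features. The feature set, the defect rule, and the data below are synthetic placeholders, not the plant's actual schema or model.

# Random forest classifying castings from temperature-derived features.
# Feature names, the "defect" rule, and all data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(3)
# per-casting features: e.g. pouring temperature, mould temperature, cooling slope
X = rng.normal(loc=[700.0, 210.0, -3.0], scale=[15.0, 8.0, 0.5], size=(500, 3))
y = (X[:, 0] > 715) & (X[:, 2] > -2.8)     # synthetic "internal defect" rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))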
Block-Parallel Data Analysis with DIY2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
Alejo, Luz; Atkinson, John; Guzmán-Fierro, Víctor; Roeckel, Marlene
2018-05-16
Computational self-adapting methods (Support Vector Machines, SVM) are compared with an analytical method in effluent composition prediction of a two-stage anaerobic digestion (AD) process. Experimental data for the AD of poultry manure were used. The analytical method considers the protein as the only source of ammonia production in AD after degradation. Total ammonia nitrogen (TAN), total solids (TS), chemical oxygen demand (COD), and total volatile solids (TVS) were measured in the influent and effluent of the process. The TAN concentration in the effluent was predicted, this being the most inhibiting and polluting compound in AD. Despite the limited data available, the SVM-based model outperformed the analytical method for the TAN prediction, achieving a relative average error of 15.2% against 43% for the analytical method. Moreover, SVM showed higher prediction accuracy in comparison with Artificial Neural Networks. This result reveals the future promise of SVM for prediction in non-linear and dynamic AD processes.
Vita, Randi; Overton, James A; Mungall, Christopher J; Sette, Alessandro
2018-01-01
Abstract The Immune Epitope Database (IEDB), at www.iedb.org, has the mission to make published experimental data relating to the recognition of immune epitopes easily available to the scientific public. By presenting curated data in a searchable database, we have liberated it from the tables and figures of journal articles, making it more accessible and usable by immunologists. Recently, the principles of Findability, Accessibility, Interoperability and Reusability have been formulated as goals that data repositories should meet to enhance the usefulness of their data holdings. We here examine how the IEDB complies with these principles and identify broad areas of success, but also areas for improvement. We describe short-term improvements to the IEDB that are being implemented now, as well as a long-term vision of true ‘machine-actionable interoperability’, which we believe will require community agreement on standardization of knowledge representation that can be built on top of the shared use of ontologies. PMID:29688354
A group communication approach for mobile computing mobile channel: An ISIS tool for mobile services
NASA Astrophysics Data System (ADS)
Cho, Kenjiro; Birman, Kenneth P.
1994-05-01
This paper examines group communication as an infrastructure for supporting user mobility, and presents a simple scheme in which a control point is switched between replicated servers. We describe the design and implementation of a set of tools, called Mobile Channel, for use with the ISIS system. Mobile Channel is based on a combination of two replication schemes: the primary-backup approach and the state machine approach. Mobile Channel implements a reliable one-to-many FIFO channel, in which a mobile client sees a single reliable server; servers, acting as a state machine, see multicast messages from clients. Migrations of mobile clients are handled as an intentional primary switch, and hand-offs or server failures are completely masked from mobile clients. To achieve high performance, servers are replicated at a sliding-window level. Our scheme provides a simple abstraction of migration, eliminates complicated hand-off protocols, provides fault-tolerance, and is implemented within the existing group communication mechanism.
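The essence of the state-machine half of the scheme can be sketched briefly. This is a toy illustration: ISIS's reliable FIFO multicast is abstracted away as a shared ordered log, and none of the names below come from the paper.

# Toy replicated state machine: every replica applies the same FIFO-ordered
# client operations deterministically, so any replica can take over as primary
# after a client migration or a failure.
class ServerReplica:
    def __init__(self, name):
        self.name, self.state, self.applied = name, 0, 0

    def apply(self, log):
        # deterministic replay of the FIFO log; resumable across calls
        for op, arg in log[self.applied:]:
            if op == "add":
                self.state += arg
        self.applied = len(log)
        return self.state

log = [("add", 3), ("add", 4)]          # multicast, FIFO-ordered client ops
primary, backup = ServerReplica("s1"), ServerReplica("s2")
assert primary.apply(log) == backup.apply(log) == 7  # identical state everywhere
# a client "migration" is then just a switch of which replica it talks to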
FwWebViewPlus: integration of web technologies into WinCC OA based Human-Machine Interfaces at CERN
NASA Astrophysics Data System (ADS)
Golonka, Piotr; Fabian, Wojciech; Gonzalez-Berges, Manuel; Jasiun, Piotr; Varela-Rodriguez, Fernando
2014-06-01
The rapid growth in popularity of web applications gives rise to a plethora of reusable graphical components, such as Google Chart Tools and JQuery Sparklines, implemented in JavaScript and run inside a web browser. In the paper we describe the tool that allows for seamless integration of web-based widgets into WinCC Open Architecture, the SCADA system used commonly at CERN to build complex Human-Machine Interfaces. Reuse of widely available widget libraries and pushing the development efforts to a higher abstraction layer based on a scripting language allow for significant reduction in maintenance of the code in multi-platform environments compared to those currently used in C++ visualization plugins. Adequately designed interfaces allow for rapid integration of new web widgets into WinCC OA. At the same time, the mechanisms familiar to HMI developers are preserved, making the use of new widgets "native". Perspectives for further integration between the realms of WinCC OA and Web development are also discussed.
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis
Gong, Xiajing; Hu, Meng
2018-01-01
Abstract Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time‐to‐event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high‐dimensional data featured by a large number of predictor variables. Our results showed that ML‐based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high‐dimensional data. The prediction performances of ML‐based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML‐based methods provide a powerful tool for time‐to‐event analysis, with a built‐in capacity for high‐dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
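The concordance index used to assess prediction performance has a direct definition that is easy to compute for small data sets. The following pure-Python sketch implements Harrell's C for right-censored data (pairs tied in observed time are simply skipped here for brevity).

# Harrell's concordance index, computed directly from its definition.
# A pair (i, j) is comparable when the shorter observed time is an event;
# it is concordant when the model assigns that subject the higher risk.
import itertools

def concordance_index(times, events, risks):
    concordant = comparable = 0.0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[j] < times[i]:            # order so that i has the shorter time
            i, j = j, i
        if times[i] == times[j] or not events[i]:
            continue                       # not a comparable pair
        comparable += 1
        if risks[i] > risks[j]:
            concordant += 1
        elif risks[i] == risks[j]:
            concordant += 0.5
    return concordant / comparable

# toy check: risks perfectly ordered against survival time -> C = 1.0
print(concordance_index([5, 3, 9, 7], [1, 1, 0, 1], [0.2, 0.9, 0.1, 0.15]))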
Khara, Dinesh C; Berger, Yaron; Ouldridge, Thomas E
2018-01-01
Abstract We present a detailed coarse-grained computer simulation and single molecule fluorescence study of the walking dynamics and mechanism of a DNA bipedal motor striding on a DNA origami. In particular, we study the dependency of the walking efficiency and stepping kinetics on step size. The simulations accurately capture and explain three different experimental observations. These include a description of the maximum possible step size, a decrease in the walking efficiency over short distances and a dependency of the efficiency on the walking direction with respect to the origami track. The former two observations were not expected and are non-trivial. Based on this study, we suggest three design modifications to improve future DNA walkers. Our study demonstrates the ability of the oxDNA model to resolve the dynamics of complex DNA machines, and its usefulness as an engineering tool for the design of DNA machines that operate in the three spatial dimensions. PMID:29294083
RESTful M2M Gateway for Remote Wireless Monitoring for District Central Heating Networks
Cheng, Bo; Wei, Zesan
2014-01-01
In recent years, increased interest in energy conservation and environmental protection, combined with the development of modern communication and computer technology, has resulted in the replacement of distributed heating by central heating in urban areas. This paper proposes a Representational State Transfer (REST) Machine-to-Machine (M2M) gateway for wireless remote monitoring of a district central heating network. In particular, we focus on the resource-oriented RESTful M2M gateway architecture, present a uniform device abstraction approach based on Open Service Gateway Initiative (OSGi) technology, implement the resource address mapping mechanism between RESTful resources and the physical sensor devices, present a buffer queue combined with a polling method to implement data scheduling and Quality of Service (QoS) guarantees, and describe the RESTful M2M gateway's open service Application Programming Interface (API) set. The performance has been measured and analyzed. Finally, conclusions and future work are presented. PMID:25436650
Goldstein, Ayelet; Shahar, Yuval
2016-06-01
Design and implement an intelligent free-text summarization system whose input includes large volumes of longitudinal, multivariate, numeric and symbolic clinical raw data, collected over varying periods of time and in different complex contexts, together with a suitable medical knowledge base, and which automatically generates a textual summary of the data. We aim to prove the feasibility of implementing such a system, and to demonstrate its potential benefits for clinicians and for enhancement of quality of care. We have designed a new, domain-independent, knowledge-based system, the CliniText system, for automated free-text summarization of longitudinal medical records of any duration, in any context. The system is composed of six components: (1) a temporal abstraction module generates all possible abstractions from the patient's raw data using a temporal-abstraction knowledge base; (2) an abductive reasoning module infers abstractions or events from the data that were not explicitly included in the database; (3) a pruning module filters out raw or abstract data based on predefined heuristics; (4) a document structuring module organizes the remaining raw or abstract data according to the desired format; (5) a microplanning module groups the raw or abstract data and creates referring expressions; (6) a surface realization module generates the text and applies the grammar rules of the chosen language. We have performed an initial technical evaluation of the system in the cardiac intensive-care and diabetes domains. We also summarize the results of a more detailed evaluation study that we performed in the intensive-care domain, which assessed the completeness, correctness, and overall quality of the system's generated text, and its potential benefits to clinical decision making. We assessed these measures for 31 letters originally composed by clinicians, and for the same letters when generated by the CliniText system. We have successfully implemented all of the components of the CliniText system in software. We have also been able to create a comprehensive temporal-abstraction knowledge base to support its functionality, mostly in the intensive-care domain. The initial technical evaluation of the system in the cardiac intensive-care and diabetes domains has shown great promise, proving the feasibility of constructing and operating such systems. The detailed results of the evaluation in the intensive-care domain are outside the scope of the current paper, and we refer the reader to a more detailed source. In every letter composed by a clinician, at least two important items were missed that the CliniText system included. The clinicians' letters received a significantly better grade in three out of four measured quality parameters, as judged by an expert; however, the variance in quality was much higher in the clinicians' letters. In addition, three clinicians answered questions based on the discharge letter 40% faster, and answered four out of the five questions equally well or significantly better, when using the CliniText-generated letters than when using the clinician-composed letters. Constructing a working system for automated free-text summarization of large amounts of multivariate longitudinal clinical data collected over varying periods is feasible. So is the construction of a large knowledge base, designed to support such a system, in a complex clinical domain, such as the intensive-care domain.
The integration of the quality and functionality results suggests that the optimal discharge letter should exploit both human and machine, possibly by creating a machine-generated draft that will be polished by a human clinician. Copyright © 2016 Elsevier Inc. All rights reserved.
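The six-component architecture is essentially a pipeline, which the following Python skeleton sketches as a composition of stages; the stage bodies are placeholder stubs, not CliniText's actual logic, and every data structure shown is invented for illustration.

```python
# Stage bodies below are stubs; they only mark where each module's work goes.

def temporal_abstraction(raw_data, knowledge_base):
    # (1) derive interval-based abstractions from raw data via the KB
    return raw_data + [{"abstraction": "fever episode", "days": 3}]

def abductive_reasoning(data):
    # (2) infer events not explicitly recorded in the database
    return data + [{"inferred_event": "suspected infection"}]

def prune(data):
    # (3) filter raw/abstract data using predefined heuristics
    return [item for item in data if item]

def structure_document(data, format_spec="discharge_letter"):
    # (4) organize the remaining items into the desired document format
    return {"sections": {"course": data}, "format": format_spec}

def microplan(document):
    # (5) group items and create referring expressions
    document["groups"] = [document["sections"]["course"]]
    return document

def realize_surface(document, language="en"):
    # (6) generate text and apply the grammar of the chosen language
    return "Hospital course: " + "; ".join(str(i) for i in document["groups"][0])

def summarize(raw_data, knowledge_base):
    data = prune(abductive_reasoning(temporal_abstraction(raw_data, knowledge_base)))
    return realize_surface(microplan(structure_document(data)))

print(summarize([{"temp_c": 38.9}], knowledge_base={}))
```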
"What is relevant in a text document?": An interpretable machine learning approach
Arras, Leila; Horn, Franziska; Montavon, Grégoire; Müller, Klaus-Robert
2017-01-01
Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, making it possible to annotate very large text collections, far more than could be processed by a human in a lifetime. Besides predicting the text's category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining the predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. The resulting scores indicate how much individual words contribute to the overall classification decision. This makes it possible to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores to generate novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability, which makes it more comprehensible for humans and potentially more useful for other applications. PMID:28800619
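For a linear bag-of-words classifier the LRP decomposition reduces to the product of each word's weight and its feature value, so the per-word relevances sum exactly to the decision score minus the bias. The sketch below shows only that linear special case with a toy vocabulary and made-up weights; the paper's CNN requires the full layer-wise propagation rules.

```python
import numpy as np

# Toy vocabulary and a linear bag-of-words classifier (weights made up).
vocab = ["stock", "market", "goal", "match", "election"]
w = np.array([1.8, 1.2, -2.0, -1.5, 0.3])   # weights for class "finance"
b = -0.1

doc = ["stock", "market", "match"]
x = np.array([doc.count(t) for t in vocab], dtype=float)

score = w @ x + b          # classifier decision score
relevance = w * x          # per-word relevance (linear case of LRP)

for token, r in zip(vocab, relevance):
    if r != 0.0:
        print(f"{token:10s} relevance {r:+.2f}")
print(f"sum of relevances {relevance.sum():+.2f} equals score minus bias {score - b:+.2f}")
```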
Deep learning of mutation-gene-drug relations from the literature.
Lee, Kyubum; Kim, Byounggun; Choi, Yonghwa; Kim, Sunkyu; Shin, Wonho; Lee, Sunwon; Park, Sungjoon; Kim, Seongsoon; Tan, Aik Choon; Kang, Jaewoo
2018-01-25
Molecular biomarkers that can predict drug efficacy in cancer patients are crucial components for the advancement of precision medicine. However, identifying these molecular biomarkers remains a laborious and challenging task. Next-generation sequencing of patients and preclinical models has increasingly led to the identification of novel gene-mutation-drug relations, and these results have been reported and published in the scientific literature. Here, we present two new computational methods that utilize all the PubMed articles as domain-specific background knowledge to assist in the extraction and curation of gene-mutation-drug relations from the literature. The first method uses the Biomedical Entity Search Tool (BEST) scoring results as some of the features to train machine learning classifiers. The second method uses not only the BEST scoring results but also word vectors, constructed from and trained on numerous documents such as PubMed abstracts and Google News articles, in a deep convolutional neural network model. Using the features obtained from both the BEST search engine scores and word vectors, we extract mutation-gene and mutation-drug relations from the literature using machine learning classifiers such as random forests and deep convolutional neural networks. Our methods achieved better results compared with the state-of-the-art methods. We used our proposed features in a simple machine learning model and obtained F1-scores of 0.96 and 0.82 for mutation-gene and mutation-drug relation classification, respectively. We also developed a deep learning classification model using convolutional neural networks, BEST scores, and the word embeddings that are pre-trained on PubMed or Google News data. Using deep learning, the classification accuracy improved, and F1-scores of 0.96 and 0.86 were obtained for the mutation-gene and mutation-drug relations, respectively. We believe that the computational methods described in this research could be used as an important tool in identifying molecular biomarkers that predict drug responses in cancer patients. We also built a database of the mutation-gene-drug relations extracted from all the PubMed abstracts. We believe that our database can prove to be a valuable resource for precision medicine researchers.
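The flavor of the first method can be sketched as follows: concatenate a search-engine association score with pooled word-vector features and train a random forest. All numbers here are random stand-ins; the actual BEST scores and embeddings are computed from PubMed-scale corpora.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Schematic features for candidate mutation-drug pairs: one search-engine
# association score plus a pooled 50-dimensional word-embedding vector.
best_scores = rng.random((n, 1))          # stand-in for BEST scores
embeddings = rng.normal(size=(n, 50))     # stand-in for pooled word vectors
X = np.hstack([best_scores, embeddings])
y = rng.integers(0, 2, size=n)            # 1 = true relation, 0 = not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```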
WE-G-BRA-05: IROC Houston On-Site Audits and Parameters That Affect Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kry, S; Dromgoole, L; Alvarez, P
Purpose: To highlight the IROC Houston on-site dosimetry audit program, and to investigate the impact of clinical conditions on the frequency of errors/recommendations noted by IROC Houston. Methods: The results of IROC Houston on-site audits from 2000-present, covering 409 institutions and 1020 linacs, were abstracted and compared to clinical parameters. In particular, we investigated the frequency of recommendations versus year, and the impact of repeat visits on the number of recommendations. We also investigated the impact on the number of recommendations of several clinical parameters: the number and age of the linacs, the linac/TPS combination, and the scope of the QA program. Results: The number of recommendations per institution (3.1 on average) has declined between 2000 and the present, although the number of recommendations per machine (0.89) has not changed. Previous IROC Houston site visits did not result in fewer recommendations on a repeat visit, but IROC Houston tests have changed substantially during the last 15 years as radiotherapy technology has changed. There was no impact on the number of recommendations based on the number of machines at the institution or the age of a given machine. The fewest recommendations were observed for Varian-Eclipse combinations (0.71 recs/machine), while Elekta-Pinnacle combinations yielded the most (1.62 recs/machine). Finally, in the TG-142 era (post-2010), those institutions that had a QA recommendation (n=77) had significantly more other recommendations (1.83 per institution) than those that had no QA recommendation (n=12, 1.33 per institution). Conclusion: Establishing and maintaining a successful radiotherapy program is challenging, and areas of improvement can routinely be identified. Clinical conditions such as linac-TPS combinations and the establishment of a good QA program impact the frequency of errors/deficiencies identified by IROC Houston during their on-site review process.
Intelligent Robotic Systems Study (IRSS), phase 3
NASA Technical Reports Server (NTRS)
1991-01-01
This phase of the Intelligent Robotic Systems Study (IRSS) examines some basic dynamics and control issues for a space manipulator attached to its worksite through a compliant base. One example of this scenario is depicted, which is a simplified, planar representation of the Flight Telerobotic Servicer (FTS) Development Test Flight 2 (DTF-2) experiment. The system consists of four major components: (1) dual FTS arms to perform dextrous tasks; (2) the main body to house power and electronics; (3) an Attachment Stabilization and Positioning Subsystem (ASPS) to provide coarse positioning and stabilization of the arms; and (4) the Worksite Attachment Mechanism (WAM), which anchors the system to its worksite, such as a Space Station truss node or Shuttle bay platform. The analysis is limited to the DTF-2 scenario. The goal is to understand the basic interaction dynamics between the arm, the positioner and/or stabilizer, and the worksite. The dynamics and controls simulation model is described. Analysis and simulation results are presented.
NASA Technical Reports Server (NTRS)
Genuardi, Michael T.
1993-01-01
One strategy for machine-aided indexing (MAI) is to provide a concept-level analysis of the textual elements of documents or document abstracts. In such systems, natural-language phrases are analyzed in order to identify and classify concepts related to a particular subject domain. The overall performance of these MAI systems is largely dependent on the quality and comprehensiveness of their knowledge bases. These knowledge bases function to (1) define the relations between a controlled indexing vocabulary and natural language expressions; (2) provide a simple mechanism for disambiguation and the determination of relevancy; and (3) allow the extension of concept-hierarchical structure to all elements of the knowledge file. After a brief description of the NASA Machine-Aided Indexing system, concerns related to the development and maintenance of MAI knowledge bases are discussed. Particular emphasis is given to statistically based text analysis tools designed to aid the knowledge base developer. One such tool, the Knowledge Base Building (KBB) program, presents the domain expert with a well-filtered list of synonyms and conceptually related phrases for each thesaurus concept. Another tool, the Knowledge Base Maintenance (KBM) program, functions to identify areas of the knowledge base affected by changes in the conceptual domain (for example, the addition of a new thesaurus term). An alternate use of the KBM as an aid in thesaurus construction is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Jing; Li, Yuan-Yuan; Shanghai Center for Bioinformation Technology, Shanghai 200235
2012-03-02
Highlights: (1) Proper dataset partition can improve the prediction of deleterious nsSNPs. (2) Partition according to the original residue type at the nsSNP site is a good criterion. (3) A similar strategy is expected to be promising in other machine learning problems. -- Abstract: Many non-synonymous SNPs (nsSNPs) are associated with diseases, and numerous machine learning methods have been applied to train classifiers for sorting disease-associated nsSNPs from neutral ones. The continuously accumulating nsSNP data allows us to further explore better prediction approaches. In this work, we partitioned the training data into 20 subsets according to either the original or the substituted amino acid type at the nsSNP site. Using support vector machines (SVM), training classification models on each subset resulted in an overall accuracy of 76.3% or 74.9%, depending on the two partition criteria, while training on the whole dataset obtained an accuracy of only 72.6%. Moreover, when the dataset was randomly divided into 20 subsets, the corresponding accuracy was only 73.2%. Our results demonstrate that partitioning the whole training dataset into subsets properly, i.e., according to the residue type at the nsSNP site, significantly improves the performance of the trained classifiers, which should be valuable in developing better tools for predicting the disease association of nsSNPs.
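A minimal sketch of the partitioning strategy, on random stand-in data: one SVM is trained per original-residue subset, and a query nsSNP is routed to the model matching its residue.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")

# Schematic nsSNP dataset: feature vector, original residue, disease label.
n = 2000
X = rng.normal(size=(n, 10))
residues = rng.choice(AMINO_ACIDS, size=n)
y = rng.integers(0, 2, size=n)

# Train one SVM per original-residue subset (the partition criterion).
models = {}
for aa in AMINO_ACIDS:
    mask = residues == aa
    if mask.sum() >= 20:            # skip residues with too few examples
        models[aa] = SVC().fit(X[mask], y[mask])

def predict(x, residue):
    """Route a query nsSNP to the classifier trained on its residue subset."""
    return models[residue].predict(x.reshape(1, -1))[0]

print(predict(X[0], residues[0]))
```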
Muhsen, Ibrahim N; ElHassan, Tusneem; Hashmi, Shahrukh K
2018-06-08
Currently, the evidence-based literature on healthcare is expanding exponentially. The opportunities provided by the advancement in artificial intelligence (AI) tools, i.e., machine learning, are appealing for tackling many current healthcare challenges. Thus, AI integration is expanding in most fields of healthcare, including the field of hematology. This study aims to review the current applications of AI in the field of hematopoietic cell transplantation (HCT). A literature search was done involving the following databases: Ovid-MEDLINE (including In-Process and Other Non-Indexed Citations) and Google Scholar. The abstracts of the following professional societies were also screened: the American Society of Hematology (ASH), the American Society for Blood and Marrow Transplantation (ASBMT), and the European Society for Blood and Marrow Transplantation (EBMT). The literature review showed that the integration of AI in the field of HCT has grown remarkably in the last decade and offers promising avenues for diagnosis and prognosis in HCT populations, targeting both pre- and post-transplant challenges. Studies on AI integration in HCT have many limitations, including poorly tested algorithms, lack of generalizability, and limited use of different AI tools. Machine learning in HCT is an intense area of research that needs substantial development and extensive support from hematology and HCT societies and organizations globally, since we believe this will be a future practice paradigm. Key words: Artificial intelligence, machine learning, hematopoietic cell transplant.
Design and fabrication of complete dentures using CAD/CAM technology
Han, Weili; Li, Yanfeng; Zhang, Yue; Lv, Yuan; Zhang, Ying; Hu, Ping; Liu, Huanyue; Ma, Zheng; Shen, Yi
2017-01-01
Abstract The aim of the study was to test the feasibility of using commercially available computer-aided design and computer-aided manufacturing (CAD/CAM) technology, including the 3Shape Dental System 2013 trial version, WIELAND V2.0.049, and the WIELAND ZENOTEC T1 milling machine, to design and fabricate complete dentures. The complete-denture modeling process available in the trial version of 3Shape Dental System 2013 was used to design virtual complete dentures on the basis of 3-dimensional (3D) digital edentulous models generated from the physical models. The virtual complete dentures were exported to the CAM software WIELAND V2.0.049. A WIELAND ZENOTEC T1 milling machine controlled by the CAM software was used to fabricate physical dentitions and baseplates by milling acrylic resin composite plates. The physical dentitions were bonded to the corresponding baseplates to form the maxillary and mandibular complete dentures. Virtual complete dentures were successfully designed using the software through several steps, including generation of 3D digital edentulous models, model analysis, arrangement of artificial teeth, trimming of the relief area, and occlusal adjustment. Physical dentitions and baseplates were successfully fabricated according to the designed virtual complete dentures using the milling machine controlled by the CAM software. Bonding the physical dentitions to the corresponding baseplates generated the final physical complete dentures. Our study demonstrated that complete dentures can be successfully designed and fabricated using CAD/CAM. PMID:28072686
NASA Technical Reports Server (NTRS)
Wild, Christian; Eckhardt, Dave
1987-01-01
The development of a methodology for the production of highly reliable software is one of the greatest challenges facing the computer industry. Meeting this challenge will undoubtedly involve the integration of many technologies. This paper describes the use of Artificial Intelligence technologies in the automated analysis of the formal algebraic specifications of abstract data types. These technologies include symbolic execution of specifications using techniques of automated deduction, and machine learning through the use of examples. Ongoing research into the role of knowledge representation and problem solving in the process of developing software is also discussed.
Introducing meta-services for biomedical information extraction
Leitner, Florian; Krallinger, Martin; Rodriguez-Penagos, Carlos; Hakenberg, Jörg; Plake, Conrad; Kuo, Cheng-Ju; Hsu, Chun-Nan; Tsai, Richard Tzong-Han; Hung, Hsi-Chuan; Lau, William W; Johnson, Calvin A; Sætre, Rune; Yoshida, Kazuhiro; Chen, Yan Hua; Kim, Sun; Shin, Soo-Yong; Zhang, Byoung-Tak; Baumgartner, William A; Hunter, Lawrence; Haddow, Barry; Matthews, Michael; Wang, Xinglong; Ruch, Patrick; Ehrler, Frédéric; Özgür, Arzucan; Erkan, Güneş; Radev, Dragomir R; Krauthammer, Michael; Luong, ThaiBinh; Hoffmann, Robert; Sander, Chris; Valencia, Alfonso
2008-01-01
We introduce the first meta-service for information extraction in molecular biology, the BioCreative MetaServer (BCMS). This prototype platform is a joint effort of 13 research groups and provides automatically generated annotations for PubMed/Medline abstracts. Annotation types cover gene names, gene IDs, species, and protein-protein interactions. The annotations are distributed by the meta-server in both human and machine readable formats (HTML/XML). This service is intended to be used by biomedical researchers and database annotators, and in biomedical language processing. The platform allows direct comparison, unified access, and result aggregation of the annotations. PMID:18834497
Challenges at Petascale for Pseudo-Spectral Methods on Spheres (A Last Hurrah?)
NASA Technical Reports Server (NTRS)
Clune, Thomas
2011-01-01
Conclusions: (a) Proper software abstractions should enable rapid exploration of platform-specific optimizations/tradeoffs. (b) Pseudo-spectral methods are marginally viable for at least some classes of petascale problems; i.e., a GPU-based machine with good bisection bandwidth would be best. (c) Scalability at exascale is possible, but the necessary resolution will make the algorithm prohibitively expensive. Efficient implementations of realistic global transposes are intricate and tedious in MPI. Pseudo-spectral methods at petascale require exploration of a variety of strategies for spreading local and remote communications. PGAS allows far simpler implementation and thus rapid exploration of variants.
Machine Phase Fullerene Nanotechnology: 1996
NASA Technical Reports Server (NTRS)
Globus, Al; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
NASA has used exotic materials for spacecraft and experimental aircraft to good effect for many decades. In spite of many advances, transportation to space still costs about $10,000 per pound. Drexler has proposed a hypothetical nanotechnology based on diamond and investigated the properties of such molecular systems. These studies and others suggest enormous potential for aerospace systems. Unfortunately, methods to realize diamondoid nanotechnology are at best highly speculative. Recent computational efforts at NASA Ames Research Center, and computation and experiment elsewhere, suggest that a nanotechnology of machine phase functionalized fullerenes may be synthetically relatively accessible and of great aerospace interest. Machine phase materials are (hypothetical) materials consisting entirely or in large part of microscopic machines. In a sense, most living matter fits this definition. To begin investigation of fullerene nanotechnology, we used molecular dynamics to study the properties of carbon nanotube based gears and gear/shaft configurations. Experiments on C60 and quantum calculations suggest that benzyne may react with carbon nanotubes to form gear teeth. Han has computationally demonstrated that molecular gears fashioned from (14,0) single-walled carbon nanotubes and benzyne teeth should operate well at 50-100 gigahertz. Results suggest that rotation can be converted to rotating or linear motion, and linear motion may be converted into rotation. Preliminary results suggest that these mechanical systems can be cooled by a helium atmosphere. Furthermore, Deepak has successfully simulated powering fullerene gears with helical electric fields generated by a laser, once positive and negative charges have been added to form a dipole. Even with mechanical motion, cooling, and power, creating a viable nanotechnology requires support structures, computer control, a system architecture, a variety of components, and some approach to manufacture. Additional information is contained within the original extended abstract.
Thermodynamic work from operational principles
NASA Astrophysics Data System (ADS)
Gallego, R.; Eisert, J.; Wilming, H.
2016-10-01
In recent years we have witnessed a concentrated effort to make sense of thermodynamics for small-scale systems. One of the main difficulties is to capture a suitable notion of work that realistically models the purpose of quantum machines, in a way analogous to the role played, for macroscopic machines, by the energy stored in the idealisation of a lifted weight. Despite several attempts to resolve this issue by putting forward specific models, these are far from realistically capturing the transitions that a quantum machine is expected to perform. In this work, we adopt a novel strategy by considering arbitrary kinds of systems that one can attach to a quantum thermal machine, and defining work quantifiers. These are functions that measure the value of a transition and generalise the concept of work beyond those models familiar from phenomenological thermodynamics. We do so by imposing simple operational axioms that any reasonable work quantifier must fulfil and by deriving from them stringent mathematical conditions with a clear physical interpretation. Our approach allows us to derive much of the structure of the theory of thermodynamics without taking the definition of work as a primitive. We can derive, for any work quantifier, a quantitative second law in the sense of bounding the work that can be performed using some non-equilibrium resource by the work that is needed to create it. We also discuss in detail the role of reversibility and correlations in connection with the second law. Furthermore, we recover the usual identification of work with energy in degrees of freedom with vanishing entropy as a particular case of our formalism. Our mathematical results can be formulated abstractly and are general enough to carry over to other resource theories than quantum thermodynamics.
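In symbols (the notation here is ours, not the authors'), the quantitative second law described above can be stated schematically as:

```latex
% Work extractable from a non-equilibrium resource \rho is bounded by the
% work needed to create \rho; \mathcal{W} is any work quantifier obeying
% the operational axioms.
\[
  \mathcal{W}_{\mathrm{extract}}(\rho) \;\le\; \mathcal{W}_{\mathrm{create}}(\rho)
\]
```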
Open multi-agent control architecture to support virtual-reality-based man-machine interfaces
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel
2001-10-01
Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component, and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture does not just provide a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate sensor information from sensors of different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.
Liu, Nehemiah T; Salinas, Jose
2016-11-01
Although air transport medical services are today an integral part of trauma systems in most developed countries, to date, there are no reviews on recent innovations in civilian en route care. The purpose of this systematic review was to identify potential machine learning and new vital signs monitoring technologies in civilian en route care that could help close civilian and military capability gaps in monitoring and the early detection and treatment of various trauma injuries. MEDLINE, the Cochrane Database of Systematic Reviews, and citation review of relevant primary and review articles were searched for studies involving civilian en route care, air medical transport, and technologies from January 2005 to November 2015. Data were abstracted on study design, population, year, sponsors, innovation category, details of technologies, and outcomes. Thirteen observational studies involving civilian medical transport met inclusion criteria. Studies either focused on machine learning and software algorithms (n = 5), new vital signs monitoring (n = 6), or both (n = 2). Innovations involved continuous digital acquisition of physiologic data and parameter extraction. Importantly, all studies (n = 13) demonstrated improved outcomes where applicable and potential use during civilian and military en route care. However, almost all studies required further validation in prospective and/or randomized controlled trials. Potential machine learning technologies and monitoring of novel vital signs such as heart rate variability and complexity in civilian en route care could help enhance en route care for our nation's war fighters. In a complex global environment, they could potentially fill capability gaps such as monitoring and the early detection and treatment of various trauma injuries. However, the impact of these innovations and technologies will require further validation before widespread acceptance and prehospital use. Systematic review, level V.
Lötsch, Jörn; Geisslinger, Gerd; Heinemann, Sarah; Lerch, Florian; Oertel, Bruno G.; Ultsch, Alfred
2018-01-01
Abstract The comprehensive assessment of pain-related human phenotypes requires combinations of nociceptive measures that produce complex high-dimensional data, posing challenges to bioinformatic analysis. In this study, we assessed established experimental models of heat hyperalgesia of the skin, consisting of local ultraviolet-B (UV-B) irradiation or capsaicin application, in 82 healthy subjects using a variety of noxious stimuli. We extended the original heat stimulation by applying cold and mechanical stimuli and assessing the hypersensitization effects with a clinically established quantitative sensory testing (QST) battery (German Research Network on Neuropathic Pain). This study provided a 246 × 10-sized data matrix (82 subjects assessed at baseline, following UV-B application, and following capsaicin application) with respect to 10 QST parameters, which we analyzed using machine-learning techniques. We observed statistically significant effects of the hypersensitization treatments in 9 different QST parameters. Supervised machine-learning analysis, implemented as random forests followed by ABC analysis, pointed to heat pain thresholds as the most relevantly affected QST parameter. However, decision tree analysis indicated that UV-B additionally modulated sensitivity to cold. Unsupervised machine-learning techniques, implemented as emergent self-organizing maps, hinted at subgroups responding to topical application of capsaicin. The distinction among subgroups was based on sensitivity to pressure pain, which could be attributed to sex differences, with women being more sensitive than men. Thus, while UV-B and capsaicin share a major component of heat pain sensitization, they differ in their effects on QST parameter patterns in healthy subjects, suggesting a lack of redundancy between these models. PMID:28700537
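The supervised step can be sketched as follows, with random stand-in data and generic parameter names in place of the ten named QST parameters; the study followed the forest with an ABC analysis of item importances.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
QST_PARAMS = [f"QST_{i}" for i in range(1, 11)]   # placeholder names

# Stand-in for the 246 x 10 matrix: 82 subjects x 3 treatment conditions.
X = rng.normal(size=(246, 10))
y = np.repeat([0, 1, 2], 82)      # baseline / UV-B / capsaicin labels

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Rank QST parameters by importance, analogous to the forest + ABC step.
order = np.argsort(forest.feature_importances_)[::-1]
for idx in order[:3]:
    print(f"{QST_PARAMS[idx]}: importance {forest.feature_importances_[idx]:.3f}")
```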
Context recognition for a hyperintensional inference machine
NASA Astrophysics Data System (ADS)
Duží, Marie; Fait, Michal; Menšík, Marek
2017-07-01
The goal of this paper is to introduce the algorithm of context recognition in the functional programming language TIL-Script, which is a necessary condition for the implementation of the TIL-Script inference machine. The TIL-Script language is an operationally isomorphic syntactic variant of Tichý's Transparent Intensional Logic (TIL). From the formal point of view, TIL is a hyperintensional, partial, typed λ-calculus with procedural semantics. Hyperintensional, because TIL λ-terms denote procedures (defined as TIL constructions) producing set-theoretic functions rather than the functions themselves; partial, because TIL is a logic of partial functions; and typed, because all the entities of TIL ontology, including constructions, receive a type within a ramified hierarchy of types. These features make it possible to distinguish three levels of abstraction at which TIL constructions operate. At the highest hyperintensional level the object to operate on is a construction (though a higher-order construction is needed to present this lower-order construction as an object of predication). At the middle intensional level the object to operate on is the function presented, or constructed, by a construction, while at the lowest extensional level the object to operate on is the value (if any) of the presented function. Thus a necessary condition for the development of an inference machine for the TIL-Script language is recognizing a context in which a construction occurs, namely extensional, intensional and hyperintensional context, in order to determine the type of an argument at which a given inference rule can be properly applied. As a result, our logic does not flout logical rules of extensional logic, which makes it possible to develop a hyperintensional inference machine for the TIL-Script language.
Rosenkrantz, Andrew B; Doshi, Ankur M; Ginocchio, Luke A; Aphinyanaphongs, Yindalon
2016-12-01
This study aimed to assess the performance of a text classification machine-learning model in predicting highly cited articles within the recent radiological literature and to identify the model's most influential article features. We downloaded from PubMed the title, abstract, and medical subject heading terms for 10,065 articles published in 25 general radiology journals in 2012 and 2013. Three machine-learning models were applied to predict the top 10% of included articles in terms of the number of citations to the article in 2014 (reflecting the 2-year time window in conventional impact factor calculations). The model having the highest area under the curve was selected to derive a list of article features (words) predicting high citation volume, which was iteratively reduced to identify the smallest possible core feature list maintaining predictive power. Overall themes were qualitatively assigned to the core features. The regularized logistic regression (Bayesian binary regression) model had the highest performance, achieving an area under the curve of 0.814 in predicting articles in the top 10% of citation volume. We reduced the initial 14,083 features to 210 features that maintain predictivity. These features corresponded with topics relating to various imaging techniques (eg, diffusion-weighted magnetic resonance imaging, hyperpolarized magnetic resonance imaging, dual-energy computed tomography, computed tomography reconstruction algorithms, tomosynthesis, elastography, and computer-aided diagnosis), particular pathologies (prostate cancer, thyroid nodules, hepatic adenoma, hepatocellular carcinoma, non-alcoholic fatty liver disease), and other topics (radiation dose, electroporation, education, general oncology, gadolinium, statistics). Machine learning can be successfully applied to create specific feature-based models for predicting articles likely to achieve high influence within the radiological literature. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
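A minimal sketch of the general approach, with scikit-learn's L2-regularized logistic regression standing in for the paper's Bayesian binary regression, and a toy corpus standing in for the 10,065 titles and abstracts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy corpus standing in for article titles/abstracts; label 1 marks the
# top-10%-cited class. Real features would also include MeSH terms.
texts = [
    "diffusion weighted mri of prostate cancer", "ct dose reduction methods",
    "tomosynthesis screening outcomes", "case report of a rare fracture",
    "dual energy ct reconstruction algorithm", "pictorial essay of anatomy",
] * 50
labels = [1, 1, 1, 0, 1, 0] * 50

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, random_state=0)
vec = TfidfVectorizer(min_df=2)
clf = LogisticRegression(C=1.0).fit(vec.fit_transform(X_tr), y_tr)
probs = clf.predict_proba(vec.transform(X_te))[:, 1]
print(f"AUC: {roc_auc_score(y_te, probs):.3f}")
```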
Lee, JuneHyuck; Noh, Sang Do; Kim, Hyun-Jung; Kang, Yong-Shin
2018-01-01
The prediction of internal defects of metal casting immediately after the casting process saves unnecessary time and money by reducing the amount of input into the next stage, such as the machining process, and enables flexible scheduling. Cyber-physical production systems (CPPS) perfectly fulfill the aforementioned requirements. This study deals with the implementation of CPPS in a real factory to predict the quality of metal casting and for operation control. First, a CPPS architecture framework for quality prediction and operation control in metal-casting production was designed. The framework describes collaboration among the internet of things (IoT), artificial intelligence, simulations, manufacturing execution systems, and advanced planning and scheduling systems. Subsequently, the implementation of the CPPS in actual plants is described. Temperature is a major factor that affects casting quality, and thus temperature sensors and IoT communication devices were attached to casting machines. The well-known NoSQL database HBase and the high-speed processing/analysis tool Spark are used for the IoT repository and data pre-processing, respectively. Many machine learning algorithms such as decision trees, random forests, artificial neural networks, and support vector machines were used for quality prediction and compared with R software. Finally, the operation of the entire system is demonstrated through a CPPS dashboard. In an era in which most CPPS-related studies are conducted on high-level abstract models, this study describes more specific architectural frameworks, use cases, usable software, and analytical methodologies. In addition, this study verifies the usefulness of CPPS by estimating quantitative effects. This is expected to contribute to the proliferation of CPPS in the industry. PMID:29734699
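The model-comparison step can be sketched with the four algorithm families named above, on random stand-in temperature features (in the real pipeline these are drawn from the IoT repository and pre-processed with Spark):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Schematic casting data: per-cycle temperature features -> defect label.
X = rng.normal(size=(1000, 8))    # e.g., molten-metal temperature statistics
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                    random_state=0),
    "svm": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:14s} mean CV accuracy {scores.mean():.3f}")
```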
A hierarchical SVG image abstraction layer for medical imaging
NASA Astrophysics Data System (ADS)
Kim, Edward; Huang, Xiaolei; Tan, Gang; Long, L. Rodney; Antani, Sameer
2010-03-01
As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML-based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in an SVG document and efficiently searched. Any feature extracted from the raw image, including color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high-level descriptions or classifications. Our representation can natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a World Wide Web Consortium (W3C) standard, SVG can be displayed by most web browsers, interacted with by ECMAScript (a standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open source technologies enables straightforward integration into existing systems. Our results show that the flexibility and extensibility of our abstraction facilitate effective storage and retrieval of medical images.
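A minimal sketch of the idea, with invented file names and attributes: an SVG document that references the raw image and carries a searchable annotation layer mixing a low-level feature with a high-level classification. (Bare href on the image element is the SVG 2 form; older renderers expect xlink:href.)

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

# An SVG that references the raw image and adds an abstraction layer.
svg = ET.Element(f"{{{SVG_NS}}}svg", width="512", height="512")
ET.SubElement(svg, f"{{{SVG_NS}}}image",
              {"href": "scan_001.png", "width": "512", "height": "512"})

# Abstraction layer: a region outline tagged with image and clinical metadata.
layer = ET.SubElement(svg, f"{{{SVG_NS}}}g", id="abstraction-layer")
region = ET.SubElement(layer, f"{{{SVG_NS}}}rect",
                       {"x": "120", "y": "80", "width": "90", "height": "60",
                        "fill": "none", "stroke": "red"})
region.set("data-class", "lesion")            # high-level classification
region.set("data-texture", "heterogeneous")   # low-level image feature

print(ET.tostring(svg, encoding="unicode"))
```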
NASA Technical Reports Server (NTRS)
Hudlicka, Eva; Corker, Kevin
1988-01-01
In this paper, a problem-solving system which uses a multilevel causal model of its domain is described. The system functions in the role of a pilot's assistant in the domain of commercial air transport emergencies. The model represents causal relationships among the aircraft subsystems, the effectors (engines, control surfaces), the forces that act on an aircraft in flight (thrust, lift), and the aircraft's flight profile (speed, altitude, etc.). The causal relationships are represented at three levels of abstraction: Boolean, qualitative, and quantitative, and reasoning about causes and effects can take place at each of these levels. Since processing at each level has different characteristics with respect to speed, the type of data required, and the specificity of the results, the problem-solving system can adapt to a wide variety of situations. The system is currently being implemented in the KEE(TM) development environment on a Symbolics Lisp machine.
Neural Representations of Physics Concepts.
Mason, Robert A; Just, Marcel Adam
2016-06-01
We used functional MRI (fMRI) to assess neural representations of physics concepts (momentum, energy, etc.) in juniors, seniors, and graduate students majoring in physics or engineering. Our goal was to identify the underlying neural dimensions of these representations. Using factor analysis to reduce the number of dimensions of activation, we obtained four physics-related factors that were mapped to sets of voxels. The four factors were interpretable as causal motion visualization, periodicity, algebraic form, and energy flow. The individual concepts were identifiable from their fMRI signatures with a mean rank accuracy of .75 using a machine-learning (multivoxel) classifier. Furthermore, there was commonality in participants' neural representation of physics; a classifier trained on data from all but one participant identified the concepts in the left-out participant (mean accuracy = .71 across all nine participant samples). The findings indicate that abstract scientific concepts acquired in an educational setting evoke activation patterns that are identifiable and common, indicating that science education builds abstract knowledge using inherent, repurposed brain systems. © The Author(s) 2016.
USSR Space Life Sciences Digest, issue 8
NASA Technical Reports Server (NTRS)
Hooke, L. R. (Editor); Teeter, R. (Editor)
1985-01-01
This is the eighth issue of NASA's USSR Space Life Sciences Digest. It contains abstracts of 48 papers recently published in Russian language periodicals and bound collections and of 10 new Soviet monographs. Selected abstracts are illustrated with figures and tables. Additional features include reviews of two Russian books on radiobiology and a description of the latest meeting of an international working group on remote sensing of the Earth. Information about English translations of Soviet materials available to readers is provided. The topics covered in this issue have been identified as relevant to 33 areas of aerospace medicine and space biology. These areas are: adaptation, biological rhythms, biospherics, body fluids, botany, cardiovascular and respiratory systems, cosmonaut training, cytology, endocrinology, enzymology, equipment and instrumentation, exobiology, gastrointestinal system, genetics, group dynamics, habitability and environment effects, hematology, human performance, immunology, life support systems, man-machine systems, mathematical modeling, metabolism, microbiology, musculoskeletal system, neurophysiology, nutrition, operational medicine, personnel selection, psychology, reproductive biology, and space biology and medicine.
NASA Technical Reports Server (NTRS)
Xue, Yongkang; De Sales, Fernando; Lau, William K-M; Boone, Aaron; Kim, Kyu-Myong; Mechoso, Carlos R.; Wang, Guiling; Kucharski, Fred; Schiro, Kathleen; Hosaka, Masahiro;
2016-01-01
The second West African Monsoon Modeling and Evaluation Project Experiment (WAMME II) is designed to improve understanding of the possible roles and feedbacks of sea surface temperature (SST), land use and land cover change (LULCC), and aerosol forcings in the Sahel climate system at seasonal to decadal scales. The WAMME II strategy is to apply prescribed observationally based anomaly forcing, i.e., idealized but realistic forcing, in simulations by climate models to test the relative impacts of such forcings in producing/amplifying the Sahelian seasonal and decadal climate variability, including the great 20th century drought. This is the first multi-model experiment specifically designed to simultaneously evaluate the relative contributions of multiple external forcings to the Sahel decadal precipitation anomalies between the 1980s and the 1950s, which are used to characterize the Sahel 1980s drought in this study. The WAMME II models have consistently demonstrated that SST is the major contributor to the 20th century Sahel drought. Under the influence of the maximum possible SST forcing, the WAMME II model ensemble mean can produce up to 60% of the precipitation difference between the 1980s and the 1950s. The present paper also delineates the role of SSTs in triggering and maintaining the Sahel drought. The impact of SSTs in individual oceans is also examined, and consensus and discrepancies are reported. Among the different ocean basins, the WAMME II models show consensus that the Indian Ocean SST has the largest impact on the precipitation temporal evolution associated with the ITCZ movement before the WAM onset, while the Pacific Ocean SST greatly contributes to the summer WAM drought. This paper also compares the SST effect with the LULCC effect. Results show that with prescribed land forcing the WAMME II model ensemble mean produces about 40% of the precipitation difference between the 1980s and the 1950s, which is less than the SST contribution but still of first order in the Sahel climate system. The role of land surface processes in responding to and amplifying the drought is also identified. The results suggest that catastrophic consequences are likely to occur in the regional Sahel climate when SST anomalies in individual ocean basins and in land conditions combine synergistically to favor drought. These preliminary WAMME results need to be further evaluated with different experimental designs and different models.
Heymann, Michael; Degani, Asaf
2007-04-01
We present a formal approach and methodology for the analysis and generation of user interfaces, with special emphasis on human-automation interaction. A conceptual approach for modeling, analyzing, and verifying the information content of user interfaces is discussed. The proposed methodology is based on two criteria: First, the interface must be correct--that is, given the interface indications and all related information (user manuals, training material, etc.), the user must be able to successfully perform the specified tasks. Second, the interface and related information must be succinct--that is, the amount of information (mode indications, mode buttons, parameter settings, etc.) presented to the user must be reduced (abstracted) to the minimum necessary. A step-by-step procedure for generating the information content of the interface that is both correct and succinct is presented and then explained and illustrated via two examples. Every user interface is an abstract description of the underlying system. The correspondence between the abstracted information presented to the user and the underlying behavior of a given machine can be analyzed and addressed formally. The procedure for generating the information content of user interfaces can be automated, and a software tool for its implementation has been developed. Potential application areas include adaptive interface systems and customized/personalized interfaces.
A brain-based account of “basic-level” concepts
Bauer, Andrew James; Just, Marcel Adam
2017-01-01
This study provides a brain-based account of how object concepts at an intermediate (basic) level of specificity are represented, offering an enriched view of what it means for a concept to be a basic-level concept, a research topic pioneered by Rosch and others (Rosch et al., 1976). Applying machine learning techniques to fMRI data, it was possible to determine the semantic content encoded in the neural representations of object concepts at basic and subordinate levels of abstraction. The representation of basic-level concepts (e.g. bird) was spatially broad, encompassing sensorimotor brain areas that encode concrete object properties, and also language and heteromodal integrative areas that encode abstract semantic content. The representation of subordinate-level concepts (robin) was less widely distributed, concentrated in perceptual areas that underlie concrete content. Furthermore, basic-level concepts were representative of their subordinates in that they were neurally similar to their typical but not atypical subordinates (bird was neurally similar to robin but not woodpecker). The findings provide a brain-based account of the advantages that basic-level concepts enjoy in everyday life over subordinate-level concepts: the basic level is a broad topographical representation that encompasses both concrete and abstract semantic content, reflecting the multifaceted yet intuitive meaning of basic-level concepts. PMID:28826947
A brain-based account of "basic-level" concepts.
Bauer, Andrew James; Just, Marcel Adam
2017-11-01
This study provides a brain-based account of how object concepts at an intermediate (basic) level of specificity are represented, offering an enriched view of what it means for a concept to be a basic-level concept, a research topic pioneered by Rosch and others (Rosch et al., 1976). Applying machine learning techniques to fMRI data, it was possible to determine the semantic content encoded in the neural representations of object concepts at basic and subordinate levels of abstraction. The representation of basic-level concepts (e.g. bird) was spatially broad, encompassing sensorimotor brain areas that encode concrete object properties, and also language and heteromodal integrative areas that encode abstract semantic content. The representation of subordinate-level concepts (robin) was less widely distributed, concentrated in perceptual areas that underlie concrete content. Furthermore, basic-level concepts were representative of their subordinates in that they were neurally similar to their typical but not atypical subordinates (bird was neurally similar to robin but not woodpecker). The findings provide a brain-based account of the advantages that basic-level concepts enjoy in everyday life over subordinate-level concepts: the basic level is a broad topographical representation that encompasses both concrete and abstract semantic content, reflecting the multifaceted yet intuitive meaning of basic-level concepts. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadayappan, Ponnuswamy
Exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. Systems software for exascale machines must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. We propose a new approach to the data and work distribution model provided by system software based on the unifying formalism of an abstract file system. The proposed hierarchical data model provides simple, familiar visibility and access to data structures through the file system hierarchy, while providing fault tolerance through selective redundancy. The hierarchical task model features work queues whose form and organization are represented as file system objects. Data and work are both first class entities. By exposing the relationships between data and work to the runtime system, information is available to optimize execution time and provide fault tolerance. The data distribution scheme provides replication (where desirable and possible) for fault tolerance and efficiency, and it is hierarchical to make it possible to take advantage of locality. The user, tools, and applications, including legacy applications, can interface with the data, work queues, and one another through the abstract file model. This runtime environment will provide multiple interfaces to support traditional Message Passing Interface applications, languages developed under DARPA's High Productivity Computing Systems program, as well as other, experimental programming models. We will validate our runtime system with pilot codes on existing platforms and will use simulation to validate for exascale-class platforms. In this final report, we summarize research results from the work done at the Ohio State University towards the larger goals of the project listed above.
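As a loose illustration of work queues represented as file-system objects (a toy sketch only, not the proposed runtime's actual design): pending tasks live as files in one directory, and workers claim a task by an atomic rename into a claimed directory.

```python
import json
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp()) / "jobs" / "pending"
claimed = root.parent / "claimed"
root.mkdir(parents=True)
claimed.mkdir()

def submit(task_id, payload):
    """Enqueue work by writing a task file into the pending directory."""
    (root / f"{task_id}.json").write_text(json.dumps(payload))

def claim():
    """Claim one pending task via rename (atomic within a POSIX filesystem)."""
    for entry in sorted(root.glob("*.json")):
        target = claimed / entry.name
        try:
            entry.rename(target)
            return json.loads(target.read_text())
        except FileNotFoundError:
            continue    # another worker claimed it first
    return None

submit("t0001", {"op": "stencil", "block": [0, 0]})
print(claim())
```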
Utilization and Monetization of Healthcare Data in Developing Countries
Bram, Joshua T.; Warwick-Clark, Boyd; Obeysekare, Eric; Mehta, Khanjan
2015-01-01
Abstract In developing countries with fledgling healthcare systems, the efficient deployment of scarce resources is paramount. Comprehensive community health data and machine learning techniques can optimize the allocation of resources to areas, epidemics, or populations most in need of medical aid or services. However, reliable data collection in low-resource settings is challenging due to a wide range of contextual, business-related, communication, and technological factors. Community health workers (CHWs) are trusted community members who deliver basic health education and services to their friends and neighbors. While an increasing number of programs leverage CHWs for last mile data collection, a fundamental challenge to such programs is the lack of tangible incentives for the CHWs. This article describes potential applications of health data in developing countries and reviews the challenges to reliable data collection. Four practical CHW-centric business models that provide incentive and accountability structures to facilitate data collection are presented. Creating and strengthening the data collection infrastructure is a prerequisite for big data scientists, machine learning experts, and public health administrators to ultimately elevate and transform healthcare systems in resource-poor settings. PMID:26487984
NASA Astrophysics Data System (ADS)
Gengenbach, Ulrich K.; Hofmann, Andreas; Engelhardt, Friedhelm; Scharnowell, Rudolf; Koehler, Bernd
2001-10-01
A large number of microgrippers have been developed in industry and academia. Although the importance of hybrid integration techniques, and hence the demand for assembly tools, grows continuously, a large part of these developments has not yet been used in industrial production. The first grippers developed for microassembly were basically vacuum grippers and downscaled tweezers. Due to increasingly complex assembly tasks, more and more functionality, such as sensing, and additional functions, such as adhesive dispensing, have been integrated into gripper systems over the last years. Most of these gripper systems are incompatible, since there exists no standard interface to the assembly machine and no standard for the internal modules and interfaces. Thus these tools are not easily interchangeable between assembly machines and not easily adaptable to assembly tasks. In order to alleviate this situation, a construction kit for modular microgrippers is being developed. It is composed of modules with well-defined interfaces that can be combined to build task-specific grippers. An abstract model of a microgripper is proposed as a tool to structure the development of the construction kit. The modular concept is illustrated with prototypes.
Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.
2017-01-01
Abstract Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully automated synapse quantification program that applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324
Automated isotope identification algorithm using artificial neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamuda, Mark; Stinnett, Jacob; Sullivan, Clair
There is a need to develop an algorithm that can determine the relative activities of radio-isotopes in a large dataset of low-resolution gamma-ray spectra that contains a mixture of many radio-isotopes. Low-resolution gamma-ray spectra that contain mixtures of radio-isotopes often exhibit feature overlap, requiring algorithms that can analyze these features when overlap occurs. While machine learning and pattern recognition algorithms have shown promise for the problem of radio-isotope identification, their ability to identify and quantify mixtures of radio-isotopes has not been studied. Because machine learning algorithms use abstract features of the spectrum, such as the shape of overlapping peaks and the Compton continuum, they are a natural choice for analyzing radio-isotope mixtures. An artificial neural network (ANN) has been trained to calculate the relative activities of 32 radio-isotopes in a spectrum. Furthermore, the ANN is trained with simulated gamma-ray spectra, allowing easy expansion of the library of target radio-isotopes. In this paper we present our initial algorithms based on an ANN and evaluate them against a series of measured and simulated spectra.
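A minimal sketch of the training setup under stated assumptions: random per-isotope template spectra stand in for the simulated training data, and scikit-learn's MLPRegressor stands in for the paper's ANN.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n_channels, n_isotopes = 1024, 32

# Simulated training spectra: random mixtures of per-isotope templates.
templates = rng.random((n_isotopes, n_channels))
mix = rng.dirichlet(np.ones(n_isotopes), size=2000)   # relative activities
spectra = mix @ templates + rng.normal(scale=0.01, size=(2000, n_channels))

ann = MLPRegressor(hidden_layer_sizes=(64,), max_iter=50, random_state=0)
ann.fit(spectra, mix)   # network maps a spectrum to 32 relative activities

test_spectrum = mix[:1] @ templates
pred = np.clip(ann.predict(test_spectrum)[0], 0, None)
pred /= pred.sum()      # renormalize to relative activities
print("most active isotope index:", int(pred.argmax()))
```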
Design and analysis of an unconventional permanent magnet linear machine for energy harvesting
NASA Astrophysics Data System (ADS)
Zeng, Peng
This Ph.D. dissertation proposes an unconventional high-power-density linear electromagnetic kinetic energy harvester, and high-performance two-stage interface power electronics to maintain maximum power extraction from the energy source and charge the Li-ion battery load with constant current. The proposed machine architecture is composed of a double-sided flat-type silicon steel stator with winding slots, a permanent magnet mover, coil windings, a linear motion guide, and an adjustable spring bearing. The unconventional aspect of the design is that the NdFeB magnet bars in the mover are placed with their magnetic fields in the horizontal direction instead of the vertical direction, with like magnetic poles facing each other. The derived magnetic equivalent circuit model proves that the average air-gap flux density of the novel topology is as high as 0.73 T, a 17.7% improvement over that of the conventional topology at the given geometric dimensions of the proof-of-concept machine. Subsequently, improved output voltage and power are achieved. The dynamic model of the linear generator is also developed, and analytical equations for the maximum output power are derived for the cases of driving vibration with amplitude equal to, smaller than, and larger than the relative displacement between the mover and the stator of the machine. Furthermore, a finite element analysis (FEA) model has been simulated to confirm the derived analytical results and the improved power generation capability. Also, an optimization framework is explored to extend the design to multi-degree-of-freedom (n-DOF) vibration-based linear energy harvesting devices. Moreover, a boost-buck cascaded switch-mode converter with a current controller is designed to extract the maximum power from the harvester and charge the Li-ion battery with trickle current. Meanwhile, a maximum power point tracking (MPPT) algorithm is proposed and optimized for low-frequency driving vibrations. Finally, a proof-of-concept unconventional permanent magnet (PM) linear generator is prototyped and tested to verify the simulation results of the FEA model. For coil windings of 33, 66, and 165 turns, the machine is measured to deliver 65.6 mW, 189.1 mW, and 497.7 mW, respectively, with a maximum power density of 2.486 mW/cm3.
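The MPPT stage can be illustrated with the common perturb-and-observe scheme sketched below; the dissertation's tracker is specifically optimized for low-frequency driving vibrations, so this is only indicative, and the power curve here is invented.

```python
def power_at(duty):
    """Hypothetical harvester power versus converter duty cycle."""
    return max(0.0, -(duty - 0.62) ** 2 + 0.5)   # peak near duty = 0.62

def perturb_and_observe(duty=0.5, step=0.01, iterations=100):
    """Climb the power curve: keep stepping while power rises, else reverse."""
    last_power = power_at(duty)
    for _ in range(iterations):
        duty += step
        power = power_at(duty)
        if power < last_power:   # stepped past the peak: reverse direction
            step = -step
        last_power = power
    return duty

print(f"converged duty cycle: {perturb_and_observe():.2f}")
```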
2013-01-01
Background Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. Results We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. Conclusions We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts. PMID:23631733
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oh-ishi, Katsuyoshi, E-mail: oh-ishi@kc.chuo-u.ac.jp; Nagumo, Kenta; Tateishi, Kazuya
Mo-Re-C compounds containing Mo7Re13C with the β-Mn structure were synthesized from the high-melting-temperature metals Mo and Re and C powders using a conventional solid-state method with a planetary ball milling machine instead of the arc melting method. Use of the ball milling machine was necessary to obtain Mo7Re13C with the β-Mn structure by the solid-state method. Nearly single-phase Mo7Re13C with a trace of impurity was obtained using this synthesis method. XRF and lattice parameter measurements on the samples showed that Fe was present in the compound synthesized using the planetary ball milling machine with a pot and balls made of steel, whereas Fe was not detected in the compound synthesized using a pot and balls made of tungsten carbide. The former compound, containing Fe, did not show superconductivity, but the latter compound, without Fe, showed superconductivity at 6.1 K. - Graphical abstract: Temperature dependence of the magnetic susceptibility measured under 10 Oe for the superconducting PBM-T samples without Fe and the non-superconducting PBM-S sample with Fe. The inset is an enlarged view of the data for the PBM-S sample.
NASA Astrophysics Data System (ADS)
Graham, Matthew; Gray, N.; Burke, D.
2010-01-01
Many activities in the era of data-intensive astronomy are predicated upon some transference of domain knowledge and expertise from human to machine. The semantic infrastructure required to support this is no longer a pipe dream of computer science but a set of practical engineering challenges, more concerned with deployment and performance details than AI abstractions. The application of such ideas promises to help in such areas as contextual data access, exploiting distributed annotation and heterogeneous sources, and intelligent data dissemination and discovery. In this talk, we will review the status and use of semantic technologies in astronomy, particularly to address current problems in astroinformatics, with such projects as SKUA and AstroCollation.
HNS/Teflon, a new heat resistant explosive
NASA Technical Reports Server (NTRS)
Heller, H.; Bertram, A. L.
1973-01-01
HNS/Teflon (90/10) is a new pressed explosive developed for use in the Apollo program. The major advantages of HNS/Teflon are (1) excellent thermal stability at elevated temperatures, (2) superior resistance to sublimation at high temperatures and low pressures and (3) ease of molding powder preparation, pressing and machining. The impact sensitivity of HNS/Teflon is between that of Comp B and Comp A-3 while its explosive performance is about the same as TNT. Under the severe environmental conditions of the moon's surface, this explosive successfully performed its intended function of generating seismic waves in the Apollo ALSEP and LSPE experiments. (Modified author abstract)
LHCb Dockerized Build Environment
NASA Astrophysics Data System (ADS)
Clemencic, M.; Belin, M.; Closier, J.; Couturier, B.
2017-10-01
Used as lightweight virtual machines or as enhanced chroot environments, Linux containers, and in particular the Docker abstraction over them, are more and more popular in the virtualization communities. The LHCb Core Software team decided to investigate how to use Docker containers to provide stable and reliable build environments for the different supported platforms, including the obsolete ones which cannot be installed on modern hardware, to be used in integration builds, releases and by any developer. We present here the techniques and procedures set up to define and maintain the Docker images and how these images can be used to develop on modern Linux distributions for platforms otherwise not accessible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forest, E.; Hirata, Kohji
A methodological discussion is given for single-particle beam dynamics in circular machines. The discussion is introductory, but (or, even therefore) we avoid relying on oversimplified concepts. We treat the subject from a very general and fundamental point of view, because this is the easiest and most correct way to teach how to simulate particle motion and how to analyze its results. We give some principles of particle tracking free from theoretical prejudices. We also introduce some transparent methods for deducing the necessary information from the tracking: many of the traditional beam-dynamics concepts can be abstracted from them as approximate quantities which are valid in certain limiting cases.
A bottom-up approach to MEDLINE indexing recommendations.
Jimeno-Yepes, Antonio; Wilkowski, Bartłomiej; Mork, James G; Van Lenten, Elizabeth; Demner-Fushman, Dina; Aronson, Alan R
2011-01-01
MEDLINE indexing performed by the US National Library of Medicine staff describes the essence of a biomedical publication in about 14 Medical Subject Headings (MeSH). Since 2002, this task has been assisted by the Medical Text Indexer (MTI) program. We present a bottom-up approach to MEDLINE indexing in which the abstract is searched for indicators for a specific MeSH recommendation in a two-step process. Supervised machine learning combined with triage rules improves the sensitivity of recommendations while keeping the number of recommended terms relatively small. The improvement in recommendations observed in this work warrants further exploration of this approach to MTI recommendations on a larger set of MeSH headings.
Andrews, Mike; Weislogel, Mark; Moeck, Peter; Stone-Sundberg, Jennifer; Birkes, Derek; Hoffert, Madeline Paige; Lindeman, Adam; Morrill, Jeff; Fercak, Ondrej; Friedman, Sasha; Gunderson, Jeff; Ha, Anh; McCollister, Jack; Chen, Yongkang; Geile, John; Wollman, Andrew; Attari, Babak; Botnen, Nathan; Vuppuluri, Vasant; Shim, Jennifer; Kaminsky, Werner; Adams, Dustin; Graft, John
2014-01-01
Since the inception of 3D printing, an evolutionary process has taken place in which specific user and customer needs have crossed paths with the capabilities of a growing number of machines to create value-added businesses. Even today, over 30 years later, the growth of 3D printing and its utilization for the good of society is often limited by the various users' understanding of the technology for their specific needs. This article presents an overview of current 3D printing technologies and shows numerous examples from a multitude of fields from manufacturing to education. PMID:28473997
Cold Steel, Weak Flesh: Mechanism, Masculinity and the Anxieties of Late Victorian Empire
Brown, Michael
2017-01-01
This article considers the reception and representation of advanced military technology in late nineteenth- and early twentieth-century Britain. It argues that technologies such as the breech-loading rifle and the machine gun existed in an ambiguous relationship with contemporary ideas about martial masculinities and in many cases served to fuel anxieties about the physical prowess of the British soldier. In turn, these anxieties encouraged a preoccupation in both military and popular domains with that most visceral of weapons, the bayonet, an obsession which was to have profound consequences for British military thinking at the dawn of the First World War. PMID:28620269
Region Templates: Data Representation and Management for High-Throughput Image Analysis
Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Klasky, Scott; Saltz, Joel
2015-01-01
We introduce a region template abstraction and framework for the efficient storage, management and processing of common data types in analysis of large datasets of high resolution images on clusters of hybrid computing nodes. The region template abstraction provides a generic container template for common data structures, such as points, arrays, regions, and object sets, within a spatial and temporal bounding box. It allows for different data management strategies and I/O implementations, while providing a homogeneous, unified interface to applications for data storage and retrieval. A region template application is represented as a hierarchical dataflow in which each computing stage may be represented as another dataflow of finer-grain tasks. The execution of the application is coordinated by a runtime system that implements optimizations for hybrid machines, including performance-aware scheduling for maximizing the utilization of computing devices and techniques to reduce the impact of data transfers between CPUs and GPUs. An experimental evaluation on a state-of-the-art hybrid cluster using a microscopy imaging application shows that the abstraction adds negligible overhead (about 3%) and achieves good scalability and high data transfer rates. Optimizations in a high speed disk based storage implementation of the abstraction to support asynchronous data transfers and computation result in an application performance gain of about 1.13×. Finally, a processing rate of 11,730 4K×4K tiles per minute was achieved for the microscopy imaging application on a cluster with 100 nodes (300 GPUs and 1,200 CPU cores). This computation rate enables studies with very large datasets. PMID:26139953
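The container idea at the heart of the framework can be pictured with a short sketch. The real system is a C++ runtime for hybrid clusters; the Python below is only a schematic of the abstraction (a spatial/temporal bounding box plus named data objects behind a uniform store/retrieve interface), with all names hypothetical:

    # Schematic of the region template abstraction; not the real API.
    from dataclasses import dataclass, field
    from typing import Any, Dict, Tuple

    @dataclass
    class RegionTemplate:
        # Spatial/temporal bounding box: (x, y, t) low and high corners.
        bbox_low: Tuple[int, int, int]
        bbox_high: Tuple[int, int, int]
        # Named data objects (points, arrays, masks, ...) kept behind one
        # uniform interface, independent of the backend that holds them.
        objects: Dict[str, Any] = field(default_factory=dict)

        def store(self, name: str, data: Any) -> None:
            self.objects[name] = data      # backend could be RAM/disk/GPU

        def retrieve(self, name: str) -> Any:
            return self.objects[name]

    # A 4K x 4K tile at time step 0 with a mask array attached.
    tile = RegionTemplate((0, 0, 0), (4096, 4096, 1))
    tile.store("mask", [[0] * 8] * 8)
    print(tile.retrieve("mask")[0][:4])

The point of the abstraction is that application stages address data only through such named containers, so the runtime is free to place and move the data between CPU and GPU memories.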
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single-model predictions are generally better than any single member model's predictions, even the best calibrated ones. Furthermore, more sophisticated multi-model combination techniques that incorporate bias correction steps work better than simple multi-model averages or multi-model predictions without bias correction.
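Of the techniques compared, SMA and WAM are simple enough to sketch directly. The following Python is an illustrative toy (synthetic data, least-squares WAM weights fitted on a calibration window), not the DMIP evaluation code:

    # Hedged sketch of SMA vs. WAM multi-model combination on toy data.
    import numpy as np

    rng = np.random.default_rng(1)
    obs = rng.random(200) * 100                        # "observed" flow
    preds = obs + rng.normal(0, 15, size=(5, 200))     # 5 noisy models

    # Simple Multi-model Average (SMA): equal weights.
    sma = preds.mean(axis=0)

    # Weighted Average Method (WAM): least-squares weights fitted on a
    # calibration window, then applied to the remaining period.
    cal = slice(0, 100)
    w, *_ = np.linalg.lstsq(preds[:, cal].T, obs[cal], rcond=None)
    wam = w @ preds

    for name, sim in [("SMA", sma), ("WAM", wam)]:
        rmse = np.sqrt(np.mean((sim[100:] - obs[100:]) ** 2))
        print(name, round(rmse, 2))

Averaging alone already cancels much of the independent model noise; the fitted WAM weights additionally down-weight systematically biased members, which mirrors the study's finding that bias-aware combinations outperform the simple average.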
NASA Astrophysics Data System (ADS)
Xie, L.; Pietrafesa, L. J.; Wu, K.
2003-02-01
A three-dimensional wave-current coupled modeling system is used to examine the influence of waves on coastal currents and sea level. This coupled modeling system consists of the wave model WAM (Cycle 4) and the Princeton Ocean Model (POM). The results from this study show that it is important to incorporate surface wave effects into coastal storm surge and circulation models. Specifically, we find that (1) storm surge models without coupled surface waves generally underestimate not only the peak surge but also the coastal water level drop, which can also have a substantial impact on the coastal environment, (2) introducing the wave-induced surface stress effect into storm surge models can significantly improve storm surge prediction, (3) incorporating wave-induced bottom stress into the coupled wave-current model further improves storm surge prediction, and (4) calibration of the wave module according to minimum error in significant wave height does not necessarily result in an optimum wave module in a wave-current coupled system for current and storm surge prediction.
Assessment of the importance of the current-wave coupling in the shelf ocean forecasts
NASA Astrophysics Data System (ADS)
Jordà, G.; Bolaños, R.; Espino, M.; Sánchez-Arcilla, A.
2006-10-01
The effects of wave-current interactions on shelf ocean forecasts are investigated in the framework of the MFSTEP (Mediterranean Forecasting System Project Towards Environmental Predictions) project. A one-way sequential coupling approach is adopted to link the wave model (WAM) to the circulation model (SYMPHONIE). The coupling of waves and currents considers four main processes: wave refraction due to currents, surface wind drag and bottom drag modifications due to waves, and the wave-induced mass flux. The coupled modelling system is implemented in the southern Catalan shelf (NW Mediterranean), a region with characteristics similar to most of the Mediterranean shelves. The sensitivity experiments are run in a typical operational configuration. Wave refraction by currents appears not to be very relevant in a microtidal context such as the western Mediterranean. The main effect of waves on current forecasts is through the modification of the wind drag. The Stokes drift also plays a significant role due to its spatial and temporal characteristics. Finally, the enhanced bottom friction is only noticeable in the inner shelf.
Wide-Area Situational Awareness of Power Grids with Limited Phasor Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Huang, Zhenyu; Nieplocha, Jarek
Lack of situational awareness has been identified as one of the root causes of the August 14, 2003 Northeast Blackout in North America. To improve situational awareness, the Department of Energy (DOE) launched several projects to deploy Wide Area Measurement Systems (WAMS) in different interconnections. Compared to the tens of thousands of buses, the number of Phasor Measurement Units (PMUs) is quite limited and not enough to achieve observability for whole interconnections. To utilize the limited number of PMU measurements to improve situational awareness, this paper proposes to combine PMU measurement data and power flow equations to form a hybrid power flow model. Technically, a model that combines the concept of observable islands with the modeling of power flow conditions is proposed. The model is called a Hybrid Power Flow Model, as it has both PMU measurements and simulation assumptions, which describe the prior knowledge available about the whole power system. By solving the hybrid power flow equations, the proposed method can be used to derive power system states to improve the situational awareness of a power grid.
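A toy numerical illustration of the hybrid idea, not the paper's formulation: on a DC power flow approximation, the power-balance equations act as pseudo-measurements and are stacked with a direct PMU angle measurement, and the combined over-determined system is solved in a least-squares sense. All network values below are invented:

    # Hedged toy: combine power-flow equations P = B @ theta with one
    # PMU angle measurement and solve jointly by least squares.
    import numpy as np

    # 3-bus DC model; bus 0 is the slack (theta = 0 by convention).
    B = np.array([[ 30.0, -20.0],
                  [-20.0,  35.0]])          # reduced susceptance matrix
    P_inj = np.array([1.2, -0.8])           # assumed injections (p.u.)

    # One PMU directly measures the angle at bus 1.
    H_pmu = np.array([[1.0, 0.0]])
    z_pmu = np.array([0.038])

    # Stack power-flow "pseudo-measurements" with the PMU measurement.
    H = np.vstack([B, H_pmu])
    z = np.concatenate([P_inj, z_pmu])
    theta, *_ = np.linalg.lstsq(H, z, rcond=None)
    print("estimated bus angles (rad):", theta)

In the paper's terms, the power-flow rows encode the simulation assumptions about unobservable parts of the grid, while the PMU rows anchor the observable islands.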
Research on large-scale wind farm modeling
NASA Astrophysics Data System (ADS)
Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng
2017-01-01
Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid has a much greater impact on the power system than a traditional power plant. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. However, an effective wind turbine generator (WTG) model must be established first. As the doubly-fed VSCF wind turbine is currently the mainstream wind turbine type, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed process of building the model. It then surveys common wind farm modeling methods and points out the problems encountered. As WAMS is widely used in the power system, online parameter identification of the wind farm model based on the output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.
Synchrophasor-Based Tracking Three-Phase State Estimator and Its Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phadke, A. G.; Thorp, James; Centeno, Virgilio
2013-08-31
Electric power infrastructure is one of the critical resources of the nation. Its reliability in the face of natural or man-made catastrophes is of paramount importance for the economic and public health wellbeing of a modern society. Maintaining high levels of security for the high voltage transmission backbone of the electric supply network is a task requiring access to modern monitoring tools. These tools have been made particularly effective with the advent of synchronized phasor measurement units (PMUs), which became available in the late 1990s and have now become indispensable for optimal monitoring, protection and control of the power grid. The present project was launched with the objective of demonstrating the value of the Wide Area Measurement System (WAMS) using PMUs and its applications on the Dominion Virginia Power high voltage transmission grid. Virginia Tech is the birthplace of PMUs and was chosen to be the Principal Investigator of this project. In addition to Dominion Virginia Power, Quanta Technology of Raleigh, NC was selected to be co-Principal Investigator of this project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Lu; Albright, Austin P; Rahimpour, Alireza
Wide-area measurement systems (WAMSs) are used in smart grid systems to enable the efficient monitoring of grid dynamics. However, the overwhelming amount of data and the severe contamination from noise often impede the effective and efficient analysis and storage of WAMS-generated measurements. To solve this problem, we propose a novel framework that takes advantage of Multivariate Empirical Mode Decomposition (MEMD), a fully data-driven approach to analyzing non-stationary signals, dubbed MEMD-based Signal Analysis (MSA). The frequency measurements are considered as a linear superposition of different oscillatory components and noise. The low-frequency components, corresponding to the long-term trend and inter-area oscillations, are grouped and compressed by MSA using the mean shift clustering algorithm, whereas higher-frequency components, mostly noise and potentially part of high-frequency inter-area oscillations, are analyzed using Hilbert spectral analysis and delineated by their statistical behavior. By conducting experiments on both synthetic and real-world data, we show that the proposed framework can capture the characteristics, such as trends and inter-area oscillations, while reducing the data storage requirements.
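A schematic of the MSA pipeline in Python follows. Since no standard SciPy/scikit-learn routine implements multivariate EMD, the memd() below is an explicit placeholder (a crude band-split decomposition standing in for the real algorithm); the Hilbert analysis and mean shift clustering steps use real scipy/sklearn calls:

    # Hedged sketch of the MSA pipeline: decompose, Hilbert-analyze,
    # then cluster modes by mean instantaneous frequency.
    import numpy as np
    from scipy.signal import hilbert
    from sklearn.cluster import MeanShift

    def memd(x, n_modes=4):
        # Placeholder decomposition: band-limited components summing to x.
        X = np.fft.rfft(x)
        modes = []
        for band in np.array_split(np.arange(X.size), n_modes):
            Y = np.zeros_like(X)
            Y[band] = X[band]
            modes.append(np.fft.irfft(Y, n=x.size))
        return np.array(modes)

    fs = 30.0                                  # PMU reporting rate (Hz)
    t = np.arange(0, 60, 1 / fs)
    freq_meas = 60 + 0.02 * np.sin(2 * np.pi * 0.3 * t) \
                + 0.005 * np.random.randn(t.size)
    modes = memd(freq_meas - freq_meas.mean())

    # Instantaneous frequency of each mode via the Hilbert transform.
    analytic = hilbert(modes)
    inst_freq = np.diff(np.unwrap(np.angle(analytic)), axis=1) * fs / (2 * np.pi)
    mean_freq = np.abs(inst_freq).mean(axis=1).reshape(-1, 1)

    labels = MeanShift().fit_predict(mean_freq)  # trend vs. oscillatory groups
    print(dict(zip(range(len(modes)), labels)))

Only the low-frequency cluster would then be retained and compressed, which is where the storage savings claimed above come from.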
Lobelo, Felipe; Kelli, Heval M.; Tejedor, Sheri Chernetsky; Pratt, Michael; McConnell, Michael V.; Martin, Seth S.; Welk, Gregory J.
2017-01-01
Physical activity (PA) interventions constitute a critical component of cardiovascular disease (CVD) risk reduction programs. Objective mobile health (mHealth) software applications (apps) and wearable activity monitors (WAMs) can advance both assessment and integration of PA counseling in clinical settings and support community-based PA interventions. The use of mHealth technology for CVD risk reduction is promising, but integration into routine clinical care and population health management has proven challenging. The increasing diversity of available technologies and the lack of a comprehensive guiding framework are key barriers for standardizing data collection and integration. This paper reviews the validity, utility and feasibility of implementing mHealth technology in clinical settings and proposes an organizational framework to support PA assessment, counseling and referrals to community resources for CVD risk reduction interventions. This integration framework can be adapted to different clinical population needs. It should also be refined as technologies and regulations advance under an evolving health care system landscape in the United States and globally. PMID:26923067
Hindcast of extreme sea states in North Atlantic extratropical storms
NASA Astrophysics Data System (ADS)
Ponce de León, Sonia; Guedes Soares, Carlos
2015-02-01
This study examines the variability of freak wave parameters around the eye of northern hemisphere extratropical cyclones. The data was obtained from a hindcast performed with the WAve Model (WAM) forced by the wind fields of the Climate Forecast System Reanalysis (CFSR). The hindcast results were validated against wave buoys and satellite altimetry data, showing a good correlation. The variability of different wave parameters was assessed by applying the empirical orthogonal functions (EOF) technique to the hindcast data. From the EOF analysis, it can be concluded that the first empirical orthogonal function (V1) accounts for the greatest share of the variability of significant wave height (Hs), peak period (Tp), directional spreading (SPR) and Benjamin-Feir index (BFI). The share of variance in V1 varies by cyclone and by variable: for the 2nd storm and Hs, V1 contains 96% of the variance, while for the 3rd storm and BFI, V1 accounts for only 26% of the variance. The spatial patterns of V1 show that the variables are distributed around the cyclone centres mainly in a lobular fashion.
Leder, Helmut
2017-01-01
Visual complexity is relevant for many areas, ranging from improving the usability of technical displays or websites to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective properties of images to perceived complexity in artworks and other images. It has been argued that visual complexity is a multidimensional construct consisting mainly of two dimensions: a quantitative dimension that increases complexity through the number of elements, and a structural dimension representing order, negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity consisting of a quantitative and a structural factor (in our case mirror symmetry) for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry, giving more influence to small deviations from symmetry, greatly increased the explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications. PMID:29099832
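The modeling strategy lends itself to a compact sketch: compute candidate measures (here just an element count and a mirror-symmetry score, plus a non-linear symmetry transform), then fit linear and random forest models. The data below are synthetic stand-ins for the human ratings, not the study's stimuli:

    # Hedged sketch: two-factor complexity model with a non-linear
    # symmetry transform, fitted by linear regression and random forests.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n = 500
    n_elements = rng.integers(5, 200, n)       # quantitative dimension
    symmetry = rng.random(n)                   # 1 = perfectly symmetric
    sym_nl = (1 - symmetry) ** 0.25            # boost small asymmetries
    ratings = 0.6 * np.log(n_elements) + 2.0 * sym_nl + rng.normal(0, 0.2, n)

    X = np.column_stack([n_elements, symmetry, sym_nl])
    for model in (LinearRegression(), RandomForestRegressor(n_estimators=200)):
        r2 = model.fit(X, ratings).score(X, ratings)
        print(type(model).__name__, round(r2, 3))

The exponent applied to (1 - symmetry) is the kind of transform the abstract describes: it stretches the scale near perfect symmetry so that small deviations carry more weight.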
REEF: Retainable Evaluator Execution Framework
Weimer, Markus; Chen, Yingda; Chun, Byung-Gon; Condie, Tyson; Curino, Carlo; Douglas, Chris; Lee, Yunseong; Majestro, Tony; Malkhi, Dahlia; Matusevych, Sergiy; Myers, Brandon; Narayanamurthy, Shravan; Ramakrishnan, Raghu; Rao, Sriram; Sears, Russell; Sezgin, Beysim; Wang, Julia
2015-01-01
Resource Managers like Apache YARN have emerged as a critical layer in the cloud computing system stack, but the developer abstractions for leasing cluster resources and instantiating application logic are very low-level. This flexibility comes at a high cost in terms of developer effort, as each application must repeatedly tackle the same challenges (e.g., fault-tolerance, task scheduling and coordination) and re-implement common mechanisms (e.g., caching, bulk-data transfers). This paper presents REEF, a development framework that provides a control-plane for scheduling and coordinating task-level (data-plane) work on cluster resources obtained from a Resource Manager. REEF provides mechanisms that facilitate resource re-use for data caching, and state management abstractions that greatly ease the development of elastic data processing work-flows on cloud platforms that support a Resource Manager service. REEF is being used to develop several commercial offerings such as the Azure Stream Analytics service. Furthermore, we demonstrate REEF development of a distributed shell application, a machine learning algorithm, and a port of the CORFU [4] system. REEF is also currently an Apache Incubator project that has attracted contributors from several institutions. PMID:26819493
NASA Astrophysics Data System (ADS)
Szymański, Zygmunt
2015-03-01
The paper presents an analysis of the suitability of compact and hybrid drive systems for hoisting machines. It reviews constructional solutions for hoisting machine drive systems driven by AC and DC motors, and presents a concept for a modern, energy-saving hoisting machine supply system composed of a compact motor supplied by a transistor or thyristor converter and an intelligent control system built around a multilevel microprocessor controller. The paper also analyzes the suitability of selected artificial intelligence methods for hoisting machine control, automation, and modern diagnostic systems, limiting the analysis to fuzzy logic methods, genetic algorithms, and modern neural networks of the second and third generation. These methods enable the realization of complex control algorithms for hoisting machines while ensuring energy-efficient operating conditions, monitoring of operating parameters, and predictive diagnostics of the hoisting machine's technical state, minimizing the number of failure states. The paper presents a concept for a control and diagnostic system of the hoisting machine based on fuzzy-logic neural control, together with selected control algorithms and the results of computer simulations performed for particular mathematical models of the hoisting machine. The theoretical results were partly verified in laboratory and industrial experiments.
Cinelli, Mattia; Sun, Yuxin; Best, Katharine; Heather, James M.; Reich-Zeliger, Shlomit; Shifrut, Eric; Friedman, Nir; Shawe-Taylor, John; Chain, Benny
2017-01-01
Motivation: Somatic DNA recombination, the hallmark of vertebrate adaptive immunity, has the potential to generate a vast diversity of antigen receptor sequences. How this diversity captures antigen specificity remains incompletely understood. In this study we use high throughput sequencing to compare the global changes in T cell receptor β chain complementarity determining region 3 (CDR3β) sequences following immunization with ovalbumin administered with complete Freund’s adjuvant (CFA) or CFA alone. Results: The CDR3β sequences were deconstructed into short stretches of overlapping contiguous amino acids. The motifs were ranked according to a one-dimensional Bayesian classifier score comparing their frequency in the repertoires of the two immunization classes. The top ranking motifs were selected and used to create feature vectors which were used to train a support vector machine. The support vector machine achieved high classification scores in a leave-one-out validation test, reaching >90% in some cases. Summary: The study describes a novel two-stage classification strategy combining a one-dimensional Bayesian classifier with a support vector machine. Using this approach we demonstrate that the frequency of a small number of linear motifs three amino acids in length can accurately identify a CD4 T cell response to ovalbumin against a background response to the complex mixture of antigens which characterize complete Freund’s adjuvant. Availability and implementation: The sequence data is available at www.ncbi.nlm.nih.gov/sra/?term=SRP075893. The Decombinator package is available at github.com/innate2adaptive/Decombinator. The R package e1071 is available at the CRAN repository https://cran.r-project.org/web/packages/e1071/index.html. Contact: b.chain@ucl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28073756
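The two-stage strategy can be miniaturized as follows. The sketch uses invented toy CDR3β-like sequences and a simple smoothed log-ratio in place of the paper's exact Bayesian classifier score; the SVM step uses scikit-learn rather than the R package e1071 cited above:

    # Hedged sketch: rank overlapping 3-mers by a class log-ratio score,
    # keep the top motifs as features, and train a linear SVM.
    from collections import Counter
    import numpy as np
    from sklearn.svm import SVC

    def kmers(seq, k=3):
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    ova = ["CASSLGQDTQY", "CASSLGADTQY", "CASSPGQGYEQY"]  # toy class 1
    cfa = ["CASRDWGNTLY", "CASSFSGNTLY", "CASRDRGNEQF"]   # toy class 0

    c1 = Counter(m for s in ova for m in kmers(s))
    c0 = Counter(m for s in cfa for m in kmers(s))
    motifs = sorted(set(c1) | set(c0),
                    key=lambda m: abs(np.log((c1[m] + 1) / (c0[m] + 1))),
                    reverse=True)[:10]                    # top-ranked motifs

    def features(seq):
        counts = Counter(kmers(seq))
        return [counts[m] for m in motifs]

    X = [features(s) for s in ova + cfa]
    y = [1] * len(ova) + [0] * len(cfa)
    print(SVC(kernel="linear").fit(X, y).score(X, y))

The first stage keeps the feature space tiny (a handful of motif frequencies), which is what makes the second-stage SVM tractable on repertoire-scale data.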
ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean; Potok, Thomas E.; Jones, Todd
At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to 20+ year) cybersecurity fundamental basic research and development challenges, strategies and roadmaps facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.
NASA Astrophysics Data System (ADS)
Dorville, Jean-François; Cayol, Claude; Palany, Philippe
2016-04-01
Many numerical models based on the equation of wave action conservation (N = E/σ) enable the simulation of sea states (WAM, WW3, ...). Through parametric equations they define the sources and sinks of wave energy (E(f,σ)) in spectral form. Sea state statistics such as the significant wave height, the peak wave direction, and the mean wave period can then be predicted at medium or long term. These predictions are better when the initial and boundary conditions, together with the 10 m wind field, are well defined; broadly, the more homogeneous the bathymetry of the marine area, the more accurate the prediction. Météo-France for the French West Indies and French Guiana (MF-DIRAG), which is in charge of the safety of persons and goods, is seeking to improve its knowledge and capacity to evaluate the sea state at the coast and the marine submersion height using, among other approaches, statistical methods (such as return periods) and numerical simulations. The area of responsibility is large and includes different territories, coast types, and sea wave climates. Until now, most of the daily simulations were done for large areas with coarse meshes (10 km). The need for more accurate values in the assessment of marine submersion motivated new strategies to estimate the sea water level at the coastline and thereby characterize the marine submersion hazard. Since 2013, new data are available to enhance the capacity to simulate the mechanical processes at the coast: the high-resolution Litto 3D DEM for the Guadeloupe and Martinique coasts, with 5 m grid spacing up to 5 km from the coast, is free to use. This study presents the methodology applied at MF-DIRAG in study mode to evaluate the effects of wave breaking on the coastline. The method is based on downscaling wave simulations from the Atlantic basin to the coastal area, from MF-WAM to a sub-kilometric unstructured WW3 or SWAN grid depending on the domain studied. In the final step, a non-hydrostatic wave-flow model such as SWASH is used at the coast, complemented by an analytical method based on Stockdon et al. (2006) to validate the water level estimation. The water circulation due to storm surge and tide is at this point computed separately with an oceanic model including a coastal configuration and is only used as an input to the wave models. The method is tested on two documented hurricane events (Dean 2007 and Omar 2008); results, accuracy, and computation cost are presented. Special attention is given to the simulation of wave breaking on coasts of small to medium slope.
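For reference, spectral models of this family (WAM, WW3, SWAN) evolve the action density N = E/σ rather than the energy density. A standard form of the action balance equation as written in the WAM/SWAN literature (the exact source-term set varies by model) is:

    \frac{\partial N}{\partial t}
      + \nabla_{\vec{x}} \cdot \left[ (\vec{c}_g + \vec{U})\, N \right]
      + \frac{\partial (c_\sigma N)}{\partial \sigma}
      + \frac{\partial (c_\theta N)}{\partial \theta}
      = \frac{S_{\mathrm{tot}}}{\sigma},
    \qquad N(\sigma,\theta) = \frac{E(\sigma,\theta)}{\sigma}

where c_g is the group velocity, U the ambient current, c_sigma and c_theta the propagation velocities in frequency and direction space, and S_tot the sum of source and sink terms (wind input, whitecapping, depth-induced breaking, bottom friction, nonlinear interactions).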
Du, Tianchuan; Liao, Li; Wu, Cathy H; Sun, Bilin
2016-11-01
Protein-protein interactions play essential roles in many biological processes. Acquiring knowledge of the residue-residue contact information of two interacting proteins is not only helpful in annotating functions for proteins, but also critical for structure-based drug design. The prediction of the protein residue-residue contact matrix of the interfacial regions is challenging. In this work, we introduced deep learning techniques (specifically, stacked autoencoders) to build deep neural network models to tackle the residue-residue contact prediction problem. In tandem with interaction profile Hidden Markov Models, which were used first to extract Fisher score features from protein sequences, stacked autoencoders were deployed to extract and learn hidden abstract features. The deep learning model showed significant improvement over the traditional machine learning model, Support Vector Machines (SVM), with the overall accuracy increased by 15% from 65.40% to 80.82%. We showed that the stacked autoencoders could extract novel features out of the Fisher score features, which can be utilized by deep neural networks and other classifiers to enhance learning. It is further shown that deep neural networks have significant advantages over SVM in making use of the newly extracted features.
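A minimal sketch of greedy layer-wise autoencoder pretraining followed by supervised fine-tuning is shown below; the framework (Keras), layer widths, and random stand-in data are our illustrative choices, not the paper's configuration:

    # Hedged sketch: stacked-autoencoder pretraining, then fine-tuning.
    # Assumed: X is (n_pairs, n_features) Fisher-score features,
    # y is a 0/1 contact label per residue pair.
    import numpy as np
    from tensorflow.keras import Input, Model
    from tensorflow.keras.layers import Dense

    X = np.random.rand(512, 100).astype("float32")
    y = np.random.randint(0, 2, 512)

    def pretrain_layer(data, width):
        inp = Input(shape=(data.shape[1],))
        code = Dense(width, activation="relu")(inp)
        out = Dense(data.shape[1], activation="linear")(code)
        ae = Model(inp, out)
        ae.compile(optimizer="adam", loss="mse")
        ae.fit(data, data, epochs=5, verbose=0)   # reconstruct the input
        encoder = Model(inp, code)
        return encoder, encoder.predict(data, verbose=0)

    enc1, h1 = pretrain_layer(X, 64)              # first hidden abstraction
    enc2, h2 = pretrain_layer(h1, 32)             # deeper abstraction

    # Stack the pretrained encoders and fine-tune with a sigmoid output.
    inp = Input(shape=(X.shape[1],))
    out = Dense(1, activation="sigmoid")(enc2(enc1(inp)))
    clf = Model(inp, out)
    clf.compile(optimizer="adam", loss="binary_crossentropy",
                metrics=["accuracy"])
    clf.fit(X, y, epochs=5, verbose=0)

The unsupervised reconstruction step is what produces the "hidden abstract features" the abstract refers to; the same encoded representation can also be fed to other classifiers.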
Machine aided indexing from natural language text
NASA Technical Reports Server (NTRS)
Silvester, June P.; Genuardi, Michael T.; Klingbiel, Paul H.
1993-01-01
The NASA Lexical Dictionary (NLD) Machine Aided Indexing (MAI) system was designed to (1) reuse the indexing of the Defense Technical Information Center (DTIC); (2) reuse the indexing of the Department of Energy (DOE); and (3) reduce the time required for original indexing. This was done by automatically generating appropriate NASA thesaurus terms from either the other agency's index terms, or, for original indexing, from document titles and abstracts. The NASA STI Program staff devised two different ways to generate thesaurus terms from text. The first group of programs identified noun phrases by a parsing method that allowed for conjunctions and certain prepositions, on the assumption that indexable concepts are found in such phrases. Results were not always satisfactory, and it was noted that indexable concepts often occurred outside of noun phrases. The first method also proved to be too slow for the ultimate goal of interactive (online) MAI. The second group of programs used the knowledge base (KB), word proximity, and frequency of word and phrase occurrence to identify indexable concepts. Both methods are described and illustrated. Online MAI has been achieved, as well as several spinoff benefits, which are also described.
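The second approach can be caricatured in a few lines: scan the text for single words and adjacent word pairs, look them up in a knowledge base mapping trigger phrases to thesaurus terms, and rank candidate terms by trigger frequency. The tiny KB below is invented; the production NLD knowledge base is far larger and also uses word proximity:

    # Hedged sketch of KB/frequency-based machine-aided indexing.
    from collections import Counter
    import re

    kb = {                       # invented toy knowledge base
        "heat shield": "THERMAL PROTECTION",
        "reentry": "ATMOSPHERIC ENTRY",
        "tile": "CERAMIC MATERIALS",
    }

    text = """The orbiter heat shield uses ceramic tile arrays. During
    reentry, each tile must survive; heat shield margins were reviewed."""

    words = re.findall(r"[a-z]+", text.lower())
    candidates = Counter()
    for n in (1, 2):             # single words and adjacent word pairs
        for i in range(len(words) - n + 1):
            phrase = " ".join(words[i:i + n])
            if phrase in kb:
                candidates[kb[phrase]] += 1

    for term, score in candidates.most_common():
        print(term, score)

Dictionary lookup with simple counting is also fast enough for the interactive (online) use the abstract describes, unlike full parsing.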
Machine Aided Indexing and the NASA Thesaurus
NASA Technical Reports Server (NTRS)
vonOfenheim, Bill
2007-01-01
Machine Aided Indexing (MAI) is a Web-based application program for aiding the indexing of literature in the NASA Scientific and Technical Information (STI) Database. MAI was designed to be a convenient, fully interactive tool for determining the subject matter of documents and identifying keywords. The heart of MAI is a natural-language processor that accepts, as input, any user-supplied text, including abstracts, full documents, and Web pages. Within seconds, the text is analyzed and a ranked list of terms is generated. The 17,800 terms of the NASA Thesaurus serve as the foundation of the knowledge base used by MAI. The NASA Thesaurus defines a standard vocabulary, the use of which enables MAI to assist in ensuring that STI documents are uniformly and consistently accessible. Of particular interest to traditional users of the NASA Thesaurus, MAI incorporates a fully searchable thesaurus display module that affords word-search and hierarchy-navigation capabilities that make it much easier and less time-consuming to look up terms and browse, relative to lookup and browsing in older print and Portable Document Format (PDF) digital versions of the Thesaurus. In addition, because MAI is centrally hosted, the Thesaurus data are always current.
Computational approaches for predicting biomedical research collaborations.
Zhang, Qing; Yu, Hong
2014-01-01
Biomedical research is increasingly collaborative, and successful collaborations often produce high impact work. Computational approaches can be developed for automatically predicting biomedical research collaborations. Previous work on collaboration prediction mainly explored the topological structures of research collaboration networks, leaving out rich semantic information from the publications themselves. In this paper, we propose supervised machine learning approaches to predict research collaborations in the biomedical field. We explored both semantic features extracted from author research interest profiles and author network topological features. We found that the most informative semantic features for author collaborations are related to research interest, including similarity of out-citing citations and similarity of abstracts. Of the four supervised machine learning models (naïve Bayes, naïve Bayes multinomial, SVMs, and logistic regression), the best performing model is logistic regression, with an ROC ranging from 0.766 to 0.980 on different datasets. To our knowledge we are the first to study in depth how research interest and productivity can be used for collaboration prediction. Our approach is computationally efficient, scalable and yet simple to implement. The datasets of this study are available at https://github.com/qingzhanggithub/medline-collaboration-datasets.
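A small illustrative sketch of the feature construction (one semantic feature from TF-IDF similarity of research-interest text, one topological feature from shared co-authors) feeding logistic regression; all author data below are invented:

    # Hedged sketch: semantic + topological features for link prediction.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics.pairwise import cosine_similarity

    profiles = {   # toy research-interest profiles per author
        "a": "statistical machine translation parallel corpora text",
        "b": "machine translation of clinical narratives and corpora",
        "c": "protein folding molecular dynamics free energy",
        "d": "molecular dynamics simulations of membrane proteins",
    }
    coauthors = {"a": {"x"}, "b": {"x", "y"}, "c": {"z"}, "d": {"z"}}
    pairs = [("a", "b", 1), ("c", "d", 1), ("a", "c", 0), ("b", "d", 0)]

    vec = TfidfVectorizer().fit(profiles.values())

    def feats(u, v):
        sim = cosine_similarity(vec.transform([profiles[u]]),
                                vec.transform([profiles[v]]))[0, 0]
        shared = len(coauthors[u] & coauthors[v])   # topological feature
        return [sim, shared]

    X = [feats(u, v) for u, v, _ in pairs]
    y = [label for *_, label in pairs]
    print(LogisticRegression().fit(X, y).predict_proba([feats("a", "d")]))

Both feature families are cheap to compute from MEDLINE metadata, which is what keeps the approach scalable.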
A possible extension to the RInChI as a means of providing machine readable process data.
Jacob, Philipp-Maximilian; Lan, Tian; Goodman, Jonathan M; Lapkin, Alexei A
2017-04-11
The algorithmic, large-scale use and analysis of reaction databases such as Reaxys is currently hindered by the absence of widely adopted standards for publishing reaction data in machine readable formats. Crucial data such as yields of all products or stoichiometry are frequently not explicitly stated in the published papers and, hence, not reported in the database entry for those reactions, limiting their usefulness for algorithmic analysis. This paper presents a possible extension to the IUPAC RInChI standard via an auxiliary layer, termed ProcAuxInfo, which is a standardised, extensible form in which to report certain key reaction parameters such as declaration of all products, reactants and auxiliaries known in the reaction, reaction stoichiometry, amounts of substances used, conversion, yield and operating conditions. The extension is demonstrated via creation of RInChIs including the ProcAuxInfo layer for three published reactions, and accurate data recoverability via reverse translation of the created strings is shown. Implementation of this or another method of reporting process data by the publishing community would ensure that databases, such as Reaxys, would be able to abstract crucial data for big data analysis of their contents.
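Since ProcAuxInfo is an extensible auxiliary layer, its flavor can be conveyed with a schematic sketch. The key names, delimiters, and stand-in RInChI string below are illustrative only and do not follow the normative grammar defined in the paper:

    # Schematic only: append a key/value process layer to an RInChI-like
    # string and recover it by reverse translation.
    rinchi = "RInChI=1.00.1S/EXAMPLE"   # stand-in, not a valid RInChI

    proc = {
        "stoich": "1:1",               # reactant:product stoichiometry
        "amount_mol": "0.10:0.09",     # amounts of substance used/obtained
        "yield_pct": "90",
        "T_C": "25",
        "P_bar": "1",
    }
    aux = "ProcAuxInfo=" + ";".join(f"{k}={v}" for k, v in proc.items())
    record = rinchi + "/" + aux
    print(record)

    # Reverse translation back to a dict, mirroring the recoverability
    # check described in the abstract.
    parsed = dict(kv.split("=", 1)
                  for kv in record.split("ProcAuxInfo=")[1].split(";"))
    print(parsed["yield_pct"])

The round trip (build, then parse) is the property the authors demonstrate: nothing in the auxiliary layer is lost when the string is translated back.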
NASA Technical Reports Server (NTRS)
Grasso, Christopher; Page, Dennis; O'Reilly, Taifun; Fteichert, Ralph; Lock, Patricia; Lin, Imin; Naviaux, Keith; Sisino, John
2005-01-01
Virtual Machine Language (VML) is a mission-independent, reusable software system for programming spacecraft operations. Features of VML include a rich set of data types, named functions, parameters, IF and WHILE control structures, polymorphism, and on-the-fly creation of spacecraft commands from calculated values. Spacecraft functions can be abstracted into named blocks that reside in files aboard the spacecraft. These named blocks accept parameters and execute in a repeatable fashion. The sizes of uplink products are minimized by the ability to call blocks that implement most of the command steps. This block approach also enables some autonomous operations aboard the spacecraft, such as aerobraking, telemetry conditional monitoring, and anomaly response, without developing autonomous flight software. Operators on the ground write blocks and command sequences in a concise, high-level, human-readable programming language (also called VML). A compiler translates the human-readable blocks and command sequences into binary files (the operations products). The flight portion of VML interprets the uplinked binary files. The ground subsystem of VML also includes an interactive sequence-execution tool hosted on workstations, which runs sequences at several thousand times real-time speed, affords debugging, and generates reports. This tool enables iterative development of blocks and sequences within times of the order of seconds.
Two Theories Are Better Than One
NASA Astrophysics Data System (ADS)
Jones, Robert
2008-03-01
All knowledge is of an approximate character (B. Russell, Human Knowledge, 1948, pg 497 and 507). Our formalisms abstract, idealize, and simplify (R. L. Epstein, Propositional Logics, 2001, Ch XI and E. Bender, An Intro. to Math. Modeling, 1978, pg v and 2). Each formalism is an idealization, often approximating in its own DIFFERENT ways, each offering somewhat different coverage of the domain. Having MULTIPLE overlapping theories of a knowledge domain is then better than having just one theory (R. Jones, APS general meeting, April 2004). Theories are not unique (T. M. Mitchell, Machine Learning, 1997, pg 65-66 and Cooper, Machine Learning, vol. 9, 1992, pg 319). In the future every field will possess multiple theories of its domain, and scientific work and engineering will be performed based on the ensemble predictions of ALL of these. In some cases the theories may be quite divergent, differing greatly one from the other. This idea can be considered an extension of Bohr's notion of complementarity, ``...different experimental arrangements...described by different physical concepts...together and only together exhaust the definable information we can obtain about the object.'' (H. J. Folse, The Philosophy of Niels Bohr, 1985, pg 238)
Atkinson, Jonathan A.; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E.; Griffiths, Marcus
2017-01-01
Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. PMID:29020748
Text data extraction for a prospective, research-focused data mart: implementation and validation.
Hinchcliff, Monique; Just, Eric; Podlusky, Sofia; Varga, John; Chang, Rowland W; Kibbe, Warren A
2012-09-13
Translational research typically requires data abstracted from medical records as well as data collected specifically for research. Unfortunately, many data within electronic health records are represented as text that is not amenable to aggregation for analyses. We present a scalable open source SQL Server Integration Services package, called Regextractor, for including regular expression parsers into a classic extract, transform, and load workflow. We have used Regextractor to abstract discrete data from textual reports from a number of 'machine generated' sources. To validate this package, we created a pulmonary function test data mart and analyzed the quality of the data mart versus manual chart review. Eleven variables from pulmonary function tests performed closest to the initial clinical evaluation date were studied for 100 randomly selected subjects with scleroderma. One research assistant manually reviewed, abstracted, and entered relevant data into a database. Correlation with data obtained from the automated pulmonary function test data mart within the Northwestern Medical Enterprise Data Warehouse was determined. There was a near perfect (99.5%) agreement between results generated from the Regextractor package and those obtained via manual chart abstraction. The pulmonary function test data mart has been used subsequently to monitor disease progression of patients in the Northwestern Scleroderma Registry. In addition to the pulmonary function test example presented in this manuscript, the Regextractor package has been used to create cardiac catheterization and echocardiography data marts. The Regextractor package was released as open source software in October 2009 and has been downloaded 552 times as of 6/1/2012. Collaboration between clinical researchers and biomedical informatics experts enabled the development and validation of a tool (Regextractor) to parse, abstract and assemble structured data from text data contained in the electronic health record. Regextractor has been successfully used to create additional data marts in other medical domains and is available to the public.
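The core mechanism, regular-expression parsers applied to 'machine generated' report text, can be sketched in a few lines. The report layout and patterns below are invented and far simpler than real pulmonary function test output; in the actual package the parsers run inside an SQL Server Integration Services ETL workflow:

    # Hedged sketch: regex extraction of discrete values from report text.
    import re

    report = """PULMONARY FUNCTION TEST
    FVC: 2.31 L (58 % predicted)
    FEV1: 1.87 L (61 % predicted)
    DLCO: 14.2 mL/min/mmHg (55 % predicted)"""

    patterns = {                       # invented column -> pattern map
        "fvc_l": r"FVC:\s*([\d.]+)\s*L",
        "fev1_l": r"FEV1:\s*([\d.]+)\s*L",
        "fev1_pct_pred": r"FEV1:.*\(\s*([\d.]+)\s*% predicted\)",
        "dlco": r"DLCO:\s*([\d.]+)",
    }

    row = {}
    for column, pattern in patterns.items():
        m = re.search(pattern, report)
        row[column] = float(m.group(1)) if m else None   # NULL-able column

    print(row)   # one structured data-mart row per source report

Because such reports are machine generated and therefore highly regular, simple patterns can reach the near-perfect agreement with manual abstraction reported above.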
NASA Astrophysics Data System (ADS)
Lecun, Yann; Bengio, Yoshua; Hinton, Geoffrey
2015-05-01
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
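As a minimal concrete illustration of the backpropagation sentence above (a toy two-layer network in pure NumPy, our example rather than the paper's): the output error gradient is pushed back through each layer to tell the machine how to change its internal parameters:

    # Hedged toy: two-layer network trained by backpropagation.
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.random((64, 2))
    y = (X.sum(axis=1) > 1).astype(float).reshape(-1, 1)   # toy target

    W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
    W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

    for step in range(2000):
        h = np.tanh(X @ W1 + b1)                  # layer-1 representation
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # layer-2 output
        dp = (p - y) / len(X)                     # output error gradient
        dW2, db2 = h.T @ dp, dp.sum(0)
        dh = dp @ W2.T * (1 - h ** 2)             # gradient pushed back
        dW1, db1 = X.T @ dh, dh.sum(0)
        for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            P -= 1.0 * G                          # gradient-descent update

    print("training accuracy:", ((p > 0.5) == y).mean())

Each layer's representation (here h) is computed from the previous layer's, and the chain rule supplies the layer-by-layer parameter updates; deep nets simply repeat this over many more layers.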
Atomic Force Microscopy of Biological Membranes
Frederix, Patrick L.T.M.; Bosshart, Patrick D.; Engel, Andreas
2009-01-01
Atomic force microscopy (AFM) is an ideal method to study the surface topography of biological membranes. It allows membranes that are adsorbed to flat solid supports to be raster-scanned in physiological solutions with an atomically sharp tip. Therefore, AFM is capable of observing biological molecular machines at work. In addition, the tip can be tethered to the end of a single membrane protein, and forces acting on the tip upon its retraction indicate barriers that occur during the process of protein unfolding. Here we discuss the fundamental limitations of AFM determined by the properties of cantilevers, present aspects of sample preparation, and review results achieved on reconstituted and native biological membranes. PMID:19167286
NASA Technical Reports Server (NTRS)
Goethel, Thomas; Glesner, Sabine
2009-01-01
The correctness of safety-critical embedded software is crucial, and non-functional properties like deadlock-freedom and real-time constraints are particularly important. The real-time calculus Timed Communicating Sequential Processes (CSP) is capable of expressing such properties and can therefore be used to verify embedded software. In this paper, we present our formalization of Timed CSP in the Isabelle/HOL theorem prover, which we have formulated as an operational coalgebraic semantics together with bisimulation equivalences and coalgebraic invariants. Furthermore, we apply these techniques to an abstract specification with real-time constraints, which is the basis for current work in which we verify the components of a simple real-time operating system deployed on a satellite.
USSR Space Life Sciences Digest, issue 1
NASA Technical Reports Server (NTRS)
Hooke, L. R.; Radtke, M.; Rowe, J. E.
1985-01-01
The first issue of the bimonthly digest of USSR Space Life Sciences is presented. Abstracts are included for 49 Soviet periodical articles in 19 areas of aerospace medicine and space biology, published in Russian during the first quarter of 1985. Translated introductions and tables of contents for nine Russian books on topics related to NASA's life science concerns are presented. Areas covered include: botany, cardiovascular and respiratory systems, cybernetics and biomedical data processing, endocrinology, gastrointestinal system, genetics, group dynamics, habitability and environmental effects, health and medicine, hematology, immunology, life support systems, man-machine systems, metabolism, musculoskeletal system, neurophysiology, perception, personnel selection, psychology, radiobiology, reproductive system, and space biology. This issue concentrates on aerospace medicine and space biology.
Taxonomy-aware feature engineering for microbiome classification.
Oudah, Mai; Henschel, Andreas
2018-06-15
What is a healthy microbiome? The pursuit of this and many related questions, especially in light of the recently recognized microbial component in a wide range of diseases, has sparked a surge in metagenomic studies. Such diseases are often not attributable to a single pathogen but rather result from complex ecological processes. Relatedly, the increasing DNA sequencing depth and number of samples in metagenomic case-control studies have enabled the application of powerful statistical methods, e.g. Machine Learning approaches. For the latter, the feature space is typically shaped by the relative abundances of operational taxonomic units, as determined by cost-effective phylogenetic marker gene profiles. While a substantial body of microbiome/microbiota research involves unsupervised and supervised Machine Learning, very little attention has been paid to feature selection and engineering. We here propose the first algorithm to exploit phylogenetic hierarchy (i.e. an all-encompassing taxonomy) in feature engineering for microbiota classification. The rationale is to exploit the often mono- or oligophyletic distribution of relevant (but hidden) traits by virtue of taxonomic abstraction. The algorithm is embedded in a comprehensive microbiota classification pipeline, which we applied to a diverse range of datasets, distinguishing healthy from diseased microbiota samples. We demonstrate substantial improvements over state-of-the-art microbiota classification tools in terms of classification accuracy, regardless of the actual Machine Learning technique, while using drastically reduced feature spaces. Moreover, generalized features bear great explanatory value: they provide a concise description of conditions and thus help to provide pathophysiological insights. Indeed, the automatically and reproducibly derived features are consistent with previously published domain expert analyses.
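The taxonomic-abstraction step described above lends itself to a short sketch: OTU relative abundances are summed at successive taxonomy levels to generate candidate features, and whichever aggregation level proves most informative feeds the downstream classifier. This is an illustrative reconstruction under invented data, not the authors' pipeline; the toy OTU table, the lineage map, and the aggregate helper are all assumptions.

```python
# Hedged sketch: collapsing OTU abundances up a taxonomy to create
# candidate features, in the spirit of taxonomy-aware feature engineering.
# The table layout, taxa, and column names are illustrative assumptions.
import pandas as pd

# rows = samples, columns = OTUs; `lineage` maps each OTU to its taxonomy
otus = pd.DataFrame(
    {"OTU1": [5, 0, 2], "OTU2": [1, 3, 0], "OTU3": [0, 8, 4]},
    index=["sample_a", "sample_b", "sample_c"],
)
lineage = {
    "OTU1": ("Firmicutes", "Clostridia"),
    "OTU2": ("Firmicutes", "Bacilli"),
    "OTU3": ("Bacteroidetes", "Bacteroidia"),
}

def aggregate(level):
    """Sum relative abundances of all OTUs sharing a taxon at `level`."""
    rel = otus.div(otus.sum(axis=1), axis=0)   # per-sample relative abundance
    groups = {otu: lineage[otu][level] for otu in otus.columns}
    return rel.T.groupby(groups).sum().T       # samples x taxa

phylum_features = aggregate(0)   # coarse taxonomic abstraction
class_features = aggregate(1)    # finer abstraction
print(phylum_features)
```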
Medical Education Must Move from the Information Age to the Age of Artificial Intelligence.
Wartman, Steven A; Combs, C Donald
2017-11-01
Changes to the medical profession require medical education reforms that will enable physicians to enter contemporary practice more effectively. Proposals for such reforms abound. Common themes include renewed emphasis on communication, teamwork, risk management, and patient safety. These reforms are important but insufficient. They do not adequately address the most fundamental change: the practice of medicine is rapidly transitioning from the information age to the age of artificial intelligence. Employers need physicians who work at the top of their license, have knowledge spanning the health professions and care continuum, effectively leverage data platforms, focus on analyzing outcomes and improving performance, and communicate the meaning of the probabilities generated by massive amounts of data to patients, given their unique human complexities. Future medical practice will have four characteristics that must be addressed in medical education: care will be (1) provided in many locations, (2) provided by newly constituted health care teams, and (3) based on a growing array of data from multiple sources and artificial intelligence applications; and (4) the interface between medicine and machines will need to be skillfully managed. Thus, medical education must make better use of the findings of cognitive psychology, pay more attention to the alignment of humans and machines in education, and increase the use of simulations. Medical education will need to evolve to include systematic curricular attention to the organization of professional effort among health professionals, the use of intelligence tools like machine learning and robots, and a relentless focus on improving performance and patient outcomes.
NREL`s variable speed test bed: Preliminary results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlin, P.W.; Fingersh, L.J.; Fuchs, E.F.
1996-10-01
Under an NREL subcontract, the Electrical and Computer Engineering Department of the University of Colorado (CU) designed a 20-kilowatt, 12-pole, permanent-magnet electric generator and associated custom power electronics modules. This system can supply power over a generator speed range from 60 to 120 RPM. The generator was fabricated and assembled by the Denver electric-motor manufacturer Unique Mobility, and the power electronics modules were designed and fabricated at the University. The generator was installed on a 56-foot tower in the modified nacelle of a Grumman Windstream 33 wind turbine in early October 1995. For checkout it was immediately loaded directly into a three-phase resistive load, in which it produced 3.5 kilowatts of power. The ten-meter Grumman host wind machine is equipped with untwisted, untapered, NREL series S809 blades. The machine was instrumented to record both mechanical hub power and electrical power delivered to the utility. Initial tests are focusing on validating the calculated power surface. This mathematical surface shows the wind machine power as a function of both wind speed and turbine rotor speed. Upon the completion of this task, maximum effort will be directed toward filling a test matrix in which variable-speed operation will be contrasted with constant-speed operation by switching between the variable-speed control algorithm and the baseline constant-speed control algorithm at 10-minute intervals. Other quantities in the test matrix will be analyzed to detect variable-speed effects on structural loads and power quality.
Cyborgs in the Everyday: Masculinity and Biosensing Prostate Cancer
Haddow, Gill; King, Emma; Kunkler, Ian; McLaren, Duncan
2015-01-01
Abstract An in vivo biosensor is a technology in development that will assess the biological activity of cancers to individualise external beam radiotherapy. Inserting such technology into the human body creates cybernetic organisms: a cyborg that is a human–machine hybrid. There is a gap in knowledge relating to patients' willingness to allow automated technology to be embedded and to become cyborg. There is little agreement around what makes a cyborg and less understanding of the variation in the cyborgisation process. Understanding the viewpoint of possible beneficiaries addresses such gaps. There are currently three versions of 'cyborg' in the literature: (i) a critical feminist STS concept to destabilise power inherent in dualisms, (ii) an extreme version of the human/machine in science fiction that emphasises the 'man' in human and (iii) a prediction of internal physiological adaptation required for future space exploration. Findings from an interview study with 12 men in remission from prostate cancer show that a fourth version can be used to describe current and future sub-groups of the population: 'everyday cyborgs'. For the everyday cyborg, the masculine cyborg status found in the fictionalised human–machine related to issues of control of the cancer. This was preferred to the felt stigmatisation of being a 'leaker and bleeder'. The willingness to become cyborg was matched with having to get used to the everyday cyborg's technological adaptations and risks. It is crucial to explore the everyday cyborg's sometimes ambivalent viewpoint. The everyday cyborg thus adds the dimension of participant voice currently missing in existing cyborg literatures and imaginations. PMID:27335534
[Anesthesia simulators and training devices].
Hartmannsgruber, M; Good, M; Carovano, R; Lampotang, S; Gravenstein, J S
1993-07-01
Simulators and training devices are used extensively by educators in 'high-tech' occupations, especially those requiring an understanding of complex systems and co-ordinated psychomotor skills. Because of advances in computer technology, anaesthetised patients can now be realistically simulated. This paper describes several training devices and a simulator currently employed in the training of anaesthesia personnel at the University of Florida. The Gainesville Anesthesia Simulator (GAS) comprises a patient mannequin, an anaesthesia gas machine, and a full set of normally operating monitoring instruments. The patient can breathe spontaneously, has audible heart and breath sounds, and palpable pulses. The mannequin contains a sophisticated lung model that consumes and eliminates gas according to physiological principles. Interconnected computers controlling the physical signs of the mannequin enable the presentation of a multitude of clinical signs. In addition, the anaesthesia machine, which is functionally intact, has hidden fault activators to challenge the user to correct equipment malfunctions. Concealed sensors monitor the users' actions and responses. A robust data acquisition and control system and a user-friendly scripting language for programming simulation scenarios are key features of GAS and make this system applicable to the training of both the beginning resident and the experienced practitioner. GAS enhances clinical education in anaesthesia by providing a non-threatening environment that fosters learning by doing. Exercises with the simulator are supported by sessions on a number of training devices, which present theoretical and practical interactive courses on the anaesthesia machine and on monitors. An extensive system, for example, introduces the student to the physics and clinical application of transoesophageal echocardiography. (ABSTRACT TRUNCATED AT 250 WORDS)
SU-F-T-163: Improve Proton Therapy Efficiency: Report of a Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Y; Flanz, J; Mah, D
Purpose: The technology of proton therapy, especially the pencil beam scanning technique, is evolving very quickly. However, the efficiency of proton therapy seems to lag behind that of conventional photon therapy. The purpose of this abstract is to report the findings of a workshop on improving QA, planning and treatment efficiency in proton therapy. Methods: A panel of physicists, clinicians, and vendor representatives from over 18 institutions in the United States and internationally was convened in Knoxville, Tennessee in November 2015. The panel discussed several topics on how to improve proton therapy efficiency, including 1) lean principles and failure mode and effects analysis, 2) commissioning and machine QA, 3) treatment planning, optimization and evaluation, 4) patient positioning and IGRT, 5) vendor liaison and machine availability, and 6) staffing, education and training. Results: The relative time needed for machine QA, treatment planning and plan checks in proton therapy was found to range from 1 to 2.5 times that in photon therapy. The current status of proton QA, planning and treatment was assessed. Key areas for efficiency improvement, such as elimination of unnecessary QA items or steps and development of efficient software or hardware tools, were identified. A white paper summarizing the findings is being written. Conclusion: It is critical to improve efficiency by developing reliable proton beam lines, efficient software tools for treatment planning, optimization and evaluation, and dedicated proton QA devices. Conscious effort and collaboration from both industry leaders and proton therapy centers are needed to achieve this goal and further advance the technology of proton therapy.
Vera, L.; Pérez-Beteta, J.; Molina, D.; Borrás, J. M.; Benavides, M.; Barcia, J. A.; Velásquez, C.; Albillo, D.; Lara, P.; Pérez-García, V. M.
2017-01-01
Abstract Introduction: Machine learning methods are integrated into clinical research studies due to their strong capability to discover parameters with high information content and their combined predictive potential. Several studies have been developed using glioblastoma patients' imaging data. Many of them have focused on including large numbers of variables, mostly two-dimensional textural features and/or genomic data, regardless of their meaning or potential clinical relevance. Materials and methods: 193 glioblastoma patients were included in the study. Preoperative 3D magnetic resonance images were collected and semi-automatically segmented using in-house software. After segmentation, a database of 90 parameters including geometrical and textural image-based measures together with patients' clinical data (including age, survival, type of treatment, etc.) was constructed. The criterion for including variables in the study was that they had either shown individual impact on survival in single or multivariate analyses or had a precise clinical or geometrical meaning. These variables were used to perform several machine learning experiments. In a first set of computational cross-validation experiments based on regression trees, the attributes showing the highest information measures were extracted. In the second phase, more sophisticated learning methods were employed to validate the potential of the previous variables for predicting survival. Concretely, support vector machines, neural networks and sparse grid methods were used. Results: Variables showing a high information measure in the first phase provided the best prediction results in the second phase. Specifically, patient age, Stupp regimen and a geometrical measure related to the irregularity of contrast-enhancing areas were the variables showing the highest information measure in the first stage. For the second phase, the combination of patient age and Stupp regimen together with one tumor geometrical measure and one tumor heterogeneity feature yielded the best prediction quality. Conclusions: Advanced machine learning methods identified the parameters with the highest information measure and survival predictive potential. The uninformed machine learning methods identified a novel feature measure with direct impact on survival. Used in combination with other previously known variables, multi-indexes can be defined that can help in tumor characterization and prognosis prediction. Recent advances in the definition of those multi-indexes will be reported at the conference. Funding: James S. McDonnell Foundation (USA) 21st Century Science Initiative in Mathematical and Complex Systems Approaches for Brain Cancer [Collaborative award 220020450 and planning grant 220020420], MINECO/FEDER [MTM2015-71200-R], JCCM [PEII-2014-031-P].
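The two-phase protocol sketched in this abstract (information ranking with cross-validated regression trees, then survival prediction with more sophisticated learners) can be illustrated in a few lines. The synthetic data and the scikit-learn models below are stand-ins for the study's actual database and software, not a reproduction of them.

```python
# Hedged sketch of the two-phase protocol: rank features with regression
# trees, then validate the top features with a support vector machine.
# The data are synthetic; no actual patient variables are shown.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 193
X = rng.normal(size=(n, 10))               # 10 candidate predictors
survival = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

# Phase 1: regression trees yield an information measure per feature.
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, survival)
ranked = np.argsort(tree.feature_importances_)[::-1]
top = ranked[:3]
print("most informative features:", top)

# Phase 2: validate the selected variables with a support vector machine.
score = cross_val_score(SVR(), X[:, top], survival, cv=5).mean()
print("cross-validated R^2 with top features:", round(score, 3))
```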
Marafino, Ben J; Davies, Jason M; Bardach, Naomi S; Dean, Mitzi L; Dudley, R Adams
2014-01-01
Existing risk adjustment models for intensive care unit (ICU) outcomes rely on manual abstraction of patient-level predictors from medical charts. Developing an automated method for abstracting these data from free text might reduce cost and data collection times. To develop a support vector machine (SVM) classifier capable of identifying a range of procedures and diagnoses in ICU clinical notes for use in risk adjustment. We selected notes from 2001-2008 for 4191 neonatal ICU (NICU) and 2198 adult ICU patients from the MIMIC-II database from the Beth Israel Deaconess Medical Center. Using these notes, we developed an implementation of the SVM classifier to identify procedures (mechanical ventilation and phototherapy in NICU notes) and diagnoses (jaundice in NICU and intracranial hemorrhage (ICH) in adult ICU). On the jaundice classification task, we also compared classifier performance using n-gram features to unigrams with application of a negation algorithm (NegEx). Our classifier accurately identified mechanical ventilation (accuracy=0.982, F1=0.954) and phototherapy use (accuracy=0.940, F1=0.912), as well as jaundice (accuracy=0.898, F1=0.884) and ICH diagnoses (accuracy=0.938, F1=0.943). Including bigram features improved performance on the jaundice (accuracy=0.898 vs 0.865) and ICH (0.938 vs 0.927) tasks, and outperformed NegEx-derived unigram features (accuracy=0.898 vs 0.863) on the jaundice task. Overall, a classifier using n-gram support vectors displayed excellent performance characteristics. The classifier generalizes to diverse patient populations, diagnoses, and procedures. SVM-based classifiers can accurately identify procedure status and diagnoses among ICU patients, and including n-gram features improves performance, compared to existing methods.
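The core of the reported method, n-gram text features feeding a support vector machine, can be sketched as follows. The toy notes and labels are invented; scikit-learn stands in for the authors' implementation, and real MIMIC-II notes cannot be shown here.

```python
# Hedged sketch of an n-gram SVM note classifier of the kind described:
# unigram + bigram counts feeding a linear SVM. Notes are invented.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

notes = [
    "infant placed under phototherapy for jaundice",
    "no evidence of jaundice, bilirubin normal",
    "mechanical ventilation continued overnight",
    "extubated, breathing room air without support",
]
has_jaundice = [1, 0, 0, 0]

# ngram_range=(1, 2) mirrors the bigram experiments reported above.
clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
clf.fit(notes, has_jaundice)
print(clf.predict(["bilirubin elevated, jaundice noted"]))
```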
Automated Design Space Exploration with Aspen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spafford, Kyle L.; Vetter, Jeffrey S.
2015-01-01
Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.
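The formulation described above, design-space exploration cast as a nonlinear program over parameter ranges, can be mimicked with a toy analytical performance model and an off-the-shelf solver. The model below is invented for illustration and is not an Aspen model.

```python
# Hedged sketch: design-space exploration as a bounded nonlinear program.
# The analytical performance model is invented and merely plays the role
# of a compiled Aspen model fed to an optimization solver.
import numpy as np
from scipy.optimize import minimize

def runtime(params):
    """Toy model: compute time shrinks with nodes and block size,
    while communication time grows with both."""
    nodes, block = params
    compute = 1e4 / (nodes * block)
    communicate = 1e-2 * nodes + 1e-1 * block
    return compute + communicate

# Bounded parameter ranges play the role of Aspen's new range constructs.
result = minimize(runtime, x0=[8.0, 32.0], bounds=[(1, 1024), (1, 256)])
print("best (nodes, block):", np.round(result.x, 1),
      "predicted time:", round(result.fun, 2))
```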
Foundations for a new science of learning.
Meltzoff, Andrew N; Kuhl, Patricia K; Movellan, Javier; Sejnowski, Terrence J
2009-07-17
Human learning is distinguished by the range and complexity of skills that can be learned and the degree of abstraction that can be achieved compared with those of other species. Homo sapiens is also the only species that has developed formal ways to enhance learning: teachers, schools, and curricula. Human infants have an intense interest in people and their behavior and possess powerful implicit learning mechanisms that are affected by social interaction. Neuroscientists are beginning to understand the brain mechanisms underlying learning and how shared brain systems for perception and action support social learning. Machine learning algorithms are being developed that allow robots and computers to learn autonomously. New insights from many different fields are converging to create a new science of learning that may transform educational practices.
Mutation Clusters from Cancer Exome.
Kakushadze, Zura; Yu, Willie
2017-08-15
We apply our statistically deterministic machine learning/clustering algorithm *K-means (recently developed in https://ssrn.com/abstract=2908286) to 10,656 published exome samples for 32 cancer types. A majority of cancer types exhibit a mutation clustering structure. Our results are in-sample stable. They are also out-of-sample stable when applied to 1389 published genome samples across 14 cancer types. In contrast, we find in- and out-of-sample instabilities in cancer signatures extracted from exome samples via nonnegative matrix factorization (NMF), a computationally-costly and non-deterministic method. Extracting stable mutation structures from exome data could have important implications for speed and cost, which are critical for early-stage cancer diagnostics, such as novel blood-test methods currently in development.
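As an illustration of the clustering step, plain k-means on a matrix of per-sample mutation profiles looks as follows. The cited *K-means variant adds a statistically deterministic seeding scheme that is not reproduced here; a fixed seed and many restarts stand in for it, and the mutation matrix is synthetic.

```python
# Hedged sketch: clustering exome mutation profiles with plain k-means.
# Determinism of the *K-means variant is only approximated via a fixed
# random_state and multiple restarts; the mutation counts are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# rows = samples, columns = mutation categories (e.g. 96 trinucleotide types)
mutations = rng.poisson(lam=5.0, size=(200, 96)).astype(float)
profiles = mutations / mutations.sum(axis=1, keepdims=True)

km = KMeans(n_clusters=4, n_init=50, random_state=0).fit(profiles)
print("cluster sizes:", np.bincount(km.labels_))
```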
A DNA network as an information processing system.
Santini, Cristina Costa; Bath, Jonathan; Turberfield, Andrew J; Tyrrell, Andy M
2012-01-01
Biomolecular systems that can process information are sought for computational applications, because of their potential for parallelism and miniaturization and because their biocompatibility also makes them suitable for future biomedical applications. DNA has been used to design machines, motors, finite automata, logic gates, reaction networks and logic programs, amongst many other structures and dynamic behaviours. Here we design and program a synthetic DNA network to implement computational paradigms abstracted from cellular regulatory networks. These show information processing properties that are desirable in artificial, engineered molecular systems, including robustness of the output in relation to different sources of variation. We show the results of numerical simulations of the dynamic behaviour of the network and preliminary experimental analysis of its main components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, L.; Notkin, D.; Adams, L.
1990-03-31
This task relates to research on programming massively parallel computers. Previous work on the Ensemble concept of programming was extended, and an investigation into nonshared-memory models of parallel computation was undertaken. Previous work on the Ensemble concept defined a set of programming abstractions and was used to organize the programming task into three distinct levels: composition of machine instructions, composition of processes, and composition of phases. It was applied to shared-memory models of computation. During the present research period, these concepts were extended to nonshared-memory models. Also during this period, one Ph.D. thesis was completed, and one book chapter and six conference proceedings were published.
Ogden, Michael W.; Marano, Kristin M.; Jones, Bobbette A.; Morgan, Walter T.; Stiles, Mitchell F.
2015-01-01
Abstract A randomized, multi-center study of adult cigarette smokers switched to tobacco-heating cigarettes, snus or ultra-low machine yield tobacco-burning cigarettes (50/group) was conducted, and subjects’ experience with the products was followed for 24 weeks. Differences in biomarkers of tobacco exposure between smokers and never smokers at baseline and among groups relative to each other and over time were assessed. Results indicated reduced exposure to many potentially harmful constituents found in cigarette smoke following product switching. Findings support differences in exposure from the use of various tobacco products and are relevant to the understanding of a risk continuum among tobacco products (ClinicalTrials.gov Identifier: NCT02061917). PMID:26554277
Ogden, Michael W.; Marano, Kristin M.; Jones, Bobbette A.; Morgan, Walter T.; Stiles, Mitchell F.
2015-01-01
Abstract A randomized, multi-center study of adult cigarette smokers switched to tobacco-heating cigarettes, snus or ultra-low machine yield tobacco-burning cigarettes (50/group) for 24 weeks was conducted. Evaluation of biomarkers of biological effect (e.g. inflammation, lipids, hypercoaguable state) indicated that the majority of consistent and statistically significant improvements over time within each group were observed in markers of inflammation. Consistent and statistically significant differences in pairwise comparisons between product groups were not observed. These findings are relevant to the understanding of biomarkers of biological effect related to cigarette smoking as well as the risk continuum across various tobacco products (ClinicalTrials.gov Identifier: NCT02061917). PMID:26525962
Portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele
2018-03-01
Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.
Echo movement and evolution from real-time processing.
NASA Technical Reports Server (NTRS)
Schaffner, M. R.
1972-01-01
Preliminary experimental data on the effectiveness of conventional radars in measuring the movement and evolution of meteorological echoes, when the radar is connected to a programmable real-time processor, are examined. In the processor, programming is accomplished by conceiving abstract machines, which constitute the actual programs used in the methods employed. An analysis of these methods, such as the center-of-gravity method, the contour-displacement method, the method of slope, the cross-section method, the contour cross-correlation method, the method of echo evolution at each point, and three-dimensional measurements, shows that the motions deduced from them may differ notably (since each method determines different quantities), but the plurality of measurements may give additional information on the characteristics of the precipitation.
From flamingo dance to (desirable) drug discovery: a nature-inspired approach.
Sánchez-Rodríguez, Aminael; Pérez-Castillo, Yunierkis; Schürer, Stephan C; Nicolotti, Orazio; Mangiatordi, Giuseppe Felice; Borges, Fernanda; Cordeiro, M Natalia D S; Tejera, Eduardo; Medina-Franco, José L; Cruz-Monteagudo, Maykel
2017-10-01
The therapeutic effects of drugs are well known to result from their interaction with multiple intracellular targets. Accordingly, the pharma industry is currently moving from a reductionist approach based on a 'one-target fixation' to a holistic multitarget approach. However, many drug discovery practices are still procedural abstractions resulting from the attempt to understand and address the action of biologically active compounds while preventing adverse effects. Here, we discuss how drug discovery can benefit from the principles of evolutionary biology and report two real-life case studies. We do so by focusing on the desirability principle, and its many features and applications, such as machine learning-based multicriteria virtual screening.
NASA Astrophysics Data System (ADS)
Hengl, Tomislav
2016-04-01
Preliminary results of predicting the distribution of organic soils (Histosols) and soil organic carbon stock (in tonnes per ha) using global compilations of soil profiles (about 150,000 points) and covariates at 250 m spatial resolution (about 150 covariates; mainly MODIS seasonal land products, SRTM DEM derivatives, climatic images, and lithological, land cover and landform maps) are presented. We focus on a data-driven approach, i.e. Machine Learning techniques that often require no prior knowledge about the distribution of the target variable or about possible relationships. Further advantages of using machine learning are (DOI: 10.1371/journal.pone.0125814): all rules required to produce outputs are formalized; the whole procedure is documented (the statistical model and associated computer script), enabling reproducible research; predicted surfaces can make use of various information sources and can be optimized relative to all available quantitative point and covariate data; there is more flexibility in terms of the spatial extent, resolution and support of requested maps; and automated mapping is more cost-effective: once the system is operational, maintenance and production of updates are an order of magnitude faster and cheaper, so prediction maps can be updated and improved at ever shorter time intervals. Some disadvantages of automated soil mapping based on Machine Learning are that models are data-driven, so any serious blunders or artifacts in the input data can propagate to order-of-magnitude larger errors than in the case of expert-based systems; that fitting machine learning models is an order of magnitude more computationally demanding (the computing effort can be tens of thousands of times higher than if, e.g., linear geostatistics is used); and that many machine learning models are fairly complex, often abstract, so that interpreting them is not trivial and requires special multidimensional/multivariable plotting and data mining tools. Results of model fitting using the R packages nnet, randomForest and the h2o software (machine learning functions) show that significant models can be fitted for soil classes, bulk density (R-square 0.76), soil organic carbon (R-square 0.62) and coarse fragments (R-square 0.59). Consequently, we were able to estimate the soil organic carbon stock for the majority of the land mask (excluding permanent ice) and to detect patches of landscape containing mainly organic soils (peat and similar). Our results confirm that the hotspots of soil organic carbon in the Tropics are the peatlands of Indonesia, northern Peru, the western Amazon and the Congo river basin. The majority of the world's soil organic carbon stock likely lies in the northern latitudes (the tundra and taiga of the north). The distribution of Histosols seems to be controlled mainly by climatic conditions (especially temperature regime and water vapor) and hydrologic position in the landscape. Predicted distributions of organic soils (probability of occurrence) and total soil organic carbon stock at resolutions of 1 km and 250 m are available via the SoilGrids.org project homepage.
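The mapping step, point observations regressed on raster covariates with a random forest, reduces to a few lines. The sketch below uses synthetic covariates as stand-ins for the MODIS/DEM/climate stack and scikit-learn in place of the R packages named above.

```python
# Hedged sketch of the data-driven mapping step: a random forest relating
# point observations of soil organic carbon (SOC) to raster covariates.
# Covariates and SOC values are synthetic, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_points = 1000
covariates = rng.normal(size=(n_points, 8))   # e.g. temperature, NDVI, slope
soc = 30 + 10 * covariates[:, 0] - 5 * covariates[:, 2] \
      + rng.normal(scale=5, size=n_points)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(rf, covariates, soc, cv=5, scoring="r2").mean()
print("cross-validated R^2:", round(r2, 2))   # cf. the reported 0.62 for SOC

# The fitted forest would then be applied to every 250 m grid cell.
rf.fit(covariates, soc)
```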
Proceedings of the Second NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Munoz, Cesar (Editor)
2010-01-01
This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.
Rojas Sánchez, Patricia; Cobos, Alberto; Navaro, Marisa; Ramos, José Tomas; Pagán, Israel
2017-01-01
Abstract Determining the factors modulating the genetic diversity of HIV-1 populations is essential to understand viral evolution. This study analyzes the relative importance of clinical factors in intrahost HIV-1 subtype B (HIV-1B) evolution and in the fixation of drug resistance mutations (DRM) during longitudinal pediatric HIV-1 infection. We recovered 162 partial HIV-1B pol sequences (from 3 to 24 per patient) from 24 perinatally infected patients from the Madrid Cohort of HIV-1 infected children and adolescents, in a time interval ranging from 2.2 to 20.3 years. We applied machine learning classification methods to analyze the relative importance of 28 clinical/epidemiological/virological factors in HIV-1B evolution, predicting HIV-1B genetic diversity (d), nonsynonymous and synonymous mutation rates (dN, dS) and DRM presence. Most of the 24 HIV-1B infected pediatric patients were Spanish (91.7%) and diagnosed before 2000 (83.3%), and all were antiretroviral therapy experienced. They had from 0.3 to 18.8 years of HIV-1 exposure at sampling time. Most sequences presented DRM. The best predictor variables for the HIV-1B evolutionary parameters were the age at HIV-1 diagnosis for d, the age at first antiretroviral treatment for dN and the year of HIV-1 diagnosis for dS. The year of infection (birth year) and year of sampling appeared relevant for the fixation of DRM overall and, considering drug families, of resistance to protease inhibitors (PIs). This study identifies, for the first time using machine learning, the factors that most affect HIV-1B pol evolution and those affecting DRM fixation in HIV-1B infected pediatric patients. PMID:29044435
Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for the measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.
Automated Design of Complex Dynamic Systems
Hermans, Michiel; Schrauwen, Benjamin; Bienstman, Peter; Dambre, Joni
2014-01-01
Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent nonlinear dynamics. This, however, often requires the optimization of large numbers of system parameters, related to both the system's structure and its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied to abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters which determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which envelopes a novel design methodology of smart and highly complex physical systems. PMID:24497969
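A minimal version of the idea, treating the parameters of a differential equation as trainable weights, is sketched below: finite-difference gradient descent recovers the damping coefficient of a damped oscillator from a target trajectory. The setup is an invented stand-in for the paper's more elaborate systems.

```python
# Hedged sketch: gradient descent on one parameter of an oscillator
# y'' = -y - c*y' so that its trajectory matches a target trajectory.
import numpy as np
from scipy.integrate import solve_ivp

t_eval = np.linspace(0, 10, 200)

def trajectory(damping):
    rhs = lambda t, y: [y[1], -y[0] - damping * y[1]]
    return solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_eval).y[0]

target = trajectory(0.5)        # pretend this is the desired behavior

def loss(damping):
    return np.mean((trajectory(damping) - target) ** 2)

c, lr, eps = 2.0, 5.0, 1e-4     # start far from the optimum
for _ in range(100):
    grad = (loss(c + eps) - loss(c - eps)) / (2 * eps)
    c -= lr * grad              # plain gradient descent step
print("recovered damping:", round(c, 3))   # approaches the true value 0.5
```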
Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P
2017-08-14
The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest-neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks were the top-performing classifiers, highlighting their added value over more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with the unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi-task learning is offered by providing the data and the protocols.
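The evaluation protocol, a temporal split scored with the Matthews Correlation Coefficient, can be sketched as follows; the data are synthetic and a random forest stands in for the benchmarked learners, so this mirrors the validation logic rather than the ChEMBL benchmark itself.

```python
# Hedged sketch: temporal-split validation scored with the Matthews
# Correlation Coefficient. Descriptors, labels and years are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(7)
n = 2000
X = rng.normal(size=(n, 50))             # stand-in molecular descriptors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
year = rng.integers(2005, 2015, size=n)  # publication year of each record

# Temporal validation: train strictly on older records, test on newer ones.
train, test = year < 2012, year >= 2012
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[train], y[train])
print("MCC:", round(matthews_corrcoef(y[test], clf.predict(X[test])), 3))
```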
NASA Astrophysics Data System (ADS)
Traversa, Fabio L.; Di Ventra, Massimiliano
2017-02-01
We introduce a class of digital machines, which we name Digital Memcomputing Machines (DMMs), able to solve a wide range of problems, including Non-deterministic Polynomial (NP) ones, with polynomial resources (in time, space, and energy). An abstract DMM with this power must satisfy a set of compatible mathematical constraints underlying its practical realization. We prove this by making a connection with dynamical systems theory. This leads us to a set of physical constraints for poly-resource resolvability. Once the mathematical requirements have been assessed, we propose a practical scheme to solve the above class of problems based on the novel concept of self-organizing logic gates and circuits (SOLCs). These are logic gates and circuits able to accept input signals from any terminal, without distinction between conventional input and output terminals. They can solve Boolean problems by self-organizing into their solution. They can be fabricated either with circuit elements with memory (such as memristors) and/or with standard MOS technology. Using tools of functional analysis, we prove mathematically the following constraints for poly-resource resolvability: (i) SOLCs possess a global attractor; (ii) their only equilibrium points are the solutions of the problems to solve; (iii) the system converges exponentially fast to the solutions; (iv) the equilibrium convergence rate scales at most polynomially with input size. We finally provide arguments that periodic orbits and strange attractors cannot coexist with equilibria. As examples, we show how to solve prime factorization and the search version of the NP-complete subset-sum problem. Since DMMs map integers into integers, they are robust against noise and hence scalable. We finally discuss the implications of the DMM realization through SOLCs for the NP = P question, related to the constraints of poly-resource resolvability.
Critical Assessment of Small Molecule Identification 2016: automated methods.
Schymanski, Emma L; Ruttkies, Christoph; Krauss, Martin; Brouard, Céline; Kind, Tobias; Dührkop, Kai; Allen, Felicity; Vaniya, Arpana; Verdegem, Dries; Böcker, Sebastian; Rousu, Juho; Shen, Huibin; Tsugawa, Hiroshi; Sajed, Tanvir; Fiehn, Oliver; Ghesquière, Bart; Neumann, Steffen
2017-03-27
The fourth round of the Critical Assessment of Small Molecule Identification (CASMI) Contest ( www.casmi-contest.org ) was held in 2016, with two new categories for automated methods. This article covers the 208 challenges in Categories 2 and 3, without and with metadata, from organization, participation, results and post-contest evaluation of CASMI 2016 through to perspectives for future contests and small molecule annotation/identification. The Input Output Kernel Regression (CSI:IOKR) machine learning approach performed best in "Category 2: Best Automatic Structural Identification-In Silico Fragmentation Only", won by Team Brouard with 41% challenge wins. The winner of "Category 3: Best Automatic Structural Identification-Full Information" was Team Kind (MS-FINDER), with 76% challenge wins. The best methods were able to achieve over 30% Top 1 ranks in Category 2, with all methods ranking the correct candidate in the Top 10 in around 50% of challenges. This success rate rose to 70% Top 1 ranks in Category 3, with candidates in the Top 10 in over 80% of the challenges. The machine learning and chemistry-based approaches are shown to perform in complementary ways. The improvement in (semi-)automated fragmentation methods for small molecule identification has been substantial. The achieved high rates of correct candidates in the Top 1 and Top 10, despite large candidate numbers, open up great possibilities for high-throughput annotation of untargeted analysis for "known unknowns". As more high quality training data becomes available, the improvements in machine learning methods will likely continue, but the alternative approaches still provide valuable complementary information. Improved integration of experimental context will also improve identification success further for "real life" annotations. The true "unknown unknowns" remain to be evaluated in future CASMI contests.
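The headline numbers above are Top-k success rates: the fraction of challenges in which the correct candidate appears within the top k of a method's ranked list. A minimal computation, with invented ranks, is:

```python
# Hedged sketch: Top-k success rate over a set of challenges, given the
# rank of the correct candidate in each ranked list. Ranks are invented.
ranks = [1, 3, 1, 12, 2, 1, 58, 4, 1, 9]

def top_k_rate(ranks, k):
    return sum(r <= k for r in ranks) / len(ranks)

print("Top 1:", top_k_rate(ranks, 1))    # cf. the ~40% Top 1 in Category 2
print("Top 10:", top_k_rate(ranks, 10))
```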
NASA Astrophysics Data System (ADS)
Hutson, Matthew
2018-05-01
In their adaptability, young children demonstrate common sense, a kind of intelligence that, so far, computer scientists have struggled to reproduce. Gary Marcus, a developmental cognitive scientist at New York University in New York City, believes the field of artificial intelligence (AI) would do well to learn lessons from young thinkers. Researchers in machine learning argue that computers trained on mountains of data can learn just about anything—including common sense—with few, if any, programmed rules. But Marcus says computer scientists are ignoring decades of work in the cognitive sciences and developmental psychology showing that humans have innate abilities—programmed instincts that appear at birth or in early childhood—that help us think abstractly and flexibly. He believes AI researchers ought to include such instincts in their programs. Yet many computer scientists, riding high on the successes of machine learning, are eagerly exploring the limits of what a naïve AI can do. Computer scientists appreciate simplicity and have an aversion to debugging complex code. Furthermore, big companies such as Facebook and Google are pushing AI in this direction. These companies are most interested in narrowly defined, near-term problems, such as web search and facial recognition, in which blank-slate AI systems can be trained on vast data sets and work remarkably well. But in the longer term, computer scientists expect AIs to take on much tougher tasks that require flexibility and common sense. They want to create chatbots that explain the news, autonomous taxis that can handle chaotic city traffic, and robots that nurse the elderly. Some computer scientists are already trying. Such efforts, researchers hope, will result in AIs that sit somewhere between pure machine learning and pure instinct. They will boot up following some embedded rules, but will also learn as they go.
NASA Astrophysics Data System (ADS)
Salah, Zeinab; Shalaby, Ahmed; Steiner, Allison L.; Zakey, Ashraf S.; Gautam, Ritesh; Abdel Wahab, Mohamed M.
2018-02-01
This study assesses the direct and indirect effects of natural and anthropogenic aerosols (e.g., black carbon and sulfate) over West and Central Africa during the West African monsoon (WAM) period (June-July-August). We investigate the impacts of aerosols on the amount of cloudiness, their influence on the precipitation efficiency of clouds, and the associated radiative forcing (direct and indirect). Our study includes the implementation of three new formulations of auto-conversion parameterization in RegCM4.4.1 [namely, the Beheng (BH), Tripoli and Cotton (TC), and Liu and Daum (R6) schemes], in addition to the model's default auto-conversion scheme (Kessler). Among the new schemes, BH reduces the precipitation wet bias by more than 50% over West Africa and by around 25% over Central Africa. Results from detailed sensitivity experiments suggest a significant path forward in addressing the long-standing wet bias characteristic of RegCM. In terms of aerosol-induced radiative forcing, the impact of the various schemes is found to vary considerably (ranging from -5 to -25 W m-2).
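For readers unfamiliar with auto-conversion, the default Kessler-type scheme converts cloud water to rain at a fixed rate above a threshold; BH, TC and R6 replace this with different functional forms. A sketch of the Kessler form, with typical textbook constants rather than RegCM4.4.1's exact settings, is:

```python
# Hedged sketch of a Kessler-type auto-conversion parameterization: cloud
# water converts to rain at rate k above threshold qc0. Constants are
# typical textbook values, not RegCM4.4.1's exact settings.
import numpy as np

def kessler_autoconversion(qc, qc0=5e-4, k=1e-3):
    """Rain production rate (per second) from cloud water mixing ratio qc:
    dq_r/dt = k * (qc - qc0) where qc > qc0, else 0."""
    return np.maximum(0.0, k * (qc - qc0))

qc = np.array([2e-4, 6e-4, 1.2e-3])   # cloud water in three grid cells
print(kessler_autoconversion(qc))     # [0.0, 1e-7, 7e-7]
```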
Assessing sea wave and spray effects on Marine Boundary Layer structure
NASA Astrophysics Data System (ADS)
Stathopoulos, Christos; Galanis, George; Patlakas, Platon; Kallos, George
2017-04-01
The air-sea interface is characterized by several mechanical and thermodynamical processes. Heat, moisture and momentum exchanges increase the complexity of modeling the atmosphere-ocean system. Near-surface atmospheric levels are subject to sea surface roughness and sea spray. Sea spray fluxes can affect atmospheric stability and induce microphysical processes such as sea salt particle formation and condensation/evaporation of water in the boundary layer. Moreover, the presence of sea spray can alter stratification over the ocean surface with further insertion of water vapor. This can lead to modified stability conditions and to wind profiles that deviate significantly from the logarithmic approximation. To model these effects, we introduce a fully coupled system consisting of the mesoscale atmospheric model RAMS/ICLAMS and the wave model WAM. The system encompasses schemes for ocean surface roughness, sea salt aerosols and droplet thermodynamic processes, and handles sea salt as a predictive quantity. Numerical experiments using the developed atmosphere-ocean system are performed over the Atlantic and the Mediterranean shoreline. Emphasis is given to quantifying the improvement obtained in the description of the marine boundary layer, particularly in its lower part, as well as in wave characteristics.
Learning compliant manipulation through kinesthetic and tactile human-robot interaction.
Kronander, Klas; Billard, Aude
2014-01-01
Robot Learning from Demonstration (RLfD) has been identified as a key element for making robots useful in daily life. A wide range of techniques has been proposed for deriving a task model from a set of demonstrations of the task. Most previous works use learning to model the kinematics of the task, and for autonomous execution the robot then relies on a stiff position controller. While many tasks can and have been learned this way, there are tasks in which controlling the position alone is insufficient to achieve the goals of the task. These are typically tasks that involve contact or require a specific response to physical perturbations. The question of how to adjust the compliance to suit the needs of the task has not yet been fully treated in Robot Learning from Demonstration. In this paper, we address this issue and present interfaces that allow a human teacher to indicate compliance variations by physically interacting with the robot during task execution. We validate our approach in two different experiments on the 7-DoF Barrett WAM and KUKA LWR robot manipulators. Furthermore, we conduct a user study to evaluate the usability of our approach from a non-roboticist's perspective.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Wang, Jianhui; Liu, Hui
Abstract: In this paper, nonlinear model reduction for power systems is performed by the balancing of empirical controllability and observability covariances that are calculated around the operating region. Unlike existing model reduction methods, the external system does not need to be linearized but is directly dealt with as a nonlinear system. A transformation is found to balance the controllability and observability covariances in order to determine which states have the greatest contribution to the input-output behavior. The original system model is then reduced by Galerkin projection based on this transformation. The proposed method is tested and validated on a system comprised of a 16-machine 68-bus system and an IEEE 50-machine 145-bus system. The results show that by using the proposed model reduction the calculation efficiency can be greatly improved; at the same time, the obtained state trajectories are close to those obtained by directly simulating the whole system or by partitioning the system without performing reduction. Compared with the balanced truncation method based on a linearized model, the proposed nonlinear model reduction method can guarantee higher accuracy and similar calculation efficiency. It is shown that the proposed method is not sensitive to the choice of the matrices for calculating the empirical covariances.
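One ingredient of the method, the empirical controllability covariance estimated from trajectories under perturbed inputs around an operating point, can be sketched on a toy system. The two-state model and perturbation choices below are illustrative assumptions, not the paper's test systems.

```python
# Hedged sketch: empirical controllability covariance of a toy nonlinear
# system, accumulated from trajectories under perturbed constant inputs
# around an operating point. The model and perturbations are invented.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x, u):
    """Two-state nonlinear system standing in for the external system."""
    return [x[1], -np.sin(x[0]) - 0.5 * x[1] + u]

t_eval = np.linspace(0, 10, 500)
dt = t_eval[1] - t_eval[0]
Wc = np.zeros((2, 2))
for u in (+0.1, -0.1):                 # input perturbations
    sol = solve_ivp(f, (0, 10), [0.0, 0.0], t_eval=t_eval, args=(u,))
    X = sol.y                          # 2 x T state trajectory
    Wc += (X @ X.T) * dt               # approximates integral of x x^T dt
Wc /= 2                                # average over perturbations
print("empirical controllability covariance:\n", Wc)
# States with large covariance dominate the input-output behavior; balancing
# against the observability counterpart then guides the Galerkin reduction.
```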
SCENERY: a web application for (causal) network reconstruction from cytometry data
Papoutsoglou, Georgios; Athineou, Giorgos; Lagani, Vincenzo; Xanthopoulos, Iordanis; Schmidt, Angelika; Éliás, Szabolcs; Tegnér, Jesper
2017-01-01
Abstract Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well-studied by the machine learning community. However, the potentials of available methods remain largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific for cytometry data. To bridge this gap, we present Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, on-line environment. In SCENERY, users may upload their data and set their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. The server also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely-used and robust R platform allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/. PMID:28525568
Tsotsos, John K.
2017-01-01
Much has been written about how the biological brain might represent and process visual information, and how this might inspire and inform machine vision systems. Indeed, tremendous progress has been made, especially during the last decade, in the latter area. However, a key question seems too often, if not mostly, to be ignored. This question is simply: do proposed solutions scale with the reality of the brain's resources? This scaling question applies equally to brain and to machine solutions. A number of papers have examined the inherent computational difficulty of visual information processing using theoretical and empirical methods. The main goal of this activity had three components: to understand the deep nature of the computational problem of visual information processing; to discover how well the computational difficulty of vision matches to the fixed resources of biological seeing systems; and to abstract from the matching exercise the key principles that lead to the observed characteristics of biological visual performance. This set of components was termed complexity level analysis in Tsotsos (1987) and was proposed as an important complement to Marr's three levels of analysis. This paper revisits that work with the advantage that decades of hindsight can provide. PMID:28848458
Streamline integration as a method for two-dimensional elliptic grid generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiesenberger, M., E-mail: Matthias.Wiesenberger@uibk.ac.at; Held, M.; Einkemmer, L.
We propose a new numerical algorithm to construct a structured numerical elliptic grid of a doubly connected domain. Our method is applicable to domains with boundaries defined by two contour lines of a two-dimensional function. Furthermore, we can adapt any analytically given boundary aligned structured grid, which specifically includes polar and Cartesian grids. The resulting coordinate lines are orthogonal to the boundary. Grid points as well as the elements of the Jacobian matrix can be computed efficiently and up to machine precision. In the simplest case we construct conformal grids, yet with the help of weight functions and monitor metrics we can control the distribution of cells across the domain. Our algorithm is parallelizable and easy to implement with elementary numerical methods. We assess the quality of grids by considering both the distribution of cell sizes and the accuracy of the solution to elliptic problems. Among the tested grids these key properties are best fulfilled by the grid constructed with the monitor metric approach. Highlights: • Construct structured, elliptic numerical grids with elementary numerical methods. • Align coordinate lines with or make them orthogonal to the domain boundary. • Compute grid points and metric elements up to machine precision. • Control cell distribution by adaption functions or monitor metrics.
An, Ji‐Yong; Meng, Fan‐Rong; Chen, Xing; Yan, Gui‐Ying; Hu, Ji‐Pu
2016-01-01
Abstract Predicting protein-protein interactions (PPIs) is a challenging task and essential to construct the protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPIs detection from protein sequences. The major improvements include: (1) protein sequences are represented using the BiGP feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) to reduce the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments executed on yeast and Helicobacter pylori datasets achieved very high accuracies of 94.57% and 90.57%, respectively. Experimental results are significantly better than previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic decision support tool for future proteomics research. To facilitate extensive studies, we developed a freely available web server called RVM-BiGP-PPIs in Hypertext Preprocessor (PHP) for predicting PPIs. The web server, including source code and the datasets, is available at http://219.219.62.123:8888/BiGP/. PMID:27452983
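For illustration, the BiGP feature itself is straightforward to compute from a PSSM. The sketch below is a simplified, hypothetical rendering of the pipeline: the softmax conversion of PSSM scores to probabilities, the synthetic data and labels, and the classifier are all our own assumptions (scikit-learn has no RVM, so an SVC stands in), and a real PPI feature would combine the vectors of both proteins in a pair.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def bigp_feature(pssm):
    """400-dim bi-gram probability feature from an (L, 20) PSSM."""
    z = pssm - pssm.max(axis=1, keepdims=True)           # stable row-wise softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    B = p[:-1].T @ p[1:]          # B[a, b]: weight of amino acid a followed by b
    v = B.ravel()
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
X = np.array([bigp_feature(rng.normal(size=(60, 20))) for _ in range(40)])
y = rng.integers(0, 2, size=40)                          # synthetic interaction labels
model = make_pipeline(PCA(n_components=10), SVC()).fit(X, y)
```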
SWIFT-Review: a text-mining workbench for systematic review.
Howard, Brian E; Phillips, Jason; Miller, Kyle; Tandon, Arpit; Mav, Deepak; Shah, Mihir R; Holmgren, Stephanie; Pelch, Katherine E; Walker, Vickie; Rooney, Andrew A; Macleod, Malcolm; Shah, Ruchir R; Thayer, Kristina
2016-05-23
There is growing interest in using machine learning approaches to priority rank studies and reduce human burden in screening literature when conducting systematic reviews. In addition, identifying addressable questions during the problem formulation phase of systematic review can be challenging, especially for topics having a large literature base. Here, we assess the performance of the SWIFT-Review priority ranking algorithm for identifying studies relevant to a given research question. We also explore the use of SWIFT-Review during problem formulation to identify, categorize, and visualize research areas that are data rich/data poor within a large literature corpus. Twenty case studies, including 15 public data sets, representing a range of complexity and size, were used to assess the priority ranking performance of SWIFT-Review. For each study, seed sets of manually annotated included and excluded titles and abstracts were used for machine training. The remaining references were then ranked for relevance using an algorithm that considers term frequency and latent Dirichlet allocation (LDA) topic modeling. This ranking was evaluated with respect to (1) the number of studies screened in order to identify 95% of known relevant studies and (2) the "Work Saved over Sampling" (WSS) performance metric. To assess SWIFT-Review for use in problem formulation, PubMed literature search results for 171 chemicals implicated as EDCs were uploaded into SWIFT-Review (264,588 studies) and categorized based on evidence stream and health outcome. Patterns of search results were surveyed and visualized using a variety of interactive graphics. Compared with the reported performance of other tools using the same datasets, the SWIFT-Review ranking procedure obtained the highest scores on 11 out of 15 of the public datasets. Overall, these results suggest that using machine learning to triage documents for screening has the potential to save, on average, more than 50% of the screening effort ordinarily required when using un-ordered document lists. In addition, the tagging and annotation capabilities of SWIFT-Review can be useful during the activities of scoping and problem formulation. Text-mining and machine learning software such as SWIFT-Review can be valuable tools to reduce the human screening burden and assist in problem formulation.
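The WSS metric used above has a direct definition: at a target recall r, WSS = (TN + FN)/N - (1 - r), i.e. the fraction of screening work saved relative to an unordered list. A small sketch of how it can be computed from a ranked list (the function name and input format are our own):

```python
import math

def wss_at_recall(ranked_labels, recall=0.95):
    """Work Saved over Sampling: (TN + FN)/N - (1 - recall), with the cutoff
    placed at the rank where the target recall is first reached.
    ranked_labels: 1/0 relevance flags in ranked (best-first) order."""
    n, total_pos = len(ranked_labels), sum(ranked_labels)
    needed = math.ceil(recall * total_pos)
    found = 0
    for i, y in enumerate(ranked_labels, start=1):
        found += y
        if found >= needed:
            return (n - i) / n - (1.0 - recall)
    return 0.0

# toy example: 3 relevant studies ranked near the top of 10
print(wss_at_recall([1, 0, 1, 1, 0, 0, 0, 0, 0, 0]))   # cutoff at rank 4 -> 0.55
```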
Olm, Matthew R.; Morowitz, Michael J.
2018-01-01
ABSTRACT Antibiotic resistance in pathogens is extensively studied, and yet little is known about how antibiotic resistance genes of typical gut bacteria influence microbiome dynamics. Here, we leveraged genomes from metagenomes to investigate how genes of the premature infant gut resistome correspond to the ability of bacteria to survive under certain environmental and clinical conditions. We found that formula feeding impacts the resistome. Random forest models corroborated by statistical tests revealed that the gut resistome of formula-fed infants is enriched in class D beta-lactamase genes. Interestingly, Clostridium difficile strains harboring this gene are at higher abundance in formula-fed infants than C. difficile strains lacking this gene. Organisms with genes for major facilitator superfamily drug efflux pumps have higher replication rates under all conditions, even in the absence of antibiotic therapy. Using a machine learning approach, we identified genes that are predictive of an organism’s direction of change in relative abundance after administration of vancomycin and cephalosporin antibiotics. The most accurate results were obtained by reducing annotated genomic data to five principal components classified by boosted decision trees. Among the genes involved in predicting whether an organism increased in relative abundance after treatment are those that encode subclass B2 beta-lactamases and transcriptional regulators of vancomycin resistance. This demonstrates that machine learning applied to genome-resolved metagenomics data can identify key genes for survival after antibiotics treatment and predict how organisms in the gut microbiome will respond to antibiotic administration. IMPORTANCE The process of reconstructing genomes from environmental sequence data (genome-resolved metagenomics) allows unique insight into microbial systems. We apply this technique to investigate how the antibiotic resistance genes of bacteria affect their ability to flourish in the gut under various conditions. Our analysis reveals that strain-level selection in formula-fed infants drives enrichment of beta-lactamase genes in the gut resistome. Using genomes from metagenomes, we built a machine learning model to predict how organisms in the gut microbial community respond to perturbation by antibiotics. This may eventually have clinical applications. PMID:29359195
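The modeling recipe described above, reducing annotated genomic features to five principal components and classifying them with boosted decision trees, can be sketched in a few lines of scikit-learn. The data here are synthetic placeholders, and GradientBoostingClassifier is a stand-in for whichever boosted-tree implementation the authors used:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(120, 400)).astype(float)  # per-genome annotation counts (synthetic)
y = rng.integers(0, 2, size=120)                     # 1 = abundance increased after antibiotics

model = make_pipeline(PCA(n_components=5), GradientBoostingClassifier(random_state=0))
print(cross_val_score(model, X, y, cv=5).mean())     # chance-level on random data
```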
Huebner, Philip A.; Willits, Jon A.
2018-01-01
Previous research has suggested that distributional learning mechanisms may contribute to the acquisition of semantic knowledge. However, distributional learning mechanisms, statistical learning, and contemporary “deep learning” approaches have been criticized for being incapable of learning the kind of abstract and structured knowledge that many think is required for acquisition of semantic knowledge. In this paper, we show that recurrent neural networks, trained on noisy naturalistic speech to children, do in fact learn what appears to be abstract and structured knowledge. We trained two types of recurrent neural networks (Simple Recurrent Network, and Long Short-Term Memory) to predict word sequences in a 5-million-word corpus of speech directed to children ages 0–3 years old, and assessed what semantic knowledge they acquired. We found that learned internal representations are encoding various abstract grammatical and semantic features that are useful for predicting word sequences. Assessing the organization of semantic knowledge in terms of the similarity structure, we found evidence of emergent categorical and hierarchical structure in both models. We found that the Long Short-term Memory (LSTM) and SRN are both learning very similar kinds of representations, but the LSTM achieved higher levels of performance on a quantitative evaluation. We also trained a non-recurrent neural network, Skip-gram, on the same input to compare our results to the state-of-the-art in machine learning. We found that Skip-gram achieves relatively similar performance to the LSTM, but is representing words more in terms of thematic compared to taxonomic relations, and we provide reasons why this might be the case. Our findings show that a learning system that derives abstract, distributed representations for the purpose of predicting sequential dependencies in naturalistic language may provide insight into emergence of many properties of the developing semantic system. PMID:29520243
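A minimal PyTorch sketch of the next-word-prediction setup described above; the vocabulary size, dimensions, and the toy batch are placeholders, and the real corpus preprocessing and training loop are omitted:

```python
import torch
import torch.nn as nn

class NextWordLSTM(nn.Module):
    """Predict the next word at every position of a token sequence."""
    def __init__(self, vocab, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        h, _ = self.lstm(self.emb(tokens))
        return self.out(h)                     # next-word logits at each step

vocab = 5000
model = NextWordLSTM(vocab)
tokens = torch.randint(0, vocab, (8, 20))      # toy batch of word ids
logits = model(tokens[:, :-1])                 # predict token t+1 from tokens up to t
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))
loss.backward()
```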
Why Johnny can't reengineer health care processes with information technology.
Webster, C; McLinden, S; Begler, K
1995-01-01
Many educational institutions are developing curricula that integrate computer and business knowledge and skills concerning a specific industry, such as banking or health care. We have developed a curriculum that emphasizes, equally, medical, computer, and business management concepts. Along the way we confronted a formidable obstacle, namely the domain specificity of the reference disciplines. Knowledge within each domain is sufficiently different from other domains that it reduces the leverage of building on preexisting knowledge and skills. We review this problem from the point of view of cognitive science (in particular, knowledge representation and machine learning) to suggest strategies for coping with incommensurate domain ontologies. These strategies include reflective judgment, implicit learning, abstraction, generalization, analogy, multiple inheritance, project-orientation, selectivity, goal- and failure-driven learning, and case- and story-based learning.
A Semantic Basis for Proof Queries and Transformations
NASA Technical Reports Server (NTRS)
Aspinall, David; Denney, Ewen W.; Luth, Christoph
2013-01-01
We extend the query language PrQL, designed for inspecting machine representations of proofs, to also allow transformation of proofs. PrQL natively supports hiproofs, which express proof structure using hierarchically nested labelled trees; we claim this is a natural way of taming the complexity of huge proofs. Query-driven transformations enable manipulation of this structure, in particular, to transform proofs produced by interactive theorem provers into forms that assist their understanding, or that could be consumed by other tools. In this paper we motivate and define basic transformation operations, using an abstract denotational semantics of hiproofs and queries. This extends our previous semantics for queries based on syntactic tree representations. We define update operations that add and remove sub-proofs, and manipulate the hierarchy to group and ungroup nodes. We show that
Reconceptualizing the classification of PNAS articles
Airoldi, Edoardo M.; Erosheva, Elena A.; Fienberg, Stephen E.; Joutard, Cyrille; Love, Tanzy; Shringarpure, Suyash
2010-01-01
PNAS article classification is rooted in long-standing disciplinary divisions that do not necessarily reflect the structure of modern scientific research. We reevaluate that structure using latent pattern models from statistical machine learning, also known as mixed-membership models, that identify semantic structure in co-occurrence of words in the abstracts and references. Our findings suggest that the latent dimensionality of patterns underlying PNAS research articles in the Biological Sciences is only slightly larger than the number of categories currently in use, but it differs substantially in the content of the categories. Further, the number of articles that are listed under multiple categories is only a small fraction of what it should be. These findings together with the sensitivity analyses suggest ways to reconceptualize the organization of papers published in PNAS. PMID:21078953
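Latent Dirichlet allocation is the best-known mixed-membership model of this kind, though not necessarily the exact model the authors used; a toy scikit-learn sketch (the four mini-"abstracts" and the topic count are placeholders) shows how per-article membership weights over latent topics are obtained:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "protein folding dynamics and the energy landscape",
    "transcription factor binding regulates gene expression",
    "dark matter simulations of galaxy formation",
    "protein binding dynamics in transcription complexes",
]
counts = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
theta = lda.transform(counts)   # per-article mixed-membership weights over topics
print(theta.round(2))           # rows sum to 1: an article can belong to several topics
```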
Fuzzy Petri nets to model vision system decisions within a flexible manufacturing system
NASA Astrophysics Data System (ADS)
Hanna, Moheb M.; Buck, A. A.; Smith, R.
1994-10-01
The paper presents a Petri net approach to modelling, monitoring and control of the behavior of an FMS cell. The FMS cell described comprises a pick and place robot, vision system, CNC-milling machine and 3 conveyors. The work illustrates how the block diagrams in a hierarchical structure can be used to describe events at different levels of abstraction. It focuses on Fuzzy Petri nets (Fuzzy logic with Petri nets) including an artificial neural network (Fuzzy Neural Petri nets) to model and control vision system decisions and robot sequences within an FMS cell. This methodology can be used as a graphical modelling tool to monitor and control the imprecise, vague and uncertain situations, and determine the quality of the output product of an FMS cell.
Playing biology's name game: identifying protein names in scientific text.
Hanisch, Daniel; Fluck, Juliane; Mevissen, Heinz-Theodor; Zimmer, Ralf
2003-01-01
A growing body of work is devoted to the extraction of protein or gene interaction information from the scientific literature. Yet, the basis for most extraction algorithms, i.e. the specific and sensitive recognition of protein and gene names and their numerous synonyms, has not been adequately addressed. Here we describe the construction of a comprehensive general purpose name dictionary and an accompanying automatic curation procedure based on a simple token model of protein names. We designed an efficient search algorithm to analyze all abstracts in MEDLINE in a reasonable amount of time on standard computers. The parameters of our method are optimized using machine learning techniques. Used in conjunction, these ingredients lead to good search performance. A supplementary web page is available at http://cartan.gmd.de/ProMiner/.
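A dictionary-based matcher with a simple token model can be sketched compactly. The code below is a minimal, hypothetical illustration: the tokenization rule, longest-match strategy, and the two example entries are our own simplifications, not ProMiner's curated dictionary or search algorithm.

```python
import re

def tokens(text):
    return re.findall(r"[a-z0-9]+", text.lower())   # simple token model

def build_dictionary(names):
    # map token tuples (one per synonym) to a canonical identifier
    return {tuple(tokens(syn)): ident for ident, syns in names.items() for syn in syns}

def find_names(text, dictionary, max_len=5):
    toks, hits, i = tokens(text), [], 0
    while i < len(toks):
        for L in range(min(max_len, len(toks) - i), 0, -1):   # longest match first
            ident = dictionary.get(tuple(toks[i:i + L]))
            if ident:
                hits.append((ident, " ".join(toks[i:i + L])))
                i += L
                break
        else:
            i += 1
    return hits

names = {"P53_HUMAN": ["p53", "tumor protein 53"], "MK01_HUMAN": ["erk 2", "mapk1"]}
print(find_names("Phosphorylated ERK-2 interacts with p53.", build_dictionary(names)))
```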
Semantics-Based Composition of Integrated Cardiomyocyte Models Motivated by Real-World Use Cases
Neal, Maxwell L.; Carlson, Brian E.; Thompson, Christopher T.; James, Ryan C.; Kim, Karam G.; Tran, Kenneth; Crampin, Edmund J.; Cook, Daniel L.; Gennari, John H.
2015-01-01
Semantics-based model composition is an approach for generating complex biosimulation models from existing components that relies on capturing the biological meaning of model elements in a machine-readable fashion. This approach allows the user to work at the biological rather than computational level of abstraction and helps minimize the amount of manual effort required for model composition. To support this compositional approach, we have developed the SemGen software, and here report on SemGen’s semantics-based merging capabilities using real-world modeling use cases. We successfully reproduced a large, manually-encoded, multi-model merge: the “Pandit-Hinch-Niederer” (PHN) cardiomyocyte excitation-contraction model, previously developed using CellML. We describe our approach for annotating the three component models used in the PHN composition and for merging them at the biological level of abstraction within SemGen. We demonstrate that we were able to reproduce the original PHN model results in a semi-automated, semantics-based fashion and also rapidly generate a second, novel cardiomyocyte model composed using an alternative, independently-developed tension generation component. We discuss the time-saving features of our compositional approach in the context of these merging exercises, the limitations we encountered, and potential solutions for enhancing the approach. PMID:26716837
Cañada, Andres; Rabal, Obdulia; Oyarzabal, Julen; Valencia, Alfonso
2017-01-01
Abstract A considerable effort has been devoted to retrieve systematically information for genes and proteins as well as relationships between them. Despite the importance of chemical compounds and drugs as a central bio-entity in pharmacological and biological research, only a limited number of freely available chemical text-mining/search engine technologies are currently accessible. Here we present LimTox (Literature Mining for Toxicology), a web-based online biomedical search tool with special focus on adverse hepatobiliary reactions. It integrates a range of text mining, named entity recognition and information extraction components. LimTox relies on machine-learning, rule-based, pattern-based and term lookup strategies. This system processes scientific abstracts, a set of full text articles and medical agency assessment reports. Although the main focus of LimTox is on adverse liver events, it enables also basic searches for other organ level toxicity associations (nephrotoxicity, cardiotoxicity, thyrotoxicity and phospholipidosis). This tool supports specialized search queries for: chemical compounds/drugs, genes (with additional emphasis on key enzymes in drug metabolism, namely P450 cytochromes—CYPs) and biochemical liver markers. The LimTox website is free and open to all users and there is no login requirement. LimTox can be accessed at: http://limtox.bioinfo.cnio.es PMID:28531339
PDF text classification to leverage information extraction from publication reports.
Bui, Duy Duc An; Del Fiol, Guilherme; Jonnalagadda, Siddhartha
2016-06-01
Data extraction from original study reports is a time-consuming, error-prone process in systematic review development. Information extraction (IE) systems have the potential to assist humans in the extraction task; however, the majority of IE systems were not designed to work on Portable Document Format (PDF) documents, an important and common extraction source for systematic review. In a PDF document, narrative content is often mixed with publication metadata or semi-structured text, which adds challenges for the underlying natural language processing algorithm. Our goal is to categorize PDF texts for strategic use by IE systems. We used an open-source tool to extract raw texts from a PDF document and developed a text classification algorithm that follows a multi-pass sieve framework to automatically classify PDF text snippets (for brevity, texts) into TITLE, ABSTRACT, BODYTEXT, SEMISTRUCTURE, and METADATA categories. To validate the algorithm, we developed a gold standard of PDF reports that were included in the development of previous systematic reviews by the Cochrane Collaboration. In a two-step procedure, we evaluated (1) classification performance, compared with a machine learning classifier, and (2) the effects of the algorithm on an IE system that extracts clinical outcome mentions. The multi-pass sieve algorithm achieved an accuracy of 92.6%, which was 9.7% (p<0.001) higher than the best-performing machine learning classifier, which used a logistic regression algorithm. F-measure improvements were observed in the classification of TITLE (+15.6%), ABSTRACT (+54.2%), BODYTEXT (+3.7%), SEMISTRUCTURE (+34%), and METADATA (+14.2%). In addition, use of the algorithm to filter semi-structured texts and publication metadata improved performance of the outcome extraction system (F-measure +4.1%, p=0.002). It also reduced the number of sentences to be processed by 44.9% (p<0.001), which corresponds to a processing time reduction of 50% (p=0.005). The rule-based multi-pass sieve framework can be used effectively in categorizing texts extracted from PDF documents. Text classification is an important prerequisite step to leverage information extraction from PDF documents. Copyright © 2016 Elsevier Inc. All rights reserved.
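The essence of a multi-pass sieve is an ordered list of high-precision rules, where each pass only labels snippets that earlier passes left unassigned. A toy sketch of that control flow (the rules themselves are invented placeholders, far simpler than the published algorithm):

```python
import re

RULES = [  # ordered sieve: earlier, higher-precision passes win
    ("METADATA", lambda i, s: bool(re.search(r"doi:|©|copyright|vol\. \d+", s, re.I))),
    ("TITLE", lambda i, s: i == 0 and len(s.split()) < 25),
    ("ABSTRACT", lambda i, s: s.lower().lstrip().startswith("abstract")),
    ("SEMISTRUCTURE", lambda i, s: s.count("\t") >= 2
        or bool(re.match(r"(table|figure) \d", s.strip(), re.I))),
]

def classify(snippets):
    labels = ["BODYTEXT"] * len(snippets)          # default category
    unassigned = set(range(len(snippets)))
    for label, rule in RULES:                      # one pass per rule
        for i in list(unassigned):
            if rule(i, snippets[i]):
                labels[i] = label
                unassigned.discard(i)
    return labels

print(classify(["A short paper title", "Abstract We study...", "Table 1: results"]))
```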
Pfeiffenberger, Erik; Chaleil, Raphael A.G.; Moal, Iain H.
2017-01-01
ABSTRACT Reliable identification of near-native poses of docked protein-protein complexes is still an unsolved problem. The intrinsic heterogeneity of protein-protein interactions is challenging for traditional biophysical or knowledge-based potentials, and the identification of many false positive binding sites is not unusual. Often, ranking protocols are based on initial clustering of docked poses followed by the application of an energy function to rank each cluster according to its lowest-energy member. Here, we present an approach to cluster ranking based not on a single molecular descriptor (e.g., an energy function) but on a large number of descriptors that are integrated in a machine learning model, whereby an extremely randomized tree classifier based on 109 molecular descriptors is trained. The protocol first locally enriches clusters with additional poses; the clusters are then characterized using features describing the distribution of molecular descriptors within the cluster, which are combined into a pairwise cluster comparison model to discriminate near-native from incorrect clusters. The results show that our approach is able to identify clusters containing near-native protein-protein complexes. In addition, we present an analysis of the descriptors with respect to their power to discriminate near-native from incorrect clusters and how data transformations and recursive feature elimination can improve the ranking performance. Proteins 2017; 85:528-543. © 2016 Wiley Periodicals, Inc. PMID:27935158
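The cluster-ranking idea can be sketched compactly: summarize the distribution of descriptors within each cluster, then train an extremely randomized tree classifier on those summaries. The synthetic data, labels, and the particular summary statistics below are placeholder assumptions, not the paper's feature set or pairwise comparison model:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def cluster_features(desc):
    # desc: (n_poses_in_cluster, n_descriptors); summarize the distribution
    return np.hstack([desc.mean(0), desc.std(0), desc.min(0),
                      np.percentile(desc, 10, 0)])

rng = np.random.default_rng(1)
clusters = [rng.normal(size=(30, 109)) for _ in range(200)]   # synthetic descriptor sets
X = np.vstack([cluster_features(c) for c in clusters])
y = rng.integers(0, 2, size=200)                              # 1 = contains a near-native pose
clf = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X, y)
ranking = np.argsort(-clf.predict_proba(X)[:, 1])             # best clusters first
```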
Dynamic partial reconfiguration of logic controllers implemented in FPGAs
NASA Astrophysics Data System (ADS)
Bazydło, Grzegorz; Wiśniewski, Remigiusz
2016-09-01
Technological progress in recent years has produced digital circuits containing millions of logic gates with the capability for reprogramming and reconfiguring. On the one hand this provides unprecedented computational power, but on the other hand the modelled systems are becoming increasingly complex, hierarchical and concurrent. Therefore, abstract modelling supported by Computer Aided Design tools becomes a very important task. Even the higher consumption of basic electronic components seems acceptable, because chip manufacturing costs tend to fall over time. The paper presents a modelling approach for logic controllers using the Unified Modelling Language (UML). Thanks to the Model Driven Development approach, starting with a UML state machine model, through the construction of an intermediate Hierarchical Concurrent Finite State Machine model, a collection of Verilog files is created. The system description generated in the hardware description language can be synthesized and implemented in reconfigurable devices, such as FPGAs. Modular specification of the prototyped controller permits further dynamic partial reconfiguration of the prototyped system. The idea is based on exchanging the functionality of the already implemented controller without stopping the FPGA device: a part (for example, a single module) of the logic controller is replaced by another version (called a context), while the rest of the system keeps running. The method is illustrated by a practical example, an exemplary Home Area Network system.
Vaughan, Adam; Bohac, Stanislav V
2015-10-01
Fuel-efficient Homogeneous Charge Compression Ignition (HCCI) engine combustion timing predictions must contend with non-linear chemistry, non-linear physics, period doubling bifurcation(s), turbulent mixing, model parameters that can drift day-to-day, and air-fuel mixture state information that cannot typically be resolved on a cycle-to-cycle basis, especially during transients. In previous work, an abstract cycle-to-cycle mapping function coupled with ϵ-Support Vector Regression was shown to predict experimentally observed cycle-to-cycle combustion timing over a wide range of engine conditions, despite some of the aforementioned difficulties. The main limitation of the previous approach was that a partially acausal, randomly sampled training dataset was used to train proof-of-concept offline predictions. The objective of this paper is to address this limitation by proposing a new online adaptive Extreme Learning Machine (ELM) extension named Weighted Ring-ELM. This extension enables fully causal combustion timing predictions at randomly chosen engine set points, and is shown to achieve results that are as good as or better than the previous offline method. The broader objective of this approach is to enable a new class of real-time model predictive control strategies for high-variability HCCI and, ultimately, to bring HCCI's low engine-out NOx and reduced CO2 emissions to production engines. Copyright © 2015 Elsevier Ltd. All rights reserved.
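As background, an online ELM keeps a fixed random hidden layer and updates only the linear output weights, which admits a weighted recursive least-squares update. Below is a minimal sketch of that idea, not the Weighted Ring-ELM algorithm itself; the weighting scheme, sizes, and feature vector are simplified assumptions:

```python
import numpy as np

class OnlineELM:
    """Minimal online extreme learning machine: random hidden layer,
    weighted recursive least-squares update of the output weights."""
    def __init__(self, n_in, n_hidden=100, lam=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.P = np.eye(n_hidden) / lam        # inverse regularized covariance
        self.beta = np.zeros(n_hidden)

    def _h(self, x):
        return np.tanh(x @ self.W + self.b)

    def update(self, x, y, weight=1.0):        # one engine cycle
        h = self._h(x)
        Ph = self.P @ h
        k = weight * Ph / (1.0 + weight * h @ Ph)   # weighted RLS gain
        self.beta += k * (y - h @ self.beta)
        self.P -= np.outer(k, Ph)

    def predict(self, x):
        return self._h(x) @ self.beta

# per-cycle usage: predict combustion timing, then update with the measured value
elm = OnlineELM(n_in=4)
x_cycle = np.array([0.2, -0.1, 0.05, 0.3])     # placeholder engine-state features
pred = elm.predict(x_cycle)
elm.update(x_cycle, y=1.7, weight=0.9)         # down-weight older/less trusted samples
```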
Khomtchouk, Bohdan B; Weitz, Edmund; Karp, Peter D; Wahlestedt, Claes
2018-01-01
Abstract We present a rationale for expanding the presence of the Lisp family of programming languages in bioinformatics and computational biology research. Put simply, Lisp-family languages enable programmers to more quickly write programs that run faster than in other languages. Languages such as Common Lisp, Scheme and Clojure facilitate the creation of powerful and flexible software that is required for complex and rapidly evolving domains like biology. We will point out several important key features that distinguish languages of the Lisp family from other programming languages, and we will explain how these features can aid researchers in becoming more productive and creating better code. We will also show how these features make these languages ideal tools for artificial intelligence and machine learning applications. We will specifically stress the advantages of domain-specific languages (DSLs): languages that are specialized to a particular area, and thus not only facilitate easier research problem formulation, but also aid in the establishment of standards and best programming practices as applied to the specific research field at hand. DSLs are particularly easy to build in Common Lisp, the most comprehensive Lisp dialect, which is commonly referred to as the ‘programmable programming language’. We are convinced that Lisp grants programmers unprecedented power to build increasingly sophisticated artificial intelligence systems that may ultimately transform machine learning and artificial intelligence research in bioinformatics and computational biology. PMID:28040748
Discovering governing equations from data by sparse identification of nonlinear dynamics
NASA Astrophysics Data System (ADS)
Brunton, Steven
The ability to discover physical laws and governing equations from data is one of humankind's greatest intellectual achievements. A quantitative understanding of dynamic constraints and balances in nature has facilitated rapid development of knowledge and enabled advanced technology, including aircraft, combustion engines, satellites, and electrical power. There are many more critical data-driven problems, such as understanding cognition from neural recordings, inferring patterns in climate, determining stability of financial markets, predicting and suppressing the spread of disease, and controlling turbulence for greener transportation and energy. With abundant data and elusive laws, data-driven discovery of dynamics will continue to play an increasingly important role in these efforts. This work develops a general framework to discover the governing equations underlying a dynamical system simply from data measurements, leveraging advances in sparsity-promoting techniques and machine learning. The resulting models are parsimonious, balancing model complexity with descriptive ability while avoiding overfitting. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions. This perspective, combining dynamical systems with machine learning and sparse sensing, is explored with the overarching goal of real-time closed-loop feedback control of complex systems. This is joint work with Joshua L. Proctor and J. Nathan Kutz. Video Abstract: https://www.youtube.com/watch?v=gSCa78TIldg
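The core of this approach (sparse identification of nonlinear dynamics, SINDy) is sequential thresholded least squares over a library of candidate functions. A compact sketch, with a toy second-order polynomial library for a two-state system; the library choice and the synthetic data are illustrative:

```python
import numpy as np

def sindy(Theta, Xdot, threshold=0.1, iters=10):
    """Sequential thresholded least squares: sparse Xi with Xdot ~ Theta @ Xi."""
    Xi = np.linalg.lstsq(Theta, Xdot, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(Xdot.shape[1]):           # refit the surviving terms per state
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], Xdot[:, k], rcond=None)[0]
    return Xi

def poly2(X):
    """Candidate library [1, x, y, x^2, xy, y^2] for a two-state system."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

# toy data from the linear system xdot = -0.1x + 2y, ydot = -2x - 0.1y
t = np.linspace(0, 10, 2000)
X = np.column_stack([np.exp(-0.1 * t) * np.cos(2 * t),
                     -np.exp(-0.1 * t) * np.sin(2 * t)])
Xdot = np.gradient(X, t, axis=0)
print(sindy(poly2(X), Xdot, threshold=0.05))     # recovers two linear terms per state
```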
Lapborisuth, Pawan; Zhang, Xian; Noah, Adam; Hirsch, Joy
2017-01-01
Abstract. Neurofeedback is a method for using neural activity displayed on a computer to regulate one’s own brain function and has been shown to be a promising technique for training individuals to interact with brain–machine interface applications such as neuroprosthetic limbs. The goal of this study was to develop a user-friendly functional near-infrared spectroscopy (fNIRS)-based neurofeedback system to upregulate neural activity associated with motor imagery, which is frequently used in neuroprosthetic applications. We hypothesized that fNIRS neurofeedback would enhance activity in motor cortex during a motor imagery task. Twenty-two participants performed active and imaginary right-handed squeezing movements using an elastic ball while wearing a 98-channel fNIRS device. Neurofeedback traces representing localized cortical hemodynamic responses were graphically presented to participants in real time. Participants were instructed to observe this graphical representation and use the information to increase signal amplitude. Neural activity was compared during active and imaginary squeezing with and without neurofeedback. Active squeezing resulted in activity localized to the left premotor and supplementary motor cortex, and activity in the motor cortex was found to be modulated by neurofeedback. Activity in the motor cortex was also shown in the imaginary squeezing condition only in the presence of neurofeedback. These findings demonstrate that real-time fNIRS neurofeedback is a viable platform for brain–machine interface applications. PMID:28680906
An adaptive process-based cloud infrastructure for space situational awareness applications
NASA Astrophysics Data System (ADS)
Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce
2014-06-01
Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increased demand for contextual understanding that necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate that can meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical Virtual Machine (VM) abstraction is on a per-operating-system basis, which is too low-level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper. In addition, the design rationale is detailed and a prototype is examined. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of more granular and flexible cloud computing resource allocation are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.
Spinazola, J.M.; Hansen, C.V.; Underwood, E.J.; Kenny, J.F.; Wolf, R.J.
1987-01-01
Machine-readable geohydrologic data for Precambrian through Cretaceous rocks in Kansas were compiled as part of the USGS Central Midwest Regional Aquifer System Analysis. The geohydrologic data include log, water quality, water level, hydraulics, and water use information. The log data consist of depths to the top of selected geologic formations determined from about 275 sites with geophysical logs and formation lithologies from about 190 sites with lithologic logs. The water quality data consist of about 10,800 analyses, of which about 1,200 are proprietary. The water level data consist of about 4,480 measured water levels and about 4,175 equivalent freshwater hydraulic heads, of which about 3,745 are proprietary. The hydraulics data consist of results from about 30 specific capacity tests and about 20 aquifer tests, and interpretations of about 285 drill stem tests (of which about 60 are proprietary) and about 75 core-sample analyses. The water use data consist of estimates of freshwater withdrawals from Precambrian through Cretaceous geohydrologic units for each of the 105 counties in Kansas. Average yearly withdrawals were estimated for each decade from 1940 to 1980. All the log and water use data and the nonproprietary parts of the water quality, water level, and hydraulics data are available on magnetic tape from the USGS office in Lawrence, Kansas. (Author's abstract)
NASA Astrophysics Data System (ADS)
Boussetoua, Mohammed
During winter, the climate in northern regions is known for its icing and freezing conditions. Emergency services often use helicopters to reach isolated locations, but the difficult conditions generally experienced in the North, particularly in Quebec, may prevent rescuers from intervening. The main obstacle to such operations is the lack of a de-icing system for small helicopter blades. The overall objective of the project is the research, development, design and manufacture of a system composed of an on-board low-speed rotating generator and heating elements. It draws part of the power supplied by the turbine through the axis of the main rotor of the small aircraft and converts it to electrical power for the heating elements. This innovation will make it possible to fly safely everywhere throughout the year and protect the lives of users even in the worst weather conditions. First, the research focuses on identifying the problems related to the use of in-flight icing protection systems on the main rotor blades of different aircraft. In this phase, we specifically focused on the difficulties encountered by aircraft companies using the existing, operational icing protection systems. Main rotor blades are difficult to protect on helicopters. Several systems have been considered by helicopter manufacturers, such as electrothermal systems, pneumatic systems, and anti-icing fluids. In the current state of technological knowledge, all helicopters certified to fly in icing conditions use electrothermal systems to protect their main rotor. Small helicopters, the focus of this work, are forbidden to fly in icing conditions for lack of an energy source to operate such systems. An electrothermal system was therefore chosen in this thesis to protect the main rotor blades of small aircraft in flight. The second part of the thesis concerns the power source feeding the heating system. In recent years, numerous studies have addressed the development of electromechanical converters for various applications, such as transport by road, rail or air. The development of new low-speed, low-weight electric machines with a very high degree of compactness has become a very promising alternative, and this project strongly interests many industries in the field of air transport. The transverse flux machine is considered a compact structure with a better power-to-mass ratio than other electrical machines. The design of the transverse flux machine was the subject of an electromagnetic study, and an analytical study helped determine the overall dimensions of the machine. This was followed by a validation phase of the analytical model using numerical simulations. These two studies were intended to determine how the characteristics of the transverse flux machine vary with the geometric dimensions of its active parts. From the calculations made using the analytical and numerical models, a prototype of the transverse flux machine (600 W, 320 RPM) was designed and manufactured in the AMIL laboratory at the Universite du Quebec a Chicoutimi (UQAC). A bench test was conducted to compare the theoretical and experimental results, and the measurements obtained on this prototype were compared with the theoretical predictions. This phase of the study demonstrates, satisfactorily, the reliability of the theoretical models developed.
Finally, a new configuration of this machine has been proposed. Numerical simulation results of this structure are particularly encouraging and require further investigations. For logistical and financial reasons, the prototype of this configuration has not been manufactured. (Abstract shortened by UMI.)
Implementation and test of a coastal forecasting system for wind waves in the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Inghilesi, R.; Catini, F.; Orasi, A.; Corsini, S.
2010-09-01
A coastal forecasting system has been implemented in order to provide coverage of the whole Mediterranean Sea and of several enclosed coastal areas as well. The problem is to achieve a good definition of the small-scale coastal processes which affect the propagation of waves toward the shores, while retaining the possibility of selecting any of the possible coastal areas in the whole Mediterranean Sea. The system is built on a very high resolution parallel implementation of the WAM and SWAN models, one-way chain-nested in key areas. The system will shortly be part of the ISPRA SIMM forecasting system, which has been operative since 2001. The SIMM system makes available the high resolution wind fields (0.1/0.1 deg) used in the coastal system. The coastal system is being tested on several Italian coastal areas (Ligurian Sea, Lower Tyrrhenian Sea, Sicily Channel, Lower Adriatic Sea) in order to optimise the numerics of the coastal processes and to verify the results in shallow waters and complex bathymetries. The results of the comparison between hindcast and buoy data in very shallow (14 m depth) and deep sea (150 m depth) will be shown for several episodes in the upper Tyrrhenian Sea.
NASA Technical Reports Server (NTRS)
Hasselmann, Klaus; Hasselmann, Susanne; Bauer, Eva; Bruening, Claus; Lehner, Susanne; Graber, Hans; Lionello, Piero
1988-01-01
The applicability of ERS-1 wind and wave data for wave models was studied using the WAM third generation wave model and SEASAT altimeter, scatterometer and SAR data. A series of global wave hindcasts was made for the surface stress and surface wind fields by assimilation of scatterometer data for the full 96-day SEASAT period, and also for two wind field analyses for shorter periods by assimilation with the higher resolution ECMWF T63 model and by subjective analysis methods. It is found that wave models respond very sensitively to inconsistencies in wind field analyses and therefore provide a valuable data validation tool. Comparisons between SEASAT SAR image spectra and theoretical SAR spectra derived from the hindcast wave spectra by Monte Carlo simulations yield good overall agreement for 32 cases representing a wide variety of wave conditions. It is concluded that SAR wave imaging is sufficiently well understood to apply SAR image spectra with confidence for wave studies if supported by realistic wave models and theoretical computations of the strongly nonlinear mapping of the wave spectrum into the SAR image spectrum. A closed nonlinear integral expression for this spectral mapping relation is derived which avoids the inherent statistical errors of Monte Carlo computations and may prove to be more efficient numerically.
SWH trends and links to large scale teleconnection patterns in the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Lionello, P.; Pino, C.; Galati, M. B.
2010-09-01
This study analyzes the SWH field in the Mediterranean Sea using a multidecadal simulation (1958-2001) carried out using the WAM (WAve Model) forced by the REMO-HIPOCAS wind fields. The simulations are validated against satellite altimeter data. Several mid-latitude patterns are linked to the SWH field in the Mediterranean. Considering the mean monthly SWH values, the EA (Eastern Atlantic) pattern exerts the largest influence, while the NAO and other patterns have a smaller but comparable effect. Severe SWH conditions have been characterized using the 95th percentile of daily SWH maxima. The NAO is important mainly for high SWH conditions in winter, with significant correlation in December, January and March, but EA, SCA (SCAndinavian) and EA-WR (Eastern Atlantic-Western Russia) also play an important role. In general, both high and mean SWH values are modulated by several patterns, with important variability in space and at the monthly level, so that no single pattern can be attributed a dominant role along the whole annual cycle, and all the mentioned patterns are important for at least a few months in the year. Significant trends of SWH are present only in sparse areas and suggest mostly a minor decrease of storm intensity. The statistics of extremes and high SWH values are substantially steady during the second half of the 20th century.
NASA Astrophysics Data System (ADS)
Martucci, G.; Carniel, S.; Chiggiato, J.; Sclavo, M.; Lionello, P.; Galati, M. B.
2009-09-01
The study is a statistical analysis of sea-state time series derived using the wave model WAM forced by the ERA-40 dataset in selected areas near the Italian coasts. For the period 1 January 1958 to 31 December 1999 the analysis yields: (i) the existence of a negative trend in the annual- and winter-averaged sea state heights; (ii) the existence of a turning point in the late 1970s in the annual-averaged trend of sea state heights at a site in the Northern Adriatic Sea; (iii) the overall absence of a significant trend in the annual-averaged mean durations of sea states over thresholds; (iv) the assessment of the extreme values on a time scale of a thousand years. The analysis uses two methods to obtain samples of extremes from the independent sea states: the r-largest annual maxima and the peak-over-threshold. The two methods show statistical differences in retrieving the return values and, more generally, in describing the significant wave field. The study shows the existence of decadal negative trends in the significant wave heights and thereby conveys useful information on the wave climatology of the Italian seas during the second half of the 20th century.
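For the peak-over-threshold method, return levels follow from a generalized Pareto fit to the threshold exceedances. A minimal SciPy sketch (it assumes the exceedances have already been declustered into independent sea states, that the fitted shape parameter is nonzero, and the synthetic data are placeholders):

```python
import numpy as np
from scipy.stats import genpareto

def pot_return_level(swh, threshold, record_years, return_period_years):
    """Peak-over-threshold return level from a generalized Pareto fit."""
    exceed = swh[swh > threshold] - threshold
    xi, _, sigma = genpareto.fit(exceed, floc=0.0)     # shape, loc (fixed 0), scale
    rate = len(exceed) / record_years                  # mean exceedances per year
    m = return_period_years * rate                     # expected events in the period
    return threshold + sigma / xi * (m**xi - 1.0)      # valid for xi != 0

rng = np.random.default_rng(0)
swh = rng.gumbel(2.0, 0.5, size=44 * 365)              # synthetic daily SWH maxima (m)
print(pot_return_level(swh, threshold=3.5, record_years=44, return_period_years=1000))
```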
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Kai; Qi, Junjian; Kang, Wei
2016-08-01
Growing penetration of intermittent resources such as renewable generations increases the risk of instability in a power grid. This paper introduces the concept of observability and its computational algorithms for a power grid monitored by the wide-area measurement system (WAMS) based on synchrophasors, e.g. phasor measurement units (PMUs). The goal is to estimate real-time states of generators, especially for potentially unstable trajectories, the information that is critical for the detection of rotor angle instability of the grid. The paper studies the number and siting of synchrophasors in a power grid so that the state of the system can be accurately estimated in the presence of instability. An unscented Kalman filter (UKF) is adopted as a tool to estimate the dynamic states that are not directly measured by synchrophasors. The theory and its computational algorithms are illustrated in detail by using a 9-bus 3-generator power system model and then tested on a 140-bus 48-generator Northeast Power Coordinating Council power grid model. Case studies on those two systems demonstrate the performance of the proposed approach using a limited number of synchrophasors for dynamic state estimation for stability assessment and its robustness against moderate inaccuracies in model parameters.
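To make the UKF machinery concrete, here is a self-contained sketch of one predict/update cycle applied to a single-machine swing equation with a PMU-derived electrical-power measurement. All parameter values are illustrative placeholders, and the filter is a simplified textbook variant (sigma points are not redrawn after the predict step), not the paper's implementation:

```python
import numpy as np

def ukf_step(x, P, z, fx, hx, Q, R, alpha=1.0, beta=2.0, kappa=1.0):
    """One predict/update cycle of a simplified unscented Kalman filter."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)
    X = np.vstack([x, x + L.T, x - L.T])         # 2n+1 sigma points (one per row)
    Wm = np.full(2 * n + 1, 0.5 / (n + lam)); Wm[0] = lam / (n + lam)
    Wc = Wm.copy(); Wc[0] += 1 - alpha**2 + beta
    Xp = np.array([fx(s) for s in X])            # propagate through the dynamics
    xp = Wm @ Xp
    Pp = Q + sum(w * np.outer(d, d) for w, d in zip(Wc, Xp - xp))
    Zp = np.array([hx(s) for s in Xp])           # map into measurement space
    zp = Wm @ Zp
    Pz = R + sum(w * np.outer(d, d) for w, d in zip(Wc, Zp - zp))
    Pxz = sum(w * np.outer(a, b) for w, a, b in zip(Wc, Xp - xp, Zp - zp))
    K = Pxz @ np.linalg.inv(Pz)                  # Kalman gain
    return xp + K @ (z - zp), Pp - K @ Pz @ K.T

# single-machine swing equation, Euler-discretized; states: rotor angle, speed deviation
dt, M, D, Pm, E, V, Xl = 0.01, 6.0, 0.8, 0.9, 1.05, 1.0, 0.5
fx = lambda s: np.array([s[0] + dt * s[1],
                         s[1] + dt * (Pm - E * V / Xl * np.sin(s[0]) - D * s[1]) / M])
hx = lambda s: np.array([E * V / Xl * np.sin(s[0])])   # PMU-derived electrical power

x, P = np.array([0.2, 0.0]), 0.1 * np.eye(2)
Q, R = 1e-5 * np.eye(2), np.array([[1e-3]])
z = hx(np.array([0.35, 0.0]))                    # one synthetic measurement sample
x, P = ukf_step(x, P, z, fx, hx, Q, R)
```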
Mesoscale Simulations of Gravity Waves During the 2008-2009 Major Stratospheric Sudden Warming
NASA Technical Reports Server (NTRS)
Limpasuvan, Varavut; Alexander, M. Joan; Orsolini, Yvan J.; Wu, Dong L.; Xue, Ming; Richter, Jadwiga H.; Yamashita, Chihoko
2011-01-01
A series of 24 h mesoscale simulations (of 10 km horizontal and 400 m vertical resolution) are performed to examine the characteristics and forcing of gravity waves (GWs) relative to planetary waves (PWs) during the 2008-2009 major stratospheric sudden warming (SSW). Just prior to SSW occurrence, widespread westward propagating GWs are found along the vortex's edge and associated predominantly with major topographical features and strong near-surface winds. Momentum forcing due to GWs surpasses PW forcing in the upper stratosphere and tends to decelerate the polar westerly jet in excess of 30 m/s/d. With SSW onset, PWs dominate the momentum forcing, providing decelerative effects in excess of 50 m/s/d throughout the upper polar stratosphere. GWs related to topography become less widespread, largely due to incipient wind reversal as the vortex starts to elongate. During the SSW maturation and early recovery, the polar vortex eventually splits and both wave signatures and forcing greatly subside. Nonetheless, during SSW, westward and eastward propagating GWs are found in the polar region and may be generated in situ by flow adjustment processes in the stratosphere or by secondary GW breaking. The simulated large-scale features agree well with those resolved in satellite observations and analysis products.
Korhonen, Anna; Silins, Ilona; Sun, Lin; Stenius, Ulla
2009-01-01
Background One of the most neglected areas of biomedical Text Mining (TM) is the development of systems based on carefully assessed user needs. We have recently investigated the user needs of an important task yet to be tackled by TM -- Cancer Risk Assessment (CRA). Here we take the first step towards the development of TM technology for the task: identifying and organizing the scientific evidence required for CRA in a taxonomy which is capable of supporting extensive data gathering from biomedical literature. Results The taxonomy is based on expert annotation of 1297 abstracts downloaded from relevant PubMed journals. It classifies 1742 unique keywords found in the corpus to 48 classes which specify core evidence required for CRA. We report promising results with inter-annotator agreement tests and automatic classification of PubMed abstracts to taxonomy classes. A simple user test is also reported in a near real-world CRA scenario which demonstrates along with other evaluation that the resources we have built are well-defined, accurate, and applicable in practice. Conclusion We present our annotation guidelines and a tool which we have designed for expert annotation of PubMed abstracts. A corpus annotated for keywords and document relevance is also presented, along with the taxonomy which organizes the keywords into classes defining core evidence for CRA. As demonstrated by the evaluation, the materials we have constructed provide a good basis for classification of CRA literature along multiple dimensions. They can support current manual CRA as well as facilitate the development of an approach based on TM. We discuss extending the taxonomy further via manual and machine learning approaches and the subsequent steps required to develop TM technology for the needs of CRA. PMID:19772619
Text Mining for Protein Docking
Badal, Varsha D.; Kundrotas, Petras J.; Vakser, Ilya A.
2015-01-01
The rapidly growing amount of publicly available information from biomedical research is readily accessible on the Internet, providing a powerful resource for predictive biomolecular modeling. The accumulated data on experimentally determined structures transformed structure prediction of proteins and protein complexes. Instead of exploring the enormous search space, predictive tools can simply proceed to the solution based on similarity to the existing, previously determined structures. A similar major paradigm shift is emerging due to the rapidly expanding amount of information, other than experimentally determined structures, which still can be used as constraints in biomolecular structure prediction. Automated text mining has been widely used in recreating protein interaction networks, as well as in detecting small ligand binding sites on protein structures. Combining and expanding these two well-developed areas of research, we applied text mining to structural modeling of protein-protein complexes (protein docking). Protein docking can be significantly improved when constraints on the docking mode are available. We developed a procedure that retrieves published abstracts on a specific protein-protein interaction and extracts information relevant to docking. The procedure was assessed on protein complexes from Dockground (http://dockground.compbio.ku.edu). The results show that correct information on binding residues can be extracted for about half of the complexes. The amount of irrelevant information was reduced by conceptual analysis of a subset of the retrieved abstracts, based on the bag-of-words (features) approach. Support Vector Machine models were trained and validated on the subset. The remaining abstracts were filtered by the best-performing models, which decreased the irrelevant information for ~25% of the complexes in the dataset. The extracted constraints were incorporated in the docking protocol and tested on the Dockground unbound benchmark set, significantly increasing the docking success rate. PMID:26650466
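The abstract-filtering step described above is essentially supervised text classification; a toy scikit-learn sketch with a bag-of-words (TF-IDF) representation and a linear SVM illustrates the idea. The four training snippets and labels are invented placeholders, not the Dockground corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# labels: 1 = mentions binding/interface residues, 0 = irrelevant to docking
train_texts = ["mutation of residue W62 abolishes binding",
               "the gene is conserved in yeast",
               "alanine scanning implicates the interface helix",
               "expression increased twofold under stress"]
train_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_texts, train_labels)
print(model.predict(["docking of the mutant suggests residue contacts"]))
```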
Unlocking the Power of Big Data at the National Institutes of Health
Coakley, Meghan F.; Leerkes, Maarten R.; Barnett, Jason; Gabrielian, Andrei E.; Noble, Karlynn; Weber, M. Nick
2013-01-01
Abstract The era of “big data” presents immense opportunities for scientific discovery and technological progress, with the potential to have enormous impact on research and development in the public sector. In order to capitalize on these benefits, there are significant challenges to overcome in data analytics. The National Institute of Allergy and Infectious Diseases held a symposium entitled “Data Science: Unlocking the Power of Big Data” to create a forum for big data experts to present and share some of the creative and innovative methods to gleaning valuable knowledge from an overwhelming flood of biological data. A significant investment in infrastructure and tool development, along with more and better-trained data scientists, may facilitate methods for assimilation of data and machine learning, to overcome obstacles such as data security, data cleaning, and data integration. PMID:27442200
Bridging the Particle Physics and Big Data Worlds
NASA Astrophysics Data System (ADS)
Pivarski, James
2017-09-01
For decades, particle physicists have developed custom software because the scale and complexity of our problems were unique. In recent years, however, the "big data" industry has begun to tackle similar problems, and has developed some novel solutions. Incorporating scientific Python libraries, Spark, TensorFlow, and machine learning tools into the physics software stack can improve abstraction, reliability, and in some cases performance. Perhaps more importantly, it can free physicists to concentrate on domain-specific problems. Building bridges isn't always easy, however. Physics software and open-source software from industry differ in many incidental ways and a few fundamental ways. I will show work from the DIANA-HEP project to streamline data flow from ROOT to Numpy and Spark, to incorporate ideas of functional programming into histogram aggregation, and to develop real-time, query-style manipulations of particle data.
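As a concrete illustration of the ROOT-to-NumPy data flow mentioned above, the sketch below uses uproot, a Python library that came out of the DIANA-HEP effort; the file name and branch names are hypothetical placeholders, and the exact API shown (uproot 4 style) is an assumption about the installed version.

```python
# A minimal sketch: read columnar particle data from a ROOT file straight
# into NumPy arrays, with no C++ framework in the loop.
# "sample.root", "events", "pt" and "eta" are hypothetical names.
import uproot

tree = uproot.open("sample.root")["events"]        # open a TTree lazily
arrays = tree.arrays(["pt", "eta"], library="np")  # columnar read into NumPy
print(arrays["pt"].mean())
```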
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are 'first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
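Palm's own annotation syntax is not reproduced here; as a loose, hypothetical Python rendering of the idea of linking models, functions, and measurements, the sketch below pairs a human-provided analytical cost model with the function it describes and a runtime measurement that can validate it. All names are invented for illustration.

```python
# A hedged sketch of the model/function/measurement link, not Palm itself:
# a human-provided analytical model sits next to the code it models, and a
# measurement is captured to validate the model.
import time

def model_matmul(n):
    # Human-provided insight: naive matrix multiply costs ~2*n^3 flops.
    return 2.0 * n ** 3

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 64
a = [[1.0] * n for _ in range(n)]
t0 = time.perf_counter()
matmul(a, a)
elapsed = time.perf_counter() - t0
print(f"model: {model_matmul(n):.3g} flops, measured: {elapsed:.4f} s")
```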
Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensor calibration. (No individual items are abstracted in this volume)
Designing, programming, and optimizing a (small) quantum computer
NASA Astrophysics Data System (ADS)
Svore, Krysta
In 1982, Richard Feynman proposed to use a computer founded on the laws of quantum physics to simulate physical systems. In the more than thirty years since, quantum computers have shown promise to solve problems in number theory, chemistry, and materials science that would otherwise take longer than the lifetime of the universe to solve on an exascale classical machine. The practical realization of a quantum computer requires understanding and manipulating subtle quantum states while experimentally controlling quantum interference. It also requires an end-to-end software architecture for programming, optimizing, and implementing a quantum algorithm on the quantum device hardware. In this talk, we will introduce recent advances in connecting abstract theory to present-day real-world applications through software. We will highlight recent advancement of quantum algorithms and the challenges in ultimately performing a scalable solution on a quantum device.
Families of Graph Algorithms: SSSP Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanewala Appuhamilage, Thejaka Amila Jay; Zalewski, Marcin J.; Lumsdaine, Andrew
2017-08-28
Single-Source Shortest Paths (SSSP) is a well-studied graph problem. Examples of SSSP algorithms include the original Dijkstra’s algorithm and the parallel Δ-stepping and KLA-SSSP algorithms. In this paper, we use a novel Abstract Graph Machine (AGM) model to show that all these algorithms share a common logic and differ from one another by the order in which they perform work. We use the AGM model to thoroughly analyze the family of algorithms that arises from the common logic. We start with the basic algorithm without any ordering (Chaotic), and then we derive the existing and new algorithms by methodically exploring semantic and spatial ordering of work. Our experimental results show that the newly derived algorithms outperform the existing distributed-memory parallel algorithms, especially at higher scales.
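To make the role of work ordering concrete, the sketch below implements a simplified Δ-stepping in Python: distances are grouped into buckets of width delta, buckets are processed in order, and relaxations may refill the current bucket. The light/heavy edge split of the full algorithm is omitted for brevity, and the graph and delta value are illustrative; as delta shrinks toward zero the schedule approaches Dijkstra's, and a single unbounded bucket approaches the unordered Chaotic algorithm.

```python
# A minimal, sequential sketch of delta-stepping SSSP (nonnegative weights).
def delta_stepping(graph, source, delta):
    INF = float("inf")
    dist = {v: INF for v in graph}   # assumes every node is a key in `graph`
    dist[source] = 0.0
    buckets = {0: {source}}
    i = 0
    while buckets:
        while i not in buckets:          # advance to next non-empty bucket
            i += 1
        while i in buckets:              # relaxations may refill bucket i
            for u in buckets.pop(i):
                for v, w in graph[u]:
                    nd = dist[u] + w
                    if nd < dist[v]:
                        if dist[v] < INF:           # move v out of old bucket
                            old_b = int(dist[v] // delta)
                            if old_b in buckets:
                                buckets[old_b].discard(v)
                                if not buckets[old_b]:
                                    del buckets[old_b]
                        dist[v] = nd
                        buckets.setdefault(int(nd // delta), set()).add(v)
        i += 1
    return dist

g = {"s": [("a", 1.0), ("b", 4.0)],
     "a": [("b", 2.0), ("c", 6.0)],
     "b": [("c", 1.0)],
     "c": []}
print(delta_stepping(g, "s", delta=2.0))  # s=0.0, a=1.0, b=3.0, c=4.0
```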
Eternal Sunshine of the Spotless Machine: Protecting Privacy with Ephemeral Channels
Dunn, Alan M.; Lee, Michael Z.; Jana, Suman; Kim, Sangman; Silberstein, Mark; Xu, Yuanzhong; Shmatikov, Vitaly; Witchel, Emmett
2014-01-01
Modern systems keep long memories. As we show in this paper, an adversary who gains access to a Linux system, even one that implements secure deallocation, can recover the contents of applications’ windows, audio buffers, and data remaining in device drivers—long after the applications have terminated. We design and implement Lacuna, a system that allows users to run programs in “private sessions.” After the session is over, all memories of its execution are erased. The key abstraction in Lacuna is an ephemeral channel, which allows the protected program to talk to peripheral devices while making it possible to delete the memories of this communication from the host. Lacuna can run unmodified applications that use graphics, sound, USB input devices, and the network, with only 20 percentage points of additional CPU utilization. PMID:24755709
Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael
2015-01-01
Objectives To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data transferred consistently using the data dictionary, while 1% needed human curation. Conclusions Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596
Alien Mindscapes—A Perspective on the Search for Extraterrestrial Intelligence
2016-01-01
Abstract Advances in planetary and space sciences, astrobiology, and life and cognitive sciences, combined with developments in communication theory, bioneural computing, machine learning, and big data analysis, create new opportunities to explore the probabilistic nature of alien life. Brought together in a multidisciplinary approach, they have the potential to support an integrated and expanded Search for Extraterrestrial Intelligence (SETI), a search that includes looking for life as we do not know it. This approach will augment the odds of detecting a signal by broadening our understanding of the evolutionary and systemic components in the search for extraterrestrial intelligence (ETI), provide more targets for radio and optical SETI, and identify new ways of decoding and coding messages using universal markers. Key Words: SETI—Astrobiology—Coevolution of Earth and life—Planetary habitability and biosignatures. Astrobiology 16, 661–676. PMID:27383691
A General Sparse Tensor Framework for Electronic Structure Theory
Manzer, Samuel; Epifanovsky, Evgeny; Krylov, Anna I.; ...
2017-01-24
Linear-scaling algorithms must be developed in order to extend the domain of applicability of electronic structure theory to molecules of any desired size. However, the increasing complexity of modern linear-scaling methods makes code development and maintenance a significant challenge. A major contributor to this difficulty is the lack of robust software abstractions for handling block-sparse tensor operations. We therefore report the development of a highly efficient symbolic block-sparse tensor library in order to provide access to high-level software constructs to treat such problems. Our implementation supports arbitrary multi-dimensional sparsity in all input and output tensors. We then avoid cumbersome machine-generated code by implementing all functionality as a high-level symbolic C++ language library and demonstrate that our implementation attains very high performance for linear-scaling sparse tensor contractions.
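The core abstraction described above, sparse tensors stored as collections of dense blocks so that contractions only touch structurally non-zero pairs, can be illustrated with a small NumPy sketch. This is a conceptual toy in Python, not the paper's symbolic C++ library; the block layout and sizes are invented.

```python
# A minimal sketch of block-sparse contraction: store only non-zero dense
# blocks, keyed by block indices, and contract only matching block pairs.
import numpy as np

def block_sparse_matmul(A, B, block):
    """A, B: dicts {(row_blk, col_blk): dense array of shape (block, block)}."""
    C = {}
    for (i, k), a in A.items():
        for (k2, j), b in B.items():
            if k == k2:                        # structural match: do the work
                C[(i, j)] = C.get((i, j), np.zeros((block, block))) + a @ b
    return C

bs = 2
A = {(0, 0): np.eye(bs), (1, 2): np.ones((bs, bs))}
B = {(0, 1): np.full((bs, bs), 2.0), (2, 0): np.eye(bs)}
C = block_sparse_matmul(A, B, bs)
print(sorted(C))  # only blocks (0, 1) and (1, 0) are ever formed
```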
An overview of very high level software design methods
NASA Technical Reports Server (NTRS)
Asdjodi, Maryam; Hooper, James W.
1988-01-01
Very High Level design methods emphasize automatic transfer of requirements to formal design specifications, and/or may concentrate on automatic transformation of formal design specifications that include some semantic information of the system into machine executable form. Very high level design methods range from general domain-independent methods to approaches implementable for specific applications or domains. Applying AI techniques, abstract programming methods, domain heuristics, software engineering tools, library-based programming, and other methods, different approaches for higher-level software design are being developed. Though a given approach does not always fall exactly into any specific class, this paper provides a classification for very high level design methods, including examples for each class. These methods are analyzed and compared based on their basic approaches, strengths, and feasibility for future expansion toward automatic development of software systems.
Toward a Systematic Approach for Selection of NASA Technology Portfolios
NASA Technical Reports Server (NTRS)
Weisbin, Charles R.; Rodriguez, Guillermo; Elfes, Alberto; Smith, Jeffrey H.
2004-01-01
There is an important need for a consistent analytical foundation supporting the selection and monitoring of R&D tasks that support new system concepts that enable future NASA missions. This capability should be applicable at various degrees of abstraction, depending upon whether one is interested in formulation, development, or operations. It should also be applicable to a single project, a program comprised of a group of projects, an enterprise typically including multiple programs, and the overall agency itself. Emphasis here is on technology selection and new initiatives, but the same approach can be generalized to other applications, dealing, for example, with new system architectures, risk reduction, and task allocation among humans and machines. The purpose of this paper is to describe one such approach, which is in its early stages of implementation within NASA programs, and to discuss several illustrative examples.
Developing a PLC-friendly state machine model: lessons learned
NASA Astrophysics Data System (ADS)
Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans
2014-07-01
Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low level control, and conventional software and platforms for higher level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ... ? What are the timing and concurrency problems to be expected for the higher level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA: one that does not aim to capture all possible states of a system, but rather attempts to organize the coarse-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we've learned during the development process of such a "PLC-friendly" state machine model.
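As a language-neutral illustration of such a coarse-grained state machine, the sketch below defines a small set of states and guarded transitions of the kind that maps naturally onto an IEC 61131-3 CASE statement on one side and an OPC UA state variable on the other. The states and events are invented for illustration and are not taken from the paper.

```python
# A minimal sketch of a coarse-grained device state machine.
from enum import Enum

class State(Enum):
    IDLE = 1
    INITIALIZING = 2
    OPERATIONAL = 3
    ERROR = 4

TRANSITIONS = {
    (State.IDLE, "init"): State.INITIALIZING,
    (State.INITIALIZING, "init_done"): State.OPERATIONAL,
    (State.OPERATIONAL, "fault"): State.ERROR,
    (State.ERROR, "reset"): State.IDLE,
}

def step(state, event):
    # Unknown events leave the state unchanged, mirroring a PLC scan cycle
    # in which no transition guard fires.
    return TRANSITIONS.get((state, event), state)

s = State.IDLE
for ev in ["init", "init_done", "fault", "reset"]:
    s = step(s, ev)
    print(ev, "->", s.name)
```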
Niazi, Muhammad K. K.; Dhulekar, Nimit; Schmidt, Diane; Major, Samuel; Cooper, Rachel; Abeijon, Claudia; Gatti, Daniel M.; Kramnik, Igor; Yener, Bulent; Gurcan, Metin; Beamer, Gillian
2015-01-01
ABSTRACT Pulmonary tuberculosis (TB) is caused by Mycobacterium tuberculosis in susceptible humans. Here, we infected Diversity Outbred (DO) mice with ∼100 bacilli by aerosol to model responses in a highly heterogeneous population. Following infection, ‘supersusceptible’, ‘susceptible’ and ‘resistant’ phenotypes emerged. TB disease (reduced survival, weight loss, high bacterial load) correlated strongly with neutrophils, neutrophil chemokines, tumor necrosis factor (TNF) and cell death. By contrast, immune cytokines were weak correlates of disease. We next applied statistical and machine learning approaches to our dataset of cytokines and chemokines from lungs and blood. Six molecules from the lung: TNF, CXCL1, CXCL2, CXCL5, interferon-γ (IFN-γ), interleukin 12 (IL-12); and two molecules from blood – IL-2 and TNF – were identified as being important by applying both statistical and machine learning methods. Using molecular features to generate tree classifiers, CXCL1, CXCL2 and CXCL5 distinguished four classes (supersusceptible, susceptible, resistant and non-infected) from each other with approximately 77% accuracy using completely independent experimental data. By contrast, models based on other molecules were less accurate. Low to no IFN-γ, IL-12, IL-2 and IL-10 successfully discriminated non-infected mice from infected mice but failed to discriminate disease status amongst supersusceptible, susceptible and resistant M.-tuberculosis-infected DO mice. Additional analyses identified CXCL1 as a promising peripheral biomarker of disease and of CXCL1 production in the lungs. From these results, we conclude that: (1) DO mice respond variably to M. tuberculosis infection and will be useful to identify pathways involving necrosis and neutrophils; (2) data from DO mice is suited for machine learning methods to build, validate and test models with independent data based solely on molecular biomarkers; (3) low levels of immunological cytokines best indicate a lack of exposure to M. tuberculosis but cannot distinguish infection from disease. PMID:26204894
Zhang, Jie; Bai, Ruoshi; Zhou, Zhaojuan; Liu, Xingyu; Zhou, Jun
2017-04-01
A fully automated analytical method was developed and validated in the present study. The method was based on two-dimensional (2D) online solid-phase extraction liquid chromatography-tandem mass spectrometry (SPE-LC-MS/MS) to determine nine aromatic amines (AAs) in mainstream smoke (MSS) simultaneously. As part of the validation process, AA yields for 16 top-selling commercial cigarettes from the Chinese market were evaluated by the developed method under both the Health Canada Intensive (HCI) and ISO machine smoking regimes. The gas phase of MSS was trapped by 25 mL of 0.6 M hydrochloric acid solution, while the particulate phase was collected on a glass fiber filter. Then, the glass fiber pad was extracted with hydrochloric acid solution in an ultrasonic bath. The extract was analyzed with 2D online SPE-LC-MS/MS. In order to minimize the matrix effects of the sample on each analyte, two cartridges with different extraction mechanisms were utilized in the 2D SPE step to clean up interferences of different polarity. A phenyl-hexyl analytical column was used to achieve chromatographic separation. Under the optimized conditions, the isomers p-toluidine, m-toluidine and o-toluidine, 3-aminobiphenyl and 4-aminobiphenyl, and 1-naphthylamine and 2-naphthylamine were baseline separated with good peak shapes for the first time. The limits of detection for the nine AAs ranged from 0.03 to 0.24 ng/cig. Recoveries for the nine AAs ranged from 84.82 to 118.47%. The intra-day and inter-day precisions for the nine AAs were less than 10 and 16%, respectively. Compared with the ISO machine smoking regime, AA yields in MSS were 1.17 to 3.41 times higher under the HCI regime. Graphical abstract New method using online SPE-LC-MS/MS for analysis of aromatic amines in mainstream cigarette smoke.
e-Addictology: An Overview of New Technologies for Assessing and Intervening in Addictive Behaviors.
Ferreri, Florian; Bourla, Alexis; Mouchabac, Stephane; Karila, Laurent
2018-01-01
New technologies can profoundly change the way we understand psychiatric pathologies and addictive disorders. New concepts are emerging with the development of more accurate means of collecting live data, computerized questionnaires, and the use of passive data. Digital phenotyping, a paradigmatic example, refers to the use of computerized measurement tools to capture the characteristics of different psychiatric disorders. Similarly, machine learning, a form of artificial intelligence, can improve the classification of patients based on patterns that clinicians have not always considered in the past. Remote or automated interventions (web-based or smartphone-based apps), as well as virtual reality and neurofeedback, are already available or under development. These recent changes have the potential to disrupt practices, as well as practitioners' beliefs, ethics and representations, and may even call into question their professional culture. However, the impact of new technologies on health professionals' practice in addictive disorder care has yet to be determined. In the present paper, we therefore present an overview of new technology in the field of addiction medicine. Using the keywords [e-health], [m-health], [computer], [mobile], [smartphone], [wearable], [digital], [machine learning], [ecological momentary assessment], [biofeedback] and [virtual reality], we searched the PubMed database for the most representative articles in the field of assessment and interventions in substance use disorders. We screened 595 abstracts and analyzed 92 articles, dividing them into seven categories: e-health programs and web-based interventions, machine learning, computerized adaptive testing, wearable devices and digital phenotyping, ecological momentary assessment, biofeedback, and virtual reality. This overview shows that new technologies can improve assessment and interventions in the field of addictive disorders. The precise role of connected devices, artificial intelligence and remote monitoring remains to be defined. If they are to be used effectively, these tools must be explained and adapted to the different profiles of physicians and patients. The involvement of patients, caregivers and other health professionals is essential to their design and assessment.
Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA
Currin, Andrew; Korovin, Konstantin; Ababi, Maria; Roper, Katherine; Kell, Douglas B.; Day, Philip J.
2017-01-01
The theory of computer science is based around universal Turing machines (UTMs): abstract machines able to execute all possible algorithms. Modern digital computers are physical embodiments of classical UTMs. For the most important class of problem in computer science, non-deterministic polynomial (NP) complete problems, non-deterministic UTMs (NUTMs) are theoretically exponentially faster than both classical UTMs and quantum mechanical UTMs (QUTMs). However, no attempt has previously been made to build an NUTM, and their construction has been regarded as impossible. Here, we demonstrate the first physical design of an NUTM. This design is based on Thue string rewriting systems, and thereby avoids the limitations of most previous DNA computing schemes: all the computation is local (simple edits to strings), so there is no need for communication, and there is no need to order operations. The design exploits DNA's ability to replicate to execute an exponential number of computational paths in polynomial (P) time. Each Thue rewriting step is embodied in a DNA edit implemented using a novel combination of polymerase chain reactions and site-directed mutagenesis. We demonstrate that the design works using both computational modelling and in vitro molecular biology experimentation: the design is thermodynamically favourable, microprogramming can be used to encode arbitrary Thue rules, all classes of Thue rule can be implemented, and rule implementation can be non-deterministic. In an NUTM, the resource limitation is space, which contrasts with classical UTMs and QUTMs, where it is time. This fundamental difference enables an NUTM to trade space for time, which is significant for both theoretical computer science and physics. It is also of practical importance, for, to quote Richard Feynman, ‘there's plenty of room at the bottom’. This means that a desktop DNA NUTM could potentially utilize more processors than all the electronic computers in the world combined, and thereby outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy. PMID:28250099
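The computational core of the design, non-deterministic Thue string rewriting, is easy to state in code: at every step, every rule is applied at every matching position of every live string, so the set of strings (the NUTM's parallel computational paths) can grow exponentially, which the DNA machine realizes physically through replication. The toy rules and start string below are illustrative only, not the paper's microprograms.

```python
# A minimal sketch of breadth-first non-deterministic Thue rewriting.
def rewrite_frontier(strings, rules):
    """Apply every rule at every match position in every string."""
    out = set()
    for s in strings:
        for lhs, rhs in rules:
            start = s.find(lhs)
            while start != -1:
                out.add(s[:start] + rhs + s[start + len(lhs):])
                start = s.find(lhs, start + 1)
    return out

rules = [("ab", "ba"), ("ba", "b")]   # a toy Thue system
frontier = {"aabb"}
for step in range(4):                 # each step explores all paths at once
    frontier = rewrite_frontier(frontier, rules)
    print(step, sorted(frontier))
```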
DeepSynergy: predicting anti-cancer drug synergy with Deep Learning
Preuer, Kristina; Lewis, Richard P I; Hochreiter, Sepp; Bender, Andreas; Bulusu, Krishna C; Klambauer, Günter
2018-01-01
Abstract Motivation While drug combination therapies are a well-established concept in cancer treatment, identifying novel synergistic combinations is challenging due to the size of combinatorial space. However, computational approaches have emerged as a time- and cost-efficient way to prioritize combinations to test, based on recently available large-scale combination screening data. Recently, Deep Learning has had an impact in many research areas by achieving new state-of-the-art model performance. However, Deep Learning has not yet been applied to drug synergy prediction, which is the approach we present here, termed DeepSynergy. DeepSynergy uses chemical and genomic information as input information, a normalization strategy to account for input data heterogeneity, and conical layers to model drug synergies. Results DeepSynergy was compared to other machine learning methods such as Gradient Boosting Machines, Random Forests, Support Vector Machines and Elastic Nets on the largest publicly available synergy dataset with respect to mean squared error. DeepSynergy significantly outperformed the other methods with an improvement of 7.2% over the second best method at the prediction of novel drug combinations within the space of explored drugs and cell lines. At this task, the mean Pearson correlation coefficient between the measured and the predicted values of DeepSynergy was 0.73. Applying DeepSynergy for classification of these novel drug combinations resulted in a high predictive performance of an AUC of 0.90. Furthermore, we found that all compared methods exhibit low predictive performance when extrapolating to unexplored drugs or cell lines, which we suggest is due to limitations in the size and diversity of the dataset. We envision that DeepSynergy could be a valuable tool for selecting novel synergistic drug combinations. Availability and implementation DeepSynergy is available via www.bioinf.jku.at/software/DeepSynergy. Contact klambauer@bioinf.jku.at Supplementary information Supplementary data are available at Bioinformatics online. PMID:29253077
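The phrase "conical layers" describes a feed-forward network whose hidden layers narrow toward the output. The sketch below renders that shape in PyTorch; the layer widths, input dimensionality, and choice of framework are illustrative assumptions, not DeepSynergy's published configuration.

```python
# A minimal sketch of a conical feed-forward regressor: concatenated chemical
# and genomic descriptors in, a single synergy score out.
import torch
import torch.nn as nn

n_features = 4096                      # hypothetical combined descriptor size
model = nn.Sequential(
    nn.Linear(n_features, 1024), nn.ReLU(),
    nn.Linear(1024, 256), nn.ReLU(),   # each hidden layer narrower than the last
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 1),                  # regression target: synergy score
)

x = torch.randn(8, n_features)         # a batch of 8 drug-pair/cell-line inputs
print(model(x).shape)                  # -> torch.Size([8, 1])
```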
NASA Astrophysics Data System (ADS)
Piana Agostinetti, Nicola; Calo', Marco
2014-05-01
Stimulation of geothermal wells through hydraulic injection is the most common way to increase secondary porosity in hot dry rock geothermal reservoirs. As documented worldwide, injection of over-pressurized fluids into the subsurface creates a diffuse pattern of microseismicity confined to the portion of crustal volume around the injection well. Such "pseudo"-natural seismicity can be a valuable source of information about the elastic properties of the rock in the volume directly below the geothermal site and about their time-evolution during fluid injection. Classical methods (e.g. Local Earthquake Tomography, LET) have been applied to image how the rocks interact with the flow of over-pressurized fluids. Repeating the LET computation using consecutive sets of events produces a time-series of P-wave velocity models which can be analyzed to catch the time-variation of the elastic properties. Such approaches, based on a linearized solution of the tomographic inverse problem, can give a qualitative idea of the behavior of the rocks, but they cannot be used to quantify the interaction, due to the well-known issues which affect LET results, such as the strong link between the "final" and the "starting" model (i.e. the "final" model must be a small perturbation of the "starting" model), model parameterization, damping of the covariance matrix, etc. Moreover, the robustness of the retrieved models cannot be easily assessed, due to the difficulty of determining the absolute errors on the Vp parameters themselves. Thus, it can be challenging to understand whether the fluctuations in the elastic properties remain within the estimated errors. In this study we present the results of a full 4D local earthquake tomography obtained with the P- and S-wave arrival times of 600 seismic events recorded in 2000 during the stimulation of the GPK2 well of the Enhanced Geothermal System located in Soultz-sous-Forêts (France). We focus on the initial stage, when the injection rate was increased abruptly from 30 l/s to 40 l/s. This operation lasted less than 13 hours and generated a large number of events, almost evenly distributed in time. This stage has been analyzed in detail using a linearized tomographic inversion code improved with a post-processing scheme (WAM) which highlighted the fluctuations in the Vp velocity near the well-head over a time-scale of a few hours and a spatial scale of a few hundred meters (Calo' et al., GJI, 2011). The approach adopted (LET+WAM) provided only a rough estimate of the distribution of errors in the models, which proved insufficient to assess the reliability of some important velocity variations observed over time. Solving the LET inverse problem using a trans-dimensional Monte Carlo method now gives us the possibility to fully quantify the errors associated with the retrieved Vp and Vp/Vs models and enables us to evaluate the robustness of the fluctuations in the elastic properties during the injection phase.
Determining Tidal Phase Differences from X-Band Radar Images
NASA Astrophysics Data System (ADS)
Newman, Kieran; Bell, Paul; Brown, Jennifer; Plater, Andrew
2017-04-01
Introduction Previous work by Bell et al. (2016) has developed a method using X-band marine radar to measure intertidal bathymetry, using the waterline as a level over a spring-neap tidal cycle. This has been used in the Dee Estuary to give a good representation of the bathymetry in the area. However, there are some sources of inaccuracy in the method, as a uniform spatial tidal signal is assumed over the entire domain. Motivation The method used by Bell et al. (2016) applies a spatially uniform tidal signal to the entire domain. This fails to account for fine-scale variations in water level and tidal phase. While methods are being developed to account for small-scale water level variations using high-resolution modelling, a method to determine tidal phase variations directly from the radar intensity images could be advantageous operationally. Methods The tidal phase has been computed using two different methods, with hourly averaged images from 2008. In the first method, the cross-correlation between each raw pixel time series and a tidal signal is calculated at a number of lags, and the lag with the highest correlation to the pixel series is recorded. For the second method, the same correlation procedure is applied to signals generated by tracking the movement of buoys, which show up strongly in the radar image as they move on their moorings with the tidal currents. There is broad agreement between the two methods, but validation is needed to determine their relative accuracy. The phase has also been calculated using a Fourier decomposition, and agrees broadly with the above methods. Work also needs to be done to separate areas where the recorded phase is due to tidal currents (mostly subtidal areas) from areas where it is due to elevation (mostly the wetting/drying signal in intertidal areas), by classifying radar intensities by the phases and amplitudes of the tides. Filtering out signal variations due to wind strength and attenuation of the radar signal will also be applied. Validation Validation will be attempted using data from a POLCOMS-WAM model run for Liverpool Bay at 180 m resolution for February 2008 (Brown, 2011), and ongoing work to develop a model at 5 m resolution using DELFT3D-FLOW. There is also a series of ADCP and other direct measurements of tidal current and elevation available, although the periods of measurement do not all overlap; however, these could still be used for some validation. Conclusion While this work is in very early stages, it could present a method to determine fine-scale variations in tidal phase without a network of current recorders, and an improvement in the accuracy of bathymetric methods using X-band radar. References Bell, P.S., Bird, C.O., Plater, A.J., 2016. A temporal waterline approach to mapping intertidal areas using X-band marine radar. Coastal Engineering, 107: 84-101. Brown, J.M., Bolaños, R., Wolf, J., 2011. Impact assessment of advanced coupling features in a tide-surge-wave model, POLCOMS-WAM, in a shallow water application. Journal of Marine Systems, 87: 13-24. Deltares, 2010. Delft3D FLOW. Delft: Deltares.
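The first method described above reduces to a lagged cross-correlation per pixel: correlate the pixel's intensity time series with a reference tidal signal at a range of lags and keep the lag that maximizes the correlation. The sketch below demonstrates this on synthetic hourly data with an M2-like (12.42 h) reference; the series, noise level, and lag range are illustrative, not the Dee Estuary radar data.

```python
# A minimal sketch: recover the tidal phase lag of one pixel time series.
import numpy as np

def best_lag(pixel, tide, max_lag):
    lags = list(range(-max_lag, max_lag + 1))
    scores = [np.corrcoef(np.roll(tide, lag), pixel)[0, 1] for lag in lags]
    return lags[int(np.argmax(scores))]

t = np.arange(240)                                  # hourly samples, ~10 days
tide = np.sin(2 * np.pi * t / 12.42)                # M2-like reference signal
pixel = np.roll(tide, 3) + 0.2 * np.random.randn(t.size)  # 3 h phase shift
print(best_lag(pixel, tide, max_lag=6))             # expected: 3
```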
15 CFR 700.31 - Metalworking machines.
Code of Federal Regulations, 2011 CFR
2011-01-01
... machines covered by this section include: Bending and forming machines Boring machines Broaching machines... Planers and shapers Polishing, lapping, boring, and finishing machines Punching and shearing machines...
Proceedings of the First NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)
2009-01-01
Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.
Nonpolitical images evoke neural predictors of political ideology.
Ahn, Woo-Young; Kishida, Kenneth T; Gu, Xiaosi; Lohrenz, Terry; Harvey, Ann; Alford, John R; Smith, Kevin B; Yaffe, Gideon; Hibbing, John R; Dayan, Peter; Montague, P Read
2014-11-17
Political ideologies summarize dimensions of life that define how a person organizes their public and private behavior, including their attitudes associated with sex, family, education, and personal autonomy. Despite the abstract nature of such sensibilities, fundamental features of political ideology have been found to be deeply connected to basic biological mechanisms that may serve to defend against environmental challenges like contamination and physical threat. These results invite the provocative claim that neural responses to nonpolitical stimuli (like contaminated food or physical threats) should be highly predictive of abstract political opinions (like attitudes toward gun control and abortion). We applied a machine-learning method to fMRI data to test the hypotheses that brain responses to emotionally evocative images predict individual scores on a standard political ideology assay. Disgusting images, especially those related to animal-reminder disgust (e.g., mutilated body), generate neural responses that are highly predictive of political orientation even though these neural predictors do not agree with participants' conscious rating of the stimuli. Images from other affective categories do not support such predictions. Remarkably, brain responses to a single disgusting stimulus were sufficient to make accurate predictions about an individual subject's political ideology. These results provide strong support for the idea that fundamental neural processing differences that emerge under the challenge of emotionally evocative stimuli may serve to structure political beliefs in ways formerly unappreciated.
Chun, Hong-Woo; Tsuruoka, Yoshimasa; Kim, Jin-Dong; Shiba, Rie; Nagata, Naoki; Hishiki, Teruyoshi; Tsujii, Jun'ichi
2006-01-01
Background Automatic recognition of relations between a specific disease term and its relevant genes or protein terms is an important practice of bioinformatics. Considering the utility of the results of this approach, we identified prostate cancer and gene terms with the ID tags of public biomedical databases. Moreover, considering that genetics experts will use our results, we classified them based on six topics that can be used to analyze the type of prostate cancers, genes, and their relations. Methods We developed a maximum entropy-based named entity recognizer and a relation recognizer and applied them to a corpus-based approach. We collected prostate cancer-related abstracts from MEDLINE, and constructed an annotated corpus of gene and prostate cancer relations based on six topics by biologists. We used it to train the maximum entropy-based named entity recognizer and relation recognizer. Results Topic-classified relation recognition achieved 92.1% precision for the relation (an increase of 11.0% from that obtained in a baseline experiment). For all topics, the precision was between 67.6 and 88.1%. Conclusion A series of experimental results revealed two important findings: a carefully designed relation recognition system using named entity recognition can improve the performance of relation recognition, and topic-classified relation recognition can be effectively addressed through a corpus-based approach using manual annotation and machine learning techniques. PMID:17134477
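A maximum entropy classifier, in the multi-class setting, is multinomial logistic regression over hand-crafted features; for named entity recognition each token is mapped to contextual features and assigned a tag. The sketch below shows the shape of such a recognizer; the features, tags, and two-sentence training set are toy illustrations, not the paper's corpus or feature set.

```python
# A minimal sketch of maximum-entropy (logistic regression) NER tagging.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(tokens, i):
    return {
        "word": tokens[i].lower(),
        "suffix3": tokens[i][-3:],
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
    }

sents = [(["BRCA1", "is", "mutated", "in", "prostate", "cancer"],
          ["GENE", "O", "O", "O", "DISEASE", "DISEASE"]),
         (["The", "AR", "gene", "drives", "tumor", "growth"],
          ["O", "GENE", "O", "O", "O", "O"])]

X = [token_features(toks, i) for toks, _ in sents for i in range(len(toks))]
y = [tag for _, tags in sents for tag in tags]

clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.predict([token_features(["KLK3", "in", "cancer"], 0)]))
```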