AIRSAR Automated Web-based Data Processing and Distribution System
NASA Technical Reports Server (NTRS)
Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen
2005-01-01
In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, and delivers and archives processed data. This fully automated SAR processing system uses database and Internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and dramatically increased processor throughput.
Ontology-Based Administration of Web Directories
NASA Astrophysics Data System (ADS)
Horvat, Marko; Gledec, Gordan; Bogunović, Nikola
Administration of a Web directory and maintenance of its content and associated structure is a delicate and labor-intensive task performed exclusively by human domain experts. Consequently, there is an ever-present risk of a directory's structure becoming unbalanced, uneven, and difficult to use for all except the few users proficient with the particular Web directory and its domain. These problems emphasize the need to establish two things: i) generic and objective measures of Web directory structure quality, and ii) a mechanism for fully automated development of a Web directory's structure. In this paper we demonstrate how to formally and fully integrate Web directories with the Semantic Web vision. We propose a set of criteria for evaluating a Web directory's structure quality; some criterion functions are based on heuristics, while others require the application of ontologies. We also suggest an ontology-based algorithm for the construction of Web directories. By using ontologies to describe the semantics of Web resources and of Web directory categories, it is possible to define algorithms that can build or rearrange the structure of a Web directory. Assessment procedures can provide feedback and help steer the ontology-based construction process. The issues raised in the article apply equally to new and existing Web directories.
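As a rough illustration of what a heuristic criterion function for directory balance might look like (this specific metric is hypothetical, not one proposed in the paper), one could score how evenly a category's resources are spread across its subcategories using normalized entropy:

```python
import math

def category_evenness(subcategory_sizes):
    """Normalized entropy of subcategory sizes: 1.0 means resources are
    spread evenly; values near 0 flag an unbalanced branch. A hypothetical
    criterion in the spirit of the paper's heuristics, not its actual one."""
    total = sum(subcategory_sizes)
    if total == 0 or len(subcategory_sizes) < 2:
        return 1.0
    probs = [s / total for s in subcategory_sizes if s > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(subcategory_sizes))

print(category_evenness([40, 38, 42]))  # ~1.0: well balanced
print(category_evenness([95, 3, 2]))    # ~0.2: unbalanced
```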
Automated Detection and Analysis of Interplanetary Shocks Running Real-Time on the Web
NASA Astrophysics Data System (ADS)
Vorotnikov, V.; Smith, C. W.; Hu, Q.; Szabo, A.; Skoug, R. M.; Cohen, C. M.; Davis, A. J.
2008-05-01
The ACE real-time data stream provides web-based nowcasting capabilities for solar wind conditions upstream of Earth. We have built a fully automated code that finds and analyzes interplanetary shocks as they occur and posts its solutions on the Web for possible real-time application to space weather nowcasting. Shock analysis algorithms based on the Rankine-Hugoniot jump conditions exist and are in widespread use today for the interactive analysis of interplanetary shocks, yielding parameters such as shock speed, propagation direction, and shock strength in the form of compression ratios. At a previous meeting we reported on efforts to develop a fully automated code that used ACE Level-2 (science quality) data to prove the applicability and correctness of the code and the associated shock-finder. We have since adapted the code to run on ACE RTSW data provided by NOAA. These data lack the full 3-dimensional velocity vector for the solar wind and contain only a single-component wind speed. We show that by assuming the wind velocity to be radial, strong shock solutions remain essentially unchanged and the analysis performs as well as it would if 3-D velocity components were available. This is due, at least in part, to the fact that strong shocks tend to have nearly radial shock normals, and it is the strong shocks that are most effective in space weather applications; strong shocks are the only shocks that concern us in this application. The code is now running on the Web and the results are available to all.
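To make the radial-flow assumption concrete, here is a minimal sketch (not the authors' Rankine-Hugoniot solver; the window size and compression threshold are invented) that flags a density jump in a single-component wind-speed stream and estimates the shock speed from 1-D mass-flux conservation, n1(v1 - Vsh) = n2(v2 - Vsh):

```python
import numpy as np

def detect_shock_candidate(n, v, window=10, ratio_threshold=1.2):
    """Scan density n (cm^-3) and radial speed v (km/s) for an abrupt
    compression; return (index, compression ratio, shock speed) or None."""
    for i in range(window, len(n) - window):
        n1 = np.median(n[i - window:i])   # upstream state
        n2 = np.median(n[i:i + window])   # downstream state
        if n1 > 0 and n2 / n1 >= ratio_threshold:
            v1 = np.median(v[i - window:i])
            v2 = np.median(v[i:i + window])
            # Mass-flux conservation across the front:
            # n1(v1 - Vsh) = n2(v2 - Vsh)  =>  Vsh = (n2 v2 - n1 v1)/(n2 - n1)
            v_sh = (n2 * v2 - n1 * v1) / (n2 - n1)
            return i, n2 / n1, v_sh
    return None

# Synthetic stream with a density/speed jump halfway through:
rng = np.random.default_rng(1)
n = np.where(np.arange(200) < 100, 5.0, 9.0) + 0.1 * rng.normal(size=200)
v = np.where(np.arange(200) < 100, 400.0, 480.0)
print(detect_shock_candidate(n, v))
```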
Koziol-McLain, Jane; Vandal, Alain C; Nada-Raja, Shyamala; Wilson, Denise; Glass, Nancy E; Eden, Karen B; McLean, Christine; Dobbs, Terry; Case, James
2015-01-31
Intimate partner violence (IPV) and its associated negative mental health consequences are significant for women in New Zealand and internationally. One of the most widely recommended interventions is safety planning. However, few women experiencing violence access specialist services for safety planning. A safety decision aid, weighing the dangers of leaving or staying in an abusive relationship, gives women the opportunity to prioritise, plan, and take action to increase safety for themselves and their children. This randomised controlled trial is testing the effectiveness of an innovative, interactive web-based safety decision aid. The trial is an international collaborative concurrent replication of a USA trial (IRIS study NCT01312103), regionalised for the Aotearoa New Zealand culture, and offers fully automated online trial recruitment, eligibility screening, and consent. In the fully automated web-based trial (isafe), 340 abused women will be randomly assigned in equal numbers to a safety decision aid intervention or a usual safety planning control website. Intervention components include: (a) safety priority setting, (b) danger assessment, and (c) an individually tailored safety action plan. Self-reported outcome measures are collected at baseline and 3, 6, and 12 months post-baseline. Primary outcomes are depression (measured by the Center for Epidemiologic Studies Depression Scale, Revised) and IPV exposure (measured by the Severity of Violence Against Women Scale) at 12 months post-baseline. Secondary outcomes include PTSD, psychological abuse, decisional conflict, safety behaviors, and danger in the relationship. This trial will provide much-needed information on the potential relationships among safety planning, improved mental health, reduced violence, and decreased decisional conflict related to safety in the abusive relationship. The novel web-based safety decision aid intervention may provide a cost-effective, easily accessed safety-planning resource that can be translated into clinical and community practice by multiple health disciplines and advocates. The trial will also provide information about how women in abusive relationships safely access safety information and resources through the Internet. Finally, the trial will inform other research teams on the feasibility and acceptability of fully automated recruitment, eligibility screening, consent, and retention procedures. The trial was registered on 03 July 2012 with the Australian New Zealand Clinical Trials Registry (ACTRN12612000708853).
van den Berg, Sanne W; Peters, Esmee J; Kraaijeveld, J Frank; Gielissen, Marieke F M; Prins, Judith B
2013-08-19
Generic fully automated Web-based self-management interventions are upcoming, for example, for the growing number of breast cancer survivors. It is hypothesized that the use of these interventions is more individualized and that users apply a large amount of self-tailoring. However, technical usage evaluations of these types of interventions are scarce and practical guidelines are lacking. The objective was to gain insight into meaningful usage parameters for evaluating the use of generic fully automated Web-based interventions by assessing how breast cancer survivors use a generic self-management website. The final aim is to propose practical recommendations for researchers and information and communication technology (ICT) professionals who aim to design and evaluate the use of similar Web-based interventions. The BREAst cancer ehealTH (BREATH) intervention is a generic, unguided, fully automated website with stepwise weekly access and a fixed 4-month structure containing 104 intervention ingredients (ie, texts, tasks, tests, videos). By monitoring HTTPS server requests, technical usage statistics were recorded for the intervention group of the randomized controlled trial. Observed usage was analyzed by measures of frequency, duration, and activity. Intervention adherence was defined as continuous usage, or the proportion of participants who started using the intervention and continued to log in during all four phases. By comparing observed to minimal intended usage (frequency and activity), different user groups were defined. Usage statistics for 4 months were collected from 70 breast cancer survivors (mean age 50.9 years). Frequency of logins per person ranged from 0 to 45, total duration per person from 0 to 2324 minutes (38.7 hours), and activity from opening none to all intervention ingredients. In total, 31 participants continued logging in during all four phases, resulting in an intervention adherence rate of 44.3% (95% CI 33.2-55.9). Nine nonusers (13%), 30 low users (43%), and 31 high users (44%) were identified. Low and high users differed significantly on frequency (P<.001), total duration (P<.001), session duration (P=.009), and activity (P<.001). High users logged in an average of 21 times, had a mean session duration of 33 minutes, and opened on average 91% of all ingredients. Signing the self-help contract (P<.001), reported usefulness of ingredients (P=.003), overall satisfaction (P=.028), and user-friendliness evaluation (P=.003) were higher among high users. User groups did not differ on age, education, or baseline distress. By reporting the usage of a self-management website for breast cancer survivors, the present study gained a first insight into the design of usage evaluations of generic fully automated Web-based interventions. It is recommended to (1) incorporate usage statistics that reflect the amount of self-tailoring applied by users, (2) combine technical usage statistics with self-reported usefulness, and (3) use qualitative measures. Also, (4) a pilot usage evaluation should be a fixed step in the development process of novel Web-based interventions, and (5) it is essential for researchers to gain insight into the rationale of recorded and nonrecorded usage statistics. Netherlands Trial Register (NTR): 2935; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=2935 (Archived by WebCite at http://www.webcitation.org/6IkX1ADEV).
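The reported adherence rate and its confidence interval follow directly from the raw counts (31 of 70 starters logged in during all four phases). A minimal sketch, assuming a Wilson score interval, closely reproduces the published 44.3% (95% CI 33.2-55.9):

```python
import math

def adherence_with_wilson_ci(continued, started, z=1.96):
    """Proportion of starters who kept logging in through all phases,
    with a Wilson score 95% CI."""
    p = continued / started
    denom = 1 + z**2 / started
    centre = (p + z**2 / (2 * started)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / started + z**2 / (4 * started**2))
    return p, centre - half, centre + half

print(adherence_with_wilson_ci(31, 70))  # ~ (0.443, 0.333, 0.559)
```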
Li, Tim M H; Chau, Michael; Wong, Paul W C; Lai, Eliza S Y; Yip, Paul S F
2013-05-15
Internet-based learning programs provide people with a wealth of health care information and self-help guidelines for improving their health. The advent of Web 2.0 and social networks provides significant flexibility for embedding highly interactive components, such as games, to foster learning processes. The effectiveness of game-based learning on social networks has not yet been fully evaluated. The aim of this study was to assess the effectiveness of a fully automated, Web-based, social network electronic game on enhancing the mental health knowledge and problem-solving skills of young people. We investigated potential motivational constructs directly affecting the learning outcome. Gender differences in learning outcome and motivation were also examined. A pre/posttest design was used to evaluate the fully automated Web-based intervention. Participants, recruited from a closed online user group, self-assessed their mental health literacy and motivational constructs before and after completing the game within a 3-week period. The electronic game was designed according to cognitive-behavioral approaches. Completers and intent-to-treat analyses, using multiple imputation for missing data, were performed. Regression analysis with backward selection was employed when examining the relationship between knowledge enhancement and motivational constructs. The sample included 73 undergraduates (42 females) for the completers analysis. The gaming approach was effective in enhancing young people's mental health literacy (d=0.65). The finding was also consistent with the intent-to-treat analysis, which included 127 undergraduates (75 females). No gender differences were found in learning outcome (P=.97). Intrinsic goal orientation was the primary factor in learning motivation, whereas test anxiety was successfully alleviated in the game setting. No gender differences were found on any learning motivation subscales (P>.10). We also found that participants' self-efficacy for learning and performance, as well as test anxiety, significantly affected their learning outcomes, whereas other motivational subscales were statistically nonsignificant. Electronic games implemented through social networking sites appear to effectively enhance users' mental health literacy.
Arnaud, Nicolas; Baldus, Christiane; Elgán, Tobias H; De Paepe, Nina; Tønnesen, Hanne; Csémy, Ladislav; Thomasius, Rainer
2016-05-24
Mid-to-late adolescence is a critical period for initiation of alcohol and drug problems, which can be reduced by targeted brief motivational interventions. Web-based brief interventions have advantages in terms of acceptability and accessibility and have shown significant reductions of substance use among college students. However, the evidence is sparse among adolescents with at-risk use of alcohol and other drugs. This study evaluated the effectiveness of a targeted and fully automated Web-based brief motivational intervention with no face-to-face components on substance use among adolescents screened for at-risk substance use in four European countries. In an open-access, purely Web-based randomized controlled trial, a convenience sample of adolescents aged 16-18 years from Sweden, Germany, Belgium, and the Czech Republic was recruited using online and offline methods and screened online for at-risk substance use using the CRAFFT (Car, Relax, Alone, Forget, Friends, Trouble) screening instrument. Participants were randomized to a single-session brief motivational intervention group or an assessment-only control group, but were not blinded. The primary outcome was the difference in past-month drinking, measured by a self-reported AUDIT-C-based index score for drinking frequency, quantity, and frequency of binge drinking, with measures collected online at baseline and after 3 months. Secondary outcomes were the separate AUDIT-C-based drinking indicators, illegal drug use, and polydrug use. All outcome analyses were conducted with and without Expectation Maximization (EM) imputation of missing follow-up data. In total, 2673 adolescents were screened and 1449 (54.2%) participants were randomized to the intervention or control group. After 3 months, 211 adolescents (14.5%) provided follow-up data. Compared to the control group, results from linear mixed models revealed significant reductions in self-reported past-month drinking in favor of the intervention group in both the non-imputed (P=.010) and the EM-imputed sample (P=.022). Secondary analyses revealed a significant effect on drinking frequency (P=.037) and frequency of binge drinking (P=.044) in the non-imputation-based analyses, and on drinking quantity (P=.021) when missing data were imputed. Analyses for illegal drug use and polydrug use revealed no significant differences between the study groups (both P>.05). Although the study is limited by high dropout, significant between-group effects for alcohol use indicate that a targeted brief motivational intervention in a fully automated Web-based format can be effective in reducing drinking and lessening existing substance use service barriers for at-risk drinking European adolescents. International Standard Randomized Controlled Trial Registry: ISRCTN95538913; http://www.isrctn.com/ISRCTN95538913 (Archived by WebCite at http://www.webcitation.org/6XkuUEwBx).
AIRSAR Web-Based Data Processing
NASA Technical Reports Server (NTRS)
Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne
2007-01-01
The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. It also provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system that automatically generates co-registered multi-frequency images from both polarimetric and interferometric data collection modes at 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensing community. Features include Survey Automation Processing, in which the software can automatically generate a quick-look image from an entire 90-GB, 32-MB/s SAR raw data tape overnight without operator intervention. The software also allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates products according to each data processing request stored in the database via a queue management system. Users obtain automatically generated co-registered multi-frequency images as the software processes polarimetric and/or interferometric SAR data in ground and/or slant range projection, according to user processing requests, for one of the 12 radar modes.
A new fully automated FTIR system for total column measurements of greenhouse gases
NASA Astrophysics Data System (ADS)
Geibel, M. C.; Gerbig, C.; Feist, D. G.
2010-10-01
This article introduces a new fully automated FTIR system that is part of the Total Carbon Column Observing Network (TCCON). It will provide continuous ground-based measurements of column-averaged volume mixing ratios for CO2, CH4, and several other greenhouse gases in the tropics. Housed in a 20-foot shipping container, it was developed as a transportable system that could be deployed almost anywhere in the world. We describe the automation concept, which relies on three autonomous subsystems and their interaction. Crucial components, such as a sturdy and reliable solar tracker dome, are described in detail. The automation software employs a new approach relying on multiple processes, database logging, and web-based remote control. First results of total column measurements at Jena, Germany show that the instrument works well and can capture parts of the diurnal as well as the seasonal cycle of CO2. Instrument line shape measurements with an HCl cell suggest that the instrument stays well-aligned over several months. After a short test campaign for side-by-side intercomparison with an existing TCCON instrument in Australia, the system will be transported to its final destination, Ascension Island.
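As a minimal sketch of the database-logging idea behind such multi-process automation (the schema and subsystem names below are invented for illustration, not the instrument's actual design), each autonomous subsystem could periodically record its state for a web-based remote-control layer to read:

```python
import sqlite3
import time

def log_heartbeat(db_path, subsystem, status):
    """Record one subsystem heartbeat. A remote-control web layer can then
    query this table to monitor the instrument; schema is hypothetical."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS heartbeat
                   (ts REAL, subsystem TEXT, status TEXT)""")
    con.execute("INSERT INTO heartbeat VALUES (?, ?, ?)",
                (time.time(), subsystem, status))
    con.commit()
    con.close()

for name in ("solar_tracker", "spectrometer", "container"):
    log_heartbeat("ftir_status.db", name, "OK")
```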
A novel web informatics approach for automated surveillance of cancer mortality trends
Tourassi, Georgia; Yoon, Hong-Jun; Xu, Songhua
2016-01-01
Cancer surveillance data are collected every year in the United States via the National Program of Cancer Registries (NPCR) and the Surveillance, Epidemiology and End Results (SEER) Program of the National Cancer Institute (NCI). General trends are closely monitored to measure the nation's progress against cancer. The objective of this study was to apply a novel web informatics approach for enabling fully automated monitoring of cancer mortality trends. The approach involves automated collection and text mining of online obituaries to derive the age distribution, geospatial, and temporal trends of cancer deaths in the US. Using breast and lung cancer as examples, we mined 23,850 cancer-related and 413,024 general online obituaries spanning the timeframe 2008–2012. There was high correlation between the web-derived mortality trends and the official surveillance statistics reported by NCI with respect to the age distribution (ρ = 0.981 for breast; ρ = 0.994 for lung), the geospatial distribution (ρ = 0.939 for breast; ρ = 0.881 for lung), and the annual rates of cancer deaths (ρ = 0.661 for breast; ρ = 0.839 for lung). Additional experiments investigated the effect of sample size on the consistency of the web-based findings. Overall, our study findings support web informatics as a promising, cost-effective way to dynamically monitor spatiotemporal cancer mortality trends.
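The agreement figures quoted above are rank correlations between the web-derived and official series; for example, with SciPy and hypothetical per-age-bin death counts (the numbers below are illustrative, not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical aligned series: deaths per age bin derived from mined
# obituaries vs. the corresponding official SEER/NPCR counts.
web_counts = np.array([120, 340, 980, 2100, 3500, 4100, 3800])
official_counts = np.array([130, 330, 1000, 2080, 3600, 3900, 4050])

rho, p_value = spearmanr(web_counts, official_counts)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
```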
UltiMatch-NL: A Web Service Matchmaker Based on Multiple Semantic Filters
Mohebbi, Keyvan; Ibrahim, Suhaimi; Zamani, Mazdak; Khezrian, Mojtaba
2014-01-01
In this paper, a Semantic Web service matchmaker called UltiMatch-NL is presented. UltiMatch-NL applies two filters, namely signature-based and description-based, to different abstraction levels of a service profile to achieve more accurate results. More specifically, the proposed filters rely on semantic knowledge to extract the similarity between a given pair of service descriptions. It is thus a further step towards fully automated Web service discovery, by making this process more semantic-aware. In addition, a new technique is proposed to automatically weight and combine the results of UltiMatch-NL's filters. Moreover, an innovative approach is introduced to predict the relevance of requests and Web services and eliminate the need for setting a threshold value of similarity. In order to evaluate UltiMatch-NL, the OWLS-TC repository is used. The performance evaluation, based on standard measures from the information retrieval field, shows that semantic matching of OWL-S services can be significantly improved by incorporating the designed matching filters.
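The abstract does not spell out the combination technique, but as a hedged illustration, a weighted linear combination of the two filters' similarity scores used for ranking (rather than cutting at a fixed threshold) could look like this; the weights here are placeholders, whereas the paper derives them automatically:

```python
import numpy as np

def rank_services(services, sig_scores, desc_scores, weights=(0.5, 0.5)):
    """Rank candidate services by a weighted combination of the two
    filter scores instead of applying a fixed similarity cutoff."""
    w_sig, w_desc = weights
    combined = w_sig * np.asarray(sig_scores) + w_desc * np.asarray(desc_scores)
    order = np.argsort(combined)[::-1]          # best match first
    return [(services[i], float(combined[i])) for i in order]

print(rank_services(["svcA", "svcB", "svcC"],
                    sig_scores=[0.9, 0.4, 0.7],
                    desc_scores=[0.6, 0.9, 0.5]))
```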
Wilkinson, Mark D; Vandervalk, Benjamin; McCarthy, Luke
2011-10-24
The complexity and inter-related nature of biological data pose a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL classes following a small number of very straightforward best practices. In addition, we provide codebases that support these best practices, and plug-in tools for popular developer and client software that dramatically simplify the deployment of services by providers and the discovery and utilization of those services by their consumers. SADI services are fully compliant with, and utilize only, foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user needs, and to automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies.
PLIP: fully automated protein-ligand interaction profiler.
Salentin, Sebastian; Schreiber, Sven; Haupt, V Joachim; Adasme, Melissa F; Schroeder, Michael
2015-07-01
The characterization of interactions in protein-ligand complexes is essential for research in structural bioinformatics, drug discovery, and biology. However, comprehensive tools are not freely available to the research community. Here, we present the protein-ligand interaction profiler (PLIP), a novel web service for fully automated detection and visualization of relevant non-covalent protein-ligand contacts in 3D structures, freely available at projects.biotec.tu-dresden.de/plip-web. The input is either a Protein Data Bank structure, a protein or ligand name, or a custom protein-ligand complex (e.g., from docking). In contrast to other tools, the rule-based PLIP algorithm does not require any structure preparation. It returns a list of detected interactions at the single-atom level, covering seven interaction types (hydrogen bonds, hydrophobic contacts, pi-stacking, pi-cation interactions, salt bridges, water bridges, and halogen bonds). PLIP stands out by offering publication-ready images, PyMOL session files to generate custom images, and parsable result files to facilitate successive data processing. The full Python source code is available for download on the website. PLIP's command-line mode allows for high-throughput interaction profiling.
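To give a flavor of rule-based interaction detection, here is a generic geometric hydrogen-bond test on 3-D coordinates; the distance and angle cutoffs are common literature values, not necessarily PLIP's exact defaults:

```python
import numpy as np

def is_hydrogen_bond(donor, hydrogen, acceptor,
                     max_da_dist=3.5, min_dha_angle=100.0):
    """Geometric hydrogen-bond test on 3-D coordinates (angstroms):
    donor-acceptor distance cutoff plus donor-H-acceptor angle cutoff."""
    donor, hydrogen, acceptor = map(np.asarray, (donor, hydrogen, acceptor))
    if np.linalg.norm(acceptor - donor) > max_da_dist:
        return False
    u = donor - hydrogen
    v = acceptor - hydrogen
    cos_dha = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    angle = np.degrees(np.arccos(np.clip(cos_dha, -1.0, 1.0)))
    return angle >= min_dha_angle

print(is_hydrogen_bond([0, 0, 0], [1.0, 0, 0], [2.9, 0.3, 0]))  # True
```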
TreeRipper web application: towards a fully automated optical tree recognition software.
Hughes, Joseph
2011-05-20
Relationships between species, genes, and genomes have been printed as trees for over a century. Whilst this may have been the best format for exchanging and sharing phylogenetic hypotheses during the 20th century, the World Wide Web now provides faster and automated ways of transferring and sharing phylogenetic knowledge. However, novel software is needed to defrost these published phylogenies for the 21st century. TreeRipper is a simple website for the fully automated recognition of multifurcating phylogenetic trees (http://linnaeus.zoology.gla.ac.uk/~jhughes/treeripper/). The program accepts a range of input image formats (PNG, JPG/JPEG, or GIF). The underlying command-line C++ program follows a number of cleaning steps to detect lines, remove node labels, patch up broken lines and corners, and detect line edges. The edge contour is then determined to detect the branch lengths, tip label positions, and the topology of the tree. Optical character recognition (OCR) is used to convert the tip labels into text with the freely available tesseract-ocr software. 32% of images meeting the prerequisites for TreeRipper were successfully recognised; the largest tree had 115 leaves. Although the diversity of ways in which phylogenies have been illustrated makes the design of fully automated tree recognition software difficult, TreeRipper is a step towards automating the digitization of past phylogenies. We also provide a dataset of 100 tree images and associated tree files for training and/or benchmarking future software. TreeRipper is an open source project licensed under the GNU General Public Licence v3.
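TreeRipper itself is a command-line C++ program; purely as an illustration of its binarize-then-OCR stage, a Python analogue using OpenCV and the same Tesseract engine might look like this (the real tool also traces branches and reconstructs topology):

```python
import cv2            # pip install opencv-python
import pytesseract    # pip install pytesseract (requires the tesseract binary)

def extract_tip_labels(image_path):
    """Binarize a scanned tree figure, then OCR the text regions.
    An illustrative sketch, not TreeRipper's actual implementation."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    # Otsu thresholding separates ink from background before OCR.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)

# print(extract_tip_labels("phylogeny.png"))  # hypothetical input file
```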
Azar, Kristen MJ; Block, Torin J; Romanelli, Robert J; Carpenter, Heather; Hopkins, Donald; Palaniappan, Latha; Block, Clifford H
2015-01-01
Background: In the United States, 86 million adults have pre-diabetes. Evidence-based interventions that are both cost effective and widely scalable are needed to prevent diabetes. Objective: Our goal was to develop a fully automated diabetes prevention program and determine its effectiveness in a randomized controlled trial. Methods: Subjects with verified pre-diabetes were recruited to participate in a trial of the effectiveness of Alive-PD, a newly developed, 1-year, fully automated behavior change program delivered by email and Web. The program involves weekly tailored goal-setting, team-based and individual challenges, gamification, and other opportunities for interaction. An accompanying mobile phone app supports goal-setting and activity planning. For the trial, participants were randomized by computer algorithm to start the program immediately or after a 6-month delay. The primary outcome measures are change in HbA1c and fasting glucose from baseline to 6 months. The secondary outcome measures are change in HbA1c, glucose, lipids, body mass index (BMI), weight, waist circumference, and blood pressure at 3, 6, 9, and 12 months. Randomization and delivery of the intervention are independent of clinic staff, who are blinded to treatment assignment. Outcomes will be evaluated for the intention-to-treat and per-protocol populations. Results: A total of 340 subjects with pre-diabetes were randomized to the intervention (n=164) or delayed-entry control group (n=176). Baseline characteristics were as follows: mean age 55 (SD 8.9); mean BMI 31.1 (SD 4.3); male 68.5%; mean fasting glucose 109.9 (SD 8.4) mg/dL; and mean HbA1c 5.6 (SD 0.3)%. Data collection and analysis are in progress. We hypothesize that participants in the intervention group will achieve statistically significant reductions in fasting glucose and HbA1c as compared to the control group at 6 months post baseline. Conclusions: The randomized trial will provide rigorous evidence regarding the efficacy of this Web- and Internet-based program in reducing or preventing progression of glycemic markers and indirectly in preventing progression to diabetes. Trial Registration: ClinicalTrials.gov NCT01479062; http://clinicaltrials.gov/show/NCT01479062 (Archived by WebCite at http://www.webcitation.org/6U8ODy1vo)
Exploratory Study of Web-Based Planning and Mobile Text Reminders in an Overweight Population
Murray, Peter; Cobain, Mark; Chinapaw, Mai; van Mechelen, Willem; Hurling, Robert
2011-01-01
Background: Forming specific health plans can help translate good intentions into action. Mobile text reminders can further enhance the effects of planning on behavior. Objective: Our aim was to explore the combined impact of a Web-based, fully automated planning tool and mobile text reminders on intention to change saturated fat intake, self-reported saturated fat intake, and portion size changes over 4 weeks. Methods: Of 1013 men and women recruited online, 858 were randomly allocated to 1 of 3 conditions: a planning tool (PT), combined planning tool and text reminders (PTT), and a control group. All outcome measures were assessed by online self-reports. Analysis of covariance was used to analyze the data. Results: Participants allocated to the PT (mean saturated fat 3.6, mean coping planning 3.0) and PTT (mean saturated fat 3.5, mean coping planning 3.1) reported a lower consumption of high-fat foods (F2,571=4.74, P=.009) and higher levels of coping planning (F2,571=7.22, P<.001) than the control group (mean saturated fat 3.9, mean coping planning 2.8). Participants in the PTT condition also reported smaller portion sizes of high-fat foods (mean 2.8; F2,569=4.12, P=.0) than the control group (mean portions 3.1). The reduction in portion size was driven primarily by the male participants in the PTT condition (P=.003). We found no significant group differences in terms of percentage saturated fat intake, intentions, action planning, self-efficacy, or feedback on the intervention. Conclusions: These findings support the use of Web-based tools and mobile technologies to change dietary behavior. The combination of a fully automated Web-based planning tool with mobile text reminders led to lower self-reported consumption of high-fat foods and greater reductions in portion sizes than in a control condition. Trial Registration: International Standard Randomized Controlled Trial Number (ISRCTN): 61819220; http://www.controlled-trials.com/ISRCTN61819220 (Archived by WebCite at http://www.webcitation.org/63YiSy6R8)
The BiSciCol Triplifier: bringing biodiversity data to the Semantic Web.
Stucky, Brian J; Deck, John; Conlin, Tom; Ziemba, Lukasz; Cellinese, Nico; Guralnick, Robert
2014-07-29
Recent years have brought great progress in efforts to digitize the world's biodiversity data, but integrating data from many different providers, and across research domains, remains challenging. Semantic Web technologies have been widely recognized by biodiversity scientists for their potential to help solve this problem, yet these technologies have so far seen little use for biodiversity data. Such slow uptake has been due, in part, to the relative complexity of Semantic Web technologies along with a lack of domain-specific software tools to help non-experts publish their data to the Semantic Web. The BiSciCol Triplifier is new software that greatly simplifies the process of converting biodiversity data in standard, tabular formats, such as Darwin Core-Archives, into Semantic Web-ready Resource Description Framework (RDF) representations. The Triplifier uses a vocabulary based on the popular Darwin Core standard, includes both Web-based and command-line interfaces, and is fully open-source software. Unlike most other RDF conversion tools, the Triplifier does not require detailed familiarity with core Semantic Web technologies, and it is tailored to a widely popular biodiversity data format and vocabulary standard. As a result, the Triplifier can often fully automate the conversion of biodiversity data to RDF, thereby making the Semantic Web much more accessible to biodiversity scientists who might otherwise have relatively little knowledge of Semantic Web technologies. Easy availability of biodiversity data as RDF will allow researchers to combine data from disparate sources and analyze them with powerful linked data querying tools. However, before software like the Triplifier, and Semantic Web technologies in general, can reach their full potential for biodiversity science, the biodiversity informatics community must address several critical challenges, such as the widespread failure to use robust, globally unique identifiers for biodiversity data.
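A toy version of the tabular-to-RDF conversion the Triplifier automates, mapping a few Darwin Core columns onto triples with rdflib (the column names, file layout, and base URI here are hypothetical, not the tool's actual configuration):

```python
import csv
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

DWC = Namespace("http://rs.tdwg.org/dwc/terms/")
EX = Namespace("http://example.org/occurrence/")   # hypothetical base URI

def triplify(csv_path):
    """Convert a Darwin Core-style CSV into an RDF graph, one subject
    per occurrence row. A sketch of the idea, not the Triplifier itself."""
    g = Graph()
    g.bind("dwc", DWC)
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            subject = EX[row["occurrenceID"]]
            g.add((subject, RDF.type, DWC.Occurrence))
            g.add((subject, DWC.scientificName, Literal(row["scientificName"])))
            g.add((subject, DWC.eventDate, Literal(row["eventDate"])))
    return g

# print(triplify("occurrences.csv").serialize(format="turtle"))
```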
Automated Detection and Analysis of Interplanetary Shocks with Real-Time Application
NASA Astrophysics Data System (ADS)
Vorotnikov, V.; Smith, C. W.; Hu, Q.; Szabo, A.; Skoug, R. M.; Cohen, C. M.
2006-12-01
The ACE real-time data stream provides web-based nowcasting capabilities for solar wind conditions upstream of Earth. Our goal is to provide an automated code that finds and analyzes interplanetary shocks as they occur, for possible real-time application to space weather nowcasting. Shock analysis algorithms based on the Rankine-Hugoniot jump conditions exist and are in widespread use today for the interactive analysis of interplanetary shocks, yielding parameters such as shock speed, propagation direction, and shock strength in the form of compression ratios. Although these codes can be automated in a reasonable manner to yield solutions not far from those obtained by user-directed interactive analysis, event detection presents an added obstacle and is the first step in a fully automated analysis. We present a fully automated Rankine-Hugoniot analysis code that can scan the ACE science data, find shock candidates, analyze the events, obtain solutions in good agreement with those derived from interactive applications, and dismiss false-positive shock candidates on the basis of the conservation equations. The intent is to make this code available to NOAA for use in real-time space weather applications. The code has the added advantage of being able to scan spacecraft data sets to provide shock solutions for use outside real-time applications and can easily be applied to science-quality data sets from other missions. Use of the code for this purpose will also be explored.
2013-01-01
Background: Surrogate variable analysis (SVA) is a powerful method to identify, estimate, and utilize the components of gene expression heterogeneity due to unknown and/or unmeasured technical, genetic, environmental, or demographic factors. These sources of heterogeneity are common in gene expression studies, and failing to incorporate them into the analysis can obscure results. Using SVA increases the biological accuracy and reproducibility of gene expression studies by identifying these sources of heterogeneity and correctly accounting for them in the analysis. Results: Here we have developed a web application called SVAw (Surrogate Variable Analysis Web app) that provides a user-friendly interface for SVA analyses of genome-wide expression studies. The software has been developed based on the open-source Bioconductor SVA package. In our software, we have extended the SVA program's functionality in three respects: (i) SVAw performs a fully automated and user-friendly analysis workflow; (ii) it calculates probe/gene statistics for both pre- and post-SVA analysis and provides a table of results for the regression of gene expression on the primary variable of interest before and after correcting for surrogate variables; and (iii) it generates a comprehensive report file, including a graphical comparison of the outcome for the user. Conclusions: SVAw is a freely accessible web-based solution for the surrogate variable analysis of high-throughput datasets that facilitates removing unwanted and unknown sources of variation. It is freely available for use at http://psychiatry.igm.jhmi.edu/sva. The executable packages for both the web and standalone application and the instructions for installation can be downloaded from our web site.
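The before/after table SVAw produces amounts to regressing each gene's expression on the primary variable with and without the estimated surrogate variables as covariates. A self-contained sketch with a simulated hidden batch effect (this illustrates only the regression comparison, not the surrogate-variable estimation step itself):

```python
import numpy as np

def regress(expr, primary, surrogates=None):
    """OLS slope of one gene's expression on the primary variable,
    optionally adjusting for surrogate covariates."""
    cols = [np.ones_like(primary), primary]
    if surrogates is not None:
        cols.extend(np.atleast_2d(surrogates))   # one row per surrogate
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
    return beta[1]   # coefficient on the primary variable

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=100).astype(float)   # primary variable
batch = group + rng.normal(size=100)                 # hidden confounded factor
expr = 0.5 * group + 2.0 * batch + rng.normal(scale=0.1, size=100)
print(regress(expr, group))                      # biased (~2.5)
print(regress(expr, group, surrogates=batch))    # ~0.5 after adjustment
```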
EasyLCMS: an asynchronous web application for the automated quantification of LC-MS data
2012-01-01
Background: Downstream applications in metabolomics, as well as mathematical modelling, require data in a quantitative format, which may also necessitate the automated and simultaneous quantification of numerous metabolites. Although numerous applications have previously been developed for metabolomics data handling, automated calibration and calculation of the concentrations in terms of μmol have not been carried out. Moreover, most metabolomics applications are designed for GC-MS and would not be suitable for LC-MS, since in LC the deviation in the retention time is not linear, which these applications do not take into account. Furthermore, only a few are web-based applications, which could improve on stand-alone software in terms of compatibility, sharing capabilities, and hardware requirements, although a high-bandwidth connection is required. None of these incorporates asynchronous communication to allow real-time interaction with pre-processed results. Findings: Here, we present EasyLCMS (http://www.easylcms.es/), a new application for automated quantification, which was validated against manual operation using more than 1000 concentration comparisons in real samples. The results showed that only 1% of the quantifications presented a relative error higher than 15%. Using clustering analysis, the metabolites with the highest relative error distributions were identified and studied to resolve recurrent mistakes. Conclusions: EasyLCMS is a new web application designed to quantify numerous metabolites simultaneously, integrating LC distortions and asynchronous web technology to present a visual interface with dynamic interaction that allows checking and correction of LC-MS raw data pre-processing results. Moreover, quantified data obtained with EasyLCMS are fully compatible with numerous downstream applications, as well as with mathematical modelling in the systems biology field.
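Setting aside the retention-time correction, the core automated quantification step is an inverted linear calibration. A minimal sketch with hypothetical standards (the concentrations and peak areas are invented for illustration):

```python
import numpy as np

def calibrate_and_quantify(std_concs, std_areas, sample_areas):
    """Fit a linear calibration curve (area = a * conc + b) from standards,
    then invert it to express sample peak areas in concentration units."""
    a, b = np.polyfit(std_concs, std_areas, 1)
    return (np.asarray(sample_areas) - b) / a

# Hypothetical standards at 1, 5, 10, 50 umol with measured peak areas:
concs = calibrate_and_quantify([1, 5, 10, 50],
                               [1020, 5100, 9950, 50200],
                               [2550, 7600])
print(np.round(concs, 2))   # sample concentrations in umol
```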
Mehring, Michael; Haag, Max; Linde, Klaus; Wagenpfeil, Stefan; Schneider, Antonius
2014-09-24
Preliminary findings suggest that Web-based interventions may be effective in achieving significant smoking cessation. To date, very few findings are available for primary care patients, and especially for the involvement of general practitioners. Our goal was to examine the short-term effectiveness of a fully automated Web-based coaching program in combination with accompanying telephone counseling for smoking cessation in a primary care setting. The study was an unblinded cluster-randomized trial with an observation period of 12 weeks. Individuals recruited by general practitioners randomized to the intervention group participated in a Web-based coaching program based on education, motivation, exercise guidance, daily short message service (SMS) reminders, weekly feedback through the Internet, and active monitoring by general practitioners. All components of the program are fully automated. Participants in the control group received usual care and advice from their practitioner without the Web-based coaching program. The main outcome was the biochemically confirmed smoking status after 12 weeks. We recruited 168 participants (86 intervention group, 82 control group) into the study. For 51 participants from the intervention group and 70 participants from the control group, follow-up data were available both at baseline and 12 weeks. Very few patients from the intervention group (9.8%, 5/51) and the control group (8.6%, 6/70) successfully achieved smoking cessation (OR 0.86, 95% CI 0.25-3.0; P=.816). Similar results were found within the intent-to-treat analysis: 5.8% (5/86) of the intervention group and 7.3% (6/82) of the control group (OR 1.28, 95% CI 0.38-4.36; P=.694). The number of cigarettes smoked per day decreased on average by 9.3 in the intervention group and by 6.6 in the control group (mean difference 2.7; 95% CI -5.33 to -0.58; P=.045). After adjustment for the baseline value, age, gender, and height, this difference was no longer significant (mean difference 2.2; 95% CI -4.7 to 0.3; P=.080). This trial did not show the tested Web-based intervention to be effective for achieving smoking cessation compared to usual care. The limited statistical power and the high drop-out rate may have reduced the study's ability to detect significant differences between the groups. Further randomized controlled trials are needed in larger populations and to investigate the long-term outcome. German Register for Clinical Trials, registration number DRKS00003067; http://drks-neu.uniklinik-freiburg.de/drks_web/navigate.do?navigationId=trial.HTML&TRIAL_ID=DRKS00003067 (Archived by WebCite at http://www.webcitation.org/6Sff1YZpx).
An Automated Weather Research and Forecasting (WRF)-Based Nowcasting System: Software Description
Kirby, Stephen F.; Reen, Brian P.; Dumais, Robert E., Jr.
2013-10-01
A Web service/Web interface software package has been engineered to address the need for an automated means to run the Weather Research and Forecasting (WRF) model.
A digital atlas of breast histopathology: an application of web based virtual microscopy
Lundin, M; Lundin, J; Helin, H; Isola, J
2004-01-01
Aims: To develop an educationally useful atlas of breast histopathology, using advanced web based virtual microscopy technology. Methods: By using a robotic microscope and software adopted and modified from the aerial and satellite imaging industry, a virtual microscopy system was developed that allows fully automated slide scanning and image distribution via the internet. More than 150 slides were scanned at high resolution with an oil immersion ×40 objective (numerical aperture, 1.3) and archived on an image server residing in a high-speed university network. Results: A publicly available website was constructed, http://www.webmicroscope.net/breastatlas, which features a comprehensive virtual slide atlas of breast histopathology according to the World Health Organisation 2003 classification. Users can view any part of an entire specimen at any magnification within a standard web browser. The virtual slides are supplemented with concise textual descriptions, but can also be viewed without diagnostic information for self assessment of histopathology skills. Conclusions: Using the technology described here, it is feasible to develop clinically and educationally useful virtual microscopy applications. Web based virtual microscopy will probably become widely used at all levels in pathology teaching.
GlycoExtractor: a web-based interface for high throughput processing of HPLC-glycan data.
Artemenko, Natalia V; Campbell, Matthew P; Rudd, Pauline M
2010-04-05
Recently, an automated high-throughput HPLC platform has been developed that can be used to fully sequence and quantify low concentrations of N-linked sugars released from glycoproteins, supported by an experimental database (GlycoBase) and analytical tools (autoGU). However, commercial packages that support the operation of HPLC instruments and data storage lack platforms for the extraction of large volumes of data. The lack of resources and agreed formats in glycomics is now a major limiting factor that restricts the development of bioinformatic tools and automated workflows for high-throughput HPLC data analysis. GlycoExtractor is a web-based tool that interfaces with a commercial HPLC database/software solution to facilitate the extraction of large volumes of processed glycan profile data (peak number, peak areas, and glucose unit values). The tool allows the user to export a series of sample sets to a set of file formats (XML, JSON, and CSV) rather than a collection of disconnected files. This approach not only reduces the amount of manual refinement required to export data into a suitable format for data analysis but also opens the field to new approaches for high-throughput data interpretation and storage, including biomarker discovery and validation and monitoring of online bioprocessing conditions for next generation biotherapeutics.
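A sketch of the connected-export idea, writing one sample set to both JSON and CSV in a single pass (the field names below are illustrative, not GlycoExtractor's actual schema):

```python
import csv
import json

def export_sample_set(samples, basename):
    """Write one sample set to both JSON and CSV, mirroring the single
    connected export (rather than one file per sample) that the tool
    provides. `samples` maps sample IDs to (peak, area, GU) tuples."""
    with open(f"{basename}.json", "w") as fh:
        json.dump(samples, fh, indent=2)
    with open(f"{basename}.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["sample", "peak", "area", "glucose_units"])
        for sample_id, peaks in samples.items():
            for peak, area, gu in peaks:
                writer.writerow([sample_id, peak, area, gu])

export_sample_set({"S1": [(1, 1520.3, 5.2), (2, 980.1, 6.4)]}, "sample_set")
```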
Tait, Robert J; McKetin, Rebecca; Kay-Lambkin, Frances; Carron-Arthur, Bradley; Bennett, Anthony; Bennett, Kylie; Christensen, Helen; Griffiths, Kathleen M
2014-01-01
Among illicit drugs, the prevalence of amphetamine-type stimulant (ATS) use is second only to cannabis. Currently, there are no approved pharmacotherapies for ATS problems, but some face-to-face psychotherapies are effective. Web-based interventions have proven to be effective for some substance use problems, but none has specifically targeted ATS users. The objective of the study was to evaluate the effectiveness of a Web-based intervention for ATS problems on a free-to-access site compared with a waitlist control group. We used a randomized controlled trial design. The primary outcome measure was self-reported ATS use in the past three months assessed using the Alcohol, Smoking, Substance Involvement Screening Test (ASSIST). Other measures included quality of life (EUROHIS score), psychological distress (K-10 score), days out of role, poly-drug use, general help-seeking intentions, actual help-seeking, and "readiness to change". The intervention consisted of three fully automated, self-guided modules based on cognitive behavioral therapy and motivation enhancement. The analysis was an intention-to-treat analysis using generalized estimating equation models, with a group-by-time interaction as the critical assessment. We randomized 160 people (intervention n=81, control n=79). At three months, 35/81 (43%) intervention and 45/79 (57%) control participants provided follow-up data. In the intervention group, 51/81 (63%) completed at least one module. The only significant group-by-time interaction was for days out of role. The pre/post change effect sizes showed small changes (range d=0.14 to 0.40) favoring the intervention group for poly-drug use, distress, actual help-seeking, and days out of role. In contrast, the control group was favored by reductions in ATS use, improvements in quality of life, and increases in help-seeking intentions (range d=0.09 to 0.16). This Web-based intervention for ATS use produced few significant changes in outcome measures. There were moderate, but nonsignificant, reductions in poly-drug use, distress, days partially out of role, and increases in help-seeking. However, high levels of participant attrition, plus low levels of engagement with the modules, preclude firm conclusions being drawn on the efficacy of the intervention and emphasize the problems of engaging this group of clients in a fully automated program. Australian and New Zealand Clinical Trials Registry: ACTRN 12611000947909; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?ACTRN=12611000947909 (Archived by WebCite at http://www.webcitation.org/6SHTxEnzP).
Aardoom, Jiska J; Dingemans, Alexandra E; Spinhoven, Philip; van Ginkel, Joost R; de Rooij, Mark; van Furth, Eric F
2016-06-17
Despite the disabling nature of eating disorders (EDs), many individuals with ED symptoms do not receive appropriate mental health care. Internet-based interventions have potential to reduce the unmet needs by providing easily accessible health care services. This study aimed to investigate the effectiveness of an Internet-based intervention for individuals with ED symptoms, called "Featback." In addition, the added value of different intensities of therapist support was investigated. Participants (N=354) were aged 16 years or older with self-reported ED symptoms, including symptoms of anorexia nervosa, bulimia nervosa, and binge eating disorder. Participants were recruited via the website of Featback and the website of a Dutch pro-recovery-focused e-community for young women with ED problems. Participants were randomized to: (1) Featback, consisting of psychoeducation and a fully automated self-monitoring and feedback system, (2) Featback supplemented with low-intensity (weekly) digital therapist support, (3) Featback supplemented with high-intensity (3 times a week) digital therapist support, and (4) a waiting list control condition. Internet-administered self-report questionnaires were completed at baseline, post-intervention (ie, 8 weeks after baseline), and at 3- and 6-month follow-up. The primary outcome measure was ED psychopathology. Secondary outcome measures were symptoms of depression and anxiety, perseverative thinking, and ED-related quality of life. Statistical analyses were conducted according to an intent-to-treat approach using linear mixed models. The 3 Featback conditions were superior to a waiting list in reducing bulimic psychopathology (d=-0.16, 95% confidence interval (CI)=-0.31 to -0.01), symptoms of depression and anxiety (d=-0.28, 95% CI=-0.45 to -0.11), and perseverative thinking (d=-0.28, 95% CI=-0.45 to -0.11). No added value of therapist support was found in terms of symptom reduction although participants who received therapist support were significantly more satisfied with the intervention than those who did not receive supplemental therapist support. No significant differences between the Featback conditions supplemented with low- and high-intensity therapist support were found regarding the effectiveness and satisfaction with the intervention. The fully automated Internet-based self-monitoring and feedback intervention Featback was effective in reducing ED and comorbid psychopathology. Supplemental therapist support enhanced satisfaction with the intervention but did not increase its effectiveness. Automated interventions such as Featback can provide widely disseminable and easily accessible care. Such interventions could be incorporated within a stepped-care approach in the treatment of EDs and help to bridge the gap between mental disorders and mental health care services. Netherlands Trial Registry: NTR3646; http://www.trialregister.nl/trialreg/admin/ rctview.asp?TC=3646 (Archived by WebCite at http://www.webcitation.org/6fgHTGKHE).
Autonomous system for Web-based microarray image analysis.
Bozinov, Daniel
2003-12-01
Software-based feature extraction from DNA microarray images still requires human intervention on various levels. Manual adjustment of grid and metagrid parameters, precise alignment of superimposed grid templates and gene spots, or simply the identification of large-scale artifacts have to be performed beforehand to reliably analyze DNA signals and correctly quantify their expression values. Ideally, a Web-based system that takes a single microarray image as its only input and produces a data table of measurements for all gene spots as output would directly transform raw image data into abstracted gene expression tables. Sophisticated algorithms with advanced procedures for iterative correction can overcome the inherent challenges in image processing. Introduced herein is an integrated software system with a Java-based client interface that allows for decentralized access and enables the scientist to instantly employ the most up-to-date software version at any given time. This software tool extends PixClust, as used in Extractiff, and incorporates Java Web Start deployment technology. Ultimately, this setup is destined for high-throughput pipelines in genome-wide medical diagnostics labs or microarray core facilities that aim to provide a fully automated service to their users.
Arlt, Sönke; Buchert, Ralph; Spies, Lothar; Eichenlaub, Martin; Lehmbeck, Jan T; Jahn, Holger
2013-06-01
Fully automated magnetic resonance imaging (MRI)-based volumetry may serve as biomarker for the diagnosis in patients with mild cognitive impairment (MCI) or dementia. We aimed at investigating the relation between fully automated MRI-based volumetric measures and neuropsychological test performance in amnestic MCI and patients with mild dementia due to Alzheimer's disease (AD) in a cross-sectional and longitudinal study. In order to assess a possible prognostic value of fully automated MRI-based volumetry for future cognitive performance, the rate of change of neuropsychological test performance over time was also tested for its correlation with fully automated MRI-based volumetry at baseline. In 50 subjects, 18 with amnestic MCI, 21 with mild AD, and 11 controls, neuropsychological testing and T1-weighted MRI were performed at baseline and at a mean follow-up interval of 2.1 ± 0.5 years (n = 19). Fully automated MRI volumetry of the grey matter volume (GMV) was performed using a combined stereotactic normalisation and segmentation approach as provided by SPM8 and a set of pre-defined binary lobe masks. Left and right hippocampus masks were derived from probabilistic cytoarchitectonic maps. Volumes of the inner and outer liquor space were also determined automatically from the MRI. Pearson's test was used for the correlation analyses. Left hippocampal GMV was significantly correlated with performance in memory tasks, and left temporal GMV was related to performance in language tasks. Bilateral frontal, parietal and occipital GMVs were correlated to performance in neuropsychological tests comprising multiple domains. Rate of GMV change in the left hippocampus was correlated with decline of performance in the Boston Naming Test (BNT), Mini-Mental Status Examination, and trail making test B (TMT-B). The decrease of BNT and TMT-A performance over time correlated with the loss of grey matter in multiple brain regions. We conclude that fully automated MRI-based volumetry allows detection of regional grey matter volume loss that correlates with neuropsychological performance in patients with amnestic MCI or mild AD. Because of the high level of automation, MRI-based volumetry may easily be integrated into clinical routine to complement the current diagnostic procedure.
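As a minimal illustration of the correlation analyses reported, the snippet below computes a Pearson correlation between left-hippocampal grey matter volumes and memory scores; the values are invented, not the study's data.

```python
from scipy.stats import pearsonr

# Illustrative values only: left-hippocampal GMV (mL) and a memory score.
gmv    = [3.1, 2.4, 2.9, 2.0, 3.4, 2.2]
memory = [21, 14, 19, 11, 24, 13]

r, p = pearsonr(gmv, memory)
print(f"r = {r:.2f}, p = {p:.3f}")
```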
2017-01-01
Background Regardless of geography or income, effective help for depression and anxiety only reaches a small proportion of those who might benefit from it. The scale of the problem suggests a role for effective, safe, anonymized public health–driven Web-based services such as Big White Wall (BWW), which offer immediate peer support at low cost. Objective Using Reach, Effectiveness, Adoption, Implementation and Maintenance (RE-AIM) methodology, the aim of this study was to determine the population reach, effectiveness, cost-effectiveness, and barriers and drivers to implementation of BWW compared with Web-based information compiled by UK’s National Health Service (NHS, NHS Choices Moodzone) in people with probable mild to moderate depression and anxiety disorder. Methods A pragmatic, parallel-group, single-blind randomized controlled trial (RCT) is being conducted using a fully automated trial website in which eligible participants are randomized to receive either 6 months access to BWW or signposted to the NHS Moodzone site. The recruitment of 2200 people to the study will be facilitated by a public health engagement campaign involving general marketing and social media, primary care clinical champions, health care staff, large employers, and third sector groups. People will refer themselves to the study and will be eligible if they are older than 16 years, have probable mild to moderate depression or anxiety disorders, and have access to the Internet. Results The primary outcome will be the Warwick-Edinburgh Mental Well-Being Scale at 6 weeks. We will also explore the reach, maintenance, cost-effectiveness, and barriers and drivers to implementation and possible mechanisms of actions using a range of qualitative and quantitative methods. Conclusions This will be the first fully digital trial of a direct to public online peer support program for common mental disorders. The potential advantages of adding this to current NHS mental health services and the challenges of designing a public health campaign and RCT of two digital interventions using a fully automated digital enrollment and data collection process are considered for people with depression and anxiety. Trial Registration International Standard Randomized Controlled Trial Number (ISRCTN): 12673428; http://www.controlled-trials.com/ISRCTN12673428/12673428 (Archived by WebCite at http://www.webcitation.org/6uw6ZJk5a) PMID:29254909
Automated Microflow NMR: Routine Analysis of Five-Microliter Samples
Jansma, Ariane; Chuan, Tiffany; Geierstanger, Bernhard H.; Albrecht, Robert W.; Olson, Dean L.; Peck, Timothy L.
2006-01-01
A microflow CapNMR probe double-tuned for 1H and 13C was installed on a 400-MHz NMR spectrometer and interfaced to an automated liquid handler. Individual samples dissolved in DMSO-d6 are submitted for NMR analysis in vials containing as little as 10 μL of sample. Sets of samples are submitted in a low-volume 384-well plate. Of the 10 μL of sample per well, as with vials, 5 μL is injected into the microflow NMR probe for analysis. For quality control of chemical libraries, 1D NMR spectra are acquired under full automation from 384-well plates on as many as 130 compounds within 24 h using 128 scans per spectrum and a sample-to-sample cycle time of ∼11 min. Because of the low volume requirements and high mass sensitivity of the microflow NMR system, 30 nmol of a typical small molecule is sufficient to obtain high-quality, well-resolved, 1D proton or 2D COSY NMR spectra in ∼6 or 20 min of data acquisition time per experiment, respectively. Implementation of pulse programs with automated solvent peak identification and suppression allow for reliable data collection, even for samples submitted in fully protonated DMSO. The automated microflow NMR system is controlled and monitored using web-based software. PMID:16194121
Using Cloud Computing infrastructure with CloudBioLinux, CloudMan and Galaxy
Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James
2012-01-01
Cloud computing has revolutionized availability and access to computing and storage resources; making it possible to provision a large computational infrastructure with only a few clicks in a web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this protocol, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatics analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to setup the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command line interface, and the web-based Galaxy interface. PMID:22700313
Automated solar cell assembly team process research
NASA Astrophysics Data System (ADS)
Nowlan, M. J.; Hogan, S. J.; Darkazalli, G.; Breen, W. F.; Murach, J. M.; Sutherland, S. F.; Patterson, J. S.
1994-06-01
This report describes work done under the Photovoltaic Manufacturing Technology (PVMaT) project, Phase 3A, which addresses problems that are generic to the photovoltaic (PV) industry. Spire's objective during Phase 3A was to use its light soldering technology and experience to design and fabricate solar cell tabbing and interconnecting equipment to develop new, high-yield, high-throughput, fully automated processes for tabbing and interconnecting thin cells. Areas that were addressed include processing rates, process control, yield, throughput, material utilization efficiency, and increased use of automation. Spire teamed with Solec International, a PV module manufacturer, and the University of Massachusetts at Lowell's Center for Productivity Enhancement (CPE), automation specialists, who are lower-tier subcontractors. A number of other PV manufacturers, including Siemens Solar, Mobil Solar, Solar Web, and Texas Instruments, agreed to evaluate the processes developed under this program.
MannDB: A microbial annotation database for protein characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, C; Lam, M; Smith, J
2006-05-19
MannDB was created to meet a need for rapid, comprehensive automated protein sequence analyses to support selection of proteins suitable as targets for driving the development of reagents for pathogen or protein toxin detection. Because a large number of open-source tools were needed, it was necessary to produce a software system to scale the computations for whole-proteome analysis. Thus, we built a fully automated system for executing software tools and for storage, integration, and display of automated protein sequence analysis and annotation data. MannDB is a relational database that organizes data resulting from fully automated, high-throughput protein-sequence analyses using open-source tools. Types of analyses provided include predictions of cleavage, chemical properties, classification, features, functional assignment, post-translational modifications, motifs, antigenicity, and secondary structure. Proteomes (lists of hypothetical and known proteins) are downloaded and parsed from Genbank and then inserted into MannDB, and annotations from SwissProt are downloaded when identifiers are found in the Genbank entry or when identical sequences are identified. Currently 36 open-source tools are run against MannDB protein sequences either on local systems or by means of batch submission to external servers. In addition, BLAST against protein entries in MvirDB, our database of microbial virulence factors, is performed. A web client browser enables viewing of computational results and downloaded annotations, and a query tool enables structured and free-text search capabilities. When available, links to external databases, including MvirDB, are provided. MannDB contains whole-proteome analyses for at least one representative organism from each category of biological threat organism listed by APHIS, CDC, HHS, NIAID, USDA, USFDA, and WHO. MannDB comprises a large number of genomes and comprehensive protein sequence analyses representing organisms listed as high-priority agents on the websites of several governmental organizations concerned with bio-terrorism. MannDB provides the user with a BLAST interface for comparison of native and non-native sequences and a query tool for conveniently selecting proteins of interest. In addition, the user has access to a web-based browser that compiles comprehensive and extensive reports.
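A minimal relational sketch of this kind of organization, using an assumed two-table schema rather than MannDB's actual one, might look as follows in Python with SQLite: one table of proteins and one table of per-tool analysis results that can be joined for querying.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE protein  (id INTEGER PRIMARY KEY, accession TEXT, sequence TEXT);
CREATE TABLE analysis (protein_id INTEGER REFERENCES protein(id),
                       tool TEXT, result TEXT);
""")

# Hypothetical accession and tool output, for illustration only.
con.execute("INSERT INTO protein VALUES (1, 'YP_001', 'MKT...')")
con.execute("INSERT INTO analysis VALUES (1, 'SignalP', 'cleavage site 22/23')")

# Structured query joining proteins to their accumulated analysis results.
for row in con.execute("""SELECT p.accession, a.tool, a.result
                          FROM protein p
                          JOIN analysis a ON a.protein_id = p.id"""):
    print(row)
```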
Web Navigation Sequences Automation in Modern Websites
NASA Astrophysics Data System (ADS)
Montoto, Paula; Pan, Alberto; Raposo, Juan; Bellas, Fernando; López, Javier
Most of today's web sources are designed to be used by humans, but they do not provide suitable interfaces for software programs. That is why a growing interest has arisen in so-called web automation applications, which are widely used for purposes such as B2B integration, automated testing of web applications, or technology and business watch. Previous proposals assume models for generating and reproducing navigation sequences that cannot correctly deal with new websites built on technologies such as AJAX: on the one hand, existing systems only allow recording simple navigation actions; on the other hand, they are unable to detect the end of the effects caused by a user action. In this paper, we propose a set of new techniques to record and execute web navigation sequences able to deal with all the complexity present in AJAX-based websites. We also present an exhaustive evaluation of the proposed techniques that shows very promising results.
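One common heuristic for the second problem, detecting when the effects of a user action on an AJAX page have finished, is to poll until the DOM stops changing. The sketch below implements such a quiescence wait with Selenium; this is an assumed approach for illustration, not the authors' algorithm, and the element ID is hypothetical.

```python
import hashlib
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

def wait_for_dom_quiescence(driver, timeout=10.0, settle=0.5):
    """Return once the page source has been unchanged for `settle` seconds."""
    deadline = time.time() + timeout
    last = None
    stable_since = time.time()
    while time.time() < deadline:
        digest = hashlib.md5(driver.page_source.encode("utf-8")).hexdigest()
        if digest != last:
            last = digest                 # DOM changed: restart the clock
            stable_since = time.time()
        elif time.time() - stable_since >= settle:
            return                        # DOM has settled
        time.sleep(0.1)
    raise TimeoutError("DOM did not settle within timeout")

driver = webdriver.Firefox()
driver.get("https://example.com")
driver.find_element(By.ID, "load-more").click()  # hypothetical AJAX trigger
wait_for_dom_quiescence(driver)  # the effects of the click are now complete
```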
The Web-based Electronic Data Review (WebEDR) application performs automated data evaluation on ERLN electronic data deliverables (EDDs). It uses tests derived from the National Functional Guidelines, combined with method-defined limits, to evaluate data.
BAGEL4: a user-friendly web server to thoroughly mine RiPPs and bacteriocins.
van Heel, Auke J; de Jong, Anne; Song, Chunxu; Viel, Jakob H; Kok, Jan; Kuipers, Oscar P
2018-05-21
Interest in secondary metabolites such as RiPPs (ribosomally synthesized and posttranslationally modified peptides) is increasing worldwide. To facilitate the research in this field we have updated our mining web server. BAGEL4 is faster than its predecessor and is now fully independent from ORF-calling. Gene clusters of interest are discovered using the core-peptide database and/or through HMM motifs that are present in associated context genes. The databases used for mining have been updated and extended with literature references and links to UniProt and NCBI. Additionally, we have included automated promoter and terminator prediction and the option to upload RNA expression data, which can be displayed along with the identified clusters. Further improvements include the annotation of the context genes, which is now based on a fast blast against the prokaryote part of the UniRef90 database, and the improved web-BLAST feature that dynamically loads structural data such as internal cross-linking from UniProt. Overall BAGEL4 provides the user with more information through a user-friendly web-interface which simplifies data evaluation. BAGEL4 is freely accessible at http://bagel4.molgenrug.nl.
Kelders, Saskia M; Bohlmeijer, Ernst T; Van Gemert-Pijnen, Julia EWC
2013-01-01
Background Although Web-based interventions have been shown to be effective, they are not widely implemented in regular care. Nonadherence (ie, participants not following the intervention protocol) is an issue. By studying the way Web-based interventions are used and whether there are differences between adherers (ie, participants that started all 9 lessons) and nonadherers, more insight can be gained into the process of adherence. Objective The aims of this study were to (1) describe the characteristics of participants and investigate their relationship with adherence, (2) investigate the utilization of the different features of the intervention and possible differences between adherers and nonadherers, and (3) identify what use patterns emerge and whether there are differences between adherers and nonadherers. Methods Data were used from 206 participants that used the Web-based intervention Living to the full, a Web-based intervention for the prevention of depression employing both a fully automated and human-supported format. Demographic and baseline characteristics of participants were collected by using an online survey. Log data were collected within the Web-based intervention itself. Both quantitative and qualitative analyses were performed. Results In all, 118 participants fully adhered to the intervention (ie, started all 9 lessons). Participants with an ethnicity other than Dutch were more often adherers (χ²₁=5.5, P=.02), and nonadherers used the Internet more hours per day on average (F₁,₂₀₃=3.918, P=.049). A logistic regression showed that being female (OR 2.02, 95% CI 1.01-4.04; P=.046) and having a higher need for cognition (OR 1.02; 95% CI 1.00-1.05; P=.02) increased the odds of adhering to the intervention. Overall, participants logged in an average of 4 times per lesson, but adherers logged in significantly more times per lesson than nonadherers (F₁,₂₀₄=20.710; P<.001). For use patterns, we saw that early nonadherers seemed to use fewer sessions and spend less time than late nonadherers and adherers, and fewer sessions to complete the lesson than adherers. Furthermore, late nonadherers seemed to have a shorter total duration of sessions than adherers. Conclusions By using log data combined with baseline characteristics of participants, we extracted valuable lessons for redesign of this intervention and the design of Web-based interventions in general. First, although characteristics of respondents can significantly predict adherence, their predictive value is small. Second, it is important to design Web-based interventions to foster adherence and usage of all features in an intervention. Trial Registration Dutch Trial Register Number: NTR3007; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=3007 (Archived by WebCite at http://www.webcitation.org/6ILhI3rd8). PMID:23963284
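The reported logistic regression of adherence on gender and need for cognition can be sketched as follows; the data frame is an invented stand-in for the study's log and baseline variables, and column names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: adherence indicator, gender, and a need-for-cognition score.
df = pd.DataFrame({
    "adherer": [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "female":  [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "ncs":     [62, 40, 55, 70, 45, 58, 51, 38, 66, 42],
})

result = smf.logit("adherer ~ female + ncs", data=df).fit()
print(np.exp(result.params))  # coefficients expressed as odds ratios
```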
Parallel solution-phase synthesis of a 2-aminothiazole library including fully automated work-up.
Buchstaller, Hans-Peter; Anlauf, Uwe
2011-02-01
A straightforward and effective procedure for the solution-phase preparation of a 2-aminothiazole combinatorial library is described. Reaction, work-up, and isolation of the title compounds as free bases were accomplished in a fully automated fashion using the Chemspeed ASW 2000 automated synthesizer. The compounds were obtained in good yields and excellent purities without any further purification procedure.
Alley, Stephanie; Jennings, Cally; Plotnikoff, Ronald C; Vandelanotte, Corneel
2016-08-12
Web-based physical activity interventions that apply computer tailoring have shown to improve engagement and behavioral outcomes but provide limited accountability and social support for participants. It is unknown how video calls with a behavioral expert in a Web-based intervention will be received and whether they improve the effectiveness of computer-tailored advice. The purpose of this study was to determine the feasibility and effectiveness of brief video-based coaching in addition to fully automated computer-tailored advice in a Web-based physical activity intervention for inactive adults. Participants were assigned to one of the three groups: (1) tailoring + video-coaching where participants received an 8-week computer-tailored Web-based physical activity intervention ("My Activity Coach") including 4 10-minute coaching sessions with a behavioral expert using a Web-based video-calling program (eg, Skype; n=52); (2) tailoring-only where participants received the same intervention without the coaching sessions (n=54); and (3) a waitlist control group (n=45). Demographics were measured at baseline, intervention satisfaction at week 9, and physical activity at baseline, week 9, and 6 months by Web-based self-report surveys. Feasibility was analyzed by comparing intervention groups on retention, adherence, engagement, and satisfaction using t tests and chi-square tests. Effectiveness was assessed using linear mixed models to compare physical activity changes between groups. A total of 23 tailoring + video-coaching participants, 30 tailoring-only participants, and 30 control participants completed the postintervention survey (83/151, 55.0% retention). A low percentage of tailoring + video-coaching completers participated in the coaching calls (11/23, 48%). However, the majority of those who participated in the video calls were satisfied with them (5/8, 71%) and had improved intervention adherence (9/11, 82% completed 3 or 4 modules vs 18/42, 43%, P=.01) and engagement (110 minutes spent on the website vs 78 minutes, P=.02) compared with other participants. There were no overall retention, adherence, engagement, and satisfaction differences between tailoring + video-coaching and tailoring-only participants. At 9 weeks, physical activity increased from baseline to postintervention in all groups (tailoring + video-coaching: +150 minutes/week; tailoring only: +123 minutes/week; waitlist control: +34 minutes/week). The increase was significantly higher in the tailoring + video-coaching group compared with the control group (P=.01). No significant difference was found between intervention groups and no significant between-group differences were found for physical activity change at 6 months. Only small improvements were observed when video-coaching was added to computer-tailored advice in a Web-based physical activity intervention. However, combined Web-based video-coaching and computer-tailored advice was effective in comparison with a control group. More research is needed to determine whether Web-based coaching is more effective than stand-alone computer-tailored advice. Australian New Zealand Clinical Trials Registry (ACTRN): 12614000339651; http://www.anzctr.org.au/TrialSearch.aspx?searchTxt=ACTRN12614000339651+&isBasic=True (Archived by WebCite at http://www.webcitation.org/6jTnOv0Ld).
An Automated, High-Throughput System for GISAXS and GIWAXS Measurements of Thin Films
NASA Astrophysics Data System (ADS)
Schaible, Eric; Jimenez, Jessica; Church, Matthew; Lim, Eunhee; Stewart, Polite; Hexemer, Alexander
Grazing incidence small-angle X-ray scattering (GISAXS) and grazing incidence wide-angle X-ray scattering (GIWAXS) are important techniques for characterizing thin films. In order to meet rapidly increasing demand, the SAXS/WAXS beamline at the Advanced Light Source (beamline 7.3.3) has implemented a fully automated, high-throughput system to conduct SAXS, GISAXS and GIWAXS measurements. An automated robot arm transfers samples from a holding tray to a measurement stage. Intelligent software aligns each sample in turn and measures it according to user-defined specifications. Users mail in trays of samples on individually barcoded pucks, and can download and view their data remotely. Data will be pipelined to the NERSC supercomputing facility, and will be available to users via a web portal that facilitates highly parallelized analysis.
Boudreau, François; Walthouwer, Michel Jean Louis; de Vries, Hein; Dagenais, Gilles R; Turbide, Ginette; Bourlaud, Anne-Sophie; Moreau, Michel; Côté, José; Poirier, Paul
2015-10-09
The relationship between physical activity (PA) and cardiovascular disease (CVD) protection is well documented. Numerous factors (e.g., patient motivation, lack of facilities, physician time constraints) can contribute to poor PA adherence. Web-based computer-tailored interventions offer an innovative way to provide tailored feedback and to empower adults to engage in regular moderate- to vigorous-intensity PA. The aim of this paper is to describe the rationale, design and content of a web-based computer-tailored PA intervention for Canadian adults enrolled in a randomized controlled trial (RCT). A total of 244 men and women aged between 35 and 70 years, without CVD or physical disability, not participating in regular moderate- to vigorous-intensity PA, and familiar with and having access to a computer at home, were recruited from the Quebec City Prospective Urban and Rural Epidemiological (PURE) study centre. Participants were randomized into two study arms: 1) an experimental group receiving the intervention and 2) a waiting list control group. The fully automated web-based computer-tailored PA intervention consists of seven 10- to 15-min sessions over an 8-week period. The theoretical underpinning of the intervention is based on the I-Change Model. The aim of the intervention was to reach a total of 150 min per week of moderate- to vigorous-intensity aerobic PA. This study will provide useful information before engaging in a large RCT to assess the long-term participation and maintenance of PA, the potential impact of regular PA on CVD risk factors and the cost-effectiveness of a web-based computer-tailored intervention. ISRCTN36353353 registered on 24/07/2014.
Automated Planning and Scheduling for Planetary Rover Distributed Operations
NASA Technical Reports Server (NTRS)
Backes, Paul G.; Rabideau, Gregg; Tso, Kam S.; Chien, Steve
1999-01-01
Automated planning and scheduling, including automated path planning, has been integrated with an Internet-based distributed operations system for planetary rover operations. The resulting prototype system enables faster generation of valid rover command sequences by a distributed planetary rover operations team. The Web Interface for Telescience (WITS) provides Internet-based distributed collaboration, the Automated Scheduling and Planning Environment (ASPEN) provides automated planning and scheduling, and an automated path planner provides path planning. The system was demonstrated on the Rocky 7 research rover at JPL.
Automated Cryocooler Monitor and Control System Software
NASA Technical Reports Server (NTRS)
Britchcliffe, Michael J.; Conroy, Bruce L.; Anderson, Paul E.; Wilson, Ahmad
2011-01-01
This software is used in an automated cryogenic control system developed to monitor and control the operation of small-scale cryocoolers. The system was designed to automate the cryogenically cooled low-noise amplifier system described in "Automated Cryocooler Monitor and Control System" (NPO-47246), NASA Tech Briefs, Vol. 35, No. 5 (May 2011), page 7a. The software contains algorithms necessary to convert non-linear output voltages from the cryogenic diode-type thermometers and vacuum pressure and helium pressure sensors, to temperature and pressure units. The control function algorithms use the monitor data to control the cooler power, vacuum solenoid, vacuum pump, and electrical warm-up heaters. The control algorithms are based on a rule-based system that activates the required device based on the operating mode. The external interface is Web-based. It acts as a Web server, providing pages for monitor, control, and configuration. No client software from the external user is required.
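A minimal sketch of the two software functions described, sensor linearization via a calibration table and a rule-based control step keyed to the operating mode, might look as follows. The calibration points, thresholds, and mode names are placeholders, not the system's actual values.

```python
import numpy as np

# Calibration table for a cryogenic diode thermometer: forward voltage (V)
# versus temperature (K). Diode voltage rises as temperature falls, so the
# table is sorted by voltage for interpolation. Values are illustrative.
volts_cal = np.array([0.50, 0.70, 0.90, 1.10, 1.40, 1.70])
temps_cal = np.array([300.0, 200.0, 100.0, 60.0, 20.0, 4.2])

def diode_to_kelvin(v):
    """Convert a nonlinear diode voltage to temperature by interpolation."""
    return float(np.interp(v, volts_cal, temps_cal))

def control_step(temp_k, vacuum_torr, mode):
    """Rule-based control: choose actuator actions from the operating mode."""
    actions = []
    if mode == "cooldown":
        if vacuum_torr > 1e-4:
            actions.append("open vacuum solenoid / run vacuum pump")
        if temp_k > 15.0:
            actions.append("cooler power on")
    elif mode == "warmup":
        actions.append("cooler power off")
        if temp_k < 280.0:
            actions.append("warm-up heaters on")
    return actions

print(diode_to_kelvin(1.05), control_step(80.0, 5e-4, "cooldown"))
```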
Automated geospatial Web Services composition based on geodata quality requirements
NASA Astrophysics Data System (ADS)
Cruz, Sérgio A. B.; Monteiro, Antonio M. V.; Santos, Rafael
2012-10-01
Service-Oriented Architecture and Web Services technologies improve the performance of activities involved in geospatial analysis with a distributed computing architecture. However, the design of the geospatial analysis process on this platform, by combining component Web Services, presents some open issues. The automated construction of these compositions represents an important research topic. Some approaches to solving this problem are based on AI planning methods coupled with semantic service descriptions. This work presents a new approach using AI planning methods to improve the robustness of the produced geospatial Web Services composition. For this purpose, we use semantic descriptions of geospatial data quality requirements in a rule-based form. These rules allow the semantic annotation of geospatial data and, coupled with the conditional planning method, this approach represents more precisely the situations of nonconformities with geodata quality that may occur during the execution of the Web Service composition. The service compositions produced by this method are more robust, thus improving process reliability when working with a composition of chained geospatial Web Services.
Environmental Response Laboratory Network (ERLN) WebEDR Quick Reference Guide
The Web Electronic Data Review (WebEDR) is a web-based system that performs automated data processing on laboratory-submitted Electronic Data Deliverables (EDDs). It enables users to perform technical audits on data and to evaluate data against Measurement Quality Objectives (MQOs).
Koziol-McLain, Jane; McLean, Christine; Rohan, Maheswaran; Sisk, Rose; Dobbs, Terry; Nada-Raja, Shyamala; Wilson, Denise; Vandal, Alain C
2016-01-01
Background Automated eHealth Web-based research trials offer people an accessible, confidential opportunity to engage in research that matters to them. eHealth trials may be particularly useful for sensitive issues when seeking health care may be accompanied by shame and mistrust. Yet little is known about people’s early engagement with eHealth trials, from recruitment to preintervention autoregistration processes. A recent randomized controlled trial that tested the effectiveness of an eHealth safety decision aid for New Zealand women in the general population who experienced intimate partner violence (isafe) provided the opportunity to examine recruitment and preintervention participant engagement with a fully automated Web-based registration process. The trial aimed to recruit 340 women within 24 months. Objective The objective of our study was to examine participant preintervention engagement and recruitment efficiency for the isafe trial, and to analyze dropout through the registration pathway, from recruitment to eligibility screening and consent, to completion of baseline measures. Methods In this case study, data collection sources included the trial recruitment log, Google Analytics reports, registration and program metadata, and costs. Analysis included a qualitative narrative of the recruitment experience and descriptive statistics of preintervention participant engagement and dropout rates. A Koyck model investigated the relationship between Web-based online marketing website advertisements (ads) and participant accrual. Results The isafe trial was launched on September 17, 2012. Placement of ads in an online classified advertising platform increased the average number of recruited participants per month from 2 to 25. Over the 23-month recruitment period, the registration website recorded 4176 unique visitors. Among 1003 women meeting eligibility criteria, 51.55% (517) consented to participate; among the 501 women who enrolled (consented, validated, and randomized), 412 (82.2%) were accrued (completed baseline assessments). The majority (n=52, 58%) of the 89 women who dropped out between enrollment and accrual never logged in to the allocated isafe website. Of every 4 accrued women, 3 (314/412, 76.2%) identified the classified ad as their referral source, followed by friends and family (52/412, 12.6%). Women recruited through a friend or relative were more likely to self-identify as indigenous Māori and live in the highest-deprivation areas. Ads increased the accrual rate by a factor of 74 (95% CI 49–112). Conclusions Print advertisements, website links, and networking were costly and inefficient methods for recruiting participants to a Web-based eHealth trial. Researchers are advised to limit their recruitment efforts to Web-based online marketplace and classified advertising platforms, as in the isafe case, or to social media. Online classified advertising in “Jobs–Other–volunteers” successfully recruited a diverse sample of women experiencing intimate partner violence. Preintervention recruitment data provide critical information to inform future research and critical analysis of Web-based eHealth trials. Trial Registration Australian New Zealand Clinical Trials Registry (ANZCTR): ACTRN12612000708853; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?ACTRN=12612000708853 (Archived by WebCite at http://www.webcitation.org/6lMGuVXdK) PMID:27780796
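A Koyck (geometric distributed-lag) model of the kind used here can be estimated by regressing current accrual on current advertising and lagged accrual, so that an ad's effect decays geometrically over time. The sketch below uses invented weekly counts, not the trial's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented weekly series: ad placement indicator and participants accrued.
df = pd.DataFrame({
    "ads":     [0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1],
    "accrual": [1, 0, 5, 8, 7, 3, 6, 9, 4, 7, 8, 9],
})
df["accrual_lag"] = df["accrual"].shift(1)  # Koyck transformation term

res = smf.ols("accrual ~ ads + accrual_lag", data=df.dropna()).fit()
print(res.params)  # short-run ad effect and the geometric decay coefficient
```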
Lancee, Jaap; Griffioen-Both, Fiemke; Spruit, Sandor; Fitrianie, Siska; Neerincx, Mark A; Beun, Robbert Jan; Brinkman, Willem-Paul
2017-01-01
Background This study is one of the first randomized controlled trials investigating cognitive behavioral therapy for insomnia (CBT-I) delivered by a fully automated mobile phone app. Such an app can potentially increase the accessibility of insomnia treatment for the 10% of people who have insomnia. Objective The objective of our study was to investigate the efficacy of CBT-I delivered via the Sleepcare mobile phone app, compared with a waitlist control group, in a randomized controlled trial. Methods We recruited participants in the Netherlands with relatively mild insomnia disorder. After answering an online pretest questionnaire, they were randomly assigned to the app (n=74) or the waitlist condition (n=77). The app included a sleep diary, a relaxation exercise, a sleep restriction exercise, and sleep hygiene and education. The app was fully automated and adjusted itself to a participant’s progress. Program duration was 6 to 7 weeks, after which participants received posttest measurements and a 3-month follow-up. The participants in the waitlist condition received the app after they completed the posttest questionnaire. The measurements consisted of questionnaires and 7-day online diaries. The questionnaires measured insomnia severity, dysfunctional beliefs about sleep, and anxiety and depression symptoms. The diary measured sleep variables such as sleep efficiency. We performed multilevel analyses to study the interaction effects between time and condition. Results The results showed significant interaction effects (P<.01) favoring the app condition on the primary outcome measures of insomnia severity (d=–0.66) and sleep efficiency (d=0.71). Overall, these improvements were also retained in a 3-month follow-up. Conclusions This study demonstrated the efficacy of a fully automated mobile phone app in the treatment of relatively mild insomnia. The effects were in the range of what is found for Web-based treatment in general. This supports the applicability of such technical tools in the treatment of insomnia. Future work should examine the generalizability to a more diverse population. Furthermore, the separate components of such an app should be investigated. It remains to be seen how this app can best be integrated into the current health regimens. Trial Registration Netherlands Trial Register: NTR5560; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=5560 (Archived by WebCite at http://www.webcitation.org/6noLaUdJ4) PMID:28400355
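Sleep efficiency, the diary-derived outcome mentioned above, is conventionally the percentage of time in bed spent asleep. A small sketch with assumed diary fields (bedtime, rise time, and minutes awake) follows; the convention is standard, but the field names are not taken from this app.

```python
from datetime import datetime

def sleep_efficiency(lights_off, got_up, minutes_awake):
    """Sleep efficiency = time asleep / time in bed, as a percentage."""
    fmt = "%H:%M"
    in_bed = datetime.strptime(got_up, fmt) - datetime.strptime(lights_off, fmt)
    in_bed_min = in_bed.seconds // 60  # timedelta normalization wraps midnight
    asleep_min = in_bed_min - minutes_awake
    return 100.0 * asleep_min / in_bed_min

print(sleep_efficiency("23:30", "07:00", 60))  # 450 min in bed, 390 asleep: ~86.7
```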
Budin, Francois; Hoogstoel, Marion; Reynolds, Patrick; Grauer, Michael; O'Leary-Moore, Shonagh K; Oguz, Ipek
2013-01-01
Magnetic resonance imaging (MRI) of rodent brains enables study of the development and the integrity of the brain under certain conditions (alcohol, drugs etc.). However, these images are difficult to analyze for biomedical researchers with limited image processing experience. In this paper we present an image processing pipeline running on a Midas server, a web-based data storage system. It is composed of the following steps: rigid registration, skull-stripping, average computation, average parcellation, parcellation propagation to individual subjects, and computation of region-based statistics on each image. The pipeline is easy to configure and requires very little image processing knowledge. We present results obtained by processing a data set using this pipeline and demonstrate how this pipeline can be used to find differences between populations.
Semantic enrichment of medical forms - semi-automated coding of ODM-elements via web services.
Breil, Bernhard; Watermann, Andreas; Haas, Peter; Dziuballe, Philipp; Dugas, Martin
2012-01-01
Semantic interoperability is an unsolved problem that arises when working with medical forms from different information systems or institutions. Standards like ODM or CDA assure structural homogenization, but in order to compare elements from different data models it is necessary to use semantic concepts and codes at the item level of those structures. We developed and implemented a web-based tool that enables a domain expert to perform semi-automated coding of ODM files. For each item it is possible to query web services that return unique concept codes without leaving the context of the document. Although fully automated coding was not feasible, we implemented a dialog-based method to perform efficient coding of all data elements in the context of the whole document. The proportion of codable items was comparable to results from previous studies.
Open source database of images DEIMOS: extension for large-scale subjective image quality assessment
NASA Astrophysics Data System (ADS)
Vítek, Stanislav
2014-09-01
DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for the testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that enables large-scale web-based subjective image quality assessment. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices and takes advantage of HTML5 technology, meaning that participants do not need to install any application and assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g., a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g., illumination, vibrations, GPS coordinates) may be collected and subsequently analyzed.
Massouras, Andreas; Decouttere, Frederik; Hens, Korneel; Deplancke, Bart
2010-07-01
High-throughput sequencing (HTS) is revolutionizing our ability to obtain cheap, fast and reliable sequence information. Many experimental approaches are expected to benefit from the incorporation of such sequencing features in their pipeline. Consequently, software tools that facilitate such an incorporation should be of great interest. In this context, we developed WebPrInSeS, a web server tool allowing automated full-length clone sequence identification and verification using HTS data. WebPrInSeS encompasses two separate software applications. The first is WebPrInSeS-C which performs automated sequence verification of user-defined open-reading frame (ORF) clone libraries. The second is WebPrInSeS-E, which identifies positive hits in cDNA or ORF-based library screening experiments such as yeast one- or two-hybrid assays. Both tools perform de novo assembly using HTS data from any of the three major sequencing platforms. Thus, WebPrInSeS provides a highly integrated, cost-effective and efficient way to sequence-verify or identify clones of interest. WebPrInSeS is available at http://webprinses.epfl.ch/ and is open to all users.
Alcohol cognitive bias modification training for problem drinkers over the web.
Wiers, Reinout W; Houben, Katrijn; Fadardi, Javad S; van Beek, Paul; Rhemtulla, Mijke; Cox, W Miles
2015-01-01
Following successful outcomes of cognitive bias modification (CBM) programs for alcoholism in clinical and community samples, the present study investigated whether different varieties of CBM (attention control training and approach-bias re-training) could be delivered successfully in a fully automated web-based way and whether these interventions would help self-selected problem drinkers to reduce their drinking. Participants were recruited through online advertising, which resulted in 697 interested participants, of whom 615 were screened in. Of the 314 who initiated training, 136 completed a pretest, four sessions of computerized training and a posttest. Participants were randomly assigned to one of four experimental conditions (attention control or one of three varieties of approach-bias re-training) or a sham-training control condition. The general pattern of findings was that participants in all conditions (including participants in the control-training condition) reduced their drinking. It is suggested that integrating CBM with online cognitive and motivational interventions could improve results. Copyright © 2014 Elsevier Ltd. All rights reserved.
An ant colony optimization based feature selection for web page classification.
Saraç, Esra; Özel, Selma Ayşe
2014-01-01
The increased popularity of the web has caused a huge amount of information to be added to it, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text contents, that should be considered during an automated classification process. The aim of this study is to reduce the number of features used in order to improve the runtime and accuracy of web page classification. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using ACO for feature selection improves both the accuracy and the runtime performance of classification. We also showed that the proposed ACO-based algorithm can select better features than the well-known information gain and chi-square feature selection methods.
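A minimal sketch of ACO-style wrapper feature selection, in the spirit of (but not identical to) the study's algorithm: each ant samples a feature subset with probabilities driven by per-feature pheromone, subsets are scored with a k-nearest-neighbor wrapper, and pheromone evaporates and is reinforced on the best subset. The dataset and parameters are illustrative assumptions.

```python
# Minimal ACO feature-selection sketch (elitist variant, illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=0)
n_features = X.shape[1]
pheromone = np.ones(n_features)           # one pheromone trail per feature
evaporation, n_ants, n_iters = 0.1, 10, 15

def score(mask):
    """Wrapper fitness: cross-validated kNN accuracy on the feature subset."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

best_mask, best_score = np.zeros(n_features, dtype=bool), -1.0
for _ in range(n_iters):
    p = pheromone / pheromone.sum()       # selection probability per feature
    for _ in range(n_ants):
        # each ant includes feature i with probability rising with pheromone
        mask = rng.random(n_features) < (p * n_features * 0.4)
        s = score(mask)
        if s > best_score:
            best_mask, best_score = mask.copy(), s
    pheromone *= (1.0 - evaporation)      # evaporation
    pheromone[best_mask] += best_score    # reinforce the best subset found

print(f"selected {best_mask.sum()} features, CV accuracy ~ {best_score:.3f}")
```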
Anesthesiology, automation, and artificial intelligence.
Alexander, John C; Joshi, Girish P
2018-01-01
There have been many attempts to incorporate automation into the practice of anesthesiology, though none have been successful. Fundamentally, these failures are due to the underlying complexity of anesthesia practice and the inability of rule-based feedback loops to fully master it. Recent innovations in artificial intelligence, especially machine learning, may usher in a new era of automation across many industries, including anesthesiology. It would be wise to consider the implications of such potential changes before they have been fully realized.
Kayser, John William; Cossette, Sylvie; Côté, José; Bourbonnais, Anne; Purden, Margaret; Juneau, Martin; Tanguay, Jean-Francois; Simard, Marie-Josée; Dupuis, Jocelyn; Diodati, Jean G; Tremblay, Jean-Francois; Maheu-Cadotte, Marc-André; Cournoyer, Daniel
2017-04-27
Despite the health benefits of increasing physical activity in the secondary prevention of acute coronary syndrome (ACS), up to 60% of ACS patients are insufficiently active. Evidence supporting the effect of Web-based interventions on increasing physical activity outcomes in ACS patients is growing. However, randomized controlled trials (RCTs) using Web-based technologies that measured objective physical activity outcomes are sparse. Our aim is to evaluate, in insufficiently active ACS patients, the effect of a fully automated, Web-based tailored nursing intervention (TAVIE en m@rche) on increasing steps per day. A parallel two-group multicenter RCT (target N=148) is being conducted in four major teaching hospitals in Montréal, Canada. An experimental group receiving the 4-week TAVIE en m@rche intervention plus a brief "booster" at 8 weeks is compared with the control group receiving hyperlinks to publicly available websites. TAVIE en m@rche is based on the Strengths-Based Nursing Care orientation to nursing practice and the Self-Determination Theory of human motivation. The intervention is centered on videos of a nurse who delivers the content tailored to baseline levels of self-reported autonomous motivation, perceived competence, and walking behavior. Participants are recruited in hospital and are eligible if they report access to a computer and report less than recommended physical activity levels in the 6 months before hospitalization. Most outcome data are collected online at baseline, and 5 and 12 weeks postrandomization. The primary outcome is the change in accelerometer-measured steps per day between randomization and 12 weeks. The secondary outcomes include the change in steps per day between randomization and 5 weeks, and the change in self-reported energy expenditure for walking and moderate to vigorous physical activity between randomization, and 5 and 12 weeks. Theoretical outcomes are the mediating role of self-reported perceived autonomy support, autonomous and controlled motivations, perceived competence, and barrier self-efficacy on steps per day. Clinical outcomes are quality of life, smoking, medication adherence, secondary prevention program attendance, health care utilization, and angina frequency. The potential moderating role of sex will also be explored. Analysis of covariance models will be used with covariates such as sex, age, fatigue, and depression symptoms. The allocation sequence is concealed, and blinding will be implemented during data analysis. Recruitment started March 30, 2016. Data analysis is planned for November 2017. Finding alternative interventions aimed at increasing the adoption of health behavior changes such as physical activity in the secondary prevention of ACS is clearly needed. Our RCT is expected to help support the potential efficacy of a fully automated, Web-based tailored nursing intervention on the objective outcome of steps per day in an ACS population. If this RCT is successful, and after its implementation as part of usual care, TAVIE en m@rche could help improve the health of ACS patients at large. ClinicalTrials.gov NCT02617641; https://clinicaltrials.gov/ct2/show/NCT02617641 (Archived by WebCite at http://www.webcitation.org/6pNNGndRa). ©John William Kayser, Sylvie Cossette, José Côté, Anne Bourbonnais, Margaret Purden, Martin Juneau, Jean-Francois Tanguay, Marie-Josée Simard, Jocelyn Dupuis, Jean G Diodati, Jean-Francois Tremblay, Marc-André Maheu-Cadotte, Daniel Cournoyer.
Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 27.04.2017.
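As an illustration of the analysis-of-covariance model the protocol describes, the sketch below regresses 12-week steps per day on group while adjusting for baseline steps, sex, and age. The column names and simulated data are hypothetical; this is not the trial's analysis code.

```python
# Illustrative ANCOVA-style sketch with simulated data (not the trial's code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 148
df = pd.DataFrame({
    "steps_12wk": rng.normal(6500, 1500, n),   # hypothetical outcome
    "steps_base": rng.normal(5500, 1500, n),   # baseline covariate
    "group": rng.choice(["intervention", "control"], n),
    "sex": rng.choice(["F", "M"], n),
    "age": rng.normal(60, 9, n),
})
model = smf.ols("steps_12wk ~ steps_base + C(group) + C(sex) + age", data=df).fit()
print(model.summary().tables[1])
```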
Cardiac imaging: working towards fully-automated machine analysis & interpretation.
Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido
2017-03-01
Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.
Twelve automated thresholding methods for segmentation of PET images: a phantom study.
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M
2012-06-21
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering, or non-destructive testing of images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to the usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on a clinical PET/CT and on a small-animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
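One of the twelve methods, the Ridler (isodata) clustering threshold, is simple enough to sketch: iterate the threshold to the midpoint of the mean intensities above and below it until convergence. The synthetic "uptake" values below stand in for a PET image; they are illustrative, not study data.

```python
# Minimal sketch of Ridler/isodata automated thresholding.
import numpy as np

def ridler_threshold(img, tol=1e-3, max_iter=100):
    t = img.mean()                          # initial guess
    for _ in range(max_iter):
        low, high = img[img <= t], img[img > t]
        new_t = 0.5 * (low.mean() + high.mean())
        if abs(new_t - t) < tol:
            break
        t = new_t
    return t

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(1.0, 0.2, 5000),    # background voxels
                      rng.normal(4.0, 0.5, 500)])    # hot "sphere" voxels
t = ridler_threshold(img)
print(f"threshold = {t:.2f}, segmented voxels = {(img > t).sum()}")
```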
The Role of Faculty in the Effectiveness of Fully Online Programs
ERIC Educational Resources Information Center
Al-Salman, Sami M.
2013-01-01
The enormous growth of online learning creates the need to develop a set of standards and guidelines for fully online programs. While many guidelines do exist, web-based programs still fall short in the recognition, adoption, or the implementation of these standards. One consequence is the high attrition rates associated with web-based distance…
An ODE-Based Wall Model for Turbulent Flow Simulations
NASA Technical Reports Server (NTRS)
Berger, Marsha J.; Aftosmis, Michael J.
2017-01-01
Fully automated meshing for Reynolds-averaged Navier-Stokes (RANS) simulations: mesh generation for complex geometry continues to be the biggest bottleneck in the RANS simulation process. Fully automated Cartesian methods are routinely used for inviscid simulations about arbitrarily complex geometry, but these methods lack an obvious and robust way to achieve near-wall anisotropy. The goal is to extend these methods to RANS simulation without sacrificing automation, at an affordable cost. Note that nothing here is limited to Cartesian methods, and much becomes simpler in a body-fitted setting.
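To make the ODE wall-model idea concrete, here is a sketch of a generic equilibrium mixing-length formulation (an assumption for illustration, not necessarily the authors' exact equations): integrate du/dy = u_tau^2 / (nu + nu_t) from the wall outward, with an algebraic eddy viscosity damped near the wall.

```python
# Generic equilibrium ODE wall-model sketch (illustrative formulation).
import numpy as np

kappa, A_plus = 0.41, 17.0        # von Karman constant, van Driest constant
nu, u_tau = 1.5e-5, 0.5           # illustrative air viscosity, friction velocity

def du_dy(y):
    y_plus = y * u_tau / nu
    D = (1.0 - np.exp(-y_plus / A_plus)) ** 2      # van Driest damping
    nu_t = kappa * y * u_tau * D                   # algebraic eddy viscosity
    return u_tau**2 / (nu + nu_t)

# simple explicit integration from the wall to y = 0.01 m
ys = np.linspace(1e-7, 0.01, 20000)
u = np.cumsum(du_dy(ys)) * (ys[1] - ys[0])
print(f"u(y=0.01 m) ~ {u[-1]:.2f} m/s, u+ ~ {u[-1]/u_tau:.1f}")
```

The recovered velocity follows the expected log-law behavior away from the wall, which is what makes such one-dimensional ODE models attractive as a near-wall closure.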
Mining a Web Citation Database for Author Co-Citation Analysis.
ERIC Educational Resources Information Center
He, Yulan; Hui, Siu Cheung
2002-01-01
Proposes a mining process to automate author co-citation analysis based on the Web Citation Database, a data warehouse for storing citation indices of Web publications. Describes the use of agglomerative hierarchical clustering for author clustering and multidimensional scaling for displaying author cluster maps, and explains PubSearch, a…
21 CFR 866.1645 - Fully automated short-term incubation cycle antimicrobial susceptibility system.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 21, Food and Drugs; Diagnostic Devices; § 866.1645 Fully automated short-term incubation cycle antimicrobial susceptibility system. (a) Identification. A fully automated short-term incubation cycle antimicrobial susceptibility system...
Vrooman, Henri A; Cocosco, Chris A; van der Lijn, Fedde; Stokking, Rik; Ikram, M Arfan; Vernooij, Meike W; Breteler, Monique M B; Niessen, Wiro J
2007-08-01
Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue in MR data, requires training on manually labeled subjects. This manual labeling is a laborious and time-consuming procedure. In this work, a new fully automated brain tissue classification procedure is presented, in which kNN training is automated. This is achieved by non-rigidly registering the MR data with a tissue probability atlas to automatically select training samples, followed by a post-processing step to keep the most reliable samples. The accuracy of the new method was compared to rigid registration-based training and to conventional kNN-based segmentation using training on manually labeled subjects for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in 12 data sets. Furthermore, for all classification methods, the performance was assessed when varying the free parameters. Finally, the robustness of the fully automated procedure was evaluated on 59 subjects. The automated training method using non-rigid registration with a tissue probability atlas was significantly more accurate than rigid registration. For both automated training using non-rigid registration and for the manually trained kNN classifier, the difference with the manual labeling by observers was not significantly larger than inter-observer variability for all tissue types. From the robustness study, it was clear that, given an appropriate brain atlas and optimal parameters, our new fully automated, non-rigid registration-based method gives accurate and robust segmentation results. A similarity index was used for comparison with manually trained kNN. The similarity indices were 0.93, 0.92 and 0.92, for CSF, GM and WM, respectively. It can be concluded that our fully automated method using non-rigid registration may replace manual segmentation, and thus that automated brain tissue segmentation without laborious manual training is feasible.
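The central idea, selecting kNN training samples where a registered tissue-probability atlas is confident instead of from manual labels, can be sketched as follows; the synthetic arrays stand in for registered MR intensities and atlas probabilities, and the confidence threshold is an illustrative assumption.

```python
# Sketch of atlas-driven kNN training sample selection (illustrative).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_vox = 3000
true_tissue = rng.integers(0, 3, n_vox)              # 0=CSF, 1=GM, 2=WM
means = np.array([50.0, 110.0, 160.0])               # per-tissue T1 intensity
intensity = rng.normal(means[true_tissue], 12.0)     # simulated MR voxels

# "atlas probability": simulate a registered atlas by adding noise to a
# one-hot encoding of the true labels
atlas_p = np.eye(3)[true_tissue] + rng.normal(0, 0.15, (n_vox, 3))
confident = atlas_p.max(axis=1) > 0.95               # keep reliable samples only
atlas_label = atlas_p.argmax(axis=1)

knn = KNeighborsClassifier(n_neighbors=15)
knn.fit(intensity[confident].reshape(-1, 1), atlas_label[confident])
pred = knn.predict(intensity.reshape(-1, 1))
print(f"agreement with ground truth: {(pred == true_tissue).mean():.3f}")
```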
Brasil, Lourdes M; Gomes, Marília M F; Miosso, Cristiano J; da Silva, Marlete M; Amvame-Nze, Georges D
2015-07-16
Dengue fever is endemic in Asia, the Americas, the East of the Mediterranean and the Western Pacific. According to the World Health Organization, it is one of the diseases of greatest impact on health, affecting millions of people each year worldwide. Fast detection of increases in populations of the transmitting vector, the Aedes aegypti mosquito, is essential to avoid dengue outbreaks. Unfortunately, in several countries, such as Brazil, the current methods for detecting population changes and disseminating this information are too slow to allow efficient allocation of resources to fight outbreaks. To reduce the delay in providing information on A. aegypti population changes, we propose, develop, and evaluate a system for counting the eggs found in special traps and for providing the collected data through a web structure with geographical location resources. One of the most useful tools for the detection and surveillance of arthropods is the ovitrap, a special trap built to collect mosquito eggs. This enables an egg counting process, which is still usually performed manually in countries such as Brazil. We implement and evaluate a novel system for automatically counting the eggs found on the ovitraps' cardboards. The proposed system is based on digital image processing (DIP) techniques, as well as a Web-based Semi-Automatic Counting System (SCSA-WEB). All collected data are geographically referenced in a geographic information system (GIS) and made available on a Web platform. The work was developed in Gama's administrative region, in Brasília/Brazil, with the aid of the Environmental Surveillance Directory (DIVAL-Gama) and Brasília's Board of Health (SSDF), in partnership with the University of Brasília (UnB). The system was built based on a three-month field survey carried out by health professionals, who provided 84 cardboards from 84 ovitraps, sized 15 × 5 cm. In developing the system, we conducted the following steps: i. Obtain microscope images of the eggs on an ovitrap's cardboards. ii. Apply the proposed image-processing-based semi-automatic counting system. The system uses the Java programming language and JavaServer Faces, a framework for web application development; this approach allows simple migration to any operating system platform and future applications on mobile devices. iii. Collect and store all data in a database (DB) and then georeference them in a GIS. The database management system used to develop the DB is based on PostgreSQL. The GIS assists in the visualization and spatial analysis of digital maps, allowing dengue outbreaks in the study region to be located; this also facilitates the planning, analysis, and evaluation of temporal and spatial epidemiology, as required by the Brazilian Health Care Control Center. iv. Deploy the SCSA-WEB, DB, and GIS on a single Web platform. The statistical results obtained by DIP were satisfactory when compared with the SCSA-WEB's semi-automated egg counts. The results also indicate that the time spent on manual counting is considerably reduced when using our fully automated DIP algorithm and the semi-automated SCSA-WEB. The developed georeferencing Web platform promises to be of great support for future visualization with statistical and trace analysis of the disease.
The analyses suggest the efficiency of our algorithm for automatic egg counting in terms of expediting the work of the laboratory technician, considerably reducing counting time and error rates. We believe that this kind of integrated platform and tools can simplify the decision-making process of the Brazilian Health Care Control Center.
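A minimal sketch of the image-processing core (not the SCSA-WEB code): threshold a grayscale ovitrap image, count connected components as egg candidates, and split touching clusters by a typical single-egg area. The synthetic image and threshold are illustrative assumptions.

```python
# Illustrative egg-counting sketch: threshold + connected components.
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (200, 200))              # light cardboard background
for _ in range(25):                                  # paint 25 dark "eggs"
    r, c = rng.integers(10, 190, 2)
    img[r-2:r+3, c-2:c+3] = 0.8

binary = img > 0.5                                   # eggs are darker regions -> mask
labels, n = ndi.label(binary)                        # connected components
areas = ndi.sum(binary, labels, index=np.arange(1, n + 1))
single_egg_area = np.median(areas)                   # crude size calibration
count = int(round((areas / single_egg_area).clip(min=1).sum()))
print(f"components: {n}, estimated eggs: {count}")
```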
Web-Based Surveillance Systems for Human, Animal, and Plant Diseases.
Madoff, Lawrence C; Li, Annie
2014-02-01
The emergence of infectious diseases, caused by novel pathogens or the spread of existing ones to new populations and regions, represents a continuous threat to humans and other species. The early detection of emerging human, animal, and plant diseases is critical to preventing the spread of infection and protecting the health of our species and environment. Today, more than 75% of emerging infectious diseases are estimated to be zoonotic and capable of crossing species barriers and diminishing food supplies. Traditionally, surveillance of diseases has relied on a hierarchy of health professionals that can be costly to build and maintain, leading to delays or interruptions in reporting. However, Internet-based surveillance systems bring another dimension to epidemiology by utilizing technology to collect, organize, and disseminate information in a more timely manner. Partially and fully automated systems allow for earlier detection of disease outbreaks by searching for information from both formal sources (e.g., World Health Organization and government ministry reports) and informal sources (e.g., blogs, online media sources, and social networks). Web-based applications display disparate information online or disperse it through e-mail to subscribers or the general public. Web-based early warning systems, such as ProMED-mail, the Global Public Health Intelligence Network (GPHIN), and HealthMap, have been able to recognize emerging infectious diseases earlier than traditional surveillance systems. These systems, which are continuing to evolve, are now widely utilized by individuals, humanitarian organizations, and government health ministries.
Block, Gladys; Azar, Kristen Mj; Romanelli, Robert J; Block, Torin J; Hopkins, Donald; Carpenter, Heather A; Dolginsky, Marina S; Hudes, Mark L; Palaniappan, Latha P; Block, Clifford H
2015-10-23
One-third of US adults, 86 million people, have prediabetes. Two-thirds of adults are overweight or obese and at risk for diabetes. Effective and affordable interventions are needed that can reach these 86 million, and others at high risk, to reduce their progression to diagnosed diabetes. The aim was to evaluate the effectiveness of a fully automated algorithm-driven behavioral intervention for diabetes prevention, Alive-PD, delivered via the Web, Internet, mobile phone, and automated phone calls. Alive-PD provided tailored behavioral support for improvements in physical activity, eating habits, and factors such as weight loss, stress, and sleep. Weekly emails suggested small-step goals and linked to an individual Web page with tools for tracking, coaching, social support through virtual teams, competition, and health information. A mobile phone app and automated phone calls provided further support. The trial randomly assigned 339 persons to the Alive-PD intervention (n=163) or a 6-month wait-list usual-care control group (n=176). Participants were eligible if either fasting glucose or glycated hemoglobin A1c (HbA1c) was in the prediabetic range. Primary outcome measures were changes in fasting glucose and HbA1c at 6 months. Secondary outcome measures included clinic-measured changes in body weight, body mass index (BMI), waist circumference, triglyceride/high-density lipoprotein cholesterol (TG/HDL) ratio, and Framingham diabetes risk score. Analysis was by intention-to-treat. Participants' mean age was 55 (SD 8.9) years, mean BMI was 31.2 (SD 4.4) kg/m(2), and 68.7% (233/339) were male. Mean fasting glucose was in the prediabetic range (mean 109.9, SD 8.4 mg/dL), whereas the mean HbA1c was 5.6% (SD 0.3), in the normal range. In intention-to-treat analyses, Alive-PD participants achieved significantly greater reductions than controls in fasting glucose (mean -7.36 mg/dL, 95% CI -7.85 to -6.87 vs mean -2.19, 95% CI -2.64 to -1.73, P<.001), HbA1c (mean -0.26%, 95% CI -0.27 to -0.24 vs mean -0.18%, 95% CI -0.19 to -0.16, P<.001), and body weight (mean -3.26 kg, 95% CI -3.26 to -3.25 vs mean -1.26 kg, 95% CI -1.27 to -1.26, P<.001). Reductions in BMI, waist circumference, and TG/HDL were also significantly greater in Alive-PD participants than in the control group. At 6 months, the Alive-PD group reduced their Framingham 8-year diabetes risk from 16% to 11%, significantly more than the control group (P<.001). Participation and retention were good; intervention participants interacted with the program a median of 17 (IQR 14) of 24 weeks, and 71.1% (116/163) were still interacting with the program in month 6. Alive-PD improved glycemic control, body weight, BMI, waist circumference, TG/HDL ratio, and diabetes risk. As a fully automated system, the program has high potential for scalability and could potentially reach many of the 86 million US adults who have prediabetes as well as other at-risk groups. Clinicaltrials.gov NCT01479062; https://clinicaltrials.gov/ct2/show/NCT01479062 (Archived by WebCite at http://www.webcitation.org/6bt4V20NR).
Côté, José
2016-01-01
Background Type 2 diabetes is a major challenge for Canadian public health authorities, and regular physical activity is a key factor in the management of this disease. Given that less than half of people with type 2 diabetes in Canada are sufficiently active to meet the Canadian Diabetes Association's guidelines, effective programs targeting the adoption of regular physical activity are in demand for this population. Many researchers have argued that Web-based interventions targeting physical activity are a promising avenue for insufficiently active populations; however, it remains unclear if this type of intervention is effective among people with type 2 diabetes. Objective This research project aims to evaluate the effectiveness of two Web-based interventions targeting the adoption of regular aerobic physical activity among insufficiently active adult Canadian Francophones with type 2 diabetes. Methods A 3-arm, parallel randomized controlled trial with 2 experimental groups and 1 control group was conducted in the province of Quebec, Canada. A total of 234 participants were randomized at a 1:1:1 ratio to receive an 8-week, fully automated, computer-tailored, Web-based intervention (experimental group 1); an 8-week peer support (ie, Facebook group) Web-based intervention (experimental group 2); or no intervention (control group) during the study period. Results The primary outcome of this study is self-reported physical activity level (total min/week of moderate-intensity aerobic physical activity). Secondary outcomes are attitude, social influence, self-efficacy, type of motivation, and intention. All outcomes are assessed at baseline and 3 and 9 months after baseline with a self-reported questionnaire filled directly on the study websites. Conclusions By evaluating and comparing the effectiveness of 2 Web-based interventions characterized by different behavior change perspectives, findings of this study will contribute to advances in the field of physical activity promotion in adult populations with type 2 diabetes. Trial Registration International Standard Randomized Controlled Trial Number (ISRCTN): ISRCTN15747108; http://www.isrctn.com/ISRCTN15747108 (Archived by WebCite at http://www.webcitation.org/6eJTi0m3r) PMID:26869015
2014-01-01
Background The Internet is an optimal setting to provide massive access to tobacco treatments. To evaluate open-access Web-based smoking cessation programs in a real-world setting, adherence and retention data should be taken into account as much as abstinence rate. Objective The objective was to analyze the usage and effectiveness of a fully automated, open-access, Web-based smoking cessation program by comparing interactive versus noninteractive versions. Methods Participants were randomly assigned either to the interactive or noninteractive version of the program, both with identical content divided into 4 interdependent modules. At baseline, we collected demographic, psychological, and smoking characteristics of the smokers self-enrolled in the Web-based program of Universidad Nacional de Educación a Distancia (National Distance Education University; UNED) in Madrid, Spain. The following questionnaires were administered: the anxiety and depression subscales from the Symptom Checklist-90-Revised, the 4-item Perceived Stress Scale, and the Heaviness of Smoking Index. At 3 months, we analyzed dropout rates, module completion, user satisfaction, follow-up response rate, and self-assessed smoking abstinence. Results A total of 23,213 smokers were registered, 50.06% (11,620/23,213) women and 49.94% (11,593/23,213) men, with a mean age of 39.5 years (SD 10.3). Of these, 46.10% (10,701/23,213) were married and 34.43% (7992/23,213) were single, 46.03% (10,686/23,213) had university education, and 78.73% (18,275/23,213) were employed. Participants smoked an average of 19.4 cigarettes per day (SD 10.3). Of the 11,861 smokers randomly assigned to the interactive version, 2720 (22.93%) completed the first module, 1052 (8.87%) the second, 624 (5.26%) the third, and 355 (2.99%) the fourth. Completion data was not available for the noninteractive version (no way to record it automatically). The 3-month follow-up questionnaire was completed by 1085 of 23,213 enrolled smokers (4.67%). Among them, 406 (37.42%) self-reported not smoking. No difference between groups was found. Assuming missing respondents continued to smoke, the abstinence rate was 1.74% (406/23,213), in which 22,678 were missing respondents. Among follow-up respondents, completing the 4 modules of the intervention increased the chances of smoking cessation (OR 1.95, 95% CI 1.27-2.97, P<.001), as did smoking 30 minutes (OR 1.58, 95% CI 1.04-2.39, P=.003) or 1 hour after waking (OR 1.93, 95% CI 1.27-2.93, P<.001) compared to smoking within the first 5 minutes after waking. Conclusions The findings suggest that the UNED Web-based smoking cessation program was very accessible, but a high level of attrition was confirmed. This could be related to the ease of enrollment, its free character, and the absence of direct contact with professionals. It is concluded that, in practice, the greater the accessibility to the program, the lower the adherence and retention. Professional support from health services and the payment of a reimbursable fee could prevent high rates of attrition. PMID:24760951
A Forest Fire Sensor Web Concept with UAVSAR
NASA Astrophysics Data System (ADS)
Lou, Y.; Chien, S.; Clark, D.; Doubleday, J.; Muellerschoen, R.; Zheng, Y.
2008-12-01
We developed a forest fire sensor web concept with a UAVSAR-based smart sensor and onboard automated response capability that allows us to monitor fire progression based on coarse initial information provided by an external source. This autonomous disturbance detection and monitoring system combines the unique capabilities of imaging radar with high-throughput onboard processing technology and onboard automated response capability based on specific science algorithms. In this forest fire sensor web scenario, a fire is initially located by MODIS/RapidFire or a ground-based fire observer. This information is transmitted to the UAVSAR onboard automated response system (CASPER). CASPER generates a flight plan to cover the alerted fire area and executes the flight plan. The onboard processor generates a fuel load map from the raw radar data, which, used with wind and elevation information, predicts the likely fire progression. CASPER then autonomously alters the flight plan to track the fire progression, providing this information to the firefighting team on the ground. We can also relay the precise fire location to other remote sensing assets with autonomous response capability, such as Earth Observation-1 (EO-1)'s hyperspectral imager, to acquire the fire data.
A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image
NASA Astrophysics Data System (ADS)
Barat, Christian; Phlypo, Ronald
2010-12-01
We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.
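The overall recipe, a saliency-guided initialization refined by an active contour, can be sketched with scikit-image's Chan-Vese implementation standing in for the authors' statistical formulation; the "saliency" here is simply each pixel's distance from the mean color, a crude stand-in for a visual-attention map, and the synthetic scene is illustrative.

```python
# Saliency-initialized active-contour sketch (illustrative, not the
# authors' method): Chan-Vese refinement of a crude attention map.
import numpy as np
from skimage.segmentation import chan_vese

rng = np.random.default_rng(0)
img = rng.normal(0.3, 0.05, (128, 128, 3))           # murky "underwater" scene
img[40:90, 50:100] += 0.4                            # brighter manufactured object
saliency = np.linalg.norm(img - img.mean(axis=(0, 1)), axis=2)

# initialize the level set from the most salient region, then refine
init = np.where(saliency > np.percentile(saliency, 85), 1.0, -1.0)
seg = chan_vese(saliency, init_level_set=init)
print(f"object pixels: {seg.sum()} of {seg.size}")
```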
The Effects of Metaphorical Interface on Germane Cognitive Load in Web-Based Instruction
ERIC Educational Resources Information Center
Cheon, Jongpil; Grant, Michael M.
2012-01-01
The purpose of this study was to examine the effects of a metaphorical interface on germane cognitive load in Web-based instruction. Based on cognitive load theory, germane cognitive load is a cognitive investment for schema construction and automation. A new instrument developed in a previous study was used to measure students' mental activities…
Horsch, Corine Hg; Lancee, Jaap; Griffioen-Both, Fiemke; Spruit, Sandor; Fitrianie, Siska; Neerincx, Mark A; Beun, Robbert Jan; Brinkman, Willem-Paul
2017-04-11
This study is one of the first randomized controlled trials investigating cognitive behavioral therapy for insomnia (CBT-I) delivered by a fully automated mobile phone app. Such an app can potentially increase the accessibility of insomnia treatment for the 10% of people who have insomnia. The objective of our study was to investigate the efficacy of CBT-I delivered via the Sleepcare mobile phone app, compared with a waitlist control group, in a randomized controlled trial. We recruited participants in the Netherlands with relatively mild insomnia disorder. After answering an online pretest questionnaire, they were randomly assigned to the app (n=74) or the waitlist condition (n=77). The app packaged a sleep diary, a relaxation exercise, sleep restriction exercise, and sleep hygiene and education. The app was fully automated and adjusted itself to a participant's progress. Program duration was 6 to 7 weeks, after which participants received posttest measurements and a 3-month follow-up. The participants in the waitlist condition received the app after they completed the posttest questionnaire. The measurements consisted of questionnaires and 7-day online diaries. The questionnaires measured insomnia severity, dysfunctional beliefs about sleep, and anxiety and depression symptoms. The diary measured sleep variables such as sleep efficiency. We performed multilevel analyses to study the interaction effects between time and condition. The results showed significant interaction effects (P<.01) favoring the app condition on the primary outcome measures of insomnia severity (d=-0.66) and sleep efficiency (d=0.71). Overall, these improvements were also retained in a 3-month follow-up. This study demonstrated the efficacy of a fully automated mobile phone app in the treatment of relatively mild insomnia. The effects were in the range of what is found for Web-based treatment in general. This supports the applicability of such technical tools in the treatment of insomnia. Future work should examine the generalizability to a more diverse population. Furthermore, the separate components of such an app should be investigated. It remains to be seen how this app can best be integrated into the current health regimens. Netherlands Trial Register: NTR5560; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=5560 (Archived by WebCite at http://www.webcitation.org/6noLaUdJ4). ©Corine HG Horsch, Jaap Lancee, Fiemke Griffioen-Both, Sandor Spruit, Siska Fitrianie, Mark A Neerincx, Robbert Jan Beun, Willem-Paul Brinkman. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 11.04.2017.
Panuccio, Giuseppe; Torsello, Giovanni Federico; Pfister, Markus; Bisdas, Theodosios; Bosiers, Michel J; Torsello, Giovanni; Austermann, Martin
2016-12-01
To assess the usability of a fully automated fusion imaging engine prototype, matching preinterventional computed tomography with intraoperative fluoroscopic angiography during endovascular aortic repair. From June 2014 to February 2015, all patients treated electively for abdominal and thoracoabdominal aneurysms were enrolled prospectively. Before each procedure, preoperative planning was performed with a fully automated fusion engine prototype based on computed tomography angiography, creating a mesh model of the aorta. In a second step, this three-dimensional dataset was registered with the two-dimensional intraoperative fluoroscopy. The main outcome measure was the applicability of the fully automated fusion engine. Secondary outcomes were freedom from failure of automatic segmentation or of automatic registration, as well as the accuracy of the mesh model, measured as deviations from intraoperative angiography in millimeters, if applicable. Twenty-five patients were enrolled in this study. The fusion imaging engine could be used successfully in 92% of the cases (n = 23). Freedom from failure of automatic segmentation was 44% (n = 11). Freedom from failure of automatic registration was 76% (n = 19); the median error of the automatic registration process was 0 mm (interquartile range, 0-5 mm). The fully automated fusion imaging engine was found to be applicable in most cases, albeit in several cases fully automated data processing was not possible, requiring manual intervention. The accuracy of the automatic registration yielded excellent results and promises a useful and simple-to-use technology. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Kaylor-Hughes, Catherine J; Rawsthorne, Mat; Coulson, Neil S; Simpson, Sandra; Simons, Lucy; Guo, Boliang; James, Marilyn; Moran, Paul; Simpson, Jayne; Hollis, Chris; Avery, Anthony J; Tata, Laila J; Williams, Laura; Morriss, Richard K
2017-12-18
Regardless of geography or income, effective help for depression and anxiety only reaches a small proportion of those who might benefit from it. The scale of the problem suggests a role for effective, safe, anonymized public health-driven Web-based services such as Big White Wall (BWW), which offer immediate peer support at low cost. Using Reach, Effectiveness, Adoption, Implementation and Maintenance (RE-AIM) methodology, the aim of this study was to determine the population reach, effectiveness, cost-effectiveness, and barriers and drivers to implementation of BWW compared with Web-based information compiled by UK's National Health Service (NHS, NHS Choices Moodzone) in people with probable mild to moderate depression and anxiety disorder. A pragmatic, parallel-group, single-blind randomized controlled trial (RCT) is being conducted using a fully automated trial website in which eligible participants are randomized to receive either 6 months access to BWW or signposted to the NHS Moodzone site. The recruitment of 2200 people to the study will be facilitated by a public health engagement campaign involving general marketing and social media, primary care clinical champions, health care staff, large employers, and third sector groups. People will refer themselves to the study and will be eligible if they are older than 16 years, have probable mild to moderate depression or anxiety disorders, and have access to the Internet. The primary outcome will be the Warwick-Edinburgh Mental Well-Being Scale at 6 weeks. We will also explore the reach, maintenance, cost-effectiveness, and barriers and drivers to implementation and possible mechanisms of actions using a range of qualitative and quantitative methods. This will be the first fully digital trial of a direct to public online peer support program for common mental disorders. The potential advantages of adding this to current NHS mental health services and the challenges of designing a public health campaign and RCT of two digital interventions using a fully automated digital enrollment and data collection process are considered for people with depression and anxiety. International Standard Randomized Controlled Trial Number (ISRCTN): 12673428; http://www.controlled-trials.com/ISRCTN12673428/12673428 (Archived by WebCite at http://www.webcitation.org/6uw6ZJk5a). ©Catherine J Kaylor-Hughes, Mat Rawsthorne, Neil S Coulson, Sandra Simpson, Lucy Simons, Boliang Guo, Marilyn James, Paul Moran, Jayne Simpson, Chris Hollis, Anthony J Avery, Laila J Tata, Laura Williams, REBOOT Notts Lived Experience Advisory Panel, Richard K Morriss. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 18.12.2017.
Automated object-based classification of topography from SRTM data
Drăguţ, Lucian; Eisank, Clemens
2012-01-01
We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance, and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and the standard deviation of elevation, respectively. The results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and the standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with functionalities of visualization and download. PMID:22485060
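The classification rule itself is compact enough to sketch: compute each object's mean elevation and elevation standard deviation, then split objects by the global means of those two statistics. Fixed blocks stand in for the multiresolution segmentation below, and the synthetic DEM is an illustrative assumption.

```python
# Sketch of the two-parameter object classification (blocks stand in for
# the paper's scale-adaptive segmentation).
import numpy as np

rng = np.random.default_rng(0)
dem = rng.normal(500, 200, (128, 128)).cumsum(axis=0) / 50   # synthetic relief

bs = 16                                               # block ("object") size
blocks = dem.reshape(dem.shape[0]//bs, bs, dem.shape[1]//bs, bs).swapaxes(1, 2)
mean_e = blocks.mean(axis=(2, 3))                     # per-object mean elevation
std_e = blocks.std(axis=(2, 3))                       # per-object roughness

high = mean_e > mean_e.mean()                         # above/below mean elevation
rough = std_e > std_e.mean()                          # rough/smooth relief
classes = high.astype(int) * 2 + rough.astype(int)    # four topographic classes
for c, name in enumerate(["low smooth", "low rough", "high smooth", "high rough"]):
    print(f"{name}: {(classes == c).sum()} objects")
```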
Schulze, H Georg; Turner, Robin F B
2014-01-01
Charge-coupled device detectors are vulnerable to cosmic rays that can contaminate Raman spectra with positive-going spikes. Because spikes can adversely affect spectral processing and data analyses, they must be removed. Although both hardware-based and software-based spike removal methods exist, they typically require parameter and threshold specification dependent on well-considered user input. Here, we present a fully automated spike removal algorithm that proceeds without requiring user input. It is minimally dependent on sample attributes, and those that are required (e.g., the standard deviation of spectral noise) can be determined with other fully automated procedures. At the core of the method is the identification and location of spikes with coincident second derivatives along both the spectral and spatiotemporal dimensions of two-dimensional datasets. The method can be applied to spectra that are relatively inhomogeneous because it provides fairly effective and selective targeting of spikes, resulting in minimal distortion of spectra. Relatively effective spike removal obtained with full automation could provide substantial benefits to users where large numbers of spectra must be processed.
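The core test, coincident second derivatives along both axes of a spectra-by-time matrix, can be sketched in a few lines of NumPy; the noise level and the 8-sigma cut below are illustrative assumptions, whereas the paper derives such quantities automatically.

```python
# Sketch of coincident-second-derivative spike detection and removal.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0, 1.0, (50, 400))                 # spectra x wavenumbers
data[10, 100] += 60.0                                # inject a cosmic spike

d2_spec = np.zeros_like(data)                        # 2nd derivative, spectral axis
d2_spec[:, 1:-1] = data[:, :-2] - 2 * data[:, 1:-1] + data[:, 2:]
d2_time = np.zeros_like(data)                        # 2nd derivative, time axis
d2_time[1:-1, :] = data[:-2, :] - 2 * data[1:-1, :] + data[2:, :]

sigma = 1.0                                          # assumed-known noise level
# a positive spike makes the second derivative strongly negative at its location
spikes = (d2_spec < -8 * sigma) & (d2_time < -8 * sigma)

rows, cols = np.where(spikes)
for r, c in zip(rows, cols):                         # replace by local median
    neigh = data[max(r-1, 0):r+2, max(c-1, 0):c+2].ravel()
    data[r, c] = np.median(neigh)
print(f"spikes removed: {len(rows)}")
```

Requiring the flag on both axes is what makes the test selective: sharp but genuine Raman bands persist across adjacent spectra, whereas a cosmic-ray hit does not.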
Web-Based Time Entry Systems: Providing Greater Automation and Compliance
ERIC Educational Resources Information Center
Williams, Tracy
2005-01-01
Time and resources are becoming increasingly scarce in most higher education institutions today. As a result, colleges and universities are looking to streamline and simplify many costly, labor-intensive administrative processes. In this article, Tracy Williams examines how Web-based time-entry systems can help institutions save valuable time and…
ERIC Educational Resources Information Center
Kauffman, Douglas F.; Ge, Xun; Xie, Kui; Chen, Ching-Huei
2008-01-01
This study explored Metacognition and how automated instructional support in the form of problem-solving and self-reflection prompts influenced students' capacity to solve complex problems in a Web-based learning environment. Specifically, we examined the independent and interactive effects of problem-solving prompts and reflection prompts on…
An Automated End-To Multi-Agent Qos Based Architecture for Selection of Geospatial Web Services
NASA Astrophysics Data System (ADS)
Shah, M.; Verma, Y.; Nandakumar, R.
2012-07-01
Over the past decade, Service-Oriented Architecture (SOA) and Web services have gained wide popularity and acceptance from researchers and industries all over the world. SOA makes it easy to build business applications with common services, and it provides benefits such as reduced integration expense, better asset reuse, higher business agility, and reduced business risk. Building a framework for acquiring useful geospatial information for potential users is a crucial problem faced by the GIS domain. Geospatial Web services address this problem. With the help of web service technology, geospatial web services can provide useful geospatial information to potential users in a better way than a traditional geographic information system (GIS). A geospatial Web service is a modular application designed to enable the discovery, access, and chaining of geospatial information and services across the web; such services are often both computation- and data-intensive and involve diverse sources of data and complex processing functions. With the proliferation of web services published over the internet, multiple web services may provide similar functionality but with different non-functional properties. Thus, Quality of Service (QoS) offers a metric to differentiate the services and their service providers. In a quality-driven selection of web services, it is important to consider the non-functional properties of a web service so as to satisfy the constraints or requirements of the end users. The main intent of this paper is to build an automated, end-to-end, multi-agent-based solution that provides the best-fit web service to the service requester based on QoS.
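A common way to operationalize quality-driven selection is a weighted additive score over normalized QoS attributes; the sketch below shows that scoring step only (the paper's multi-agent architecture is not reproduced), with illustrative services, attribute values, and weights.

```python
# Weighted additive QoS scoring sketch for service selection (illustrative).
import numpy as np

services = ["WMS-A", "WMS-B", "WMS-C"]               # hypothetical services
# columns: response time (s, lower is better), availability (%, higher is
# better), cost per 1000 requests (lower is better)
qos = np.array([[1.2, 99.5, 4.0],
                [0.6, 97.0, 6.5],
                [2.0, 99.9, 2.0]])
weights = np.array([0.5, 0.3, 0.2])                  # end-user priorities
lower_is_better = np.array([True, False, True])

norm = (qos - qos.min(axis=0)) / np.ptp(qos, axis=0) # scale each column to [0,1]
norm[:, lower_is_better] = 1.0 - norm[:, lower_is_better]
scores = norm @ weights                              # weighted additive score
best = services[int(scores.argmax())]
print(dict(zip(services, scores.round(3))), "->", best)
```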
Web based aphasia test using service oriented architecture (SOA)
NASA Astrophysics Data System (ADS)
Voos, J. A.; Vigliecca, N. S.; Gonzalez, E. A.
2007-11-01
Based on an aphasia test for Spanish speakers that analyzes the patient's basic resources of verbal communication, web-enabled software was developed to automate its execution. A clinical database was designed as a complement, in order to evaluate the antecedents (risk factors, pharmacological and medical backgrounds, neurological or psychiatric symptoms, anatomical and physiological characteristics of brain injury, etc.) that are necessary to carry out a multi-factor statistical analysis in different samples of patients. The automated test was developed following a service-oriented architecture and implemented in a web site containing a test suite, which allows both integrating the aphasia test with other neuropsychological instruments and increasing the site information available for scientific research. The test design, the database, and the study of its psychometric properties (validity, reliability and objectivity) were carried out in conjunction with neuropsychological researchers, who participated actively in the software design, based on feedback from the patients and other research subjects.
Mak, Winnie W S; Chan, Amy T Y; Cheung, Eliza Y L; Lin, Cherry L Y; Ngai, Karin C S
2015-01-19
With increasing evidence demonstrating the effectiveness of Web-based interventions and mindfulness-based training in improving health, delivering mindfulness training online is an attractive proposition. The aim of this study was to evaluate the efficacy of two Internet-based interventions (basic mindfulness and Health Action Process Approach enhanced mindfulness) with waitlist control. Health Action Process Approach (HAPA) principles were used to enhance participants' efficacy and planning. Participants were recruited online and offline among local universities; 321 university students and staff were randomly assigned to three conditions. The basic and HAPA-enhanced groups completed the 8-week fully automated mindfulness training online. All participants (including control) were asked to complete an online questionnaire pre-program, post-program, and at 3-month follow-up. Significant group by time interaction effect was found. The HAPA-enhanced group showed significantly higher levels of mindfulness from pre-intervention to post-intervention, and such improvement was sustained at follow-up. Both the basic and HAPA-enhanced mindfulness groups showed better mental well-being from pre-intervention to post-intervention, and improvement was sustained at 3-month follow-up. Online mindfulness training can improve mental health. An online platform is a viable medium to implement and disseminate evidence-based interventions and is a highly scalable approach to reach the general public. Chinese Clinical Trial Registry (ChiCTR): ChiCTR-TRC-12002954; http://www.chictr.org/en/proj/show.aspx?proj=3904 (Archived by WebCite at http://www.webcitation.org/6VCdG09pA).
ERIC Educational Resources Information Center
Balmford, James; Borland, Ron; Benda, Peter; Howard, Steve
2013-01-01
The aim was to better understand structural factors associated with uptake of automated tailored interventions for smoking cessation. In a prospective randomized controlled trial with interventions only offered, not mandated, participants were randomized based on the following: web-based expert system (QuitCoach); text messaging program (onQ);…
Automating Information Discovery Within the Invisible Web
NASA Astrophysics Data System (ADS)
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface, and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate, and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available for accessing it.
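The crawl-then-index step described above can be sketched with Python's standard library alone. The snippet below is an illustrative toy, not the chapter's tooling; the word filtering and single-page scope are simplifying assumptions.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkIndexer(HTMLParser):
    """Collects hyperlinks and visible words, mimicking the crawl -> index hand-off."""
    def __init__(self, base):
        super().__init__()
        self.base, self.links, self.words = base, [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base, value))

    def handle_data(self, data):
        self.words.extend(w.lower() for w in data.split() if w.isalpha())

def index_page(url):
    """Fetch one page; return outgoing links (crawl frontier) and keywords (index terms)."""
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    parser = LinkIndexer(url)
    parser.feed(html)
    return parser.links, parser.words
```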
Web-based automation of green building rating index and life cycle cost analysis
NASA Astrophysics Data System (ADS)
Shahzaib Khan, Jam; Zakaria, Rozana; Aminuddin, Eeydzah; IzieAdiana Abidin, Nur; Sahamir, Shaza Rina; Ahmad, Rosli; Nafis Abas, Darul
2018-04-01
A sudden decline in financial markets and the economic meltdown slowed adoption of, and lowered investor interest in, green-certified buildings because of their higher initial costs. It is therefore essential to attract investors back toward green building development through automated tools for construction projects. However, there is a historical dearth of automation in green building rating tools, an essential gap that motivates the development of an automated, computerized tool. This paper presents proposed research that aims to develop an integrated, web-based, automated program combining a green building rating assessment tool, green technology, and life cycle cost (LCC) analysis. It also identifies the MyCrest and LCC variables to be integrated and developed into a framework, which is then transformed into an automated program. A mixed qualitative and quantitative survey methodology is planned to carry the MyCrest-LCC integration to an automated level. In this study, a preliminary literature review enriches the understanding of how Green Building Rating Tools (GBRT) integrate with LCC. The outcome of this research paves the way for future researchers to integrate other efficient tools and parameters that contribute to green buildings and future agendas.
ActionMap: A web-based software that automates loci assignments to framework maps.
Albini, Guillaume; Falque, Matthieu; Joets, Johann
2003-07-01
Genetic linkage computation may be a repetitive and time consuming task, especially when numerous loci are assigned to a framework map. We thus developed ActionMap, a web-based software that automates genetic mapping on a fixed framework map without adding the new markers to the map. Using this tool, hundreds of loci may be automatically assigned to the framework in a single process. ActionMap was initially developed to map numerous ESTs with a small plant mapping population and is limited to inbred lines and backcrosses. ActionMap is highly configurable and consists of Perl and PHP scripts that automate command steps for the MapMaker program. A set of web forms were designed for data import and mapping settings. Results of automatic mapping can be displayed as tables or drawings of maps and may be exported. The user may create personal access-restricted projects to store raw data, settings and mapping results. All data may be edited, updated or deleted. ActionMap may be used either online or downloaded for free (http://moulon.inra.fr/~bioinfo/).
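ActionMap's Perl/PHP scripts automate MapMaker by looping over loci and driving the external program for each one. A hypothetical Python analogue of that batch pattern is sketched below; the `mapping-tool` command, script syntax, and locus names are placeholders, not real MapMaker syntax.

```python
import subprocess
from pathlib import Path

LOCI = ["EST001", "EST002", "EST003"]  # hypothetical marker identifiers

def assign_locus(locus, datafile="cross_data.raw"):
    """Drive an external mapping program for one locus.

    The command and script contents are placeholders; ActionMap itself
    generates MapMaker command scripts in a similar per-locus loop.
    """
    script = Path(f"{locus}.cmd")
    script.write_text(f"load {datafile}\nplace {locus}\nquit\n")
    return subprocess.run(["mapping-tool", str(script)],
                          capture_output=True, text=True, check=False)

# Collect each locus's raw output for later parsing into framework assignments.
results = {locus: assign_locus(locus).stdout for locus in LOCI}
```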
NASA Astrophysics Data System (ADS)
Samadzadegan, F.; Saber, M.; Zahmatkesh, H.; Joze Ghazi Khanlou, H.
2013-09-01
Rapidly discovering, sharing, integrating, and applying geospatial information are key issues in the domain of emergency response and disaster management. Because data and processing resources in disaster management are distributed, a Service Oriented Architecture (SOA) that takes advantage of workflows of services provides an efficient, flexible, and reliable implementation for encountering different hazardous situations. The implementation specification of the Web Processing Service (WPS) has guided geospatial data processing in an SOA platform to become a widely accepted solution for processing remotely sensed data on the web. This paper presents an architecture design based on OGC web services for an automated workflow that acquires and processes remotely sensed data, detects fire, and sends notifications to the authorities. A basic architecture and its building blocks for an automated fire detection early warning system are presented, using web-based processing of MODIS remote sensing imagery. A composition of WPS processes is proposed as a WPS service to extract fire events from MODIS data. Subsequently, the paper highlights the role of WPS as a middleware interface in the domain of geospatial web service technology that can be used to invoke a large variety of geoprocessing operations and to chain other web services as a composition engine. The applicability of the proposed architecture is evaluated with a real-world fire event detection and notification use case. A GeoPortal client was developed with open-source software to manage data, metadata, processes, and authorities. An investigation of the feasibility and benefits of the proposed framework shows that it can be used for a wide range of geospatial applications, especially disaster management and environmental monitoring.
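A WPS Execute request of the kind such a composition would issue can be sent as a plain key-value HTTP request. In the sketch below the endpoint URL, process identifier, and inputs are hypothetical, since the paper does not publish its service addresses.

```python
import requests

WPS_URL = "https://example.org/wps"  # hypothetical WPS endpoint

# WPS 1.0.0 key-value-pair Execute request; identifier and inputs are invented.
params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "ExtractFireEvents",
    "datainputs": "scene=MODIS_MOD021KM_20130901;threshold=330",
}
response = requests.get(WPS_URL, params=params, timeout=60)
response.raise_for_status()
# The body is an ExecuteResponse XML document with status and result references.
print(response.text[:500])
```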
Singh, Tulika; Sharma, Madhurima; Singla, Veenu; Khandelwal, Niranjan
2016-01-01
The objective of our study was to calculate mammographic breast density with a fully automated volumetric breast density measurement method and to compare it to Breast Imaging Reporting and Data System (BI-RADS) breast density categories assigned by two radiologists. A total of 476 full-field digital mammography examinations with standard mediolateral oblique and craniocaudal views were evaluated by two blinded radiologists, and BI-RADS density categories were assigned. Using fully automated software, mean fibroglandular tissue volume, mean breast volume, and mean volumetric breast density were calculated. Based on percentage volumetric breast density, a volumetric density grade was assigned from 1 to 4. The weighted overall kappa was 0.895 (almost perfect agreement) for the two radiologists' BI-RADS density estimates. A statistically significant difference was seen in mean volumetric breast density among the BI-RADS density categories. With increasing BI-RADS density category, an increase in mean volumetric breast density was also seen (P < 0.001). A significant positive correlation was found between BI-RADS categories and volumetric density grading by the fully automated software (ρ = 0.728, P < 0.001 for the first radiologist and ρ = 0.725, P < 0.001 for the second radiologist). Pairwise estimates of the weighted kappa between Volpara density grade and BI-RADS density category by the two observers showed fair agreement (κ = 0.398 and 0.388, respectively). In our study, a good correlation was seen between density grading using the fully automated volumetric method and BI-RADS density categories assigned by the two radiologists. Thus, the fully automated volumetric method may be used to quantify breast density on routine mammography. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
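The agreement statistics reported here (weighted kappa, Spearman's ρ) are straightforward to reproduce on one's own data. The sketch below uses scikit-learn and SciPy on synthetic ratings; linear weighting is assumed, since the abstract does not state the weighting scheme used.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
# Synthetic stand-ins: radiologist BI-RADS categories (1-4) and automated grades.
birads = rng.integers(1, 5, size=476)
volpara = np.clip(birads + rng.integers(-1, 2, size=476), 1, 4)

kappa = cohen_kappa_score(birads, volpara, weights="linear")  # weighting assumed
rho, p = spearmanr(birads, volpara)
print(f"weighted kappa={kappa:.3f}, Spearman rho={rho:.3f} (p={p:.2g})")
```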
Utility of an automated thermal-based approach for monitoring evapotranspiration
USDA-ARS's Scientific Manuscript database
A very simple remote sensing-based model for water use monitoring is presented. The model acronym DATTUTDUT (Deriving Atmosphere Turbulent Transport Useful To Dummies Using Temperature) is a Dutch word which loosely translates as "It's unbelievable that it works". DATTUTDUT is fully automated and o...
Radio and Optical Telescopes for School Students and Professional Astronomers
NASA Astrophysics Data System (ADS)
Hosmer, Laura; Langston, G.; Heatherly, S.; Towner, A. P.; Ford, J.; Simon, R. S.; White, S.; O'Neil, K. L.; Haipslip, J.; Reichart, D.
2013-01-01
The NRAO 20m telescope is now on-line as a part of UNC's Skynet worldwide telescope network. The NRAO is completing integration of radio astronomy tools with the Skynet web interface. We present the web interface and astronomy projects that allow students and astronomers from all over the country to become radio astronomers. The 20 meter radio telescope at NRAO in Green Bank, WV is dedicated to public education and is also part of an experiment in public funding for astronomy. The telescope has a fantastic new web-based interface with priority queuing, accommodating priority for paying customers and enabling free use of otherwise unused time. This revival included many software and hardware improvements, including automatic calibration and improved time integration for better data processing, as well as a new ultra-high-resolution spectrometer. This new spectrometer is optimized for very narrow spectral lines, which will allow astronomers to study complex molecules and very cold regions of space in remarkable detail. In keeping with a focus on broader impacts, many public outreach and high school education activities have been completed, with many more confirmed for the future. The 20 meter is now a fully automated, powerful tool capable of professional-grade results available to anyone in the world. Drop by our poster and try out real-time telescope control!
Toward automated assessment of health Web page quality using the DISCERN instrument.
Allam, Ahmed; Schulz, Peter J; Krauthammer, Michael
2017-05-01
As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers. The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms. Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria. First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers. Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN classifiers. The code for the probabilistic consensus model is available at https://bitbucket.org/A_2/em_dawid/ . © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
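As a point of comparison for the probabilistic consensus model (whose code the authors provide at the Bitbucket link above), a simple majority-vote aggregation over coder ratings can be sketched as follows; the ratings shown are invented for illustration.

```python
from collections import Counter

def majority_vote(ratings_by_coder):
    """Aggregate per-item DISCERN ratings (1-5) across coders by majority vote.

    A simple baseline; the paper found a probabilistic (Dawid-Skene-style)
    consensus model more robust to noisy coders than plain voting.
    """
    n_items = len(next(iter(ratings_by_coder.values())))
    consensus = []
    for i in range(n_items):
        votes = Counter(ratings[i] for ratings in ratings_by_coder.values())
        consensus.append(votes.most_common(1)[0][0])
    return consensus

coders = {"c1": [4, 3, 5], "c2": [4, 2, 5], "c3": [3, 3, 4]}
print(majority_vote(coders))  # -> [4, 3, 5]
```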
ERIC Educational Resources Information Center
Ong, S. K.; Mannan, M. A.
2004-01-01
This paper presents a web-based interactive teaching package that provides a comprehensive and conducive yet dynamic and interactive environment for a module on automated machine tools in the Manufacturing Division at the National University of Singapore. The use of Internet technologies in this teaching tool makes it possible to conjure…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, J.R.; Bonner, C.A.; Ostenak, C.A.
1989-01-01
ROBOCAL, which is presently being developed and tested at Los Alamos National Laboratory, is a full-scale, prototypical robotic system for remote calorimetric and gamma-ray analysis of special nuclear materials. It integrates a fully automated, multi-drawer, vertical stacker-retriever system for staging unmeasured nuclear materials, and a fully automated gantry robot for computer-based selection and transfer of nuclear materials to calorimetric and gamma-ray measurement stations. Since ROBOCAL is designed for minimal operator intervention, a completely programmed user interface and database system are provided to interact with the automated mechanical and assay systems. The assay system is designed to completely integrate calorimetric and gamma-ray data acquisition and to perform state-of-the-art analyses on both homogeneous and heterogeneous distributions of nuclear materials in a wide variety of matrices. 10 refs., 10 figs., 4 tabs.
Schäuble, Sascha; Stavrum, Anne-Kristin; Bockwoldt, Mathias; Puntervoll, Pål; Heiland, Ines
2017-06-24
Systems Biology Markup Language (SBML) is the standard model representation and description language in systems biology. Enriching and analysing systems biology models by integrating the multitude of available data increases the predictive power of these models. This may be a daunting task, which commonly requires bioinformatic competence and scripting. We present SBMLmod, a Python-based web application and service that automates the integration of high-throughput data into SBML models. Subsequent steady-state analysis is readily accessible via the web service COPASIWS. We illustrate the utility of SBMLmod by integrating gene expression data from different healthy tissues as well as from a cancer dataset into a previously published model of mammalian tryptophan metabolism. SBMLmod is a user-friendly platform for model modification and simulation. The web application is available at http://sbmlmod.uit.no, whereas the WSDL definition file for the web service is accessible via http://sbmlmod.uit.no/SBMLmod.wsdl. Furthermore, the entire package can be downloaded from https://github.com/MolecularBioinformatics/sbml-mod-ws. We envision that SBMLmod will make automated model modification and simulation available to a broader research community.
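Because SBMLmod exposes a WSDL-described SOAP interface, any generic SOAP client can talk to it. The sketch below uses the third-party zeep library; the operation and argument names must be taken from the WSDL itself, since they are not listed in the abstract.

```python
from zeep import Client  # generic third-party SOAP client (pip install zeep)

# WSDL URL taken from the paper; operation names live in the WSDL, not the
# abstract, so inspect them first, e.g.:
#   python -m zeep http://sbmlmod.uit.no/SBMLmod.wsdl
client = Client("http://sbmlmod.uit.no/SBMLmod.wsdl")

# A call then has the form (operation and argument names are hypothetical):
#   result = client.service.SomeOperation(sbml=model_string, data=expression_table)
```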
Web-based magazine design for self publishers
NASA Astrophysics Data System (ADS)
Hunter, Andrew; Slatter, David; Greig, Darryl
2011-03-01
Short run printing technology and web services such as MagCloud provide new opportunities for long-tail magazine publishing. They enable self publishers to supply magazines to a wide range of communities, including groups that are too small to be viable as target communities for conventional publishers. In a Web 2.0 world where users constantly discover new services and where they may be infrequent patrons of any single service, it is unreasonable to expect users to learn the complex service behaviors. Furthermore, we want to open up publishing opportunities to novices who are unlikely to have prior experience of publishing and who lack design expertise. Magazine design automation is an ambitious goal, but recent progress with another web service, Autophotobook, proves that some level of automation of publication design is feasible. This paper describes our current research effort to extend the automation capabilities of Autophotobook to address the issues of magazine design so that we can provide a service to support professional-quality self publishing by novice users for a wide range of community types and sizes.
Suppa, Per; Anker, Ulrich; Spies, Lothar; Bopp, Irene; Rüegger-Frey, Brigitte; Klaghofer, Richard; Gocke, Carola; Hampel, Harald; Beck, Sacha; Buchert, Ralph
2015-01-01
Hippocampal volume is a promising biomarker to enhance the accuracy of the diagnosis of dementia due to Alzheimer's disease (AD). However, whereas hippocampal volume is well studied in patient samples from clinical trials, its value in routine clinical patient care is still rather unclear. The aim of the present study, therefore, was to evaluate fully automated atlas-based hippocampal volumetry for detection of AD in the setting of a secondary care expert memory clinic for outpatients. One hundred consecutive patients with memory complaints were clinically evaluated and categorized into three diagnostic groups: AD, intermediate AD, and non-AD. A software tool based on open-source software (Statistical Parametric Mapping, SPM8) was employed for fully automated tissue segmentation and stereotactical normalization of high-resolution, three-dimensional, T1-weighted magnetic resonance images. Predefined standard masks were used for computation of the grey matter volume of the left and right hippocampus, which was then scaled to the patient's total grey matter volume. The right hippocampal volume provided an area under the receiver operating characteristic curve of 84% for detection of AD patients in the whole sample. This indicates that fully automated MR-based hippocampal volumetry fulfills the requirements of a feasible core biomarker for detection of AD in everyday patient care in a secondary care memory clinic for outpatients. The software used in the present study has been made freely available as an SPM8 toolbox. It is robust and fast, so it is easily integrated into routine workflow.
The web-based information system for small and medium enterprises of Tomsk region
NASA Astrophysics Data System (ADS)
Senchenko, P. V.; Zhukovskiy, O. I.; Gritsenko, Yu B.; Senchenko, A. P.; Gritsenko, L. M.; Kovaleva, E. V.
2017-01-01
This paper presents a web-enabled automated information support system for small and medium-sized enterprises of Tomsk region. We define the purpose and application field of the system. In addition, we present a generic architecture and identify the system's functions.
Khan, Ali R; Wang, Lei; Beg, Mirza Faisal
2008-07-01
Fully-automated brain segmentation methods have not been widely adopted for clinical use because of issues related to reliability, accuracy, and limitations of delineation protocol. By combining the probabilistic-based FreeSurfer (FS) method with the Large Deformation Diffeomorphic Metric Mapping (LDDMM)-based label-propagation method, we are able to increase reliability and accuracy, and allow for flexibility in template choice. Our method uses the automated FreeSurfer subcortical labeling to provide a coarse-to-fine introduction of information in the LDDMM template-based segmentation, resulting in a fully-automated subcortical brain segmentation method (FS+LDDMM). One major advantage of the FS+LDDMM-based approach is that the automatically generated segmentations are inherently smooth, so subsequent steps in shape analysis can directly follow without manual post-processing or loss of detail. We have evaluated our new FS+LDDMM method on several databases containing a total of 50 subjects with different pathologies, scan sequences, and manual delineation protocols for labeling the basal ganglia, thalamus, and hippocampus. In healthy controls we report Dice overlap measures of 0.81, 0.83, 0.74, 0.86 and 0.75 for the right caudate nucleus, putamen, pallidum, thalamus and hippocampus respectively. We also find statistically significant improvement of accuracy in FS+LDDMM over FreeSurfer for the caudate nucleus and putamen of Huntington's disease and Tourette's syndrome subjects, and the right hippocampus of Schizophrenia subjects.
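The Dice overlap measure used for evaluation has a one-line definition; a minimal NumPy version on synthetic label volumes is shown below (the volumes are illustrative, not study data).

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary label volumes (True = structure present)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two overlapping synthetic cuboid segmentations of the same structure.
auto = np.zeros((64, 64, 64), bool); auto[20:40, 20:40, 20:40] = True
manual = np.zeros_like(auto);        manual[22:40, 20:40, 20:40] = True
print(round(dice(auto, manual), 3))  # ~0.947 for this synthetic pair
```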
Automated ocean color product validation for the Southern California Bight
NASA Astrophysics Data System (ADS)
Davis, Curtiss O.; Tufillaro, Nicholas; Jones, Burt; Arnone, Robert
2012-06-01
Automated match-ups allow us to maintain and improve the products of current satellite ocean color sensors (MODIS, MERIS) and new sensors (VIIRS). As part of the VIIRS mission preparation, we have created a web-based automated match-up tool that provides access to searchable fields for date, site, and products, and creates match-ups between satellite (MODIS, MERIS, VIIRS) and in-situ measurements (HyperPRO and SeaPRISM). The back end of the system is a MySQL database, and the front end is a PHP web portal with pull-down menus for the searchable fields. Based on selections, graphics are generated showing match-ups and statistics, and ASCII files of the match-up data are created for download. Examples are shown for matching the satellite data with data from the Platform Eureka SeaPRISM off L.A. Harbor in the Southern California Bight.
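The match-up logic of such a back end reduces to a spatio-temporal join. The sketch below mimics it with Python's built-in sqlite3 in place of MySQL; the schema, column names, and the ±3 h window are assumptions for illustration.

```python
import sqlite3

# Hypothetical schema mirroring a match-up back end: one table of satellite
# retrievals, one of in-situ measurements; a match-up pairs records taken at
# the same site within a +/-3 hour window.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE satellite (sensor TEXT, site TEXT, t REAL, chlor_a REAL);
CREATE TABLE insitu    (platform TEXT, site TEXT, t REAL, chlor_a REAL);
INSERT INTO satellite VALUES ('MODIS', 'Eureka', 100.0, 0.42);
INSERT INTO insitu    VALUES ('SeaPRISM', 'Eureka', 101.5, 0.45);
""")
matchups = con.execute("""
    SELECT s.sensor, i.platform, s.site, s.chlor_a, i.chlor_a
    FROM satellite s JOIN insitu i
      ON s.site = i.site AND ABS(s.t - i.t) <= 3.0
""").fetchall()
print(matchups)
```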
NASA Astrophysics Data System (ADS)
Bhattacharya, D.; Painho, M.
2017-09-01
The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is the development of an automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI), utilizing geomatics for sustainable societies. There has been a need to develop an automated, integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists that can disseminate messages after evaluating events in real time. This research formalizes the notion of an integrated, independent, generalized, and automated geo-event analysing system that makes use of geospatial data on a widely used platform. Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. The converse benefit is the expansion of spatial data infrastructure to utilize the sensor web, dynamically and in real time, for the smart applications that smarter cities demand nowadays. Hence, SENSDI augments existing smart cities platforms by utilizing the sensor web and spatial information, achieved by coupling pairs of otherwise disjoint interfaces and APIs formulated by the Open Geospatial Consortium (OGC), while keeping the entire platform open access and open source. SENSDI is based on GeoNode, QGIS, and Java, which together bind most of the functionality of the Internet, the sensor web, and nowadays the Internet of Things (superseding the Internet of Sensors). In a nutshell, the project delivers a generalized, real-time, accessible, and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.
Li, Wei; Abram, François; Pelletier, Jean-Pierre; Raynauld, Jean-Pierre; Dorais, Marc; d'Anjou, Marc-André; Martel-Pelletier, Johanne
2010-01-01
Joint effusion is frequently associated with osteoarthritis (OA) flare-up and is an important marker of therapeutic response. This study aimed at developing and validating a fully automated system based on magnetic resonance imaging (MRI) for the quantification of joint effusion volume in knee OA patients. MRI examinations consisted of two axial sequences: a T2-weighted true fast imaging with steady-state precession and a T1-weighted gradient echo. An automated joint effusion volume quantification system using MRI was developed and validated (a) with calibrated phantoms (cylinder and sphere) and effusion from knee OA patients; (b) with assessment by manual quantification; and (c) by direct aspiration. Twenty-five knee OA patients with joint effusion were included in the study. The automated joint effusion volume quantification was developed as a four stage sequencing process: bone segmentation, filtering of unrelated structures, segmentation of joint effusion, and subvoxel volume calculation. Validation experiments revealed excellent coefficients of variation with the calibrated cylinder (1.4%) and sphere (0.8%) phantoms. Comparison of the OA knee joint effusion volume assessed by the developed automated system and by manual quantification was also excellent (r = 0.98; P < 0.0001), as was the comparison with direct aspiration (r = 0.88; P = 0.0008). The newly developed fully automated MRI-based system provided precise quantification of OA knee joint effusion volume with excellent correlation with data from phantoms, a manual system, and joint aspiration. Such an automated system will be instrumental in improving the reproducibility/reliability of the evaluation of this marker in clinical application.
Kuder, Margaret; Goheen, Mary Jett; Dize, Laura; Barnes, Mathilda; Gaydos, Charlotte A
2015-05-01
The Web site www.iwantthekit.org provides Internet-based, at-home sexually transmitted infection screening. The Web site implemented an automated test result access system. To evaluate potential deleterious effects of the new system, we analyzed demographics, Web site usage, and treatment. The post-redesign period captured more participant information, with no decrease in requests, kit returns, or treatment adherence.
Near real time water quality monitoring of Chivero and Manyame lakes of Zimbabwe
NASA Astrophysics Data System (ADS)
Muchini, Ronald; Gumindoga, Webster; Togarepi, Sydney; Pinias Masarira, Tarirai; Dube, Timothy
2018-05-01
Zimbabwe's water resources are under pressure from both point and non-point sources of pollution, hence the need for regular and synoptic assessment. In-situ and laboratory-based methods of water quality monitoring are point-based and do not provide synoptic coverage of the lakes. This paper presents novel methods for retrieving water quality parameters in Chivero and Manyame lakes, Zimbabwe, from remotely sensed imagery. The remotely sensed water quality parameters are validated against in-situ data. The paper also presents an application for automated retrieval of those parameters, developed in VB6, as well as a web portal for disseminating the water quality information to relevant stakeholders. The web portal is developed using GeoServer, OpenLayers, and HTML. Results show the spatial variation of water quality and an automated remote sensing and GIS system with a web front end to disseminate water quality information.
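Water quality retrieval from imagery of this kind is often an empirical band-ratio computation. The sketch below shows the generic shape of such an algorithm; the coefficients and reflectance values are placeholders, not the paper's calibrated retrievals, which are tuned per sensor against the in-situ data described above.

```python
import numpy as np

def band_ratio_chla(blue, green, a=0.3, b=-2.5):
    """Generic empirical band-ratio estimate of chlorophyll-a (mg/m^3).

    Coefficients a and b are illustrative placeholders; operational
    algorithms fit them per sensor using coincident in-situ measurements.
    """
    ratio = np.log10(np.maximum(blue, 1e-6) / np.maximum(green, 1e-6))
    return 10.0 ** (a + b * ratio)

blue = np.array([[0.012, 0.010], [0.015, 0.011]])   # remote sensing reflectance, blue band
green = np.array([[0.010, 0.011], [0.009, 0.010]])  # remote sensing reflectance, green band
print(band_ratio_chla(blue, green).round(2))
```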
The Automated Geospatial Watershed Assessment (AGWA) tool is a desktop application that uses widely available standardized spatial datasets to derive inputs for multi-scale hydrologic models (Miller et al., 2007). The required data sets include topography (DEM data), soils, clima...
Progress in Fully Automated Abdominal CT Interpretation
Summers, Ronald M.
2016-01-01
OBJECTIVE Automated analysis of abdominal CT has advanced markedly over just the last few years. Fully automated assessments of organs, lymph nodes, adipose tissue, muscle, bowel, spine, and tumors are some examples where tremendous progress has been made. Computer-aided detection of lesions has also improved dramatically. CONCLUSION This article reviews the progress and provides insights into what is in store in the near future for automated analysis of abdominal CT, ultimately leading to fully automated interpretation. PMID:27101207
reCAPTCHA: human-based character recognition via Web security measures.
von Ahn, Luis; Maurer, Benjamin; McMillen, Colin; Abraham, David; Blum, Manuel
2008-09-12
CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) are widespread security measures on the World Wide Web that prevent automated programs from abusing online services. They do so by asking humans to perform a task that computers cannot yet perform, such as deciphering distorted characters. Our research explored whether such human effort can be channeled into a useful purpose: helping to digitize old printed material by asking users to decipher scanned words from books that computerized optical character recognition failed to recognize. We showed that this method can transcribe text with a word accuracy exceeding 99%, matching the guarantee of professional human transcribers. Our apparatus is deployed in more than 40,000 Web sites and has transcribed over 440 million words.
Kolodziejczyk, Julia K; Norman, Gregory J; Barrera-Ng, Angelica; Dillon, Lindsay; Marshall, Simon; Arredondo, Elva; Rock, Cheryl L; Raab, Fred; Griswold, William G; Sullivan, Mark; Patrick, Kevin
2013-11-06
Little is known about the feasibility and acceptability of tailored text message based weight loss programs for English and Spanish-language speakers. This pilot study evaluated the feasibility, acceptability, and estimated impact of a tailored text message based weight loss program for English and Spanish-language speakers. The purpose of this pilot study was to inform the development of a full-scale randomized trial. Twenty overweight or obese participants (mean age 40.10, SD 8.05; 8/20, 40% male; 9/20, 45% Spanish-speakers) were recruited in San Diego, California, from March to May 2011 and evaluated in a one-group pre/post clinical trial. For 8 weeks, participants received and responded to 3-5 text messages daily sent from a fully automated text messaging system. They also received printed weight loss materials and brief 10-15 minute weekly counseling calls. To estimate the impact of the program, the primary outcome was weight (kg) measured during face-to-face measurement visits by trained research staff. Pre and post differences in weight were analyzed with a one-way repeated measures analysis of variance. Differences by language preference at both time points were analyzed with t tests. Body mass index and weight management behaviors also were examined. Feasibility and acceptability were determined by recruitment success, adherence (ie, percentage of replies to interactive text messages and attrition), and participant satisfaction. Participants who completed the final assessment (N=18) decreased body weight by 1.85 kg (F1,17=10.80, P=.004, CI∆ 0.66-3.03, η(2)=0.39). At both time points, there were no differences in weight by language preference. Participants responded to 88.04% (986/1120) of interactive text messages, the attrition rate was 10% (2/20), and 94% (19/20) of participants reported satisfaction with the program. This fully automated text message based weight program was feasible with English and Spanish-speakers and may have promoted modest weight loss over an 8-week period. Clinicaltrials.gov NCT01171586; http://clinicaltrials.gov/ct2/show/NCT01171586 (Archived by WebCite at http://www.webcitation.org/6Ksr6dl7n).
Meyer, Denny; Austin, David William; Kyrios, Michael
2011-01-01
Background The development of e-mental health interventions to treat or prevent mental illness and to enhance wellbeing has risen rapidly over the past decade. This development assists the public in sidestepping some of the obstacles that are often encountered when trying to access traditional face-to-face mental health care services. Objective The objective of our study was to investigate the posttreatment effectiveness of five fully automated self-help cognitive behavior e-therapy programs for generalized anxiety disorder (GAD), panic disorder with or without agoraphobia (PD/A), obsessive–compulsive disorder (OCD), posttraumatic stress disorder (PTSD), and social anxiety disorder (SAD) offered to the international public via Anxiety Online, an open-access full-service virtual psychology clinic for anxiety disorders. Methods We used a naturalistic participant choice, quasi-experimental design to evaluate each of the five Anxiety Online fully automated self-help e-therapy programs. Participants were required to have at least subclinical levels of one of the anxiety disorders to be offered the associated disorder-specific fully automated self-help e-therapy program. These programs are offered free of charge via Anxiety Online. Results A total of 225 people self-selected one of the five e-therapy programs (GAD, n = 88; SAD, n = 50; PD/A, n = 40; PTSD, n = 30; OCD, n = 17) and completed their 12-week posttreatment assessment. Significant improvements were found on 21/25 measures across the five fully automated self-help programs. At postassessment we observed significant reductions on all five anxiety disorder clinical disorder severity ratings (Cohen d range 0.72–1.22), increased confidence in managing one’s own mental health care (Cohen d range 0.70–1.17), and decreases in the total number of clinical diagnoses (except for the PD/A program, where a positive trend was found) (Cohen d range 0.45–1.08). In addition, we found significant improvements in quality of life for the GAD, OCD, PTSD, and SAD e-therapy programs (Cohen d range 0.11–0.96) and significant reductions relating to general psychological distress levels for the GAD, PD/A, and PTSD e-therapy programs (Cohen d range 0.23–1.16). Overall, treatment satisfaction was good across all five e-therapy programs, and posttreatment assessment completers reported using their e-therapy program an average of 395.60 (SD 272.2) minutes over the 12-week treatment period. Conclusions Overall, all five fully automated self-help e-therapy programs appear to be delivering promising high-quality outcomes; however, the results require replication. Trial Registration Australian and New Zealand Clinical Trials Registry ACTRN121611000704998; http://www.anzctr.org.au/trial_view.aspx?ID=336143 (Archived by WebCite at http://www.webcitation.org/618r3wvOG) PMID:22057287
Vierhile, Molly
2017-01-01
Background Web-based cognitive-behavioral therapeutic (CBT) apps have demonstrated efficacy but are characterized by poor adherence. Conversational agents may offer a convenient, engaging way of getting support at any time. Objective The objective of the study was to determine the feasibility, acceptability, and preliminary efficacy of a fully automated conversational agent to deliver a self-help program for college students who self-identify as having symptoms of anxiety and depression. Methods In an unblinded trial, 70 individuals age 18-28 years were recruited online from a university community social media site and were randomized to receive either 2 weeks (up to 20 sessions) of self-help content derived from CBT principles in a conversational format with a text-based conversational agent (Woebot) (n=34) or were directed to the National Institute of Mental Health ebook, “Depression in College Students,” as an information-only control group (n=36). All participants completed Web-based versions of the 9-item Patient Health Questionnaire (PHQ-9), the 7-item Generalized Anxiety Disorder scale (GAD-7), and the Positive and Negative Affect Scale at baseline and 2-3 weeks later (T2). Results Participants were on average 22.2 years old (SD 2.33), 67% female (47/70), mostly non-Hispanic (93%, 54/58), and Caucasian (79%, 46/58). Participants in the Woebot group engaged with the conversational agent an average of 12.14 (SD 2.23) times over the study period. No significant differences existed between the groups at baseline, and 83% (58/70) of participants provided data at T2 (17% attrition). Intent-to-treat univariate analysis of covariance revealed a significant group difference on depression such that those in the Woebot group significantly reduced their symptoms of depression over the study period as measured by the PHQ-9 (F=6.47; P=.01) while those in the information control group did not. In an analysis of completers, participants in both groups significantly reduced anxiety as measured by the GAD-7 (F1,54= 9.24; P=.004). Participants’ comments suggest that process factors were more influential on their acceptability of the program than content factors mirroring traditional therapy. Conclusions Conversational agents appear to be a feasible, engaging, and effective way to deliver CBT. PMID:28588005
Barrero, Roberto A; Napier, Kathryn R; Cunnington, James; Liefting, Lia; Keenan, Sandi; Frampton, Rebekah A; Szabo, Tamas; Bulman, Simon; Hunter, Adam; Ward, Lisa; Whattam, Mark; Bellgard, Matthew I
2017-01-11
Detecting and preventing the entry of exotic viruses and viroids at the border is critical for protecting plant industry trade worldwide. Existing post-entry quarantine screening protocols rely on time-consuming biological indicators and/or molecular assays that require knowledge of the infecting viral pathogens. Plants have developed the ability to recognise and respond to viral infections through Dicer-like enzymes that cleave viral sequences into specific small RNA products. Many studies reported the use of a broad range of small RNAs encompassing the product sizes of several Dicer enzymes involved in distinct biological pathways. Here we optimise the assembly of viral sequences by using specific small RNA subsets. We sequenced the small RNA fractions of 21 plants held at quarantine glasshouse facilities in Australia and New Zealand. Benchmarking of several de novo assembler tools showed SPAdes using a kmer of 19 to produce the best assembly outcomes. We also found that de novo assembly using 21-25 nt small RNAs can result in chimeric assemblies of viral sequences and plant host sequences. Such non-specific assemblies can be resolved by using 21-22 nt or 24 nt small RNA subsets. Among the 21 selected samples, we identified contigs with sequence similarity to 18 viruses and 3 viroids in 13 samples. Most of the viruses were assembled using only 21-22 nt long virus-derived siRNAs (viRNAs), except for one Citrus endogenous pararetrovirus that was more efficiently assembled using 24 nt long viRNAs. All three viroids found in this study were fully assembled using either 21-22 nt or 24 nt viRNAs. Optimised analysis workflows were customised within the Yabi web-based analytical environment. We present a fully automated viral surveillance and diagnosis web-based bioinformatics toolkit that provides a flexible, user-friendly, robust, and scalable interface for the discovery and diagnosis of viral pathogens. We have implemented an automated viral surveillance and diagnosis (VSD) bioinformatics toolkit that produces improved virus and viroid sequence assemblies. The VSD toolkit provides several optimised and reusable workflows applicable to distinct viral pathogens. We envisage that this resource will facilitate the surveillance and diagnosis of viral pathogens in plants, insects, and invertebrates.
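The length-based partitioning of small RNAs into 21-22 nt and 24 nt subsets before assembly can be expressed in a few lines of Biopython; the file names below are illustrative.

```python
from Bio import SeqIO  # Biopython

def split_by_length(fastq_in, prefix):
    """Write 21-22 nt and 24 nt small-RNA subsets to separate FASTQ files,
    mirroring the read-length partitioning used before de novo assembly."""
    subsets = {"21_22": [], "24": []}
    for rec in SeqIO.parse(fastq_in, "fastq"):
        n = len(rec.seq)
        if n in (21, 22):
            subsets["21_22"].append(rec)
        elif n == 24:
            subsets["24"].append(rec)
    for tag, records in subsets.items():
        SeqIO.write(records, f"{prefix}.{tag}.fastq", "fastq")

split_by_length("smallrna.fastq", "viRNA")  # input file name is hypothetical
```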
SeqMule: automated pipeline for analysis of human exome/genome sequencing data.
Guo, Yunfei; Ding, Xiaolei; Shen, Yufeng; Lyon, Gholson J; Wang, Kai
2015-09-18
Next-generation sequencing (NGS) technology has greatly helped us identify disease-contributory variants for Mendelian diseases. However, users are often faced with issues such as software compatibility, complicated configuration, and lack of access to high-performance computing facilities. Discrepancies exist among aligners and variant callers. We developed a computational pipeline, SeqMule, to perform automated variant calling from NGS data on human genomes and exomes. SeqMule integrates computational-cluster-free parallelization capability built on top of the variant callers, and facilitates normalization/intersection of variant calls to generate a consensus set with high confidence. SeqMule integrates 5 alignment tools and 5 variant calling algorithms and accepts various combinations, all by a one-line command, thereby allowing highly flexible yet fully automated variant calling. In a modern machine (2 Intel Xeon X5650 CPUs, 48 GB memory), when fast turn-around is needed, SeqMule generates annotated VCF files in a day from a 30X whole-genome sequencing data set; when more accurate calling is needed, SeqMule generates a consensus call set that improves over single callers, as measured by both Mendelian error rate and consistency. SeqMule supports Sun Grid Engine for parallel processing, offers a turn-key solution for deployment on Amazon Web Services, and provides quality checks, Mendelian error checks, consistency evaluation, and HTML-based reports. SeqMule is available at http://seqmule.openbioinformatics.org.
Fananapazir, Ghaneh; Bashir, Mustafa R; Marin, Daniele; Boll, Daniel T
2015-06-01
To evaluate the performance of a prototype, fully-automated post-processing solution for whole-liver and lobar segmentation based on MDCT datasets. A polymer liver phantom was used to assess the accuracy of post-processing applications, comparing phantom volumes determined via Archimedes' principle with MDCT segmented datasets. For the IRB-approved, HIPAA-compliant study, 25 patients were enrolled. Volumetry performance compared the manual approach with the automated prototype, assessing intraobserver variability and interclass correlation for whole-organ and lobar segmentation using ANOVA comparison. Fidelity of segmentation was evaluated qualitatively. Phantom volume was 1581.0 ± 44.7 mL; manually segmented datasets estimated 1628.0 ± 47.8 mL, representing a mean overestimation of 3.0%, while automatically segmented datasets estimated 1601.9 ± 0 mL, representing a mean overestimation of 1.3%. Whole-liver and segmental volumetry demonstrated no significant intraobserver variability for either manual or automated measurements. For whole-liver volumetry, automated measurement repetitions resulted in identical values; reproducible whole-organ volumetry was also achieved with manual segmentation, p(ANOVA) 0.98. For lobar volumetry, automated segmentation improved reproducibility over the manual approach, without significant measurement differences for either methodology, p(ANOVA) 0.95-0.99. Whole-organ and lobar segmentation results from manual and automated segmentation showed no significant differences, p(ANOVA) 0.96-1.00. Assessment of segmentation fidelity found that segments I-IV/VI showed greater segmentation inaccuracies compared to the remaining right hepatic lobe segments. Fully-automated whole-liver segmentation was non-inferior to manual approaches, with improved reproducibility and post-processing duration; automated dual-seed lobar segmentation showed slight tendencies to underestimate right hepatic lobe volume and greater variability in edge detection for the left hepatic lobe compared to manual segmentation.
Rabal, Obdulia; Link, Wolfgang; Serelde, Beatriz G; Bischoff, James R; Oyarzabal, Julen
2010-04-01
Here we report the development and validation of a complete solution to manage and analyze the data produced by image-based phenotypic screening campaigns of small-molecule libraries. In one step, initial crude images are analyzed for multiple cytological features, statistical analysis is performed, and molecules that produce the desired phenotypic profile are identified. A naïve Bayes classifier, integrating chemical and phenotypic spaces, is built and utilized during the process to assess those images initially classified as "fuzzy", providing automated, iterative feedback tuning. Simultaneously, all this information is annotated directly in a relational database containing the chemical data. This novel, fully automated method was validated by re-analyzing the results of a high-content screening campaign involving 33,992 molecules used to identify inhibitors of the PI3K/Akt signaling pathway. Ninety-two percent of the confirmed hits identified by the conventional multistep analysis method were identified using this integrated one-step system, along with 40 new hits (14.9% of the total) that were originally false negatives. Ninety-six percent of true negatives were properly recognized as well. Web-based access to the database, with customizable data retrieval and visualization tools, facilitates posterior analysis of the annotated cytological features and allows identification of additional phenotypic profiles; thus, further analysis of the original crude images is not required.
Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.
Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J
2017-10-30
An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop, and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources and to offer command-line tools that automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages and to send data to a JavaScript-based web client.
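From a client's point of view, the JSON-over-REST access pattern the platform describes might look like the following sketch; the base URL, endpoint, and field names are hypothetical, as the abstract does not specify the API routes.

```python
import requests

BASE = "https://chem.example.org/api/v1"  # hypothetical deployment of such a platform

# Fetch calculation records as JSON; field names are illustrative only. The
# point is the shape of the interaction: plain HTTP, JSON in and out.
r = requests.get(f"{BASE}/calculations", params={"moleculeId": "abc123"}, timeout=30)
r.raise_for_status()
for calc in r.json():
    print(calc.get("code"), calc.get("properties", {}).get("totalEnergy"))
```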
Salimi, Nima; Loh, Kar Hoe; Kaur Dhillon, Sarinder; Chong, Ving Ching
2016-01-01
Background. Fish species may be identified based on their unique otolith shape or contour. Several pattern recognition methods have been proposed to classify fish species through morphological features of the otolith contours. However, no fully-automated species identification model has achieved a classification accuracy higher than 80%. The purpose of the current study is to develop a fully-automated model, based on otolith contours, that identifies fish species with high classification accuracy. Methods. Images of the right sagittal otoliths of 14 fish species from three families, namely Sciaenidae, Ariidae, and Engraulidae, were used to develop the proposed identification model. The short-time Fourier transform (STFT) was used, for the first time in the area of otolith shape analysis, to extract important features of the otolith contours. Discriminant Analysis (DA) was used as the classification technique to train and test the model based on the extracted features. Results. Performance of the model was demonstrated using species from the three families separately, as well as all species combined. Overall classification accuracy of the model was greater than 90% in all cases. In addition, the effects of STFT variables on the performance of the identification model were explored in this study. Conclusions. The short-time Fourier transform could determine important features of the otolith outlines. The fully-automated model proposed in this study (STFT-DA) could predict the species of an unknown specimen with acceptable identification accuracy. The model code can be accessed at http://mybiodiversityontologies.um.edu.my/Otolith/ and https://peerj.com/preprints/1517/. The current model has the flexibility to be extended to more species and families in future studies.
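The feature-extraction-plus-classifier pipeline can be sketched as follows: an otolith contour, unrolled as a 1-D radius-versus-angle signal, is passed through SciPy's STFT, and the magnitudes feed a linear discriminant classifier. The synthetic contours and STFT window length are assumptions; the paper's preprocessing details differ.

```python
import numpy as np
from scipy.signal import stft
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def stft_features(contour_radii, nperseg=32):
    """Radius-vs-angle contour signal -> flattened magnitude STFT feature vector."""
    _, _, z = stft(contour_radii, nperseg=nperseg)
    return np.abs(z).ravel()

rng = np.random.default_rng(1)
theta = np.linspace(0, 2 * np.pi, 256)
# Synthetic stand-in for otolith contours: two "species" with different lobe counts.
X = np.array([stft_features(np.cos(k * theta) + 0.05 * rng.standard_normal(256))
              for k in [3] * 20 + [5] * 20])
y = np.array([0] * 20 + [1] * 20)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])   # train on half
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))  # test on the other half
```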
Bewick, Bridgette M; West, Robert M; Barkham, Michael; Mulhern, Brendan; Marlow, Robert; Traviss, Gemma; Hill, Andrew J
2013-07-24
Alcohol consumption in the student population continues to be cause for concern. Building on the established evidence base for traditional brief interventions, interventions using the Internet as a mode of delivery are being developed. Published evidence of replication of initial findings and of ongoing development and modification of Web-based personalized feedback interventions for student alcohol use is relatively rare. The current paper reports on a replication of the initial Unitcheck feasibility trial. The objective was to evaluate the effectiveness of Unitcheck, a Web-based intervention that provides instant personalized feedback on alcohol consumption. It was hypothesized that use of Unitcheck would be associated with a reduction in alcohol consumption. A randomized controlled trial was conducted with two arms (control=assessment only; intervention=fully automated personalized feedback delivered using a Web-based intervention). The intervention was available from week 1 through week 15. Students at a UK university who were completing a university-wide annual student union electronic survey were invited to participate in the current study. Participants (n=1618) were stratified by sex, age group, year of study, and self-reported alcohol consumption, then randomly assigned to one of the two arms and invited to participate in the current trial. Participants were not blind to allocation. In total, 1478 participants (n=723 intervention, n=755 control) accepted the invitation. Of these, 70% were female, ages ranged from 17-50 years, and 88% were white/white British. Data were collected electronically via two websites, one for each treatment arm. Participants completed assessments at weeks 1, 16, and 34. Assessment included CAGE, a 7-day retrospective drinking diary, and drinks consumed per drinking occasion. The regression model predicted a monitoring effect, with participants who completed assessments reducing alcohol consumption over the final week. Further reductions were predicted for those allocated to receive the intervention, and additional reductions were predicted as the number of visits to the intervention website increased. Unitcheck can reduce the amount of alcohol consumed, and the reduction can be sustained in the medium term (ie, 19 weeks after the intervention was withdrawn). The findings suggest self-monitoring is an active ingredient of Web-based personalized feedback.
Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly
2013-01-01
High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized across varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it combines 90% accuracy with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions, indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. The textural analysis-based machine-learning approach thus offers a high-performance, condition-invariable tool for automated neurite segmentation. PMID:23261652
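As an illustration of textural features of the kind such a classifier might consume, the sketch below computes grey-level co-occurrence matrix (GLCM) statistics with scikit-image. The abstract does not specify the paper's actual feature set, so the patches and properties here are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # spelled greycomatrix in scikit-image < 0.19

def texture_vector(patch):
    """GLCM contrast/homogeneity/energy for one uint8 patch; a stand-in
    for the (unspecified) textural features used by the classifier."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy")])

rng = np.random.default_rng(0)
# Synthetic patches: a striped "neurite-like" texture vs. pure noise background.
neurite = (128 + 40 * np.sin(np.arange(32))[:, None]
           + 5 * rng.standard_normal((32, 32))).astype(np.uint8)
background = rng.integers(0, 256, (32, 32), dtype=np.uint8)
print(texture_vector(neurite).round(3))
print(texture_vector(background).round(3))
# Feature vectors like these would then train any standard classifier.
```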
An Intelligent Case-Based Help Desk Providing Web-Based Support for EOSDIS Customers
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.; Thurman, David A.
1998-01-01
This paper describes a project that extends the concept of help desk automation by offering World Wide Web access to a case-based help desk. It explores the use of case-based reasoning and cognitive engineering models to create an 'intelligent' help desk system, one that learns. It discusses the AutoHelp architecture for such a help desk and summarizes the technologies used to create a help desk for NASA data users.
An ontology-driven tool for structured data acquisition using Web forms.
Gonçalves, Rafael S; Tu, Samson W; Nyulas, Csongor I; Tierney, Michael J; Musen, Mark A
2017-08-01
Structured data acquisition is a common task that is widely performed in biomedicine. However, current solutions for this task are far from providing a means to structure data in such a way that it can be automatically employed in decision making (e.g., in our example application domain of clinical functional assessment, for determining eligibility for disability benefits) based on conclusions derived from acquired data (e.g., assessment of impaired motor function). To use data in these settings, we need it structured in a way that can be exploited by automated reasoning systems, for instance, in the Web Ontology Language (OWL), the de facto ontology language for the Web. We tackle the problem of generating Web-based assessment forms from OWL ontologies, and aggregating input gathered through these forms as an ontology of "semantically-enriched" form data that can be queried using an RDF query language, such as SPARQL. We developed an ontology-based structured data acquisition system, which we present through its specific application to the clinical functional assessment domain. We found that data gathered through our system is highly amenable to automatic analysis using queries. We demonstrated how ontologies can be used to help structure Web-based forms and to semantically enrich the data elements of the acquired structured data. The ontologies associated with the enriched data elements enable automated inferences and provide a rich vocabulary for performing queries.
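As an illustration of the kind of query-driven analysis described above, the following sketch builds a tiny graph of "semantically enriched" form answers with Python's rdflib and queries it with SPARQL. The vocabulary (ex:FormAnswer, ex:question, ex:value) is hypothetical and is not the ontology used by the authors' system.

```python
# A minimal sketch of querying "semantically enriched" form data with SPARQL.
# The vocabulary (ex:FormAnswer, ex:question, ex:value) is hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/assessment#")
g = Graph()

# Two acquired data elements, each enriched with the question it answers.
for subj, question, value in [
        (EX.answer1, "motor_function", "impaired"),
        (EX.answer2, "motor_function", "normal")]:
    g.add((subj, RDF.type, EX.FormAnswer))
    g.add((subj, EX.question, Literal(question)))
    g.add((subj, EX.value, Literal(value)))

# Find all answers reporting impaired motor function.
results = g.query("""
    PREFIX ex: <http://example.org/assessment#>
    SELECT ?answer WHERE {
        ?answer a ex:FormAnswer ;
                ex:question "motor_function" ;
                ex:value "impaired" .
    }""")
for row in results:
    print(row.answer)
```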
Hoehl, Melanie M; Weißert, Michael; Dannenberg, Arne; Nesch, Thomas; Paust, Nils; von Stetten, Felix; Zengerle, Roland; Slocum, Alexander H; Steigert, Juergen
2014-06-01
This paper introduces a disposable battery-driven heating system for loop-mediated isothermal DNA amplification (LAMP) inside a centrifugally-driven DNA purification platform (LabTube). We demonstrate LabTube-based fully automated DNA purification of as few as 100 cell-equivalents of verotoxin-producing Escherichia coli (VTEC) in water, milk and apple juice in a laboratory centrifuge, followed by integrated and automated LAMP amplification, with a reduction of hands-on time from 45 to 1 min. The heating system consists of two parallel SMD thick-film resistors and an NTC as heating and temperature-sensing elements. They are driven by a 3 V battery and controlled by a microcontroller. The LAMP reagents are stored in the elution chamber and the amplification starts immediately after the eluate is purged into the chamber. The LabTube, including its microcontroller-based heating system, demonstrates contamination-free and automated sample-to-answer nucleic acid testing within a laboratory centrifuge. The heating system can be easily parallelized within one LabTube and is deployable for a variety of heating and electrical applications.
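The control task described, a microcontroller reading an NTC and driving heating resistors to hold an isothermal LAMP temperature, can be pictured as a simple hysteresis loop. The simulation sketch below is illustrative only; the setpoint, thermal constants, and thresholds are assumptions, not the LabTube firmware.

```python
# Minimal simulation of an on/off (hysteresis) temperature controller, as a
# microcontroller might run for isothermal LAMP. All constants are illustrative.
TARGET_C = 65.0      # typical LAMP temperature; an assumption, not from the paper
HYSTERESIS_C = 0.5
AMBIENT_C = 22.0
HEAT_RATE = 0.8      # deg C per step with heater on (illustrative)
COOL_COEFF = 0.01    # Newtonian cooling coefficient per step (illustrative)

def step(temp, heater_on):
    """Advance the simulated chamber temperature by one control step."""
    if heater_on:
        temp += HEAT_RATE
    temp -= COOL_COEFF * (temp - AMBIENT_C)  # passive heat loss
    return temp

temp, heater_on = AMBIENT_C, False
for _ in range(600):
    # Bang-bang control with a small hysteresis band around the setpoint.
    if temp < TARGET_C - HYSTERESIS_C:
        heater_on = True
    elif temp > TARGET_C + HYSTERESIS_C:
        heater_on = False
    temp = step(temp, heater_on)

print(f"temperature after 600 steps: {temp:.1f} C")
```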
Solenhill, Madeleine; Grotta, Alessandra; Pasquali, Elena; Bakkman, Linda; Bellocco, Rino; Trolle Lagerros, Ylva
2016-08-11
Lifestyle-related health problems are an important health concern in the transport service industry. Web- and telephone-based interventions could be suitable for this target group, which requires tailored approaches. To evaluate the effect of tailored Web-based health feedback and optional telephone coaching on lifestyle factors (body mass index-BMI, dietary intake, physical activity, stress, sleep, tobacco and alcohol consumption, disease history, self-perceived health, and motivation to change health habits), in comparison to no health feedback or telephone coaching. Overall, 3,876 employees in the Swedish transport services were emailed a Web-based questionnaire. They were randomized into a control group (group A; 498 of 1238 responded, 40.23%), an intervention Web group (group B; 482 of 1305 responded, 36.93%), or an intervention Web + telephone group (group C; 493 of 1333 responded, 36.98%). All groups received an identical questionnaire; only the interventions differed. Group B received tailored Web-based health feedback, and group C received tailored Web-based health feedback plus optional telephone coaching if the participants' reported health habits did not meet the national guidelines, or if they expressed motivation to change health habits. The Web-based feedback was fully automated. Telephone coaching was performed by trained health counselors. Nine months later, all participants received a follow-up questionnaire. Descriptive statistics, the chi-square test, analysis of variance, and generalized estimating equation (GEE) models were used. Overall, 981 of 1473 (66.60%) employees participated at baseline (men: 66.7%, mean age: 44 years, mean BMI: 26.4 kg/m²) and follow-up. No significant differences were found in reported health habits between the 3 groups over time. However, significant changes were found in motivation to change. The intervention groups reported higher motivation to improve dietary habits (144 of 301 participants, 47.8%, and 165 of 324 participants, 50.9%, for groups B and C, respectively) and physical activity habits (181 of 301 participants, 60.1%, and 207 of 324 participants, 63.9%, for B and C, respectively) compared with the control group A (122 of 356 participants, 34.3%, for diet and 177 of 356 participants, 49.7%, for physical activity). At follow-up, the intervention groups had significantly decreased motivation (group B: P<.001 for change in diet; P<.001 for change in physical activity; group C: P=.007 for change in diet; P<.001 for change in physical activity), whereas the control group reported significantly increased motivation to change diet and physical activity (P<.001 for change in diet; P<.001 for change in physical activity). Tailored Web-based health feedback and the offer of optional telephone coaching did not have a positive health effect on employees in the transport services. However, our findings suggest an increased short-term motivation to change health behaviors related to diet and physical activity among those receiving tailored Web-based health feedback.
Automating Visualization Service Generation with the WATT Compiler
NASA Astrophysics Data System (ADS)
Bollig, E. F.; Lyness, M. D.; Erlebacher, G.; Yuen, D. A.
2007-12-01
As tasks and workflows become increasingly complex, software developers are devoting increasing attention to automation tools. Among many examples, the Automator tool from Apple collects components of a workflow into a single script, with very little effort on the part of the user. Tasks are most often described as a series of instructions. The granularity of the tasks dictates the tools to use. Compilers translate fine-grained instructions to assembler code, while scripting languages (Ruby, Perl) are used to describe a series of tasks at a higher level. Compilers can also be viewed as transformational tools: a cross-compiler can translate executable code written on one computer to assembler code understood on another, while transformational tools can translate from one high-level language to another. We are interested in creating visualization web services automatically, starting from stand-alone VTK (Visualization Toolkit) code written in Tcl. To this end, using the OCaml programming language, we have developed a compiler that translates Tcl into C++, including all the stubs, classes and methods to interface with gSOAP, a C++ implementation of the SOAP 1.1/1.2 protocols. This compiler, referred to as the Web Automation and Translation Toolkit (WATT), is the first step towards automated creation of specialized visualization web services without input from the user. The WATT compiler seeks to automate all aspects of web service generation, including the transport layer, the division of labor and the details related to interface generation. The WATT compiler is part of ongoing efforts within the NSF-funded VLab consortium [1] to facilitate and automate time-consuming tasks for the science related to understanding planetary materials. Through examples of services produced by WATT for the VLab portal, we will illustrate features, limitations and the improvements necessary to achieve the ultimate goal of complete and transparent automation in the generation of web services. In particular, we will detail the generation of a charge density visualization service applicable to output from the quantum calculations of the VLab computation workflows, plus another service for mantle convection visualization. We also discuss WATT-LIVE [2], a web-based interface that allows users to interact with WATT. With WATT-LIVE, users submit Tcl code and retrieve its C++ translation with the various files and scripts necessary to locally install the tailor-made web service, or launch the service for a limited session on our test server. This work is supported by NSF through ITR grant NSF-0426867. [1] Virtual Laboratory for Earth and Planetary Materials, http://vlab.msi.umn.edu, September 2007. [2] WATT-LIVE website, http://vlab2.scs.fsu.edu/watt-live, September 2007.
2010-01-01
Introduction Joint effusion is frequently associated with osteoarthritis (OA) flare-up and is an important marker of therapeutic response. This study aimed at developing and validating a fully automated system based on magnetic resonance imaging (MRI) for the quantification of joint effusion volume in knee OA patients. Methods MRI examinations consisted of two axial sequences: a T2-weighted true fast imaging with steady-state precession and a T1-weighted gradient echo. An automated joint effusion volume quantification system using MRI was developed and validated (a) with calibrated phantoms (cylinder and sphere) and effusion from knee OA patients; (b) with assessment by manual quantification; and (c) by direct aspiration. Twenty-five knee OA patients with joint effusion were included in the study. Results The automated joint effusion volume quantification was developed as a four stage sequencing process: bone segmentation, filtering of unrelated structures, segmentation of joint effusion, and subvoxel volume calculation. Validation experiments revealed excellent coefficients of variation with the calibrated cylinder (1.4%) and sphere (0.8%) phantoms. Comparison of the OA knee joint effusion volume assessed by the developed automated system and by manual quantification was also excellent (r = 0.98; P < 0.0001), as was the comparison with direct aspiration (r = 0.88; P = 0.0008). Conclusions The newly developed fully automated MRI-based system provided precise quantification of OA knee joint effusion volume with excellent correlation with data from phantoms, a manual system, and joint aspiration. Such an automated system will be instrumental in improving the reproducibility/reliability of the evaluation of this marker in clinical application. PMID:20846392
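The four-stage sequencing process described above can be pictured as a simple pipeline. The skeleton below, run on a synthetic volume, is a hypothetical illustration of the stages rather than the published system; only the final step (voxel count times voxel volume) follows standard practice directly.

```python
# Hypothetical skeleton of a four-stage effusion-quantification pipeline,
# run here on a synthetic MRI-like volume. Not the authors' implementation.
import numpy as np

def segment_bone(vol):
    return vol < 0.2                 # placeholder: dark voxels stand in for bone

def filter_unrelated(vol, bone):
    return np.where(bone, 0.0, vol)  # suppress bone before fluid segmentation

def segment_effusion(vol):
    return vol > 0.8                 # placeholder: bright (fluid-like) voxels

def effusion_volume_ml(mask, voxel_mm3):
    return mask.sum() * voxel_mm3 / 1000.0  # mm^3 -> mL

rng = np.random.default_rng(0)
volume = rng.random((64, 64, 32))    # synthetic intensity volume
voxel_mm3 = 0.5 * 0.5 * 2.0          # illustrative voxel size

bone = segment_bone(volume)
soft = filter_unrelated(volume, bone)
fluid = segment_effusion(soft)
print(f"estimated effusion volume: {effusion_volume_ml(fluid, voxel_mm3):.1f} mL")
```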
Arab, Lenore; Hahn, Harry; Henry, Judith; Chacko, Sara; Winter, Ashley; Cambou, Mary C
2010-03-01
Screening and tracking subjects and data management in clinical trials require significant investments in manpower that can be reduced through the use of web-based systems. To support a validation trial of various dietary assessment tools that required multiple clinic visits and eight repeats of online assessments, we developed an interactive web-based system to automate all levels of management of a biomarker-based clinical trial. The "Energetics System" was developed to support 1) the work of the study coordinator in recruiting, screening and tracking subject flow, 2) the need of the principal investigator to review study progress, and 3) continuous data analysis. The system was designed to automate web-based self-screening into the trial. It supported scheduling tasks and triggered tailored messaging for late and non-responders. For the investigators, it provided real-time status overviews on all subjects, created electronic case reports, supported data queries and prepared analytic data files. Encryption and multi-level password protection were used to ensure data privacy. The system was programmed iteratively and required six months of a web programmer's time along with active team engagement. In this study the enhancement in speed and efficiency of recruitment and quality of data collection as a result of this system outweighed the initial investment. Web-based systems have the potential to streamline the process of recruitment and day-to-day management of clinical trials in addition to improving efficiency and quality. Because of their added value they should be considered for trials of moderate size or complexity.
Pertuz, Said; McDonald, Elizabeth S; Weinstein, Susan P; Conant, Emily F; Kontos, Despina
2016-04-01
To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board-approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration-cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging-based estimates, with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment.
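The comparisons in this abstract rest on Pearson correlations with 95% confidence intervals. A quick sketch of that computation, using the standard Fisher z-transform for the interval on made-up data, follows.

```python
# Pearson r with a 95% CI via the Fisher z-transform, on made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
dbt = rng.normal(20, 16, 68)                 # illustrative VBD estimates (n=68)
mri = 0.9 * dbt + rng.normal(0, 5, 68)

r, p = stats.pearsonr(dbt, mri)
z = np.arctanh(r)                            # Fisher z-transform
se = 1.0 / np.sqrt(len(dbt) - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
print(f"r = {r:.2f} (95% CI: {lo:.2f}, {hi:.2f}), P = {p:.3g}")
```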
Haug, Severin; Kowatsch, Tobias; Castro, Raquel Paz; Filler, Andreas; Schaub, Michael P
2014-08-07
Problem drinking, particularly risky single-occasion drinking, is widespread among adolescents and young adults in most Western countries. Mobile phone text messaging allows a proactive and cost-effective delivery of short messages at any time and place, and allows the delivery of individualised information at times when young people typically drink alcohol. The main objective of the planned study is to test the efficacy of a combined web- and text messaging-based intervention to reduce problem drinking in young people with heterogeneous educational levels. A two-arm cluster-randomised controlled trial with one follow-up assessment after 6 months will be conducted to test the efficacy of the intervention in comparison to assessment only. The fully-automated intervention program will provide online feedback based on the social norms approach, as well as individually tailored mobile phone text messages, to stimulate (1) positive outcome expectations to drink within low-risk limits, (2) self-efficacy to resist alcohol and (3) planning processes to translate intentions to resist alcohol into action. Program participants will receive up to two weekly text messages over a period of 3 months. Study participants will be 934 students from approximately 93 upper secondary and vocational schools in Switzerland. The main outcome criterion will be risky single-occasion drinking in the 30 days preceding the follow-up assessment. This is the first study testing the efficacy of a combined web- and text messaging-based intervention to reduce problem drinking in young people. If this intervention approach proves to be effective, it could be easily implemented in various settings, and it could reach large numbers of young people in a cost-effective way. Current Controlled Trials ISRCTN59944705.
An Auto-management Thesis Program WebMIS Based on Workflow
NASA Astrophysics Data System (ADS)
Chang, Li; Jie, Shi; Weibo, Zhong
This paper presents an auto-management WebMIS, based on workflow, for a bachelor thesis program. A module for workflow dispatching is designed and realized using MySQL and J2EE, according to the working principles of a workflow engine. The module can automatically dispatch the workflow according to the system date, login information, and the work status of the user. The WebMIS shifts management from manual work to computer-based work, which not only standardizes the thesis program but also keeps the data and documents clean and consistent.
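The dispatching rule described, advancing the workflow from the system date, login information, and the user's work status, can be sketched as a small state machine. The stages, deadlines, and rules below are hypothetical, since the abstract does not enumerate them.

```python
# Hypothetical sketch of date- and status-driven workflow dispatching for a
# thesis-management system. Stages and deadlines are illustrative.
from datetime import date

STAGES = ["proposal", "draft", "review", "defense"]
DEADLINES = {"proposal": date(2024, 3, 1), "draft": date(2024, 5, 1),
             "review": date(2024, 6, 1)}

def dispatch(stage, work_done, today):
    """Return the workflow stage a user should see after login."""
    if work_done and stage != "defense":
        return STAGES[STAGES.index(stage) + 1]   # advance on completed work
    deadline = DEADLINES.get(stage)
    if deadline and today > deadline:
        return f"{stage} (overdue)"              # flag overdue work
    return stage

print(dispatch("proposal", work_done=True, today=date(2024, 2, 15)))  # -> draft
print(dispatch("draft", work_done=False, today=date(2024, 5, 10)))    # overdue
```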
A General Tool for Engineering the NAD/NADP Cofactor Preference of Oxidoreductases.
Cahn, Jackson K B; Werlang, Caroline A; Baumschlager, Armin; Brinkmann-Chen, Sabine; Mayo, Stephen L; Arnold, Frances H
2017-02-17
The ability to control enzymatic nicotinamide cofactor utilization is critical for engineering efficient metabolic pathways. However, the complex interactions that determine cofactor-binding preference render this engineering particularly challenging. Physics-based models have been insufficiently accurate and blind directed evolution methods too inefficient to be widely adopted. Building on a comprehensive survey of previous studies and our own prior engineering successes, we present a structure-guided, semirational strategy for reversing enzymatic nicotinamide cofactor specificity. This heuristic-based approach leverages the diversity and sensitivity of catalytically productive cofactor binding geometries to limit the problem to an experimentally tractable scale. We demonstrate the efficacy of this strategy by inverting the cofactor specificity of four structurally diverse NADP-dependent enzymes: glyoxylate reductase, cinnamyl alcohol dehydrogenase, xylose reductase, and iron-containing alcohol dehydrogenase. The analytical components of this approach have been fully automated and are available in the form of an easy-to-use web tool: Cofactor Specificity Reversal-Structural Analysis and Library Design (CSR-SALAD).
Zimmerman, Thea Palmer; Hull, Stephen G; McNutt, Suzanne; Mittl, Beth; Islam, Noemi; Guenther, Patricia M; Thompson, Frances E; Potischman, Nancy A; Subar, Amy F
2009-12-01
The National Cancer Institute (NCI) is developing an automated, self-administered 24-hour dietary recall (ASA24) application to collect and code dietary intake data. The goal of the ASA24 development is to create a web-based dietary interview based on the US Department of Agriculture (USDA) Automated Multiple Pass Method (AMPM) instrument currently used in the National Health and Nutrition Examination Survey (NHANES). The ASA24 food list, detail probes, and portion probes were drawn from the AMPM instrument; portion-size pictures from Baylor College of Medicine's Food Intake Recording Software System (FIRSSt) were added; and the food code/portion code assignments were linked to the USDA Food and Nutrient Database for Dietary Studies (FNDDS). The requirements that the interview be self-administered and fully auto-coded presented several challenges as the AMPM probes and responses were linked with the FNDDS food codes and portion pictures. This linking was accomplished through a "food pathway," or the sequence of steps that leads from a respondent's initial food selection, through the AMPM probes and portion pictures, to the point at which a food code and gram weight portion size are assigned. The ASA24 interview database that accomplishes this contains more than 1,100 food probes and more than 2 million food pathways and will include about 10,000 pictures of individual foods depicting up to 8 portion sizes per food. The ASA24 will make the administration of multiple days of recalls in large-scale studies economical and feasible.
ERIC Educational Resources Information Center
Cotos, Elena
2010-01-01
This dissertation presents an innovative approach to the development and empirical evaluation of Automated Writing Evaluation (AWE) technology used for teaching and learning. It introduces IADE (Intelligent Academic Discourse Evaluator), a new web-based AWE program that analyzes research article Introduction sections and generates immediate,…
Christen, Matthias; Del Medico, Luca; Christen, Heinz; Christen, Beat
2017-01-01
Recent advances in lower-cost DNA synthesis techniques have enabled new innovations in the field of synthetic biology. Still, the efficient design and higher-order assembly of genome-scale DNA constructs remain a labor-intensive process. Given this complexity, computer-assisted design tools that fragment large DNA sequences into fabricable DNA blocks are needed to pave the way towards streamlined assembly of biological systems. Here, we present the Genome Partitioner software, implemented as a web-based interface that permits multi-level partitioning of genome-scale DNA designs. Without the need for specialized computing skills, biologists can submit their DNA designs to a fully automated pipeline that generates the optimal retrosynthetic route for higher-order DNA assembly. To test the algorithm, we partitioned a 783 kb Caulobacter crescentus genome design. We validated the partitioning strategy by assembling a 20 kb test segment encompassing a difficult-to-synthesize DNA sequence. Successful assembly from 1 kb subblocks into the 20 kb segment highlights the effectiveness of the Genome Partitioner in reducing synthesis costs and timelines for higher-order DNA assembly. The Genome Partitioner is broadly applicable for translating DNA designs into ready-to-order sequences that can be assembled with standardized protocols, thus offering new opportunities to harness the diversity of microbial genomes for synthetic biology applications. The Genome Partitioner web tool can be accessed at https://christenlab.ethz.ch/GenomePartitioner.
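The core idea of fragmenting a large design into fabricable blocks that share assembly overlaps can be illustrated in a few lines. The block size and overlap below are illustrative, and the sketch ignores the synthesis constraints (e.g., hard-to-synthesize motifs) that the real Genome Partitioner optimizes for.

```python
# Illustrative partitioning of a DNA design into ~1 kb blocks that share
# overlaps for assembly. Ignores synthesis-constraint scoring, which is the
# hard part the Genome Partitioner actually solves.
def partition(seq, block=1000, overlap=40):
    """Split seq into blocks of `block` bp, each overlapping the next."""
    blocks, start = [], 0
    while start < len(seq):
        blocks.append(seq[start:start + block])
        start += block - overlap        # step back by the overlap length
    return blocks

design = "ACGT" * 5000                  # 20 kb toy segment
blocks = partition(design)
print(f"{len(blocks)} blocks; overlaps consistent: "
      f"{all(a[-40:] == b[:40] for a, b in zip(blocks, blocks[1:]))}")
```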
ROBOCAL: An automated NDA (nondestructive analysis) calorimetry and gamma isotopic system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, J.R.; Powell, W.D.; Ostenak, C.A.
1989-11-01
ROBOCAL, which is presently being developed and tested at Los Alamos National Laboratory, is a full-scale, prototype robotic system for remote calorimetric and gamma-ray analysis of special nuclear materials. It integrates a fully automated, multidrawer, vertical stacker-retriever system for staging unmeasured nuclear materials, and a fully automated gantry robot for computer-based selection and transfer of nuclear materials to calorimetric and gamma-ray measurement stations. Since ROBOCAL is designed for minimal operator intervention, a completely programmed user interface is provided to interact with the automated mechanical and assay systems. The assay system is designed to completely integrate calorimetric and gamma-ray data acquisition and to perform state-of-the-art analyses on both homogeneous and heterogeneous distributions of nuclear materials in a wide variety of matrices.
NASA Astrophysics Data System (ADS)
Adinolfi, M.; Archilli, F.; Baldini, W.; Baranov, A.; Derkach, D.; Panin, A.; Pearce, A.; Ustyuzhanin, A.
2017-10-01
Data quality monitoring, DQM, is crucial in a high-energy physics experiment to ensure the correct functioning of the experimental apparatus during data taking. DQM at LHCb is carried out in two phases. The first is performed on-site, in real time, using unprocessed data directly from the LHCb detector, while the second, also performed on-site, requires the reconstruction of the data selected by the LHCb trigger system and occurs later. For the LHC Run II data taking, the LHCb collaboration has re-engineered the DQM protocols and the DQM graphical interface, moving the latter to a web-based monitoring system, called Monet, thus allowing researchers to perform the second phase off-site. In order to support the operator's task, Monet is also equipped with an automated, fully configurable alarm system, allowing its use not only for DQM purposes, but also to track and assess the quality of LHCb software and simulation over time.
SOAP based web services and their future role in VO projects
NASA Astrophysics Data System (ADS)
Topf, F.; Jacquey, C.; Génot, V.; Cecconi, B.; André, N.; Zhang, T. L.; Kallio, E.; Lammer, H.; Facsko, G.; Stöckler, R.; Khodachenko, M.
2011-10-01
Modern state-of-the-art web services are of crucial importance for the interoperability of the different VO tools existing in the planetary community. SOAP-based web services assure the interconnectivity between different data sources and tools by providing a common protocol for communication. This paper will point out a best-practice approach with the Automated Multi-Dataset Analysis Tool (AMDA), developed by CDPP, Toulouse, and the provision of VEX/MAG data from a remote database located at IWF, Graz. Furthermore, the new FP7 project IMPEx will be introduced, with a potential usage example of AMDA web services in conjunction with simulation models.
Development of a Global Agricultural Hotspot Detection and Early Warning System
NASA Astrophysics Data System (ADS)
Lemoine, G.; Rembold, F.; Urbano, F.; Csak, G.
2015-12-01
The number of web-based platforms for crop monitoring has grown rapidly over the last few years, and anomaly maps and time profiles of remote sensing-derived indicators can now be accessed online through a number of web-based portals. However, while these systems make a large amount of crop monitoring data available to agriculture and food security analysts, there is no global platform which provides agricultural production hotspot warnings in a highly automated and timely manner. Therefore, a web-based system providing timely warning evidence as maps and short narratives is currently under development by the Joint Research Centre. The system (called "HotSpot Detection System of Agriculture Production Anomalies", HSDS) will focus on water-limited agricultural systems worldwide. The automatic analysis of relevant meteorological and vegetation indicators at selected administrative units (GAUL 1 level) will trigger warning messages for the areas where anomalous conditions are observed. The level of warning (ranging from "watch" to "alert") will depend on the nature and number of indicators for which an anomaly is detected. Information regarding the extent of the agricultural areas concerned by the anomaly and the progress of the agricultural season will complement the warning label. In addition, we are testing supplementary detailed information from other sources for the areas triggering a warning. These include the automatic, web-based, food security-tailored analysis of media (using the JRC Media Monitor semantic search engine) and the automatic detection of active crop area using Sentinel-1, upcoming Sentinel-2, and Landsat 8 imagery processed in Google Earth Engine. The basic processing will be fully automated and updated every 10 days, exploiting low-resolution rainfall estimates and satellite vegetation indices. Maps, trend graphs and statistics, accompanied by short narratives edited by a team of crop monitoring experts, will be made available on the website on a monthly basis.
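The abstract says the warning level, from "watch" to "alert", depends on the nature and number of anomalous indicators. The sketch below shows one way such a rule could look; the indicator names and thresholds are made up rather than taken from HSDS.

```python
# Hypothetical mapping from per-unit indicator anomalies to a warning level.
# Indicator names and rules are made up; the real HSDS rules differ.
def warning_level(anomalies):
    """anomalies: dict of indicator -> True if anomalous for this unit."""
    n = sum(anomalies.values())
    if n == 0:
        return "none"
    if anomalies.get("vegetation_index") and anomalies.get("rainfall"):
        return "alert"        # convergent evidence from independent sources
    return "watch" if n == 1 else "alert"

unit = {"rainfall": True, "vegetation_index": True, "soil_moisture": False}
print(warning_level(unit))    # -> alert
```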
Reproducibility of myelin content-based human habenula segmentation at 3 Tesla.
Kim, Joo-Won; Naidich, Thomas P; Joseph, Joshmi; Nair, Divya; Glasser, Matthew F; O'Halloran, Rafael; Doucet, Gaelle E; Lee, Won Hee; Krinsky, Hannah; Paulino, Alejandro; Glahn, David C; Anticevic, Alan; Frangou, Sophia; Xu, Junqian
2018-03-26
In vivo morphological study of the human habenula, a pair of small epithalamic nuclei adjacent to the dorsomedial thalamus, has recently gained significant interest for its role in reward and aversion processing. However, segmenting the habenula from in vivo magnetic resonance imaging (MRI) is challenging due to the habenula's small size and low anatomical contrast. Although manual and semi-automated habenula segmentation methods have been reported, the test-retest reproducibility of the segmented habenula volume and the consistency of the boundaries of the habenula segmentation have not been investigated. In this study, we evaluated the intra- and inter-site reproducibility of in vivo human habenula segmentation from 3T MRI (0.7-0.8 mm isotropic resolution) using our previously proposed semi-automated myelin contrast-based method and its fully-automated version, as well as a previously published manual geometry-based method. The habenula segmentation using our semi-automated method showed consistent boundary definition (high Dice coefficient, low mean distance, and moderate Hausdorff distance) and reproducible volume measurement (low coefficient of variation). Furthermore, the habenula boundary in our semi-automated segmentation from 3T MRI agreed well with that in the manual segmentation from 7T MRI (0.5 mm isotropic resolution) of the same subjects. Overall, our proposed semi-automated habenula segmentation showed reliable and reproducible habenula localization, while its fully-automated version offers an efficient way to analyze large samples.
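Two of the reproducibility measures cited here, the Dice coefficient and the coefficient of variation, are easy to state concretely. The sketch below computes both on toy segmentation masks and toy repeated volume measurements.

```python
# Dice overlap between two binary masks, and the coefficient of variation of
# repeated volume measurements -- two of the metrics cited in the abstract.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(2)
scan1 = rng.random((32, 32, 16)) > 0.7          # toy test segmentation
scan2 = scan1.copy()
scan2[0] = rng.random((32, 16)) > 0.7           # perturb one slice (retest)
print(f"Dice: {dice(scan1, scan2):.3f}")

volumes = np.array([31.2, 30.8, 31.5, 30.9])    # toy repeated volumes, mm^3
print(f"CV: {volumes.std(ddof=1) / volumes.mean():.3%}")
```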
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2009-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are directly related to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, which has a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software, which automatically performed digital enhancement of the images. The digitally enhanced images of the corneal endothelium were then transformed using the fast Fourier transform (FFT). Tools were developed and applied for the identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of its corresponding corneal endothelium, with the calculation based on well-known diffraction theory. Estimates of the cell density of the corneal endothelium were thus obtained using fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a strong correlation was found.
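The principle, recovering mean cell density from the dominant spatial frequency of the endothelial mosaic, can be demonstrated on a synthetic image. The sketch below finds the radial peak of the 2D FFT of a periodic pattern and converts it to a cell spacing; it illustrates the idea and is not the authors' Matlab software.

```python
# Estimate cell spacing from the dominant spatial frequency of a periodic
# mosaic via the 2D FFT. Synthetic pattern; illustrative only.
import numpy as np

N, period = 256, 16                     # image size and true cell spacing (px)
y, x = np.mgrid[0:N, 0:N]
img = np.cos(2 * np.pi * x / period) + np.cos(2 * np.pi * y / period)

spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
spec[N // 2, N // 2] = 0                # suppress the DC component

ky, kx = np.unravel_index(np.argmax(spec), spec.shape)
radius = np.hypot(ky - N // 2, kx - N // 2)   # dominant frequency, cycles/image
spacing = N / radius                          # pixels per cell
print(f"recovered spacing: {spacing:.1f} px (true: {period})")
# Cell density then scales as 1 / spacing**2, converted with the pixel size.
```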
Madduri, Ravi K.; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J.; Foster, Ian T.
2014-01-01
We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads. PMID:25342933
An automated image-collection system for crystallization experiments using SBS standard microplates.
Brostromer, Erik; Nan, Jie; Su, Xiao Dong
2007-02-01
As part of a structural genomics platform in a university laboratory, a low-cost, in-house-developed automated imaging system for SBS microplate experiments has been designed and constructed. The imaging system can scan a 96-well microplate in 2-6 min, depending on the plate layout and scanning options. A web-based crystallization database system has been developed, enabling users to follow their crystallization experiments from a web browser. As the system has been designed and built by students and crystallographers using commercially available parts, this report is intended to serve as a do-it-yourself example for laboratory robotics.
Adopting and adapting a commercial view of web services for the Navy
NASA Astrophysics Data System (ADS)
Warner, Elizabeth; Ladner, Roy; Katikaneni, Uday; Petry, Fred
2005-05-01
Web Services are being adopted as the enabling technology to provide net-centric capabilities for many Department of Defense operations. The Navy Enterprise Portal, for example, is Web Services-based, and the Department of the Navy is promulgating guidance for developing Web Services. Web Services, however, only constitute a baseline specification that provides the foundation on which users, under current approaches, write specialized applications in order to retrieve data over the Internet. Application development may increase dramatically as the number of different available Web Services increases. Reasons for specialized application development include XML schema versioning differences, adoption/use of diverse business rules, security access issues, and time/parameter naming constraints, among others. We are currently developing for the US Navy a system which will improve delivery of timely and relevant meteorological and oceanographic (MetOc) data to the warfighter. Our objective is to develop an Advanced MetOc Broker (AMB) that leverages Web Services technology to identify, retrieve and integrate relevant MetOc data in an automated manner. The AMB will utilize a Mediator, which will be developed by applying ontological research and schema matching techniques to MetOc forms of data. The AMB, using the Mediator, will support a new, advanced approach to the use of Web Services; namely, the automated identification, retrieval and integration of MetOc data. Systems based on this approach will then not require extensive end-user application development for each Web Service from which data can be retrieved. Users anywhere on the globe will be able to receive timely environmental data that fits their particular needs.
Peak picking multidimensional NMR spectra with the contour geometry based algorithm CYPICK.
Würz, Julia M; Güntert, Peter
2017-01-01
The automated identification of signals in multidimensional NMR spectra is a challenging task, complicated by signal overlap, noise, and spectral artifacts, for which no universally accepted method is available. Here, we present a new peak picking algorithm, CYPICK, that follows, as far as possible, the manual approach taken by a spectroscopist who analyzes peak patterns in contour plots of the spectrum, but is fully automated. Human visual inspection is replaced by the evaluation of geometric criteria applied to contour lines, such as local extremality, approximate circularity (after appropriate scaling of the spectrum axes), and convexity. The performance of CYPICK was evaluated for a variety of spectra from different proteins by systematic comparison with peak lists obtained by other, manual or automated, peak picking methods, as well as by analyzing the results of automated chemical shift assignment and structure calculation based on input peak lists from CYPICK. The results show that CYPICK yielded peak lists that compare in most cases favorably to those obtained by other automated peak pickers with respect to the criteria of finding a maximal number of real signals, a minimal number of artifact peaks, and maximal correctness of the chemical shift assignments and the three-dimensional structure obtained by fully automated assignment and structure calculation.
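One of the geometric criteria named, approximate circularity of a contour line, has a standard formulation: 4πA/L² equals 1 for a circle and decreases for elongated shapes. The sketch below applies it to closed contours; the acceptance threshold mentioned in the comment is illustrative, not CYPICK's.

```python
# Approximate-circularity test for a closed contour: 4*pi*A / L^2 is 1 for a
# perfect circle and decreases for elongated shapes.
import numpy as np

def circularity(xs, ys):
    """Polygon area (shoelace) and perimeter give the circularity measure."""
    area = 0.5 * abs(np.dot(xs, np.roll(ys, 1)) - np.dot(ys, np.roll(xs, 1)))
    perim = np.hypot(np.diff(xs, append=xs[0]), np.diff(ys, append=ys[0])).sum()
    return 4 * np.pi * area / perim**2

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
print(f"circle:  {circularity(np.cos(t), np.sin(t)):.3f}")        # ~1.0
print(f"ellipse: {circularity(3 * np.cos(t), np.sin(t)):.3f}")    # clearly < 1
# A peak picker might accept a contour only if circularity exceeds, say, 0.8.
```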
HYDRA: A Middleware-Oriented Integrated Architecture for e-Procurement in Supply Chains
NASA Astrophysics Data System (ADS)
Alor-Hernandez, Giner; Aguilar-Lasserre, Alberto; Juarez-Martinez, Ulises; Posada-Gomez, Ruben; Cortes-Robles, Guillermo; Garcia-Martinez, Mario Alberto; Gomez-Berbis, Juan Miguel; Rodriguez-Gonzalez, Alejandro
The Service-Oriented Architecture (SOA) development paradigm has emerged to improve the critical issues of creating, modifying and extending solutions for business process integration, incorporating process automation and automated exchange of information between organizations. Web services technology follows the SOA's principles for developing and deploying applications. Moreover, Web services are considered the platform for SOA, for both intra- and inter-enterprise communication. However, an SOA does not incorporate information about occurring events into business processes, even though such events are a central feature of supply chain management. These events and the associated information delivery are addressed by an Event-Driven Architecture (EDA). Taking this into account, we propose a middleware-oriented integrated architecture that offers a brokering service for the procurement of products in a Supply Chain Management (SCM) scenario. As salient contributions, our system provides a hybrid architecture combining features of both SOA and EDA and a set of mechanisms for business process pattern management, monitoring based on UML sequence diagrams, Web services-based management, event publish/subscribe, and a reliable messaging service.
Automated Data Quality Assurance using OGC Sensor Web Enablement Frameworks for Marine Observatories
NASA Astrophysics Data System (ADS)
Toma, Daniel; Bghiel, Ikram; del Rio, Joaquin; Hidalgo, Alberto; Carreras, Normandino; Manuel, Antoni
2014-05-01
Over the past years, environmental sensors have continuously improved, becoming smaller, cheaper, and more intelligent. Therefore, many sensor networks are being deployed to monitor our environment. But due to the large number of sensor manufacturers and their accompanying protocols and data encodings, automated integration and data quality assurance of diverse sensors in an observing system is not straightforward, requiring development of data management code and tedious manual configuration. However, over the past few years it has been demonstrated that Open Geospatial Consortium (OGC) frameworks can enable web services with fully-described sensor systems, including data processing, sensor characteristics, and quality control tests and results. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The data management software that enables access to sensors, data processing and quality control tests has to be implemented, and the results have to be manually mapped to the SWE models. In this contribution, we describe a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) the OGC PUCK protocol - a simple standard embedded instrument protocol for storing and retrieving, directly from the devices, the declarative description of sensor characteristics and quality control tests; (2) an automatic mechanism for data processing and quality control tests underlying the Sensor Web - the Sensor Interface Descriptor (SID) concept; and (3) a model for the declarative description of sensors which serves as a generic data management mechanism - designed as a profile and extension of OGC SWE's SensorML standard. We implement and evaluate our approach by applying it to the OBSEA Observatory, demonstrating the ability to assess data quality for temperature, salinity, air pressure, and wind speed and direction observations off the coast of Garraf, in north-eastern Spain.
Kooistra, Lammert; Bergsma, Aldo; Chuma, Beatus; de Bruin, Sytze
2009-01-01
This paper describes the development of a sensor web-based approach which combines earth observation and in situ sensor data to derive typical information offered by a dynamic web mapping service (WMS). A prototype has been developed which provides daily maps of vegetation productivity for the Netherlands with a spatial resolution of 250 m. Daily available MODIS surface reflectance products and meteorological parameters obtained through a Sensor Observation Service (SOS) were used as input for a vegetation productivity model. This paper presents the vegetation productivity model, the sensor data sources, and the implementation of the automated processing facility. Finally, an evaluation is made of the opportunities and limitations of sensor web-based approaches for the development of web services which combine both satellite and in situ sensor sources. PMID:22574019
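The daily productivity computation such a prototype performs can be sketched with a Monteith-type light-use-efficiency model, GPP = ε × fAPAR × PAR, combining a satellite-derived fAPAR grid with in situ radiation. The constants and inputs below are illustrative stand-ins, not the paper's parameterization.

```python
# Monteith-type light-use-efficiency sketch of daily vegetation productivity:
# GPP = epsilon * fAPAR * PAR. Inputs and epsilon are illustrative stand-ins
# for the MODIS-derived and SOS-delivered values a prototype like this uses.
import numpy as np

EPSILON = 1.8                        # light-use efficiency, g C / MJ (illustrative)

def daily_gpp(fapar, par_mj_m2):
    """Gridded daily GPP in g C m-2 day-1."""
    return EPSILON * fapar * par_mj_m2

fapar = np.clip(np.random.default_rng(3).random((4, 4)), 0.1, 0.9)  # toy grid
par = 8.5                            # daily PAR from a meteo station, MJ m-2 (toy)
gpp = daily_gpp(fapar, par)
print(f"mean GPP: {gpp.mean():.1f} g C m-2 day-1")
```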
ERIC Educational Resources Information Center
Tsai, Min-Hsiu
2017-01-01
Who is the most preferred and deemed the most helpful reviewer in improving student writing? This study employed a blended teaching method consisting of three currently prevailing reviewers: the automated grading system (AGS, a web-based method), the peer review (a process-oriented approach), and the teacher grading technique (the…
A Semantics-Based Information Distribution Framework for Large Web-Based Course Forum System
ERIC Educational Resources Information Center
Chim, Hung; Deng, Xiaotie
2008-01-01
We propose a novel data distribution framework for developing a large Web-based course forum system. In the distributed architectural design, each forum server is fully equipped with the ability to support some course forums independently. The forum servers collaborating with each other constitute the whole forum system. Therefore, the workload of…
2014-01-01
Background There is a need for cost-effective weight management interventions that primary care can deliver to reduce the morbidity caused by obesity. Automated web-based interventions might provide a solution, but evidence suggests that they may be ineffective without additional human support. The main aim of this study was to carry out a feasibility trial of a web-based weight management intervention in primary care, comparing different levels of nurse support, to determine the optimal combination of web-based and personal support to be tested in a full trial. Methods This was an individually randomised, four-arm, parallel, non-blinded trial recruiting obese patients in primary care. Following online registration, patients were randomly allocated by the automated intervention to either usual care, the web-based intervention only, or the web-based intervention with either basic nurse support (3 sessions in 3 months) or regular nurse support (7 sessions in 6 months). The main outcome measure (intended as the primary outcome for the main trial) was weight loss in kg at 12 months. As this was a feasibility trial, no statistical analyses were carried out, but we present means, confidence intervals and effect sizes for weight loss in each group, uptake and retention, and completion of intervention components and outcome measures. Results All randomised patients were included in the weight loss analyses (using Last Observation Carried Forward). At 12 months mean weight loss was: usual care group (n = 43) 2.44 kg; web-based only group (n = 45) 2.30 kg; basic nurse support group (n = 44) 4.31 kg; regular nurse support group (n = 47) 2.50 kg. Intervention effect sizes compared with usual care were: d = 0.01 web-based; d = 0.34 basic nurse support; d = 0.02 regular nurse support. Two practices deviated from protocol by providing considerable weight management support to their usual care patients. Conclusions This study demonstrated the feasibility of delivering a web-based weight management intervention supported by practice nurses in primary care, and suggests that the combination of the web-based intervention with basic nurse support could provide an effective solution to weight management support in a primary care context. Trial registration Current Controlled Trials ISRCTN31685626. PMID:24886516
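Two quantities reported here, last-observation-carried-forward imputation and the effect size d, are simple to compute. The sketch below does both on made-up weight-loss data.

```python
# Last Observation Carried Forward (LOCF) and Cohen's d on made-up data.
import numpy as np

def locf(rows):
    """Carry each participant's last non-missing value forward (NaN = missing)."""
    out = rows.copy()
    for row in out:
        last = np.nan
        for i, v in enumerate(row):
            if np.isnan(v):
                row[i] = last
            else:
                last = v
    return out

def cohens_d(a, b):
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

weights = np.array([[95.0, 93.5, np.nan], [102.0, np.nan, np.nan]])
print(locf(weights))                       # dropouts keep their last weight

rng = np.random.default_rng(4)
usual, nurse = rng.normal(2.4, 6, 43), rng.normal(4.3, 6, 44)  # kg lost (toy)
print(f"d = {cohens_d(nurse, usual):.2f}")
```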
NASA Astrophysics Data System (ADS)
Hopp, T.; Zapf, M.; Ruiter, N. V.
2014-03-01
An essential processing step for the comparison of Ultrasound Computer Tomography images to other modalities, as well as for use in further image processing, is to segment the breast from the background. In this work we present a (semi-)automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. Fully automated processing of the phantom showed that segmenting approx. 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset was reduced by a factor of four compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation, with an average of 11% differing voxels and an average surface deviation of 2 mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing fully automated use of our segmentation approach.
How a Fully Automated eHealth Program Simulates Three Therapeutic Processes: A Case Study.
Holter, Marianne T S; Johansen, Ayna; Brendryen, Håvar
2016-06-28
eHealth programs may be better understood by breaking down the components of one particular program and discussing its potential for interactivity and tailoring in regard to concepts from face-to-face counseling. In the search for the efficacious elements within eHealth programs, it is important to understand how a program using lapse management may simultaneously support working alliance, internalization of motivation, and behavior maintenance. These processes have been applied to fully automated eHealth programs individually. However, given their significance in face-to-face counseling, it may be important to simulate the processes simultaneously in interactive, tailored programs. We propose a theoretical model for how fully automated behavior change eHealth programs may be more effective by simulating a therapist's support of a working alliance, internalization of motivation, and managing lapses. We show how the model is derived from theory and its application to Endre, a fully automated smoking cessation program that engages the user in several "counseling sessions" about quitting. A descriptive case study based on tools from the intervention mapping protocol shows how each therapeutic process is simulated. The program supports the user's working alliance through alliance factors, the nonembodied relational agent Endre and computerized motivational interviewing. Computerized motivational interviewing also supports internalized motivation to quit, whereas a lapse management component responds to lapses. The description operationalizes working alliance, internalization of motivation, and managing lapses, in terms of eHealth support of smoking cessation. A program may simulate working alliance, internalization of motivation, and lapse management through interactivity and individual tailoring, potentially making fully automated eHealth behavior change programs more effective.
Automated MRI segmentation for individualized modeling of current flow in the human head.
Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C
2013-12-01
High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria included segmentation accuracy, the difference of current flow distributions in the resulting HD-tDCS models, and the optimized current flow intensities on cortical targets. The segmentation tool not only segments the brain but also provides accurate results for CSF, skull and other soft tissues, with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
Density-based parallel skin lesion border detection with webCL.
Lemon, James; Kockara, Sinan; Halic, Tansel; Mete, Mutlu
2015-01-01
Dermoscopy is a highly effective and noninvasive imaging technique used in the diagnosis of melanoma and other pigmented skin lesions. Many aspects of the lesion under consideration are defined in relation to the lesion border, which makes border detection one of the most important steps in dermoscopic image analysis. In current practice, dermatologists often delineate borders through a hand-drawn representation based upon visual inspection. Due to the subjective nature of this technique, intra- and inter-observer variations are common, and the automated assessment of lesion borders in dermoscopic images has therefore become an important area of study. A fast density-based skin lesion border detection method has been implemented in parallel with a new parallel technology called WebCL. WebCL utilizes client-side computing capabilities to use available hardware resources such as multi-core CPUs and GPUs, and the developed WebCL-parallel density-based border detection method runs efficiently from Internet browsers. Previous research indicates that one of the highest accuracy rates can be achieved using density-based clustering techniques for skin lesion border detection. While these algorithms have unfavorable time complexities, this effect can be mitigated by parallel implementation. In this study, a density-based clustering technique for skin lesion border detection is parallelized and redesigned to run efficiently on heterogeneous platforms (e.g., tablets, smartphones, multi-core CPUs, GPUs, and fully integrated Accelerated Processing Units) by transforming the technique into a series of independent concurrent operations. Heterogeneous computing is adopted to support accessibility, portability, and multi-device use in clinical settings. For this, we used WebCL, an emerging technology that enables an HTML5 Web browser to execute code in parallel on heterogeneous platforms. We describe WebCL and our parallel algorithm design. In addition, we tested the parallel code on 100 dermoscopy images and measured the execution speedups with respect to the serial version. Results indicate that the parallel (WebCL) and serial versions of the density-based lesion border detection method generate the same accuracy rates on the 100 dermoscopy images, with a mean border error of 6.94%, mean recall of 76.66%, and mean precision of 99.29%. Moreover, the WebCL version's speedup factor for lesion border detection on the 100 dermoscopy images averages approximately 491.2. Given the large volume of high-resolution dermoscopy images in a typical clinical setting, and the critical importance of detecting and diagnosing melanoma before metastasis, fast processing of dermoscopy images is essential. In this paper, we introduce WebCL and its use for biomedical image processing applications. WebCL is a JavaScript binding of OpenCL that takes advantage of GPU computing from a Web browser. The WebCL-parallel version of density-based skin lesion border detection introduced in this study can therefore supplement expert dermatologists and aid them in the early diagnosis of skin lesions. While WebCL is currently an emerging technology, full adoption of WebCL into the HTML5 standard would allow this implementation to run on a very large set of hardware and software systems. WebCL takes full advantage of parallel computational resources, including multi-core CPUs and GPUs on a local machine, and allows compiled code to run directly from the Web browser. PMID:26423836
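The abstract identifies the method only as density-based clustering. As a serial, illustrative stand-in (not the paper's WebCL kernels), the sketch below applies scikit-learn's DBSCAN to candidate lesion pixels and keeps the largest dense cluster; the intensity threshold and DBSCAN parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def largest_lesion_region(gray, threshold=0.5, eps=2.0, min_samples=8):
    """Cluster dark pixels by spatial density; return pixels of the biggest cluster."""
    ys, xs = np.nonzero(gray < threshold)            # candidate lesion pixels
    coords = np.column_stack([ys, xs]).astype(float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
    kept = labels[labels >= 0]                       # -1 marks density "noise"
    if kept.size == 0:
        return np.empty((0, 2))
    return coords[labels == np.bincount(kept).argmax()]

# Toy image: dark disc (lesion) on a bright background.
yy, xx = np.mgrid[:128, :128]
img = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2, 0.2, 0.9)
print(largest_lesion_region(img).shape)   # the border would then be traced on this region
```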
Cost-Effectiveness Analysis of the Automation of a Circulation System.
ERIC Educational Resources Information Center
Mosley, Isobel
A general methodology for cost-effectiveness analysis was developed and applied to the Colorado State University library loan desk. The cost-effectiveness of the existing semi-automated circulation system was compared with that of a fully manual one, based on the existing manual subsystem. Faculty users' time and computer operating costs were…
Communicating Earth Observation (EO)-based landslide mapping capabilities to practitioners
NASA Astrophysics Data System (ADS)
Albrecht, Florian; Hölbling, Daniel; Eisank, Clemens; Weinke, Elisabeth; Vecchiotti, Filippo; Kociu, Arben
2016-04-01
Current remote sensing methods and available Earth Observation (EO) data for landslide mapping can already support practitioners in gathering and using landslide information. Information derived from EO data can support emergency services and authorities in rapid mapping after landslide-triggering events and in landslide monitoring, and can serve as a relevant basis for hazard and risk mapping. These applications also concern owners, maintainers, and insurers of infrastructure. Practitioners usually have only a rough overview of the potential and limitations of EO-based methods for landslide mapping, and semi-automated image analysis techniques are still rarely used in practice. This limits the opportunity for user feedback, which would help improve the methods so that they deliver fully adequate results in terms of accuracy, applicability, and reliability. Moreover, practitioners lack information on how best to integrate the methods into their daily processes. Practitioners require easy-to-grasp interfaces for testing new methods, which in turn would provide researchers with valuable user feedback. We introduce ongoing work towards an innovative web service that allows fast and efficient provision of EO-based landslide information products and supports online processing. We investigate the applicability of various very high resolution (VHR; e.g. WorldView-2/3, Pleiades) and high resolution (HR; e.g. Landsat, Sentinel-2) optical EO data for semi-automated mapping based on object-based image analysis (OBIA). The methods, i.e. knowledge-based and statistical OBIA routines, are evaluated regarding their suitability for inclusion in a web service that is easy to use with the least amount of necessary training. The pre-operational web service will be implemented for selected study areas in the Alps (Austria, Italy), where weather-induced landslides have occurred in the past. We will test the usability of the service together with potential users from the Geological Survey of Austria (GBA), various geological services of provinces of Austria, Germany and Italy, the Austrian Service for Torrent and Avalanche Control (WLV), the Austrian Federal Forestry Office (ÖBf), the Austrian Mountaineering Club (ÖAV) and infrastructure owners like the Austrian Road Maintenance Agency (ASFINAG). The results will show how EO-based landslide information products can be made accessible to responsible authorities in an innovative and easy manner and how new analysis methods can be promoted among a broad audience. In this way, the communication and knowledge exchange between researchers, the public, stakeholders and practitioners can be improved.
ATALARS Operational Requirements: Automated Tactical Aircraft Launch and Recovery System
DOT National Transportation Integrated Search
1988-04-01
The Automated Tactical Aircraft Launch and Recovery System (ATALARS) is a fully automated air traffic management system intended for military use but fully compatible with civil air traffic control systems. This report documents a fir...
Forster, Hannah; Walsh, Marianne C; O'Donovan, Clare B; Woolhead, Clara; McGirr, Caroline; Daly, E J; O'Riordan, Richard; Celis-Morales, Carlos; Fallaize, Rosalind; Macready, Anna L; Marsaux, Cyril F M; Navas-Carretero, Santiago; San-Cristobal, Rodrigo; Kolossa, Silvia; Hartwig, Kai; Mavrogianni, Christina; Tsirigoti, Lydia; Lambrinou, Christina P; Godlewska, Magdalena; Surwiłło, Agnieszka; Gjelstad, Ingrid Merethe Fange; Drevon, Christian A; Manios, Yannis; Traczyk, Iwona; Martinez, J Alfredo; Saris, Wim H M; Daniel, Hannelore; Lovegrove, Julie A; Mathers, John C; Gibney, Michael J; Gibney, Eileen R; Brennan, Lorraine
2016-06-30
Despite numerous healthy eating campaigns, the prevalence of diets high in saturated fatty acids, sugar, and salt and low in fiber, fruit, and vegetables remains high. With more people than ever accessing the Internet, Web-based dietary assessment instruments have the potential to promote healthier dietary behaviors via personalized dietary advice. The objectives of this study were to develop a dietary feedback system for the delivery of consistent personalized dietary advice in a multicenter study and to examine the impact of automating the advice system. The development of the dietary feedback system included 4 components: (1) designing a system for categorizing nutritional intakes; (2) creating a method for prioritizing 3 nutrient-related goals for subsequent targeted dietary advice; (3) constructing decision tree algorithms linking data on nutritional intake to feedback messages; and (4) developing personal feedback reports. The system was used manually by researchers to provide personalized nutrition advice based on dietary assessment to 369 participants during the Food4Me randomized controlled trial, with an automated version developed on completion of the study. Saturated fatty acid, salt, and dietary fiber were most frequently selected as nutrient-related goals across the 7 centers. Average agreement between the manual and automated systems, in selecting 3 nutrient-related goals for personalized dietary advice across the centers, was highest for nutrient-related goals 1 and 2 and lower for goal 3, averaging at 92%, 87%, and 63%, respectively. Complete agreement between the 2 systems for feedback advice message selection averaged at 87% across the centers. The dietary feedback system was used to deliver personalized dietary advice within a multi-country study. Overall, there was good agreement between the manual and automated feedback systems, giving promise to the use of automated systems for personalizing dietary advice. Clinicaltrials.gov NCT01530139; https://clinicaltrials.gov/ct2/show/NCT01530139 (Archived by WebCite at http://www.webcitation.org/6ht5Dgj8I).
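To make the decision-tree idea concrete, here is a minimal sketch of how intake values could be ranked against guideline cut-offs to select three nutrient-related goals. The thresholds and feedback messages are hypothetical placeholders, not the Food4Me algorithms.

```python
# Hypothetical cut-offs and messages; the Food4Me decision trees are not reproduced here.
GUIDELINES = {
    "saturated_fat_pct_energy": (11.0, "high", "Swap butter and fatty meats for vegetable oils and lean cuts."),
    "salt_g_per_day":           (6.0,  "high", "Check labels and choose lower-salt options."),
    "fibre_g_per_day":          (25.0, "low",  "Add wholegrains, fruit and vegetables to most meals."),
}

def select_goals(intakes, n_goals=3):
    """Rank nutrients by relative distance from the guideline; return the top goals."""
    def distance(name):
        target, direction, _ = GUIDELINES[name]
        gap = (intakes[name] - target) if direction == "high" else (target - intakes[name])
        return gap / target                      # positive means the guideline is not met
    ranked = sorted(intakes, key=distance, reverse=True)
    return [(name, GUIDELINES[name][2]) for name in ranked[:n_goals]]

print(select_goals({"saturated_fat_pct_energy": 15.0,
                    "salt_g_per_day": 9.0,
                    "fibre_g_per_day": 14.0}))
```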
Fully automated analysis of multi-resolution four-channel micro-array genotyping data
NASA Astrophysics Data System (ADS)
Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.
2006-03-01
We present a fully automated and robust microarray image analysis system for handling multi-resolution images (down to 3 microns, with sizes up to 80 MB per channel). The system was developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining the genotypes of multiple genetic markers in individuals. It plays an important role in the ongoing shift from traditional medical treatment toward personalized genetic medicine, i.e., individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice, which requires a turn-around timeline compatible with clinical decision-making. In this paper we have developed a fully automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.
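A core step in such a system is extracting per-spot intensities from each channel once the grid is located. The sketch below shows the idea with a hypothetical fixed grid geometry; the origin, pitch, and spot radius are invented values, and real arrays would need grid registration first.

```python
import numpy as np

def spot_means(channel, origin=(50, 50), pitch=40, grid=(8, 12), radius=6):
    """Mean intensity inside a disc at each expected spot centre (toy fixed grid)."""
    yy, xx = np.mgrid[:channel.shape[0], :channel.shape[1]]
    out = np.zeros(grid)
    for r in range(grid[0]):
        for c in range(grid[1]):
            cy, cx = origin[0] + r * pitch, origin[1] + c * pitch
            out[r, c] = channel[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2].mean()
    return out

channels = np.random.default_rng(1).random((4, 400, 550))   # four synthetic channels
per_channel = [spot_means(ch) for ch in channels]            # genotype calls compare these
print(per_channel[0].shape)
```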
Grotta, Alessandra; Pasquali, Elena; Bakkman, Linda; Bellocco, Rino; Trolle Lagerros, Ylva
2016-01-01
Background Lifestyle-related health problems are an important health concern in the transport service industry. Web- and telephone-based interventions could be suitable for this target group, which requires tailored approaches. Objective To evaluate the effect of tailored Web-based health feedback and optional telephone coaching on lifestyle factors (body mass index [BMI], dietary intake, physical activity, stress, sleep, tobacco and alcohol consumption, disease history, self-perceived health, and motivation to change health habits), in comparison to no health feedback or telephone coaching. Methods Overall, 3,876 employees in the Swedish transport services were emailed a Web-based questionnaire. They were randomized to a control group (group A, 498 of 1238 answered, 40.23%), an intervention Web group (group B, 482 of 1305 answered, 36.93%), or an intervention Web + telephone group (group C, 493 of 1333 answered, 36.98%). All groups received an identical questionnaire; only the interventions differed. Group B received tailored Web-based health feedback, and group C received tailored Web-based health feedback plus optional telephone coaching if the participants' reported health habits did not meet the national guidelines or if they expressed motivation to change health habits. The Web-based feedback was fully automated. Telephone coaching was performed by trained health counselors. Nine months later, all participants received a follow-up questionnaire together with the Web + telephone intervention. Descriptive statistics, the chi-square test, analysis of variance, and generalized estimating equation (GEE) models were used. Results Overall, 981 of 1473 (66.60%) employees participated at baseline (men: 66.7%, mean age: 44 years, mean BMI: 26.4 kg/m2) and follow-up. No significant differences were found in reported health habits between the 3 groups over time. However, significant changes were found in motivation to change. The intervention groups reported higher motivation to improve dietary habits (144 of 301 participants, 47.8%, and 165 of 324 participants, 50.9%, for groups B and C, respectively) and physical activity habits (181 of 301 participants, 60.1%, and 207 of 324 participants, 63.9%, for B and C, respectively) compared with control group A (122 of 356 participants, 34.3%, for diet and 177 of 356 participants, 49.7%, for physical activity). At follow-up, the intervention groups had significantly decreased motivation (group B: P<.001 for change in diet, P<.001 for change in physical activity; group C: P=.007 for change in diet, P<.001 for change in physical activity), whereas the control group reported significantly increased motivation to change diet and physical activity (P<.001 for both). Conclusion Tailored Web-based health feedback and the offer of optional telephone coaching did not have a positive health effect on employees in the transport services. However, our findings suggest an increased short-term motivation to change health behaviors related to diet and physical activity among those receiving tailored Web-based health feedback. PMID:27514859
Nagy, Paul G; Warnock, Max J; Daly, Mark; Toland, Christopher; Meenan, Christopher D; Mezrich, Reuben S
2009-11-01
Radiology departments today are faced with many challenges to improve operational efficiency, performance, and quality. Many organizations rely on antiquated, paper-based methods to review their historical performance and understand their operations. With increased workloads, geographically dispersed image acquisition and reading sites, and rapidly changing technologies, this approach is increasingly untenable. A Web-based dashboard was constructed to automate the extraction, processing, and display of indicators and thereby provide useful and current data for twice-monthly departmental operational meetings. The feasibility of extracting specific metrics from clinical information systems was evaluated as part of a longer-term effort to build a radiology business intelligence architecture. Operational data were extracted from clinical information systems and stored in a centralized data warehouse. Higher-level analytics were performed on the centralized data, a process that generated indicators in a dynamic Web-based graphical environment that proved valuable in discussion and root cause analysis. Results aggregated over a 24-month period since implementation suggest that this operational business intelligence reporting system has provided significant data for driving more effective management decisions to improve productivity, performance, and quality of service in the department.
ProDeGe: A computational protocol for fully automated decontamination of genomes
Tennessen, Kristin; Andersen, Evan; Clingenpeel, Scott; ...
2015-06-09
Single amplified genomes and genomes assembled from metagenomes have enabled the exploration of uncultured microorganisms at an unprecedented scale. However, both these types of products are plagued by contamination. Since these genomes are now being generated in a high-throughput manner and sequences from them are propagating into public databases to drive novel scientific discoveries, rigorous quality controls and decontamination protocols are urgently needed. Here, we present ProDeGe (Protocol for fully automated Decontamination of Genomes), the first computational protocol for fully automated decontamination of draft genomes. ProDeGe classifies sequences into two classes, clean and contaminant, using a combination of homology and feature-based methodologies. On average, 84% of sequence from the non-target organism is removed from the data set (specificity) and 84% of the sequence from the target organism is retained (sensitivity). Lastly, the procedure operates successfully at a rate of ~0.30 CPU core hours per megabase of sequence and can be applied to any type of genome sequence.
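ProDeGe combines homology- and feature-based classification; only the feature-based side is sketched here, using tetranucleotide composition distance to a centroid of trusted sequence. The cutoff and the demo sequences are illustrative assumptions, not ProDeGe's actual model.

```python
from collections import Counter
from itertools import product
import numpy as np

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]

def tetra_profile(seq):
    """Normalized tetranucleotide frequency vector of a DNA sequence."""
    counts = Counter(seq[i:i + 4] for i in range(len(seq) - 3))
    vec = np.array([counts.get(k, 0) for k in KMERS], dtype=float)
    return vec / max(vec.sum(), 1.0)

def flag_contaminants(contigs, trusted, cutoff=0.25):
    """Flag contigs whose composition is far (L1 distance) from the trusted centroid."""
    centroid = np.mean([tetra_profile(c) for c in trusted], axis=0)
    return [float(np.abs(tetra_profile(c) - centroid).sum()) > cutoff for c in contigs]

trusted = ["ACGT" * 300]                     # stand-in for target-organism sequence
print(flag_contaminants(["ACGT" * 200, "GGGG" * 200], trusted))  # [False, True]
```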
NASA Astrophysics Data System (ADS)
McIntosh, Chris; Welch, Mattea; McNiven, Andrea; Jaffray, David A.; Purdie, Thomas G.
2017-08-01
Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present a probabilistic, atlas-based approach which predicts the dose for novel patients using a set of automatically selected most similar patients (atlases). The output is a spatial dose objective, which specifies the desired dose-per-voxel and therefore replaces the need to specify and tune dose-volume objectives. Voxel-based dose mimicking optimization then converts the predicted dose distribution to a complete treatment plan with dose calculation using a collapsed cone convolution dose engine. In this study, we investigated automated planning for right-sided oropharynx head and neck patients treated with IMRT and VMAT. We compare four versions of our dose prediction pipeline using a database of 54 training and 12 independent testing patients by evaluating 14 clinical dose evaluation criteria. Our preliminary results are promising and demonstrate that automated methods can generate dose distributions comparable to clinical plans. Overall, automated plans achieved an average of 0.6% higher dose for target coverage evaluation criteria and 2.4% lower dose at the evaluated organ-at-risk criteria levels compared with clinical plans. There was no statistically significant difference in high-dose conformity between automated and clinical plans as measured by the conformation number. Automated plans achieved nine more unique criteria than clinical plans across the 12 patients tested; automated plans scored a significantly higher dose at the evaluation limit for two high-risk target coverage criteria and a significantly lower dose for one critical-organ maximum dose. The novel dose prediction method with dose mimicking can generate complete treatment plans in 12-13 min without user interaction. It is a promising approach for fully automated treatment planning and can be readily applied to different treatment sites and modalities.
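Voxel-based dose mimicking can be viewed as fitting deliverable machine parameters to the predicted per-voxel dose. Below is a minimal sketch under the simplifying assumption of a linear dose-influence matrix; the paper's optimizer and its collapsed cone convolution dose engine are not reproduced here.

```python
import numpy as np

def mimic_dose(influence, predicted_dose, iters=500):
    """Projected gradient descent: nonnegative beamlet weights w minimizing
    ||influence @ w - predicted_dose||^2, so the plan mimics the prediction."""
    lr = 1.0 / (np.linalg.norm(influence, ord=2) ** 2)   # safe step size
    w = np.zeros(influence.shape[1])
    for _ in range(iters):
        grad = influence.T @ (influence @ w - predicted_dose)
        w = np.maximum(w - lr * grad, 0.0)               # weights cannot go negative
    return w

rng = np.random.default_rng(2)
A = rng.random((500, 40))             # hypothetical voxel-by-beamlet influence matrix
d_pred = A @ rng.random(40)           # a realizable "predicted" dose for the demo
w = mimic_dose(A, d_pred)
print(np.abs(A @ w - d_pred).mean())  # small residual: delivered dose tracks prediction
```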
Samal, Lipika; D'Amore, John D; Bates, David W; Wright, Adam
2017-11-01
Clinical decision support tools for risk prediction are readily available but typically require workflow interruptions and manual data entry, so they are rarely used. With new data interoperability standards for electronic health records (EHRs), other options are now available. As a clinical case study, we sought to build a scalable, Web-based system that would automate the calculation of kidney failure risk and display clinical decision support to users in primary care practices. We developed a single-page application, web server, database, and application programming interface to calculate and display kidney failure risk. Data were extracted from the EHR using the Consolidated Clinical Document Architecture interoperability standard for Continuity of Care Documents (CCDs). EHR users were presented with a noninterruptive alert on the patient's summary screen and a hyperlink to details and recommendations provided through a web application. Clinic schedules and CCDs were retrieved using existing application programming interfaces to the EHR, and we provided a clinical decision support hyperlink to the EHR as a service. After debugging a series of terminology and technical issues, the application was validated with data from 255 patients and subsequently deployed to 10 primary care clinics where, over the course of 1 year, 569,533 CCD documents were processed. We validated the use of interoperable documents and open-source components to develop a low-cost tool for automated clinical decision support. Since Consolidated Clinical Document Architecture-based data extraction extends to any certified EHR, this demonstrates a successful modular approach to clinical decision support. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
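A minimal sketch of the web-service side of such a system: an endpoint that accepts variables extracted from the CCD and returns a risk estimate. The route name, inputs, and coefficients below are placeholders; the published kidney failure risk equation coefficients are deliberately not reproduced.

```python
import math
from flask import Flask, jsonify, request

app = Flask(__name__)
COEFS = {"age": -0.2, "male": 0.3, "egfr": -0.4, "log_acr": 0.5}  # placeholders only

@app.route("/risk", methods=["POST"])
def kidney_failure_risk():
    p = request.get_json()                     # variables extracted from the CCD
    lp = (COEFS["age"] * p["age"] / 10
          + COEFS["male"] * p["male"]
          + COEFS["egfr"] * p["egfr"] / 5
          + COEFS["log_acr"] * math.log(p["acr"]))
    prob = 1.0 / (1.0 + math.exp(-lp))         # logistic link, for illustration
    return jsonify({"kidney_failure_risk": round(prob, 3)})

if __name__ == "__main__":
    app.run(port=5000)                         # the EHR alert links to this service
```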
A New User Interface for On-Demand Customizable Data Products for Sensors in a SensorWeb
NASA Technical Reports Server (NTRS)
Mandl, Daniel; Cappelaere, Pat; Frye, Stuart; Sohlberg, Rob; Ly, Vuong; Chien, Steve; Sullivan, Don
2011-01-01
A SensorWeb is a set of sensors, which can consist of ground, airborne and space-based sensors, interoperating in an automated or autonomous collaborative manner. The NASA SensorWeb toolbox, developed at NASA/GSFC in collaboration with NASA/JPL, NASA/Ames and other partners, is a set of software and standards that (1) enables users to create virtual private networks of sensors over open networks; (2) provides the capability to orchestrate their actions; (3) provides the capability to customize the output data products; and (4) enables automated delivery of the data products to the user's desktop. A recent addition to the SensorWeb Toolbox is a new user interface, together with web services co-resident with the sensors, to enable rapid creation, loading and execution of new algorithms for processing sensor data. The web service, along with the user interface, follows the Open Geospatial Consortium (OGC) standard called the Web Coverage Processing Service (WCPS). This presentation will detail the prototype that was built and how the WCPS was tested against a HyspIRI flight testbed and an elastic computation cloud on the ground with EO-1 data. HyspIRI is a future NASA decadal mission. The elastic computation cloud stores EO-1 data and runs software similar to Amazon online shopping.
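For flavor, here is a hedged sketch of how a client might submit a WCPS request for a customized product. The endpoint URL, coverage name, and band subset are hypothetical; only the general query shape follows the OGC WCPS style, and real deployments may differ in parameter names.

```python
import requests

WCPS_ENDPOINT = "https://example.org/sensorweb/wcps"   # hypothetical service URL
query = """
for $c in (EO1_HYPERION_SCENE)
return encode($c[band(30:32)], "image/png")
"""

resp = requests.post(WCPS_ENDPOINT, data={"query": query}, timeout=60)
resp.raise_for_status()
with open("custom_product.png", "wb") as f:
    f.write(resp.content)   # the server ran the processing; only the product is shipped
```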
Barbesi, Donato; Vicente Vilas, Víctor; Millet, Sylvain; Sandow, Miguel; Colle, Jean-Yves; Aldave de Las Heras, Laura
2017-01-01
LabVIEW®-based software for the control of the fully automated multi-sequential flow injection analysis Lab-on-Valve (MSFIA-LOV) platform AutoRAD, which performs radiochemical analysis, is described. The analytical platform interfaces an Arduino®-based device triggering multiple detectors, providing a flexible and fit-for-purpose choice of detection systems. The different analytical devices are interfaced to the PC running the LabVIEW® VI software using USB and RS232 interfaces, both for sending commands and for receiving confirmation or error responses. The AutoRAD platform has been successfully applied to the chemical separation and determination of Sr, an important fission product pertinent to nuclear waste.
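The command/acknowledge pattern over USB/RS232 described above might look like the following pyserial sketch; the port name and command vocabulary are invented for illustration and are not the AutoRAD protocol.

```python
import serial  # pyserial

# Port name and command strings are hypothetical; real commands depend on the device.
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2) as port:
    port.write(b"VALVE 3 LOAD\r\n")             # send a command to the LOV platform
    reply = port.readline().decode().strip()    # expect "OK" or an error string
    if reply != "OK":
        raise RuntimeError(f"device error: {reply!r}")
```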
USDA-ARS's Scientific Manuscript database
Our objective was to validate the 2012 version of the Automated Self-Administered 24-Hour Dietary Recall for Children (ASA24-Kids-2012), a self-administered web-based 24-hour dietary recall (24hDR) instrument, among children aged 9 to 11 years, in two sites using a quasiexperimental design. In one s...
Automated MeSH indexing of the World-Wide Web.
Fowler, J.; Kouramajian, V.; Maram, S.; Devadhar, V.
1995-01-01
To facilitate networked discovery and information retrieval in the biomedical domain, we have designed a system for automatic assignment of Medical Subject Headings to documents retrieved from the World-Wide Web. Our prototype implementations show significant promise. We describe our methods and discuss the further development of a completely automated indexing tool called the "Web-MeSH Medibot." PMID:8563421
NASA Astrophysics Data System (ADS)
Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei
2017-03-01
The temporal focusing two-photon microscope (TFM) was developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further research is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
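The hybrid Kalman filter variant is not specified in this abstract, so the sketch below shows only the plain building block: a pixel-wise scalar Kalman filter run across the frame stack. The process and measurement noise variances are illustrative assumptions.

```python
import numpy as np

def kalman_denoise(frames, q=1e-4, r=1e-2):
    """Pixel-wise scalar Kalman filter along the time axis of a frame stack.
    q: process noise variance, r: measurement noise variance (both illustrative)."""
    x = frames[0].astype(float)         # per-pixel state estimate
    p = np.ones_like(x)                 # per-pixel estimate variance
    out = [x.copy()]
    for z in frames[1:]:
        p = p + q                       # predict
        k = p / (p + r)                 # Kalman gain
        x = x + k * (z - x)             # update with the new frame
        p = (1.0 - k) * p
        out.append(x.copy())
    return np.stack(out)

noisy = np.random.default_rng(3).normal(0.5, 0.1, size=(50, 64, 64))
print(kalman_denoise(noisy)[-1].std(), noisy[-1].std())   # filtered frame is smoother
```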
Cristancho-Lacroix, Victoria; Moulin, Florence; Wrobel, Jérémy; Batrancourt, Bénédicte; Plichart, Matthieu; De Rotrou, Jocelyne; Cantegreil-Kallen, Inge; Rigaud, Anne-Sophie
2014-09-15
Web-based programs have been developed for informal caregivers of people with Alzheimer's disease (PWAD). However, these programs can prove difficult to adopt, especially for older people, who are less familiar with the Internet than other populations. Despite the fundamental role of usability testing in promoting caregivers' correct use and adoption of these programs, to our knowledge this is the first study describing this process before evaluating a program for caregivers of PWAD in a randomized clinical trial. The objective of the study was to describe the development process of a fully automated Web-based program for caregivers of PWAD, aiming to reduce caregivers' stress, and based on the user-centered design approach. A total of 49 participants (12 health care professionals, 6 caregivers, and 31 healthy older adults) were involved in a double iterative design allowing for the adaptation of program content and the enhancement of website usability. This process included three components: (1) project team workshops, (2) a proof of concept, and (3) two usability tests. The usability tests were based on a mixed methodology using behavioral analysis, semistructured interviews, and a usability questionnaire. The user-centered design approach provided valuable guidelines for adapting the content and design of the program and for improving website usability. The professionals, caregivers (mainly spouses), and older adults considered that our project met the needs of isolated caregivers. Participants underlined that contact between caregivers would be desirable. During usability observations, user mistakes were also due to ergonomic issues with Internet browsers and computer interfaces. Moreover, negative self-stereotyping was evidenced when comparing interviews with the results of behavioral analysis. Face-to-face psycho-educational programs may be used as a basis for Web-based programs. Nevertheless, a user-centered design approach involving targeted users (or their representatives) remains crucial for correct use and adoption. For future user-centered design studies, we recommend involving end users from the preconception stage, using a mixed research method in usability evaluations, and implementing pilot studies to evaluate the acceptability and feasibility of programs.
Autonomous Learning through Task-Based Instruction in Fully Online Language Courses
ERIC Educational Resources Information Center
Lee, Lina
2016-01-01
This study investigated the affordances for autonomous learning in a fully online learning environment involving the implementation of task-based instruction in conjunction with Web 2.0 technologies. To that end, four-skill-integrated tasks and digital tools were incorporated into the coursework. Data were collected using midterm reflections,…
Automated acquisition system for routine, noninvasive monitoring of physiological data.
Ogawa, M; Tamura, T; Togawa, T
1998-01-01
A fully automated, noninvasive data-acquisition system was developed to permit long-term measurement of physiological functions at home, without disturbing subjects' normal routines. The system consists of unconstrained monitors built into furnishings and structures in a home environment. An electrocardiographic (ECG) monitor in the bathtub measures heart function during bathing, a temperature monitor in the bed measures body temperature, and a weight monitor built into the toilet serves as a scale to record weight. All three monitors are connected to one computer and function with data-acquisition programs and a data format rule. The unconstrained physiological parameter monitors and fully automated measurement procedures collect data noninvasively without the subject's awareness. The system was tested for 1 week by a healthy male subject, aged 28, in laboratory-based facilities.
Automated data processing architecture for the Gemini Planet Imager Exoplanet Survey
NASA Astrophysics Data System (ADS)
Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Maire, Jérôme; Marchis, Franck; Graham, James R.; Macintosh, Bruce; Ammons, S. Mark; Bailey, Vanessa P.; Barman, Travis S.; Bruzzone, Sebastian; Bulger, Joanna; Cotten, Tara; Doyon, René; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Goodsell, Stephen; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn M.; Larkin, James E.; Marley, Mark S.; Metchev, Stanimir; Nielsen, Eric L.; Oppenheimer, Rebecca; Palmer, David W.; Patience, Jennifer; Poyneer, Lisa A.; Pueyo, Laurent; Rajan, Abhijith; Rantakyrö, Fredrik T.; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane J.
2018-01-01
The Gemini Planet Imager Exoplanet Survey (GPIES) is a multiyear direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the Data Cruncher, combines multiple data reduction pipelines (DRPs) together to process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our DRPs. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.
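As one small concrete piece of such an architecture, posting a reduction notice to Slack can be done with an incoming webhook; the webhook URL and message format below are placeholders, not the GPIES chatbot implementation.

```python
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder URL

def post_reduction_notice(target, product_path):
    """Notify observers that a reduced data product is ready."""
    msg = {"text": f"Data Cruncher: reduced {target} -> {product_path}"}
    requests.post(SLACK_WEBHOOK, json=msg, timeout=10).raise_for_status()

post_reduction_notice("HR 8799", "/reduced/HR8799_spec.fits")  # hypothetical paths
```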
NASA Astrophysics Data System (ADS)
Buck, J. J. H.; Phillips, A.; Lorenzo, A.; Kokkinaki, A.; Hearn, M.; Gardner, T.; Thorne, K.
2017-12-01
The National Oceanography Centre (NOC) operates a fleet of approximately 36 autonomous marine platforms including submarine gliders, autonomous underwater vehicles, and autonomous surface vehicles. Each platform effectively has the capability to observe the ocean and collect data akin to a small research vessel. This is creating growth in data volumes and complexity while the amount of resource available to manage data remains static. The OceanIds Command and Control (C2) project aims to solve these issues by fully automating data archival, processing and dissemination. The data architecture being implemented jointly by NOC and the Scottish Association for Marine Science (SAMS) includes a single Application Programming Interface (API) gateway to handle authentication, forwarding and delivery of both metadata and data. Technicians and principal investigators will enter expedition data prior to the deployment of vehicles, enabling automated data processing when vehicles are deployed. The system will support automated metadata acquisition from platforms as this technology moves towards operational implementation. The metadata exposure to the web builds on a prototype developed by the European Commission supported SenseOCEAN project and uses open standards including World Wide Web Consortium (W3C) RDF/XML, the Semantic Sensor Network ontology and the Open Geospatial Consortium (OGC) SensorML standard. Data will be delivered in the marine-domain Everyone's Glider Observatory (EGO) format and OGC Observations and Measurements. Additional formats will be served by implementation of endpoints such as the NOAA ERDDAP tool. This standardised data delivery via the API gateway enables timely near-real-time data to be served to OceanIds users, BODC users, operational users and big data systems. The use of open standards will also enable web interfaces to be rapidly built on the API gateway and data to be delivered to European research infrastructures that include aligned reference models for data infrastructure.
Use of Annotations for Component and Framework Interoperability
NASA Astrophysics Data System (ADS)
David, O.; Lloyd, W.; Carlson, J.; Leavesley, G. H.; Geter, F.
2009-12-01
The popular programming languages Java and C# provide annotations, a form of meta-data construct. Software frameworks for web integration, web services, database access, and unit testing now take advantage of annotations to reduce the complexity of APIs and the quantity of integration code between the application and framework infrastructure. Adopting annotation features in frameworks has been observed to lead to cleaner and leaner application code. The USDA Object Modeling System (OMS) version 3.0 fully embraces the annotation approach and additionally defines a meta-data standard for components and models. In version 3.0, framework/model integration previously accomplished using API calls is now achieved using descriptive annotations. This enables the framework to provide additional functionality non-invasively, such as implicit multithreading and auto-documenting capabilities, while achieving a significant reduction in the size of the model source code. Using a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework. Since models and modeling components are not directly bound to the framework by the use of specific APIs and/or data types, they can more easily be reused both within the framework and outside of it. To study the effectiveness of an annotation-based framework approach compared with other modeling frameworks, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A monthly water balance model was implemented across several modeling frameworks and several software metrics were collected; the metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. The use of annotations appears to positively impact several software quality measures. In a next step, the PRMS model was implemented in OMS 3.0 and is currently being implemented for water supply forecasting in the western United States at the USDA NRCS National Water and Climate Center. PRMS is a component-based modular precipitation-runoff model developed to evaluate the impacts of various combinations of precipitation, climate, and land use on streamflow and general basin hydrology. The new OMS 3.0 PRMS model source code is more concise and flexible as a result of the new framework's annotation-based approach. The fully annotated components now provide information directly for (i) model assembly and building, (ii) dataflow analysis for implicit multithreading, (iii) automated and comprehensive model documentation of component dependencies and physical data properties, (iv) automated model and component testing, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Experience to date has demonstrated the multi-purpose value of using annotations, and annotations are a feasible and practical method for enabling interoperability among models and modeling frameworks. As a prototype example, model code annotations were used to generate binding and mediation code to allow the use of OMS 3.0 model components within the OpenMI context.
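Python has no direct equivalent of Java/C# annotations, but decorators can attach the same kind of declarative metadata to a component. The sketch below is an analogy to the approach described above, with invented metadata fields, not OMS 3.0's actual annotation set.

```python
def component(description):
    """Class decorator: attach declarative metadata, as OMS reads from annotations."""
    def wrap(cls):
        cls._meta = {"description": description, "inputs": {}, "outputs": {}}
        for attr, spec in vars(cls).items():
            if isinstance(spec, tuple) and len(spec) == 2 and spec[0] in ("in", "out"):
                kind, unit = spec
                cls._meta["inputs" if kind == "in" else "outputs"][attr] = unit
        return cls
    return wrap

@component("Monthly water balance")
class WaterBalance:
    precip = ("in", "mm")      # declared input with its physical unit
    runoff = ("out", "mm")     # declared output

    def execute(self):         # a framework would bind inputs before calling this
        self.runoff = 0.6 * self.precip

print(WaterBalance._meta)      # the framework can assemble/document from this metadata
```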
Chemical Transformation Simulator
The Chemical Transformation Simulator (CTS) is a web-based, high-throughput screening tool that automates the calculation and collection of physicochemical properties for an organic chemical of interest and its predicted products resulting from transformations in environmental sy...
Fitzpatrick, Kathleen Kara; Darcy, Alison; Vierhile, Molly
2017-06-06
Web-based cognitive behavioral therapy (CBT) apps have demonstrated efficacy but are characterized by poor adherence. Conversational agents may offer a convenient, engaging way of getting support at any time. The objective of the study was to determine the feasibility, acceptability, and preliminary efficacy of a fully automated conversational agent to deliver a self-help program for college students who self-identify as having symptoms of anxiety and depression. In an unblinded trial, 70 individuals aged 18-28 years were recruited online from a university community social media site and were randomized to receive either 2 weeks (up to 20 sessions) of self-help content derived from CBT principles in a conversational format with a text-based conversational agent (Woebot) (n=34) or were directed to the National Institute of Mental Health ebook, "Depression in College Students," as an information-only control group (n=36). All participants completed Web-based versions of the 9-item Patient Health Questionnaire (PHQ-9), the 7-item Generalized Anxiety Disorder scale (GAD-7), and the Positive and Negative Affect Scale at baseline and 2-3 weeks later (T2). Participants were on average 22.2 years old (SD 2.33), 67% female (47/70), mostly non-Hispanic (93%, 54/58), and Caucasian (79%, 46/58). Participants in the Woebot group engaged with the conversational agent an average of 12.14 (SD 2.23) times over the study period. No significant differences existed between the groups at baseline, and 83% (58/70) of participants provided data at T2 (17% attrition). Intent-to-treat univariate analysis of covariance revealed a significant group difference on depression such that those in the Woebot group significantly reduced their symptoms of depression over the study period as measured by the PHQ-9 (F=6.47; P=.01) while those in the information control group did not. In an analysis of completers, participants in both groups significantly reduced anxiety as measured by the GAD-7 (F(1,54)=9.24; P=.004). Participants' comments suggest that process factors were more influential on their acceptability of the program than content factors mirroring traditional therapy. Conversational agents appear to be a feasible, engaging, and effective way to deliver CBT. ©Kathleen Kara Fitzpatrick, Alison Darcy, Molly Vierhile. Originally published in JMIR Mental Health (http://mental.jmir.org), 06.06.2017.
Fully Automated Sunspot Detection and Classification Using SDO HMI Imagery in MATLAB
2014-03-27
Fully Automated Sunspot Detection and Classification Using SDO HMI Imagery in MATLAB. Thesis by Gordon M. Spahr, Second Lieutenant, USAF (AFIT-ENP-14-M-34), presented to the Faculty, Department of Engineering Physics, Graduate School of Engineering and Management, Air Force Institute of Technology. Distribution unlimited.
Autonomous Satellite Command and Control through the World Wide Web: Phase 3
NASA Technical Reports Server (NTRS)
Cantwell, Brian; Twiggs, Robert
1998-01-01
NASA's New Millenium Program (NMP) has identified a variety of revolutionary technologies that will support orders of magnitude improvements in the capabilities of spacecraft missions. This program's Autonomy team has focused on science and engineering automation technologies. In doing so, it has established a clear development roadmap specifying the experiments and demonstrations required to mature these technologies. The primary developmental thrusts of this roadmap are in the areas of remote agents, PI/operator interface, planning/scheduling fault management, and smart execution architectures. Phases 1 and 2 of the ASSET Project (previously known as the WebSat project) have focused on establishing World Wide Web-based commanding and telemetry services as an advanced means of interfacing a spacecraft system with the PI and operators. Current automated capabilities include Web-based command submission, limited contact scheduling, command list generation and transfer to the ground station, spacecraft support for demonstrations experiments, data transfer from the ground station back to the ASSET system, data archiving, and Web-based telemetry distribution. Phase 2 was finished in December 1996. During January-December 1997 work was commenced on Phase 3 of the ASSET Project. Phase 3 is the subject of this report. This phase permitted SSDL and its project partners to expand the ASSET system in a variety of ways. These added capabilities included the advancement of ground station capabilities, the adaptation of spacecraft on-board software, and the expansion of capabilities of the ASSET management algorithms. Specific goals of Phase 3 were: (1) Extend Web-based goal-level commanding for both the payload PI and the spacecraft engineer; (2) Support prioritized handling of multiple PIs as well as associated payload experimenters; (3) Expand the number and types of experiments supported by the ASSET system and its associated spacecraft; (4) Implement more advanced resource management, modeling and fault management capabilities that integrate the space and ground segments of the space system hardware; (5) Implement a beacon monitoring test; (6) Implement an experimental blackboard controller for space system management; (7) Further define typical ground station developments required for Internet-based remote control and for full system automation of the PI-to-spacecraft link. Each of those goals is examined in the next section. Significant sections of this report were also published as a conference paper.
Progressive Assessment of Student Engagement with Web-Based Guided Learning
ERIC Educational Resources Information Center
Katuk, Norliza
2013-01-01
Purpose: The purpose of this research is to investigate student engagement in guided web-based learning systems. It looks into students' engagement and their behavioral patterns in two types of guided learning systems (i.e. a fully- and a partially-guided). The research also aims to demonstrate how the engagement evolves from the…
Villanti, Andrea C; Jacobs, Megan A; Zawistowski, Grace; Brookover, Jody; Stanton, Cassandra A; Graham, Amanda L
2015-07-16
Few studies have addressed enrollment and retention methods in online smoking cessation interventions. Fully automated Web-based trials can yield large numbers of participants rapidly but suffer from high rates of attrition. Personal contact with participants can increase recruitment of smokers into cessation trials and improve participant retention. To compare the impact of Web-based (WEB) and phone (PH) baseline assessments on enrollment and retention metrics in the context of a Facebook smoking cessation study. Participants were recruited via Facebook and Google ads which were randomly displayed to adult smokers in the United States over 27 days from August to September 2013. On each platform, two identical ads were randomly displayed to users who fit the advertising parameters. Clicking on one of the ads resulted in randomization to WEB, and clicking on the other ad resulted in randomization to PH. Following online eligibility screening and informed consent, participants in the WEB arm completed the baseline survey online whereas PH participants completed the baseline survey by phone with a research assistant. All participants were contacted at 30 days to complete a follow-up survey that assessed use of the cessation intervention and smoking outcomes. Participants were paid $15 for follow-up survey completion. A total of 4445 people clicked on the WEB ad and 4001 clicked on the PH ad: 12.04% (n=535) of WEB participants and 8.30% (n=332) of PH participants accepted the online study invitation (P<.001). Among the 726 participants who completed online eligibility screening, an equivalent proportion in both arms was eligible and an equivalent proportion of the eligible participants in both arms provided informed consent. There was significant drop-off between consent and completion of the baseline survey in the PH arm, resulting in enrollment rates of 32.7% (35/107) for the PH arm and 67.9% (114/168) for the WEB arm (P<.001). The overall enrollment rate among everyone who clicked on a study ad was 2%. There were no between group differences in the proportion that installed the Facebook app (66/114, 57.9% WEB vs 17/35, 49% PH) or that completed the 30-day follow-up survey (49/114, 43.0% WEB vs 16/35, 46% PH). A total of $6074 was spent on ads, generating 3,834,289 impressions and resulting in 8446 clicks (average cost $0.72 per click). Per participant enrollment costs for advertising alone were $27 WEB and $87 PH. A more intensive phone baseline assessment protocol yielded a lower rate of enrollment, equivalent follow-up rates, and higher enrollment costs compared to a Web-based assessment protocol. Future research should focus on honing mixed-mode assessment protocols to further optimize enrollment and retention.
Pertuz, Said; McDonald, Elizabeth S.; Weinstein, Susan P.; Conant, Emily F.
2016-01-01
Purpose To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Materials and Methods Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board–approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration–cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Results Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging–based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Conclusion Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment. © RSNA, 2015 Online supplemental material is available for this article. PMID:26491909
Grønbæk, Morten; Helge, Jørn Wulff; Severin, Maria; Curtis, Tine; Tolstrup, Janne Schurmann
2012-01-01
Background Many people in Western countries do not follow public health physical activity (PA) recommendations. Web-based interventions provide cost- and time-efficient means of delivering individually targeted lifestyle modification at a population level. Objective To examine whether access to a website with individually tailored feedback and suggestions on how to increase PA led to improved PA, anthropometrics, and health measurements. Methods Physically inactive adults (n = 12,287) participating in a nationwide eHealth survey and health examination in Denmark were randomly assigned to either an intervention (website) (n = 6055) or a no-intervention control group (n = 6232) in 2008. The intervention website was founded on the theories of stages of change and of planned behavior and, apart from a forum page where a physiotherapist answered questions about PA and training, was fully automated. After 3 and again after 6 months we emailed participants invitations to answer a Web-based follow-up questionnaire, which included the long version of the International Physical Activity Questionnaire. A subgroup of participants (n = 1190) were invited to a follow-up health examination at 3 months. Results Only 22.0% (694/3156) of the participants logged on to the website at least once, and only 7.0% (222/3159) logged on frequently. We found no difference in PA level between the website and control groups at 3- and 6-month follow-ups. By dividing participants into three groups according to use of the intervention website, we found a significant difference in total and leisure-time PA in the website group. The follow-up health examination showed no significant reductions in body mass index, waist circumference, body fat percentage, and blood pressure, or improvements in arm strength and aerobic fitness in the website group. Conclusions Based on our findings, we suggest that active users of a Web-based PA intervention can improve their level of PA. However, for unmotivated users, a single round of tailored feedback may be too brief. Future research should focus on developing more sophisticated interventions with the potential to reach both motivated and unmotivated sedentary individuals. Trial Registration Clinicaltrials.gov NCT01295203; http://clinicaltrials.gov/ct2/show/NCT01295203 (Archived by WebCite at http://www.webcitation.org/6B7HDMqiQ) PMID:23111127
Del Medico, Luca; Christen, Heinz; Christen, Beat
2017-01-01
Recent advances in lower-cost DNA synthesis techniques have enabled new innovations in the field of synthetic biology. Still, efficient design and higher-order assembly of genome-scale DNA constructs remains a labor-intensive process. Given the complexity, computer assisted design tools that fragment large DNA sequences into fabricable DNA blocks are needed to pave the way towards streamlined assembly of biological systems. Here, we present the Genome Partitioner software implemented as a web-based interface that permits multi-level partitioning of genome-scale DNA designs. Without the need for specialized computing skills, biologists can submit their DNA designs to a fully automated pipeline that generates the optimal retrosynthetic route for higher-order DNA assembly. To test the algorithm, we partitioned a 783 kb Caulobacter crescentus genome design. We validated the partitioning strategy by assembling a 20 kb test segment encompassing a difficult to synthesize DNA sequence. Successful assembly from 1 kb subblocks into the 20 kb segment highlights the effectiveness of the Genome Partitioner for reducing synthesis costs and timelines for higher-order DNA assembly. The Genome Partitioner is broadly applicable to translate DNA designs into ready to order sequences that can be assembled with standardized protocols, thus offering new opportunities to harness the diversity of microbial genomes for synthetic biology applications. The Genome Partitioner web tool can be accessed at https://christenlab.ethz.ch/GenomePartitioner. PMID:28531174
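The simplest possible partitioning strategy, fixed-size blocks whose ends overlap to provide homology for ordered assembly, is sketched below. The block size and overlap length are illustrative, and the Genome Partitioner's actual retrosynthetic optimization is considerably more involved.

```python
def partition(sequence, block=1000, overlap=40):
    """Split a DNA design into synthesis blocks whose ends share `overlap` bases
    with their neighbours, giving homology for ordered higher-order assembly."""
    step = block - overlap
    return [sequence[i:i + block]
            for i in range(0, max(len(sequence) - overlap, 1), step)]

design = "ATGC" * 5000                 # 20 kb toy segment, matching the test-segment size
blocks = partition(design)
print(len(blocks), len(blocks[0]))     # ~21 blocks of at most 1 kb
```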
Using crowdsourced web content for informing water systems operations in snow-dominated catchments
NASA Astrophysics Data System (ADS)
Giuliani, Matteo; Castelletti, Andrea; Fedorov, Roman; Fraternali, Piero
2016-12-01
Snow is a key component of the hydrologic cycle in many regions of the world. Despite recent advances in environmental monitoring that are making a wide range of data available, continuous snow monitoring systems that can collect data at high spatial and temporal resolution are not yet well established, especially in inaccessible high-latitude or mountainous regions. The unprecedented availability of user-generated data on the web is opening new opportunities for enhancing real-time monitoring and modeling of environmental systems based on data that are public, low-cost, and spatiotemporally dense. In this paper, we contribute a novel crowdsourcing procedure for extracting snow-related information from public web images, either produced by users or generated by touristic webcams. A fully automated process fetches mountain images from multiple sources, identifies the peaks present therein, and estimates virtual snow indexes representing a proxy of the snow-covered area. Our procedure has the potential to complement traditional snow-related information, minimizing the costs and efforts of obtaining the virtual snow indexes and, at the same time, maximizing the portability of the procedure to the many locations where such public images are available. The operational value of the obtained virtual snow indexes is assessed for a real-world water-management problem, the regulation of Lake Como, where we use these indexes for informing the daily operations of the lake. Numerical results show that such information is effective in extending the anticipation capacity of the lake operations, ultimately improving the system performance.
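A minimal sketch of how a virtual snow index might be computed from a single webcam image follows, assuming a precomputed mountain-area mask; the brightness and color-spread thresholds are invented for illustration and are not the paper's procedure.

```python
# Toy "virtual snow index": fraction of snow-like (bright, achromatic)
# pixels within a mountain-area mask of a webcam image. Thresholds and the
# mask are illustrative assumptions, not the published pipeline.
import numpy as np

def virtual_snow_index(rgb, mountain_mask, brightness_thr=200, spread_thr=25):
    """rgb: (H, W, 3) uint8 image; mountain_mask: (H, W) bool array."""
    pixels = rgb[mountain_mask].astype(float)
    brightness = pixels.mean(axis=1)
    spread = pixels.max(axis=1) - pixels.min(axis=1)  # small for white/grey pixels
    snowy = (brightness > brightness_thr) & (spread < spread_thr)
    return snowy.mean()  # proxy for the snow-covered fraction

# Example with synthetic data
rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool)
mask[:240, :] = True  # assume the upper half contains the mountain
print(f"virtual snow index: {virtual_snow_index(img, mask):.3f}")
```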
Maity, Maitreya; Dhane, Dhiraj; Mungle, Tushar; Maiti, A K; Chakraborty, Chandan
2017-10-26
Web-enabled e-healthcare systems, or computer-assisted disease diagnosis, have the potential to improve the quality and service of conventional healthcare delivery. The article describes the design and development of a web-based distributed healthcare management system for medical information and quantitative evaluation of microscopic images using a machine learning approach for malaria. In the proposed study, all the healthcare centres are connected in a distributed computer network. Each peripheral centre manages its own healthcare service independently and communicates with the central server for remote assistance. The proposed methodology for automated evaluation of parasites includes pre-processing of blood smear microscopic images followed by erythrocyte segmentation. To differentiate between different parasites, a total of 138 quantitative features characterising colour, morphology, and texture are extracted from segmented erythrocytes. An integrated pattern classification framework is designed in which four feature selection methods, viz. Correlation-based Feature Selection (CFS), Chi-square, Information Gain, and RELIEF, are employed with three different classifiers, i.e. Naive Bayes, C4.5, and Instance-Based Learning (IB1), individually. The optimal feature subset with the best classifier is selected to achieve maximum diagnostic precision. The proposed method achieved 99.2% sensitivity and 99.6% specificity by combining CFS and C4.5, outperforming the other combinations. Moreover, the web-based tool is entirely designed using open standards: Java for the web application, ImageJ for image processing, and WEKA for data mining, considering its feasibility in rural places with minimal healthcare facilities.
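As an illustration of the pattern-classification stage, the sketch below pairs a filter feature selector with a decision tree in scikit-learn; CART stands in for C4.5, synthetic data stands in for the 138 erythrocyte features, and the other selector/classifier combinations would be swapped in analogously.

```python
# Illustrative feature-selection + classification pipeline for the
# erythrocyte features described above. Chi-square selection with a
# decision tree approximates the paper's Chi-square/C4.5 combination;
# all data here is synthetic stand-in material.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

# Stand-in for 138 colour/morphology/texture features per erythrocyte.
X, y = make_classification(n_samples=500, n_features=138, n_informative=20,
                           random_state=0)

pipe = Pipeline([
    ("scale", MinMaxScaler()),           # chi2 requires non-negative features
    ("select", SelectKBest(chi2, k=30)),
    ("tree", DecisionTreeClassifier(random_state=0)),
])
scores = cross_val_score(pipe, X, y, cv=5)
print(f"cv accuracy: {scores.mean():.3f}")
```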
VisBOL: Web-Based Tools for Synthetic Biology Design Visualization.
McLaughlin, James Alastair; Pocock, Matthew; Mısırlı, Göksel; Madsen, Curtis; Wipat, Anil
2016-08-19
VisBOL is a Web-based application that allows the rendering of genetic circuit designs, enabling synthetic biologists to visually convey designs in SBOL visual format. VisBOL designs can be exported to formats including PNG and SVG images to be embedded in Web pages, presentations and publications. The VisBOL tool enables the automated generation of visualizations from designs specified using the Synthetic Biology Open Language (SBOL) version 2.0, as well as a range of well-known bioinformatics formats including GenBank and Pigeoncad notation. VisBOL is provided both as a user accessible Web site and as an open-source (BSD) JavaScript library that can be used to embed diagrams within other content and software.
Kesner, Adam Leon; Kuntner, Claudia
2010-10-01
Respiratory gating in PET is an approach used to minimize the negative effects of respiratory motion on spatial resolution. It is based on an initial determination of a patient's respiratory movements during a scan, typically using hardware-based systems. In recent years, several fully automated data-based algorithms have been presented for extracting a respiratory signal directly from PET data, providing a very practical strategy for implementing gating in the clinic. In this work, a new method is presented for extracting a respiratory signal from raw PET sinogram data and compared to previously presented automated techniques. The acquisition of the respiratory signal from PET data in the newly proposed method is based on rebinning the sinogram data into smaller data structures and then analyzing the time-activity behavior in the elements of these structures. From this analysis, a 1D respiratory trace is produced, analogous to a hardware-derived respiratory trace. To assess the accuracy of this fully automated method, respiratory signal was extracted from a collection of 22 clinical FDG-PET scans using this method and compared to signal derived from several other software-based methods as well as a signal derived from a hardware system. The method presented required approximately 9 min of processing time for each 10 min scan (using a single 2.67 GHz processor), which in theory can be accomplished while the scan is being acquired, therefore allowing real-time respiratory signal acquisition. Using the mean correlation between the software-based and hardware-based respiratory traces, the optimal parameters were determined for the presented algorithm. The mean/median/range of correlations for the set of scans when using the optimal parameters was found to be 0.58/0.68/0.07-0.86. The speed of this method was within the range of real-time, while the accuracy surpassed the most accurate of the previously presented algorithms. PET data inherently contain information about patient motion, information that is not currently being utilized. We have shown that a respiratory signal can be extracted from raw PET data in potentially real-time and in a fully automated manner. This signal correlates well with hardware-based signal for a large percentage of scans, and avoids the efforts and complications associated with hardware. The proposed method to extract a respiratory signal can be implemented on existing scanners and, if properly integrated, can be applied without changes to routine clinical procedures.
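The sketch below illustrates the general data-driven idea under stated assumptions: sinogram frames are rebinned into a coarse grid and the first principal component of the element time-activity curves is taken as a 1D surrogate respiratory trace. The rebinning factor and the use of PCA are illustrative choices, not the exact published algorithm.

```python
# Conceptual sketch of a data-driven respiratory trace from sinogram data.
# The rebinning factor and PCA extraction are illustrative assumptions.
import numpy as np

def respiratory_trace(sinograms, factor=8):
    """sinograms: (T, R, A) array of T time frames; returns a length-T trace."""
    T, R, A = sinograms.shape
    # Rebin into smaller data structures by block-summing.
    coarse = sinograms[:, :R - R % factor, :A - A % factor]
    coarse = coarse.reshape(T, R // factor, factor, A // factor, factor).sum((2, 4))
    X = coarse.reshape(T, -1).astype(float)
    X -= X.mean(axis=0)                  # centre each element's activity curve
    # First principal component over time via SVD.
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    return u[:, 0] * s[0]

# Synthetic example: 600 frames with a 0.25 Hz breathing modulation.
t = np.arange(600) / 10.0
frames = np.random.poisson(100, size=(600, 64, 64)).astype(float)
frames += 20 * np.sin(2 * np.pi * 0.25 * t)[:, None, None]
trace = respiratory_trace(frames)
print(trace.shape)
```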
McRoy, Susan; Jones, Sean; Kurmally, Adam
2016-09-01
This article examines methods for automated question classification applied to cancer-related questions that people have asked on the web. This work is part of a broader effort to provide automated question answering for health education. We created a new corpus of consumer-health questions related to cancer and a new taxonomy for those questions. We then compared the effectiveness of different statistical methods for developing classifiers, including weighted classification and resampling. Basic methods for building classifiers were limited by the high variability in the natural distribution of questions, and typical refinement approaches of feature selection and merging categories achieved only small improvements to classifier accuracy. Best performance was achieved using weighted classification and resampling methods, the latter yielding an F1 score of 0.963. Thus, it would appear that statistical classifiers can be trained on natural data, but only if natural distributions of classes are smoothed. Such classifiers would be useful for automated question answering, for enriching web-based content, or for assisting clinical professionals in answering questions. © The Author(s) 2015.
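A hedged sketch of the two best-performing strategies follows: class-weighted training and oversampling to smooth a skewed class distribution. The vectorizer, classifier, toy questions, and oversampling rule are all illustrative assumptions, not the article's exact setup.

```python
# Two strategies for skewed question classes: inverse-frequency class
# weights, and resampling minority classes up to the majority size.
# Data and model choices below are invented for illustration.
import random
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = ["what are symptoms of melanoma", "how is chemo given",
             "can diet prevent cancer", "is this mole cancer"] * 25
labels = ["diagnosis", "treatment", "prevention", "diagnosis"] * 25

# Strategy 1: weighted classification.
weighted = make_pipeline(TfidfVectorizer(),
                         LogisticRegression(class_weight="balanced"))
weighted.fit(questions, labels)

# Strategy 2: resampling to smooth the class distribution.
counts = Counter(labels)
target = max(counts.values())
resampled = list(zip(questions, labels))
for cls, n in counts.items():
    pool = [(q, l) for q, l in zip(questions, labels) if l == cls]
    resampled += random.choices(pool, k=target - n)
rq, rl = zip(*resampled)
smoothed = make_pipeline(TfidfVectorizer(), LogisticRegression())
smoothed.fit(rq, rl)
print(weighted.predict(["what does stage II mean"]))
```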
Musiat, Peter; Conrod, Patricia; Treasure, Janet; Tylee, Andre; Williams, Chris; Schmidt, Ulrike
2014-01-01
Background A large proportion of university students show symptoms of common mental disorders, such as depression, anxiety, substance use disorders and eating disorders. Novel interventions are required that target underlying factors of multiple disorders. Aims To evaluate the efficacy of a transdiagnostic trait-focused web-based intervention aimed at reducing symptoms of common mental disorders in university students. Method Students were recruited online (n = 1047, age: M = 21.8, SD = 4.2) and categorised into being at high or low risk for mental disorders based on their personality traits. Participants were allocated to a cognitive-behavioural trait-focused (n = 519) or a control intervention (n = 528) using computerised simple randomisation. Both interventions were fully automated and delivered online (trial registration: ISRCTN14342225). Participants were blinded and outcomes were self-assessed at baseline, at 6 weeks and at 12 weeks after registration. Primary outcomes were current depression and anxiety, assessed on the Patient Health Questionnaire (PHQ9) and Generalised Anxiety Disorder Scale (GAD7). Secondary outcome measures focused on alcohol use, disordered eating, and other outcomes. Results Students at high risk were successfully identified using personality indicators and reported poorer mental health. A total of 520 students completed the 6-week follow-up and 401 students completed the 12-week follow-up. Attrition was high across intervention groups, but comparable to other web-based interventions. Mixed effects analyses revealed that at 12-week follow up the trait-focused intervention reduced depression scores by 3.58 (p<.001, 95%CI [5.19, 1.98]) and anxiety scores by 2.87 (p = .018, 95%CI [1.31, 4.43]) in students at high risk. In high-risk students, between group effect sizes were 0.58 (depression) and 0.42 (anxiety). In addition, self-esteem was improved. No changes were observed regarding the use of alcohol or disordered eating. Conclusions This study suggests that a transdiagnostic web-based intervention for university students targeting underlying personality risk factors may be a promising way of preventing common mental disorders with a low-intensity intervention. Trial Registration ControlledTrials.com ISRCTN14342225 PMID:24736388
Peng, Chen; Frommlet, Alexandra; Perez, Manuel; Cobas, Carlos; Blechschmidt, Anke; Dominguez, Santiago; Lingel, Andreas
2016-04-14
NMR binding assays are routinely applied in hit finding and validation during early stages of drug discovery, particularly for fragment-based lead generation. To this end, compound libraries are screened by ligand-observed NMR experiments such as STD, T1ρ, and CPMG to identify molecules interacting with a target. The analysis of a high number of complex spectra is performed largely manually and therefore represents a limiting step in hit generation campaigns. Here we report a novel integrated computational procedure that processes and analyzes ligand-observed proton and fluorine NMR binding data in a fully automated fashion. A performance evaluation comparing automated and manual analysis results on (19)F- and (1)H-detected data sets shows that the program delivers robust, high-confidence hit lists in a fraction of the time needed for manual analysis and greatly facilitates visual inspection of the associated NMR spectra. These features enable considerably higher throughput, the assessment of larger libraries, and shorter turn-around times.
Databases Don't Measure Motivation
ERIC Educational Resources Information Center
Yeager, Joseph
2005-01-01
Automated persuasion is the Holy Grail of quantitatively biased database designers. However, database histories are, at best, probabilistic estimates of customer behavior and do not make use of more sophisticated qualitative motivational profiling tools. While usually absent from web designer thinking, qualitative motivational profiling can be…
DOT National Transportation Integrated Search
2014-08-01
Fully automated or autonomous vehicles (AVs) hold great promise for the future of transportation. By 2020, Google, auto manufacturers and other technology providers intend to introduce self-driving cars to the public with either limited or fully a...
Tompkins County Rideshare Coalition.
DOT National Transportation Integrated Search
2013-09-01
The objective of this project was to pilot the implementation of automated on-line ridesharing in Tompkins County, NY. Zimride was used as the web-based ride matching platform. Four portals were developed to serve different groups: three fo...
AMP: a science-driven web-based application for the TeraGrid
NASA Astrophysics Data System (ADS)
Woitaszek, M.; Metcalfe, T.; Shorrock, I.
The Asteroseismic Modeling Portal (AMP) provides a web-based interface for astronomers to run and view simulations that derive the properties of Sun-like stars from observations of their pulsation frequencies. In this paper, we describe the architecture and implementation of AMP, highlighting the lightweight design principles and tools used to produce a functional fully-custom web-based science application in less than a year. Targeted as a TeraGrid science gateway, AMP's architecture and implementation are intended to simplify its orchestration of TeraGrid computational resources. AMP's web-based interface was developed as a traditional standalone database-backed web application using the Python-based Django web development framework, allowing us to leverage the Django framework's capabilities while cleanly separating the user interface development from the grid interface development. We have found this combination of tools flexible and effective for rapid gateway development and deployment.
Beijbom, Oscar; Edmunds, Peter J.; Roelfsema, Chris; Smith, Jennifer; Kline, David I.; Neal, Benjamin P.; Dunlap, Matthew J.; Moriarty, Vincent; Fan, Tung-Yung; Tan, Chih-Jui; Chan, Stephen; Treibitz, Tali; Gamst, Anthony; Mitchell, B. Greg; Kriegman, David
2015-01-01
Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi- and fully-automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys. PMID:26154157
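For illustration, percent cover from point annotations reduces to label counting, whether the labels come from a human expert or an automated classifier; the sketch below assumes one list of point labels per survey image.

```python
# Minimal point-annotation cover estimation: percent cover per benthic
# category is the fraction of annotated points assigned that label.
from collections import Counter

def cover_estimates(point_labels):
    """point_labels: category labels for the survey points of one image."""
    counts = Counter(point_labels)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

labels = ["Porites", "Turf", "CCA", "Turf", "Sand", "Porites", "Turf"]
for cat, frac in sorted(cover_estimates(labels).items()):
    print(f"{cat:10s} {frac:.1%}")
```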
Automated System for Early Breast Cancer Detection in Mammograms
NASA Technical Reports Server (NTRS)
Bankman, Isaac N.; Kim, Dong W.; Christens-Barry, William A.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.
1993-01-01
The increasing demand for mammographic screening for early breast cancer detection, and the subtlety of early breast cancer signs on mammograms, point to the need for an automated image-processing system that can serve as a diagnostic aid in radiology clinics. We present a fully automated algorithm for detecting clusters of microcalcifications, which are the most common signs of early, potentially curable breast cancer. By using the contour map of the mammogram, the algorithm circumvents some of the difficulties encountered with standard image processing methods. The clinical implementation of an automated instrument based on this algorithm is also discussed.
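The sketch below is not the authors' contour-map algorithm, but it illustrates the underlying task: detecting small bright spots and grouping them into clusters, since clinically suspicious microcalcifications occur in clusters. The thresholds and clustering parameters are assumptions for demonstration.

```python
# Illustrative microcalcification-cluster detection: threshold bright
# spots, take component centroids, and group nearby centroids. Parameter
# values are invented for demonstration only.
import numpy as np
from scipy import ndimage
from sklearn.cluster import DBSCAN

def detect_clusters(img, intensity_thr=0.8, eps=20, min_samples=3):
    # Label connected bright components and take their centroids.
    mask = img > intensity_thr * img.max()
    labeled, n = ndimage.label(mask)
    if n == 0:
        return []
    centroids = np.array(ndimage.center_of_mass(mask, labeled, range(1, n + 1)))
    # Group centroids lying within `eps` pixels of each other.
    clustering = DBSCAN(eps=eps, min_samples=min_samples).fit(centroids)
    return [centroids[clustering.labels_ == k]
            for k in set(clustering.labels_) if k != -1]

img = np.zeros((256, 256))
for y, x in [(50, 50), (55, 60), (48, 70), (200, 200)]:  # one cluster + noise
    img[y, x] = 1.0
print(len(detect_clusters(img)), "suspicious cluster(s) found")
```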
Ultramap v3 - a Revolution in Aerial Photogrammetry
NASA Astrophysics Data System (ADS)
Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.
2012-07-01
In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system that enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap v3 continues this innovation and offers a revolution in terms of ortho processing. A fully automated dense-matching module produces high-precision digital surface models (DSMs), which are calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap v3 is the first fully integrated and interactive solution for making the best use of UltraCam images to deliver DSM and ortho imagery.
Suppa, Per; Hampel, Harald; Spies, Lothar; Fiebach, Jochen B; Dubois, Bruno; Buchert, Ralph
2015-01-01
Hippocampus volumetry based on magnetic resonance imaging (MRI) has not yet been translated into everyday clinical diagnostic patient care, at least in part due to limited availability of appropriate software tools. In the present study, we evaluate a fully-automated and computationally efficient processing pipeline for atlas-based hippocampal volumetry using freely available Statistical Parametric Mapping (SPM) software in 198 amnestic mild cognitive impairment (MCI) subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI1). Subjects were grouped into MCI stable and MCI to probable Alzheimer's disease (AD) converters according to follow-up diagnoses at 12, 24, and 36 months. Hippocampal grey matter volume (HGMV) was obtained from baseline T1-weighted MRI and then corrected for total intracranial volume and age. Average processing time per subject was less than 4 minutes on a standard PC. The area under the receiver operating characteristic curve of the corrected HGMV for identification of MCI to probable AD converters within 12, 24, and 36 months was 0.78, 0.72, and 0.71, respectively. Thus, hippocampal volume computed with the fully-automated processing pipeline provides similar power for prediction of MCI to probable AD conversion as computationally more expensive methods. The whole processing pipeline has been made freely available as an SPM8 toolbox. It is easily set up and integrated into everyday clinical patient care.
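A common way to correct a regional volume for total intracranial volume and age is the residual method sketched below; the linear model is a conventional choice and an assumption here, not necessarily the paper's exact correction.

```python
# Residual-method covariate correction: regress hippocampal grey matter
# volume (HGMV) on TIV and age, then use residuals (plus the grand mean)
# as the corrected volume. The linear form is an illustrative convention.
import numpy as np

def corrected_volume(hgmv, tiv, age):
    """All arguments are 1D arrays over subjects; returns corrected HGMV."""
    X = np.column_stack([np.ones_like(tiv), tiv, age])
    beta, *_ = np.linalg.lstsq(X, hgmv, rcond=None)
    return hgmv - X @ beta + hgmv.mean()   # residual + grand mean

rng = np.random.default_rng(1)
tiv = rng.normal(1500, 120, 198)           # cm^3, synthetic
age = rng.uniform(55, 85, 198)
hgmv = 4.0 + 0.002 * tiv - 0.02 * age + rng.normal(0, 0.3, 198)
print(corrected_volume(hgmv, tiv, age)[:3])
```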
Fully automated processing of fMRI data in SPM: from MRI scanner to PACS.
Maldjian, Joseph A; Baer, Aaron H; Kraft, Robert A; Laurienti, Paul J; Burdette, Jonathan H
2009-01-01
Here we describe the Wake Forest University Pipeline, a fully automated method for the processing of fMRI data using SPM. The method includes fully automated data transfer and archiving from the point of acquisition, real-time batch script generation, distributed grid processing, interface to SPM in MATLAB, error recovery and data provenance, DICOM conversion and PACS insertion. It has been used for automated processing of fMRI experiments, as well as for the clinical implementation of fMRI and spin-tag perfusion imaging. The pipeline requires no manual intervention, and can be extended to any studies requiring offline processing.
A Comprehensive Web-Based Patient Information Environment
2001-10-25
The paper describes a new type of medical information environment which is fully web-enabled: a comprehensive, patient-centric medical information system, called PiRiLiS, for hospitals. The system is clinically focused and can handle any type of medical data. It was found to reduce time for medical administration, with the ability to view the entire patient record at any time, anywhere.
NASA Astrophysics Data System (ADS)
Mann, Christopher; Narasimhamurthi, Natarajan
1998-08-01
This paper discusses a specific implementation of a web- and component-based simulation system. The overall simulation container is implemented within a web page viewed with Microsoft's Internet Explorer 4.0 web browser. Microsoft's ActiveX/Distributed Component Object Model object interfaces are used in conjunction with the Microsoft DirectX graphics APIs to provide visualization functionality for the simulation. The MathWorks' Matlab computer-aided control system design program is used as an ActiveX automation server to provide the compute engine for the simulations.
NASA Astrophysics Data System (ADS)
Tao, Yu; Muller, Jan-Peter; Poole, William
2016-12-01
We present a wide range of research results in the area of orbit-to-orbit and orbit-to-ground data fusion, achieved within the EU-FP7 PRoVisG and EU-FP7 PRoViDE projects. We focus on examples from three Mars rover missions, i.e. MER-A/B and MSL, to provide examples of a new fully automated offline method for rover localisation. We start by introducing the mis-registration discovered between the current HRSC and HiRISE datasets. Then we introduce the HRSC-to-CTX and CTX-to-HiRISE co-registration workflow. Finally, we demonstrate results of wide-baseline stereo reconstruction with fixed-mast-position rover stereo imagery and its application to ground-to-orbit co-registration with the HiRISE orthorectified image. We show examples of the quantitative assessment of recomputed rover traverses, and the further exploitation of the co-registered datasets in visualisation and within an interactive web-GIS.
NASA Astrophysics Data System (ADS)
Lim, C. C.; Chang, K.-C.
2012-07-01
The massive tsunami that struck northeast Japan on 11 March 2011, following a magnitude 9.0 earthquake, revealed that people are living in a critical environment. Although great improvements have been achieved in disaster prevention technologies, many natural disasters are still unpredictable. In addition to prevention, rapid and effective responses to such disasters are also crucial. One of the key elements of success is the dissemination of disaster information, covering both the affected area and the people living within it. In the past decade, web-based spatial information systems have become the major platform for both data sharing and display. What comes next is the development of web-based spatial data analysis. A web-based service allows people to perform spatial analysis immediately, as long as an internet connection among the database and application servers is available. This useful and helpful spatial information can be accessed by users around the world almost simultaneously. The main goal of this paper is to implement a spatial data analysis module based on the service-oriented architecture (SOA) concept. The main interest and focus of our study is the knowledge regularization of spatial data analysis processes to achieve automated land cover change detection (LCCD) over the internet. The proposed automated model is tested and verified with FORMOSAT-2 imagery taken in 2005 and in 2008. It will be published online for users around the world, to maximize the added value and minimize the cost of the spatial data and, moreover, to reveal disaster situations rapidly.
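A minimal change-detection sketch in the spirit of such a web service follows: per-pixel spectral differencing of two co-registered images followed by an adaptive threshold. The threshold rule and synthetic data are illustrative assumptions, not the paper's knowledge-based procedure.

```python
# Toy land cover change detection (LCCD): per-pixel spectral difference
# of two co-registered multiband images, thresholded adaptively. The
# threshold rule and synthetic imagery are illustrative assumptions.
import numpy as np

def detect_change(img_t1, img_t2, k=2.0):
    """img_t1, img_t2: (H, W, bands) arrays; returns a boolean change mask."""
    diff = np.linalg.norm(img_t2.astype(float) - img_t1.astype(float), axis=2)
    thr = diff.mean() + k * diff.std()     # adaptive global threshold
    return diff > thr

rng = np.random.default_rng(0)
before = rng.integers(0, 255, (100, 100, 4)).astype(float)  # 4 spectral bands
after = before.copy()
after[40:60, 40:60] += 120                                  # simulated change patch
mask = detect_change(before, after)
print(f"changed pixels: {mask.sum()}")
```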
Xu, Weiyi; Wan, Feng; Lou, Yufeng; Jin, Jiali; Mao, Weilin
2014-01-01
A number of automated devices for pretransfusion testing have recently become available. This study evaluated the Immucor Galileo System, a fully automated device based on the microplate hemagglutination technique, for ABO/Rh (D) determinations. Routine ABO/Rh typing tests were performed on 13,045 samples using the Immucor automated instruments. The manual tube method was used to resolve ABO forward and reverse grouping discrepancies. D-negative test results were investigated and confirmed manually by the indirect antiglobulin test (IAT). The system rejected 70 tests for sample inadequacy. Of the samples, 87 were read as "No-type-determined" due to forward and reverse grouping discrepancies; 25 of these results were due to sample hemolysis. After further testing, we found that 34 were caused by weakened RBC antibodies, 5 were attributable to weak A and/or B antigens, 4 were due to mixed-field reactions, and 8 involved high-titer cold agglutinins that react only at temperatures below 34 degrees C. In the remaining 11 cases, irregular RBC antibodies were identified in 9 samples (seven anti-M and two anti-P) and subgroups were identified in 2 samples (one A1 and one A2) by a reference laboratory. As for D typing, 2 weak D+ samples missed by the automated system gave negative results, but weak-positive reactions were observed in the IAT. The Immucor Galileo System is reliable and well suited for ABO and D blood grouping, although several factors may cause discrepancies in ABO/D typing when using a fully automated system. It is suggested that standardization of sample collection may improve the performance of the fully automated system.
WIFIP: a web-based user interface for automated synchrotron beamlines.
Sallaz-Damaz, Yoann; Ferrer, Jean Luc
2017-09-01
The beamline control software, through the associated graphical user interface (GUI), is the user's access point to the experiment, interacting with synchrotron beamline components and providing automated routines. FIP, the French beamline for the Investigation of Proteins, is a highly automated macromolecular crystallography (MX) beamline at the European Synchrotron Radiation Facility. On such a beamline, a significant number of users choose to control their experiment remotely. This is often performed with limited bandwidth and from a large variety of computers and operating systems. Furthermore, this has to be possible in a rapidly evolving experimental environment, where new developments have to be easily integrated. To face these challenges, a light, platform-independent control software and associated GUI are required. Here, WIFIP, a web-based user interface developed at FIP, is described. Beyond being the present FIP control interface, WIFIP is also a proof of concept for future MX control software.
Probst, Yasmine; Morrison, Evan; Sullivan, Emma; Dam, Hoa Khanh
2016-07-28
Standardizing the background diet of participants during a dietary randomized controlled trial is vital to trial outcomes. For this process, dietary modeling based on food groups and their target servings is employed via a dietary prescription before an intervention, often using a manual process. Partial automation has employed the use of linear programming. Validity of the modeling approach is critical to allow trial outcomes to be translated to practice. This paper describes the first-stage development of a tool to automatically perform dietary modeling using food group and macronutrient requirements as a test case. The Dietary Modeling Tool (DMT) was then compared with existing approaches to dietary modeling (manual and partially automated), which were previously available to dietitians working within a dietary intervention trial. Constraint optimization techniques were implemented to determine whether nonlinear constraints are best suited to the development of the automated dietary modeling tool using food composition and food consumption data. Dietary models were produced and compared with a manual Microsoft Excel calculator, a partially automated Excel Solver approach, and the automated DMT that was developed. The web-based DMT was produced using nonlinear constraint optimization, incorporating estimated energy requirement calculations, nutrition guidance systems, and the flexibility to amend food group targets for individuals. Percentage differences between modeling tools revealed similar results for the macronutrients. Polyunsaturated fatty acids and monounsaturated fatty acids showed greater variation between tools (practically equating to a 2-teaspoon difference), although it was not considered clinically significant when the whole diet, as opposed to targeted nutrients or energy requirements, was being addressed. Automated modeling tools can streamline the modeling process for dietary intervention trials, ensuring consistency of the background diets, although appropriate constraints must be used in their development to achieve desired results. The DMT was found to be a valid automated tool producing similar results to tools with less automation. The results of this study suggest interchangeability of the modeling approaches used, although implementation should reflect the requirements of the dietary intervention trial in which it is used. PMID:27471104
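To illustrate the constraint-optimization core, the toy sketch below chooses servings per food group to hit an energy target and a protein floor while staying close to guideline serving targets; the food-group data, targets, and constraints are invented placeholders, not the DMT's actual model.

```python
# Toy dietary model via constrained optimization: minimize deviation from
# guideline servings subject to energy and protein constraints. All food
# composition numbers and targets below are invented for illustration.
import numpy as np
from scipy.optimize import minimize

# columns: kJ, protein g, fat g, carbohydrate g per serving
groups = ["grains", "vegetables", "fruit", "dairy", "meat"]
per_serve = np.array([[500, 3, 1, 24],
                      [100, 2, 0, 5],
                      [350, 1, 0, 20],
                      [550, 8, 5, 12],
                      [600, 25, 8, 0]], dtype=float)
target_serves = np.array([6, 5, 2, 2.5, 2.5])
energy_target = 8700.0                      # kJ/day

def objective(x):                           # stay near guideline servings
    return np.sum((x - target_serves) ** 2)

constraints = [
    {"type": "eq", "fun": lambda x: per_serve[:, 0] @ x - energy_target},
    {"type": "ineq", "fun": lambda x: per_serve[:, 1] @ x - 60},  # >= 60 g protein
]
res = minimize(objective, target_serves, bounds=[(0, 12)] * 5,
               constraints=constraints)
for g, s in zip(groups, res.x):
    print(f"{g:10s} {s:.1f} serves")
```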
Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head
Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.
2013-01-01
Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977
Kline, Timothy L; Korfiatis, Panagiotis; Edwards, Marie E; Blais, Jaime D; Czerwiec, Frank S; Harris, Peter C; King, Bernard F; Torres, Vicente E; Erickson, Bradley J
2017-08-01
Deep learning techniques are being rapidly applied to medical imaging tasks, from organ and lesion segmentation to tissue and tumor classification. These techniques are becoming the leading algorithmic approaches to solve inherently difficult image processing tasks. Currently, the most critical requirement for successful implementation lies in the need for relatively large datasets that can be used for training the deep learning networks. Based on our initial studies of MR imaging examinations of the kidneys of patients affected by polycystic kidney disease (PKD), we have generated a unique database of imaging data and corresponding reference standard segmentations of polycystic kidneys. In the study of PKD, segmentation of the kidneys is needed in order to measure total kidney volume (TKV). Automated methods to segment the kidneys and measure TKV are needed to increase measurement throughput and alleviate the inherent variability of human-derived measurements. We hypothesize that deep learning techniques can be leveraged to perform fast, accurate, reproducible, and fully automated segmentation of polycystic kidneys. Here, we describe a fully automated approach for segmenting PKD kidneys within MR images that simulates a multi-observer approach in order to create an accurate and robust method for the task of segmentation and computation of TKV for PKD patients. A total of 2000 cases were used for training and validation, and 400 cases were used for testing. The multi-observer ensemble method had mean ± SD percent volume difference of 0.68 ± 2.2% compared with the reference standard segmentations. The complete framework performs fully automated segmentation at a level comparable with interobserver variability and could be considered as a replacement for the task of segmentation of PKD kidneys by a human.
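The multi-observer ensemble idea can be sketched as majority voting over the masks of several independently trained networks, with total kidney volume computed from the consensus mask; the voting rule and voxel size below are illustrative assumptions.

```python
# Sketch of a multi-observer ensemble: combine segmentation masks from
# several models by majority vote, then compute total kidney volume (TKV)
# from the consensus mask. Voting rule and voxel size are assumptions.
import numpy as np

def ensemble_tkv(masks, voxel_volume_ml):
    """masks: (n_models, D, H, W) boolean predictions for one MR exam."""
    votes = masks.sum(axis=0)
    consensus = votes > masks.shape[0] / 2   # simple majority vote
    return consensus, consensus.sum() * voxel_volume_ml

rng = np.random.default_rng(0)
masks = rng.random((5, 32, 128, 128)) > 0.7  # stand-in for 5 model outputs
consensus, tkv = ensemble_tkv(masks, voxel_volume_ml=0.008)
print(f"TKV estimate: {tkv:.1f} mL")
```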
AMPA: an automated web server for prediction of protein antimicrobial regions.
Torrent, Marc; Di Tommaso, Paolo; Pulido, David; Nogués, M Victòria; Notredame, Cedric; Boix, Ester; Andreu, David
2012-01-01
AMPA is a web application for assessing the antimicrobial domains of proteins, with a focus on the design of new antimicrobial drugs. The application provides fast discovery of antimicrobial patterns in proteins that can be used to develop new peptide-based drugs against pathogens. Results are shown in a user-friendly graphical interface and can be downloaded as raw data for later examination. AMPA is freely available on the web at http://tcoffee.crg.cat/apps/ampa. The source code is also available on the web. Contact: marc.torrent@upf.edu; david.andreu@upf.edu. Supplementary data are available at Bioinformatics online.
Modeling and docking antibody structures with Rosetta
Weitzner, Brian D.; Jeliazkov, Jeliazko R.; Lyskov, Sergey; Marze, Nicholas; Kuroda, Daisuke; Frick, Rahel; Adolf-Bryfogle, Jared; Biswas, Naireeta; Dunbrack, Roland L.; Gray, Jeffrey J.
2017-01-01
We describe Rosetta-based computational protocols for predicting the three-dimensional structure of an antibody from sequence (RosettaAntibody) and then docking the antibody to protein antigens (SnugDock). Antibody modeling leverages canonical loop conformations to graft large segments from experimentally-determined structures as well as (1) energetic calculations to minimize loops, (2) docking methodology to refine the VL–VH relative orientation, and (3) de novo prediction of the elusive complementarity determining region (CDR) H3 loop. To alleviate model uncertainty, antibody–antigen docking resamples CDR loop conformations and can use multiple models to represent an ensemble of conformations for the antibody, the antigen or both. These protocols can be run fully-automated via the ROSIE web server (http://rosie.rosettacommons.org/) or manually on a computer with user control of individual steps. For best results, the protocol requires roughly 1,000 CPU-hours for antibody modeling and 250 CPU-hours for antibody–antigen docking. Tasks can be completed in under a day by using public supercomputers. PMID:28125104
Winter, Mark R.; Liu, Mo; Monteleone, David; Melunis, Justin; Hershberg, Uri; Goderie, Susan K.; Temple, Sally; Cohen, Andrew R.
2015-01-01
Summary Time-lapse microscopy can capture patterns of development through multiple divisions for an entire clone of proliferating cells. Images are taken every few minutes over many days, generating data too vast to process completely by hand. Computational analysis of this data can benefit from occasional human guidance. Here we combine improved automated algorithms with minimized human validation to produce fully corrected segmentation, tracking, and lineaging results with dramatic reduction in effort. A web-based viewer provides access to data and results. The improved approach allows efficient analysis of large numbers of clones. Using this method, we studied populations of progenitor cells derived from the anterior and posterior embryonic mouse cerebral cortex, each growing in a standardized culture environment. Progenitors from the anterior cortex were smaller, less motile, and produced smaller clones compared to those from the posterior cortex, demonstrating cell-intrinsic differences that may contribute to the areal organization of the cerebral cortex. PMID:26344906
Highly automated on-orbit operations of the NuSTAR telescope
NASA Astrophysics Data System (ADS)
Roberts, Bryce; Bester, Manfred; Dumlao, Renee; Eckert, Marty; Johnson, Sam; Lewis, Mark; McDonald, John; Pease, Deron; Picard, Greg; Thorsness, Jeremy
2014-08-01
UC Berkeley's Space Sciences Laboratory (SSL) currently operates a fleet of seven NASA satellites, which conduct research in the fields of space physics and astronomy. The newest addition to this fleet is a high-energy X-ray telescope called the Nuclear Spectroscopic Telescope Array (NuSTAR). Since 2012, SSL has conducted on-orbit operations for NuSTAR on behalf of the lead institution, principal investigator, and Science Operations Center at the California Institute of Technology. NuSTAR operations benefit from a truly multi-mission ground system architecture design focused on automation and autonomy that has been honed by over a decade of continual improvement and ground network expansion. This architecture has made flight operations possible with nominal 40 hours per week staffing, while not compromising mission safety. The remote NuSTAR Science Operations Center (SOC) and Mission Operations Center (MOC) are joined by a two-way electronic interface that allows the SOC to submit automatically validated telescope pointing requests, and also to receive raw data products that are automatically produced after downlink. Command loads are built and uploaded weekly, and a web-based timeline allows both the SOC and MOC to monitor the state of currently scheduled spacecraft activities. Network routing and the command and control system are fully automated by the MOC's central scheduling system. A closed-loop data accounting system automatically detects and retransmits data gaps. All passes are monitored by two independent paging systems, which alert staff of pass support problems or anomalous telemetry. NuSTAR mission operations now require less than one attended pass support per workday.
NASA Astrophysics Data System (ADS)
Clark, M.
2009-09-01
In the past, the physical presence and direct interaction of the astronomer with an observatory's staff and telescope equipment encouraged understanding and responsiveness between both staff and observers. But now, observatories often face the problem of expediently exchanging information with observers. New observatory procedures and policies such as automated-, remote- and service-observing, dynamic scheduling, data pipelining, or fully software-arbitrated telescope control provide for more efficient telescope use, but they have reduced the role of the observer to that of a customer rather than a partner in the process of observing. Topics for discussion will include scheduling, data quality, control interfaces, training and preparation for observing, and information distribution technologies, e.g., use of web sites, email, and RSS feeds.
A Web service substitution method based on service cluster nets
NASA Astrophysics Data System (ADS)
Du, YuYue; Gai, JunJing; Zhou, MengChu
2017-11-01
Service substitution is an important research topic in the fields of Web services and service-oriented computing. This work presents a novel method to analyse and substitute Web services. A new concept, called a Service Cluster Net Unit, is proposed based on Web service clusters. A service cluster is converted into a Service Cluster Net Unit, which is then used to analyse whether the services in the cluster can satisfy given service requests. Substitution methods for both atomic and composite services are also proposed. The correctness of the proposed method is proved, and its effectiveness is shown and compared with a state-of-the-art method via an experiment. It can be readily applied to e-commerce service substitution to meet business automation needs.
Internet Based Robot Control Using CORBA Based Communications
2009-12-01
Bhavani, Selvaraj Rani; Senthilkumar, Jagatheesan; Chilambuchelvan, Arul Gnanaprakasam; Manjula, Dhanabalachandran; Krishnamoorthy, Ramasamy; Kannan, Arputharaj
2015-03-27
The Internet has greatly enhanced health care, helping patients stay up-to-date on medical issues and general knowledge. Many cancer patients use the Internet for cancer diagnosis and related information. Recently, cloud computing has emerged as a new way of delivering health services, but currently there is no generic and fully automated cloud-based self-management intervention for breast cancer patients, as practical guidelines are lacking. We investigated the prevalence and predictors of cloud use for medical diagnosis among women with breast cancer to gain insight into meaningful usage parameters for evaluating a generic, fully automated cloud-based self-intervention, by assessing how breast cancer survivors use a generic self-management model. This goal was pursued by implementing and evaluating a new prototype called "CIMIDx", based on representative association rules that support the diagnosis of medical images (mammograms). The proposed Cloud-Based System Support Intelligent Medical Image Diagnosis (CIMIDx) prototype includes two modules. The first is the design and development of the CIMIDx training and test cloud services. Deployed in the cloud, the prototype can be used for diagnostic and screening mammography by assessing the cancers detected, tumor sizes, histology, and stage, as well as classification accuracy. To analyze the prototype's classification accuracy, we conducted an experiment with data provided by clients. Second, by monitoring cloud server requests, CIMIDx usage statistics were recorded for the cloud-based self-intervention groups. We conducted an evaluation of CIMIDx cloud service usage, in which browsing functionalities were evaluated from the end-user's perspective. We performed several experiments to validate the CIMIDx prototype for breast health issues. The first set of experiments evaluated the diagnostic performance of the CIMIDx framework. We collected medical information from 150 breast cancer survivors from hospitals and health centers. The CIMIDx prototype achieved sensitivity of up to 99.29% and accuracy of up to 98%. The second set of experiments evaluated CIMIDx use for breast health issues, using t tests and Pearson chi-square tests to assess differences, and binary logistic regression to estimate the odds ratio (OR) for predictors of CIMIDx use. Of the same 150 breast cancer survivors, we surveyed 114 (76.0%) through self-report questionnaires on the CIMIDx blogs. The frequency of log-ins per person ranged from 0 to 30, and total duration per person from 0 to 1500 minutes (25 hours). The 114 participants continued logging in through all phases, resulting in an intervention adherence rate of 44.3% (95% CI 33.2-55.9). Participants rated the overall performance of the prototype in the good category: reported usefulness (P=.77), overall satisfaction (P=.31), ease of navigation (P=.89), and user-friendliness (P=.31). Positive evaluations given by 100 participants via a Web-based questionnaire supported our hypothesis. The present study shows that women felt favorably about the use of a generic, fully automated cloud-based self-management prototype. The study also demonstrated that the CIMIDx prototype resulted in the detection of more cancers in screening and diagnosing patients, with an increased accuracy rate.
Aquatic models, genomics and chemical risk management.
Cheng, Keith C; Hinton, David E; Mattingly, Carolyn J; Planchart, Antonio
2012-01-01
The 5th Aquatic Animal Models for Human Disease meeting follows four previous meetings (Nairn et al., 2001; Schmale, 2004; Schmale et al., 2007; Hinton et al., 2009) in which advances in aquatic animal models for human disease research were reported, and community discussion of future direction was pursued. At this meeting, discussion at a workshop entitled Bioinformatics and Computational Biology with Web-based Resources (20 September 2010) led to an important conclusion: Aquatic model research using feral and experimental fish, in combination with web-based access to annotated anatomical atlases and toxicological databases, yields data that advance our understanding of human gene function, and can be used to facilitate environmental management and drug development. We propose here that the effects of genes and environment are best appreciated within an anatomical context - the specifically affected cells and organs in the whole animal. We envision the use of automated, whole-animal imaging at cellular resolution and computational morphometry facilitated by high-performance computing and automated entry into toxicological databases, as anchors for genetic and toxicological data, and as connectors between human and model system data. These principles should be applied to both laboratory and feral fish populations, which have been virtually irreplaceable sentinels for environmental contamination that results in human morbidity and mortality. We conclude that automation, database generation, and web-based accessibility, facilitated by genomic/transcriptomic data and high-performance and cloud computing, will potentiate the unique and potentially key roles that aquatic models play in advancing systems biology, drug development, and environmental risk management. Copyright © 2011 Elsevier Inc. All rights reserved.
Lubowitz, James H; Smith, Patrick A
2012-03-01
In 2011, postsurgical patient outcome data may be compiled in a research registry, allowing comparative-effectiveness research and cost-effectiveness analysis by use of Health Insurance Portability and Accountability Act-compliant, institutional review board-approved, Food and Drug Administration-approved, remote, Web-based data collection systems. Computerized automation minimizes cost and minimizes surgeon time demand. A research registry can be a powerful tool to observe and understand variations in treatment and outcomes, to examine factors that influence prognosis and quality of life, to describe care patterns, to assess effectiveness, to monitor safety, and to change provider practice through feedback of data. Registry of validated, prospective outcome data is required for arthroscopic and related researchers and the public to advocate with governments and health payers. The goal is to develop evidence-based data to determine the best methods for treating patients. Copyright © 2012 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2010-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may reduce the endothelial cell density to such an extent that the optical properties of the cornea, and thus clear eyesight, are threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis of a very large range of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images, normalizing lighting and contrast. The digitally enhanced images of the corneal endothelium were Fourier transformed, using the fast Fourier transform (FFT), and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Estimated cell densities of the corneal endothelium were obtained by running the fully automated analysis software on 292 images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a strong correlation was found.
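A sketch of the Fourier-based density estimate follows: the quasi-regular cell mosaic produces a ring in the power spectrum whose radius gives the dominant spatial frequency, and cell density scales roughly with that frequency squared. The radial peak-finding and the calibration are illustrative simplifications of the diffraction-theory calculation, not the study's Matlab implementation.

```python
# Toy Fourier-based cell density estimate: find the dominant ring radius in
# the radially averaged power spectrum and convert it to a spatial
# frequency. The calibration constant to true density is omitted here.
import numpy as np

def cell_density(img, pixels_per_mm):
    """img: 2D grayscale endothelial image; returns approx. cells per mm^2."""
    f = np.fft.fftshift(np.abs(np.fft.fft2(img - img.mean())) ** 2)
    h, w = f.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - cy, x - cx).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=f.ravel())
    radial = sums / np.maximum(counts, 1)           # radially averaged spectrum
    ring = 2 + np.argmax(radial[2:min(h, w) // 2])  # skip the DC peak
    cells_per_mm = ring / (w / pixels_per_mm)       # dominant spatial frequency
    return cells_per_mm ** 2                        # density ~ frequency^2

# Synthetic quasi-periodic mosaic imaged at an assumed 500 px/mm.
x = np.arange(256) / 500.0                          # mm
img = np.sin(2 * np.pi * 50 * x)[None, :] * np.sin(2 * np.pi * 50 * x)[:, None]
print(f"estimated density: {cell_density(img, 500):.0f} cells/mm^2")
```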
ERIC Educational Resources Information Center
Kim, Jae-Il; Lee, Sook; Kim, Jung-Hee
2013-01-01
The effectiveness of methods to prevent stroke recurrence and of education focusing on learners' needs has not been fully explored. The aims of this study were to assess the effects of such interventions among stroke patients and their primary caregivers and to evaluate the feasibility of a web-based stroke education program. The participants were…
Forster, Hannah; Walsh, Marianne C; O'Donovan, Clare B; Woolhead, Clara; McGirr, Caroline; Daly, E.J; O'Riordan, Richard; Celis-Morales, Carlos; Fallaize, Rosalind; Macready, Anna L; Marsaux, Cyril F M; Navas-Carretero, Santiago; San-Cristobal, Rodrigo; Kolossa, Silvia; Hartwig, Kai; Mavrogianni, Christina; Tsirigoti, Lydia; Lambrinou, Christina P; Godlewska, Magdalena; Surwiłło, Agnieszka; Gjelstad, Ingrid Merethe Fange; Drevon, Christian A; Manios, Yannis; Traczyk, Iwona; Martinez, J Alfredo; Saris, Wim H M; Daniel, Hannelore; Lovegrove, Julie A; Mathers, John C; Gibney, Michael J; Gibney, Eileen R
2016-01-01
Background Despite numerous healthy eating campaigns, the prevalence of diets high in saturated fatty acids, sugar, and salt and low in fiber, fruit, and vegetables remains high. With more people than ever accessing the Internet, Web-based dietary assessment instruments have the potential to promote healthier dietary behaviors via personalized dietary advice. Objective The objectives of this study were to develop a dietary feedback system for the delivery of consistent personalized dietary advice in a multicenter study and to examine the impact of automating the advice system. Methods The development of the dietary feedback system included 4 components: (1) designing a system for categorizing nutritional intakes; (2) creating a method for prioritizing 3 nutrient-related goals for subsequent targeted dietary advice; (3) constructing decision tree algorithms linking data on nutritional intake to feedback messages; and (4) developing personal feedback reports. The system was used manually by researchers to provide personalized nutrition advice based on dietary assessment to 369 participants during the Food4Me randomized controlled trial, with an automated version developed on completion of the study. Results Saturated fatty acid, salt, and dietary fiber were most frequently selected as nutrient-related goals across the 7 centers. Average agreement between the manual and automated systems, in selecting 3 nutrient-related goals for personalized dietary advice across the centers, was highest for nutrient-related goals 1 and 2 and lower for goal 3, averaging 92%, 87%, and 63%, respectively. Complete agreement between the 2 systems for feedback advice message selection averaged 87% across the centers. Conclusions The dietary feedback system was used to deliver personalized dietary advice within a multi-country study. Overall, there was good agreement between the manual and automated feedback systems, supporting the use of automated systems for personalizing dietary advice. Trial Registration Clinicaltrials.gov NCT01530139; https://clinicaltrials.gov/ct2/show/NCT01530139 (Archived by WebCite at http://www.webcitation.org/6ht5Dgj8I) PMID:27363307
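Component (3), the decision-tree mapping from categorized intakes to feedback messages, can be sketched in a few lines. Everything below is hypothetical: the cut-offs, weights, and messages are placeholders standing in for the Food4Me algorithms, which the abstract does not reproduce.

```python
# Hypothetical cut-offs; the published Food4Me system used its own values.
TARGETS = {            # nutrient: (cutoff, direction, priority weight)
    "saturated_fat_pct_energy": (11.0, "max", 3),
    "salt_g_per_day":           (6.0,  "max", 2),
    "fibre_g_per_day":          (25.0, "min", 2),
    "sugar_pct_energy":         (10.0, "max", 1),
}

MESSAGES = {           # illustrative feedback messages, one per (nutrient, category)
    ("saturated_fat_pct_energy", "above"): "Swap butter and fatty meat for unsaturated alternatives.",
    ("salt_g_per_day", "above"):           "Cut down on processed foods to reduce salt intake.",
    ("fibre_g_per_day", "below"):          "Add wholegrains, fruit and vegetables to boost fibre.",
    ("sugar_pct_energy", "above"):         "Replace sugary drinks with water.",
}

def personalised_advice(intakes, n_goals=3):
    """Categorise intakes, rank the worst deviations as goals, return messages."""
    flagged = []
    for nutrient, (cutoff, direction, weight) in TARGETS.items():
        value = intakes[nutrient]
        if direction == "max" and value > cutoff:
            flagged.append((weight * (value - cutoff) / cutoff, nutrient, "above"))
        elif direction == "min" and value < cutoff:
            flagged.append((weight * (cutoff - value) / cutoff, nutrient, "below"))
    flagged.sort(reverse=True)                      # worst weighted deviations first
    return [MESSAGES[(n, cat)] for _, n, cat in flagged[:n_goals]]

print(personalised_advice({"saturated_fat_pct_energy": 15.0, "salt_g_per_day": 9.1,
                           "fibre_g_per_day": 14.0, "sugar_pct_energy": 8.0}))
```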
Instrumental Analysis Chemistry Laboratory
ERIC Educational Resources Information Center
Munoz de la Pena, Arsenio; Gonzalez-Gomez, David; Munoz de la Pena, David; Gomez-Estern, Fabio; Sequedo, Manuel Sanchez
2013-01-01
A system designed for automating the collection and assessment of laboratory exercises is presented. This Web-based system has been extensively used in engineering courses such as control systems, mechanics, and computer programming. Goodle GMS allows the students to submit their results to a…
77 FR 71177 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-29
... automated Tri-Service, Web-based database containing credentialing, privileging, risk management, and... credentialing, privileging, risk-management and adverse actions capabilities which support medical quality... submitting comments. Mail: Federal Docket Management System Office, 4800 Mark Center Drive, East Tower, 2nd...
NASA Technical Reports Server (NTRS)
Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.; Coppenbarger, Richard
1990-01-01
Developing a single-pilot all-weather NOE capability requires fully automatic NOE navigation and flight control. Innovative guidance and control concepts are being investigated to (1) organize the onboard computer-based storage and real-time updating of NOE terrain profiles and obstacles; (2) define a class of automatic anticipative pursuit guidance algorithms to follow the vertical, lateral, and longitudinal guidance commands; (3) automate a decision-making process for unexpected obstacle avoidance; and (4) provide several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with the recorded environment which is then used to determine an appropriate evasive maneuver if a nonconformity is observed. This research effort has been evaluated in both fixed-base and moving-base real-time piloted simulations thereby evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and reengagement of the automatic system.
NASA Astrophysics Data System (ADS)
Albrecht, Florian; Weinke, Elisabeth; Eisank, Clemens; Vecchiotti, Filippo; Hölbling, Daniel; Friedl, Barbara; Kociu, Arben
2017-04-01
Regional authorities and infrastructure maintainers in almost all mountainous regions of the Earth need detailed and up-to-date landslide inventories for hazard and risk management. Landslide inventories usually are compiled through ground surveys and manual image interpretation following landslide triggering events. We developed a web service that uses Earth Observation (EO) data to support the mapping and monitoring tasks for improving the collection of landslide information. The planned validation of the EO-based web service covers not only the analysis of the achievable landslide information quality but also the usability and user-friendliness of the user interface. The underlying validation criteria are based on the user requirements and the defined tasks and aims in the work description of the FFG project Land@Slide (EO-based landslide mapping: from methodological developments to automated web-based information delivery). The service will be validated in collaboration with stakeholders, decision makers and experts. Users are requested to test the web service functionality and give feedback with a web-based questionnaire by following the subsequently described workflow. The users operate the web service via the responsive user interface and can extract landslide information from EO data. They compare it to reference data for quality assessment, for monitoring changes and for assessing landslide-affected infrastructure. An overview page lets the user explore a list of example projects with resulting landslide maps and mapping workflow descriptions. The example projects include mapped landslides in several test areas in Austria and Northern Italy. Landslides were extracted from high resolution (HR) and very high resolution (VHR) satellite imagery, such as Landsat, Sentinel-2, SPOT-5, WorldView-2/3 or Pléiades. The user can create his/her own project by selecting available satellite imagery or by uploading new data. Subsequently, a new landslide extraction workflow can be initiated through the functionality that the web service provides: (1) a segmentation of the image into spectrally homogeneous objects, (2) a classification of the objects into landslide and non-landslide areas and (3) an editing tool for the manual refinement of extracted landslide boundaries. In addition, the user interface of the web service provides tools that enable the user (4) to perform a monitoring that identifies changes between landslide maps of different points in time, (5) to perform a validation of the landslide maps by comparing them to reference data, and (6) to perform an assessment of affected infrastructure by comparing the landslide maps to respective infrastructure data. After exploring the web service functionality, the users are asked to fill in the online validation protocol in the form of a questionnaire in order to provide their feedback. Concerning usability, we evaluate how intuitively the web service functionality can be operated, how well the integrated help information guides the users, and what kind of background information, e.g. remote sensing concepts and theory, is necessary for a practitioner to fully exploit the value of EO data. The feedback will be used for improving the user interface and for the implementation of additional functionality.
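To make steps (1) and (2) concrete, here is a minimal, self-contained sketch of object-based landslide candidate extraction. It is not the Land@Slide implementation: the NDVI threshold, the minimum object size, and the use of simple connected components in place of a full segmentation are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def candidate_landslides(red, nir, ndvi_max=0.2, min_pixels=50):
    """Steps (1)-(2) in miniature: group pixels into candidate objects and
    classify them as landslide/non-landslide by low vegetation cover.

    red, nir : 2-D reflectance arrays from an HR/VHR scene (assumed bands)
    """
    ndvi = (nir - red) / (nir + red + 1e-9)
    bare = ndvi < ndvi_max                 # fresh landslides expose bare soil
    labels, n = ndimage.label(bare)        # spatially contiguous objects
    sizes = ndimage.sum(bare, labels, index=range(1, n + 1))
    keep = {i + 1 for i, s in enumerate(sizes) if s >= min_pixels}
    # Candidate landslide map, ready for manual refinement (step 3)
    return np.isin(labels, list(keep))

# Synthetic demo scene with one bare, low-NDVI patch
rng = np.random.default_rng(0)
red = rng.uniform(0.1, 0.3, (200, 200)); nir = rng.uniform(0.4, 0.6, (200, 200))
red[50:80, 50:120] = 0.3; nir[50:80, 50:120] = 0.15
print(candidate_landslides(red, nir).sum(), "candidate landslide pixels")
```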
The automated data processing architecture for the GPI Exoplanet Survey
NASA Astrophysics Data System (ADS)
Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Graham, James R.; Macintosh, Bruce
2017-09-01
The Gemini Planet Imager Exoplanet Survey (GPIES) is a multi-year direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the GPIES Data Cruncher, combines multiple data reduction pipelines together to intelligently process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow-up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our data reduction pipelines. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real-time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.
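The backend pattern described here (index every reduced file, then notify observers in chat) can be miniaturized as follows. The table schema and webhook URL are hypothetical, and sqlite3 stands in for the MySQL backend; Slack incoming webhooks do accept a JSON payload with a "text" field, but the rest of the wiring is an assumption, not the GPIES code.

```python
import json, sqlite3, urllib.request
from pathlib import Path

DB = sqlite3.connect("gpies_index.db")     # sqlite3 standing in for the MySQL backend
DB.execute("""CREATE TABLE IF NOT EXISTS reduced_files
              (path TEXT PRIMARY KEY, mode TEXT, mtime REAL)""")

WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical URL

def index_and_announce(data_dir, mode="spec"):
    """Index newly reduced files and post a chat notification for each."""
    for f in Path(data_dir).glob("*.fits"):
        cur = DB.execute("INSERT OR IGNORE INTO reduced_files VALUES (?,?,?)",
                         (str(f), mode, f.stat().st_mtime))
        if cur.rowcount:                   # announce only genuinely new products
            payload = json.dumps({"text": f"New reduced {mode} product: {f.name}"})
            req = urllib.request.Request(WEBHOOK, payload.encode(),
                                         {"Content-Type": "application/json"})
            urllib.request.urlopen(req)
    DB.commit()
```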
Beveridge, Allan
2006-01-01
The Internet consists of a vast inhomogeneous reservoir of data. Developing software that can integrate a wide variety of different data sources is a major challenge that must be addressed for the realisation of the full potential of the Internet as a scientific research tool. This article presents a semi-automated object-oriented programming system for integrating web-based resources. We demonstrate that the current Internet standards (HTML, CGI [common gateway interface], Java, etc.) can be exploited to develop a data retrieval system that scans existing web interfaces and then uses a set of rules to generate new Java code that can automatically retrieve data from the Web. The validity of the software has been demonstrated by testing it on several biological databases. We also examine the current limitations of the Internet and discuss the need for the development of universal standards for web-based data.
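The scan-an-interface-then-generate-a-retriever idea can be illustrated with the Python standard library. This sketch emits Python rather than the article's Java, and its form-parsing rules are far simpler than the rule set the article describes; every detail is illustrative.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
import urllib.request

class FormScanner(HTMLParser):
    """Scan an HTML page for its first form; record the action and input names."""
    def __init__(self):
        super().__init__()
        self.action, self.fields = None, []
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form" and self.action is None:
            self.action = a.get("action", "")
        elif tag == "input" and a.get("name"):
            self.fields.append(a["name"])

def generate_retriever(page_url):
    """Emit source code for a function that queries the scanned CGI interface."""
    scanner = FormScanner()
    scanner.feed(urllib.request.urlopen(page_url).read().decode(errors="replace"))
    target = urljoin(page_url, scanner.action or "")
    args = ", ".join(scanner.fields)
    body = ", ".join(f"{f!r}: {f}" for f in scanner.fields)
    return (f"def retrieve({args}):\n"
            "    from urllib.request import urlopen\n"
            "    from urllib.parse import urlencode\n"
            f"    query = urlencode({{{body}}})\n"
            f"    return urlopen('{target}?' + query).read()\n")
```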
The semantic web and computer vision: old AI meets new AI
NASA Astrophysics Data System (ADS)
Mundy, J. L.; Dong, Y.; Gilliam, A.; Wagner, R.
2018-04-01
There has been vast progress in linking semantic information across the billions of web pages through the use of ontologies encoded in the Web Ontology Language (OWL), based on the Resource Description Framework (RDF). A prime example is Wikipedia, where the knowledge contained in its more than four million pages is encoded in an ontological database called DBPedia (http://wiki.dbpedia.org/). Web-based query tools can retrieve semantic information from DBPedia encoded in interlinked ontologies that can be accessed using natural language. This paper will show how this vast context can be used to automate the process of querying images and other geospatial data in support of reporting changes in structures and activities. Computer vision algorithms are selected and provided with context based on natural language requests for monitoring and analysis. The resulting reports provide semantically linked observations from images and 3D surface models.
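DBpedia's public SPARQL endpoint makes the kind of semantic lookup described above easy to demonstrate. A minimal sketch, assuming the endpoint and its predefined dbo: prefix behave as recalled here; the queried resource is only an example.

```python
import json, urllib.parse, urllib.request

def dbpedia_abstract(resource, lang="en"):
    """Fetch the ontological abstract for one DBpedia resource via SPARQL."""
    query = f"""
        SELECT ?abs WHERE {{
          <http://dbpedia.org/resource/{resource}> dbo:abstract ?abs .
          FILTER (lang(?abs) = '{lang}')
        }}"""
    url = ("https://dbpedia.org/sparql?" +
           urllib.parse.urlencode({"query": query,
                                   "format": "application/sparql-results+json"}))
    rows = json.load(urllib.request.urlopen(url))["results"]["bindings"]
    return rows[0]["abs"]["value"] if rows else None

print(dbpedia_abstract("Synthetic_aperture_radar"))
```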
A web-based platform for virtual screening.
Watson, Paul; Verdonk, Marcel; Hartshorn, Michael J
2003-09-01
A fully integrated, web-based virtual screening platform has been developed to allow rapid virtual screening of large numbers of compounds. ORACLE is used to store information at all stages of the process. The system includes ATLAS, a large database of historical compounds from high-throughput screening (HTS) and chemical suppliers, containing over 3.1 million unique compounds with their associated physicochemical properties (ClogP, MW, etc.). The database can be screened using a web-based interface to produce compound subsets for virtual screening or virtual library (VL) enumeration. In order to carry out the latter task within ORACLE, a reaction data cartridge has been developed. Virtual libraries can be enumerated rapidly using the web-based interface to the cartridge. The compound subsets can be seamlessly submitted for virtual screening experiments, and the results can be viewed via another web-based interface allowing ad hoc querying of the virtual screening data stored in ORACLE.
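The property-based subset selection step is plain SQL underneath. A sketch with sqlite3 standing in for ORACLE; the table layout and cut-offs are assumptions for illustration, not the ATLAS schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")           # sqlite3 standing in for ORACLE
db.execute("CREATE TABLE atlas (id INTEGER PRIMARY KEY, smiles TEXT,"
           " clogp REAL, mw REAL)")
db.executemany("INSERT INTO atlas (smiles, clogp, mw) VALUES (?,?,?)",
               [("CCO", -0.1, 46.07), ("c1ccccc1", 2.1, 78.11),
                ("CC(=O)Oc1ccccc1C(=O)O", 1.2, 180.16)])

# A 'rule-of-five'-style property filter producing a virtual screening subset
subset = db.execute("SELECT id, smiles FROM atlas"
                    " WHERE clogp <= 5 AND mw BETWEEN 150 AND 500").fetchall()
print(subset)   # -> [(3, 'CC(=O)Oc1ccccc1C(=O)O')]
```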
NASA Astrophysics Data System (ADS)
Brown, James M.; Campbell, J. Peter; Beers, Andrew; Chang, Ken; Donohue, Kyra; Ostmo, Susan; Chan, R. V. Paul; Dy, Jennifer; Erdogmus, Deniz; Ioannidis, Stratis; Chiang, Michael F.; Kalpathy-Cramer, Jayashree
2018-03-01
Retinopathy of prematurity (ROP) is a disease that affects premature infants, where abnormal growth of the retinal blood vessels can lead to blindness unless treated accordingly. Infants considered at risk of severe ROP are monitored for symptoms of plus disease, characterized by arterial tortuosity and venous dilation at the posterior pole, with a standard photographic definition. Disagreement among ROP experts in diagnosing plus disease has driven the development of computer-based methods that classify images based on hand-crafted features extracted from the vasculature. However, most of these approaches are semi-automated, making them time-consuming and subject to variability. In contrast, deep learning is a fully automated approach that has shown great promise in a wide variety of domains, including medical genetics, informatics and imaging. Convolutional neural networks (CNNs) are deep networks which learn rich representations of disease features that are highly robust to variations in acquisition and image quality. In this study, we utilized a U-Net architecture to perform vessel segmentation and then a GoogLeNet to perform disease classification. The classifier was trained on 3,000 retinal images and validated on an independent test set of patients with different observed progressions and treatments. We show that our fully automated algorithm can be used to monitor the progression of plus disease over multiple patient visits with results that are consistent with the experts' consensus diagnosis. Future work will aim to further validate the method on larger cohorts of patients to assess its applicability within the clinic as a treatment monitoring tool.
Laboratory Testing Protocols for Heparin-Induced Thrombocytopenia (HIT) Testing.
Lau, Kun Kan Edwin; Mohammed, Soma; Pasalic, Leonardo; Favaloro, Emmanuel J
2017-01-01
Heparin-induced thrombocytopenia (HIT) represents a significant, high-morbidity complication of heparin therapy. The clinicopathological diagnosis of HIT remains challenging for many reasons; thus, laboratory testing represents an important component of an accurate diagnosis. Although there are many assays available to assess HIT, these essentially fall into two categories: (a) immunological assays and (b) functional assays. The current chapter presents protocols for several HIT assays, being those that are most commonly performed in laboratory practice and have the widest geographic distribution. These comprise a manual lateral flow-based system (STiC), a fully automated latex immunoturbidimetric assay, a fully automated chemiluminescent assay (CLIA), light transmission aggregation (LTA), and whole blood aggregation (Multiplate).
An Open-Source Automated Peptide Synthesizer Based on Arduino and Python.
Gali, Hariprasad
2017-10-01
The development of the first open-source automated peptide synthesizer, PepSy, using an Arduino UNO and readily available components is reported. PepSy was primarily designed to synthesize small peptides on a relatively small scale (<100 µmol). Scripts to operate PepSy in fully automatic or manual mode were written in Python. The fully automatic script includes functions to carry out resin swelling, resin washing, single coupling, double coupling, Fmoc deprotection, ivDde deprotection, on-resin oxidation, end capping, and amino acid/reagent line cleaning. Several small peptides and peptide conjugates were successfully synthesized on PepSy with reasonably good yields and purity, depending on the complexity of the peptide.
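A control script of this kind ultimately reduces to sending timed valve and pump commands to the microcontroller. The sketch below uses pyserial and an entirely hypothetical command protocol (the VALVE/WASH strings, port name, and timings are invented); PepSy's actual wiring and command set are documented in the paper, not here.

```python
import time
import serial  # pyserial

PORT = serial.Serial("/dev/ttyACM0", 9600, timeout=2)   # Arduino UNO (assumed port)

def command(cmd):
    """Send one hypothetical valve/pump command and wait for an ACK line."""
    PORT.write((cmd + "\n").encode())
    return PORT.readline().decode().strip()

def coupling_cycle(aa_line, coupling_min=30):
    """One Fmoc SPPS cycle: deprotect, wash, couple, wash (timings illustrative)."""
    command("VALVE OPEN piperidine")        # Fmoc deprotection
    time.sleep(300)
    command("VALVE CLOSE piperidine")
    for _ in range(3):
        command("WASH DMF")                 # resin washing
    command(f"VALVE OPEN {aa_line}")        # deliver activated amino acid
    time.sleep(coupling_min * 60)           # single coupling
    command(f"VALVE CLOSE {aa_line}")
    for _ in range(3):
        command("WASH DMF")

for line in ["aa1", "aa2", "aa3"]:          # C-to-N assembly of a tripeptide
    coupling_cycle(line)
```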
A new web-based system to improve the monitoring of snow avalanche hazard in France
NASA Astrophysics Data System (ADS)
Bourova, Ekaterina; Maldonado, Eric; Leroy, Jean-Baptiste; Alouani, Rachid; Eckert, Nicolas; Bonnefoy-Demongeot, Mylene; Deschatres, Michael
2016-05-01
Snow avalanche data in the French Alps and Pyrenees have been recorded for more than 100 years in several databases. The increasing amount of observed data required a more integrative and automated service. Here we report the comprehensive web-based Snow Avalanche Information System newly developed to this end for three important data sets: an avalanche chronicle (Enquête Permanente sur les Avalanches, EPA), an avalanche map (Carte de Localisation des Phénomènes d'Avalanche, CLPA) and a compilation of hazard and vulnerability data recorded on selected paths endangering human settlements (Sites Habités Sensibles aux Avalanches, SSA). These data sets are now integrated into a common database, enabling full interoperability between all different types of snow avalanche records: digitized geographic data, avalanche descriptive parameters, eyewitness reports, photographs, hazard and risk levels, etc. The new information system is implemented through modular components using Java-based web technologies with Spring and Hibernate frameworks. It automates the manual data entry and improves the process of information collection and sharing, enhancing user experience and data quality, and offering new outlooks to explore and exploit the huge amount of snow avalanche data available for fundamental research and more applied risk assessment.
Markert, Sven; Joeris, Klaus
2017-01-01
We developed an automated microtiter plate (MTP)-based system for suspension cell culture to meet the increased demands for miniaturized high throughput applications in biopharmaceutical process development. The generic system is based on off-the-shelf commercial laboratory automation equipment and is able to utilize MTPs of different configurations (6-24 wells per plate) in orbital shaken mode. The shaking conditions were optimized by Computational Fluid Dynamics simulations. The fully automated system handles plate transport, seeding and feeding of cells, daily sampling, and preparation of analytical assays. The integration of all required analytical instrumentation into the system enables hands-off operation, which prevents bottlenecks in sample processing. The modular set-up makes the system flexible and adaptable for a continuous extension of analytical parameters and add-on components. The system proved suitable as a screening tool for process development by verifying the comparability of results for the MTP-based system and bioreactors regarding profiles of viable cell density, lactate, and product concentration of CHO cell lines. These studies confirmed that 6 well MTPs as well as 24 deepwell MTPs were predictive for a scale up to a 1000 L stirred tank reactor (scale factor 1:200,000). Applying the established cell culture system for automated media blend screening in late stage development, a 22% increase in product yield was achieved in comparison to the reference process. The predicted product increase was subsequently confirmed in 2 L bioreactors. Thus, we demonstrated the feasibility of the automated MTP-based cell culture system for enhanced screening and optimization applications in process development and identified further application areas such as process robustness. The system offers a great potential to accelerate time-to-market for new biopharmaceuticals. Biotechnol. Bioeng. 2017;114: 113-121. © 2016 Wiley Periodicals, Inc.
Lessons from the MicroObservatory Net
NASA Astrophysics Data System (ADS)
Brecher, K.; Sadler, P.; Gould, R.; Leiker, S.; Antonucci, P.; Deutsch, F.
1998-12-01
Over the past several years, we have developed a fully integrated automated astronomical telescope system which combines the imaging power of a cooled CCD, with a self-contained and weatherized 15 cm reflecting optical telescope and mount. Each telescope can be pointed and focused remotely, and filters, field of view and exposure times can be changed easily. The MicroObservatory Net consists of five of these telescopes. They are being deployed around the world at widely distributed longitudes for access to distant night skies during local daytime. Remote access to the MicroObservatories over the Internet has been available to select schools since 1995. The telescopes can be controlled in real time or in delay mode, from any computer using Web-based software. Individuals have access to all of the telescope control functions without the need for an `on-site' operator. After a MicroObservatory completes a job, the user is automatically notified by e-mail that the image is available for viewing and downloading from the Web site. Images are archived at the Web site, along with sample challenges and a user bulletin board, all of which encourage collaboration between schools. The Internet address of the telescopes is http://mo-www.harvard.edu/MicroObservatory/. The telescopes were designed for classroom instruction by teachers, as well as for use by students and amateur astronomers for original scientific research projects. In this talk, we will review some of the experiences we, students and teachers have had in using the telescopes. Support for the MicroObservatory Net has been provided by the NSF, Apple Computer, Inc. and Kodak, Inc.
Campbell, J Q; Petrella, A J
2016-09-06
Population-based modeling of the lumbar spine has the potential to be a powerful clinical tool. However, developing a fully parameterized model of the lumbar spine with accurate geometry has remained a challenge. The current study used automated methods for landmark identification to create a statistical shape model of the lumbar spine. The shape model was evaluated using compactness, generalization ability, and specificity. The primary shape modes were analyzed visually, quantitatively, and biomechanically. The biomechanical analysis was performed by using the statistical shape model with an automated method for finite element model generation to create a fully parameterized finite element model of the lumbar spine. Functional finite element models of the mean shape and the extreme shapes (±3 standard deviations) of all 17 shape modes were created demonstrating the robust nature of the methods. This study represents an advancement in finite element modeling of the lumbar spine and will allow population-based modeling in the future. Copyright © 2016 Elsevier Ltd. All rights reserved.
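A statistical shape model of this kind is typically a point-distribution model: aligned landmarks, a mean shape, and principal modes obtained by SVD/PCA, with instances generated at ±3 standard deviations along each mode. A minimal numpy sketch on synthetic stand-in landmarks; only the 17-mode count echoes the study, everything else is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_landmarks = 40, 120
X = rng.normal(size=(n_subjects, n_landmarks * 3))   # stand-in aligned landmarks

mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
std_modes = s / np.sqrt(n_subjects - 1)              # per-mode standard deviations

def shape_instance(mode, sd):
    """Mean shape perturbed along one shape mode by `sd` standard deviations."""
    return (mean + sd * std_modes[mode] * Vt[mode]).reshape(n_landmarks, 3)

# Extreme shapes (+/-3 SD) of the first 17 modes, as in the biomechanical study
extremes = {m: (shape_instance(m, -3), shape_instance(m, +3)) for m in range(17)}

# Compactness: cumulative fraction of shape variance captured
compactness = np.cumsum(std_modes**2) / np.sum(std_modes**2)
print("variance captured by first 5 modes:", round(float(compactness[4]), 3))
```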
The NIF DISCO Framework: Facilitating Automated Integration of Neuroscience Content on the Web
Marenco, Luis; Wang, Rixin; Shepherd, Gordon M.; Miller, Perry L.
2013-01-01
This paper describes the capabilities of DISCO, an extensible approach that supports integrative Web-based information dissemination. DISCO is a component of the Neuroscience Information Framework (NIF), an NIH Neuroscience Blueprint initiative that facilitates integrated access to diverse neuroscience resources via the Internet. DISCO facilitates the automated maintenance of several distinct capabilities using a collection of files 1) that are maintained locally by the developers of participating neuroscience resources and 2) that are “harvested” on a regular basis by a central DISCO server. This approach allows central NIF capabilities to be updated as each resource’s content changes over time. DISCO currently supports the following capabilities: 1) resource descriptions, 2) “LinkOut” to a resource’s data items from NCBI Entrez resources such as PubMed, 3) Web-based interoperation with a resource, 4) sharing a resource’s lexicon and ontology, 5) sharing a resource’s database schema, and 6) participation by the resource in neuroscience-related RSS news dissemination. The developers of a resource are free to choose which DISCO capabilities their resource will participate in. Although DISCO is used by NIF to facilitate neuroscience data integration, its capabilities have general applicability to other areas of research. PMID:20387131
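The harvesting pattern DISCO uses (resources maintain small description files locally; a central server pulls them on a schedule) looks roughly like the sketch below. The file URLs and JSON layout are invented for illustration; DISCO's real file formats are defined by NIF, not reproduced here.

```python
import json, time, urllib.request

RESOURCES = [  # hypothetical locally maintained resource-description files
    "https://example-resource-a.org/disco/resource.json",
    "https://example-resource-b.org/disco/resource.json",
]

def harvest(index):
    """Pull each resource's description file and refresh the central index."""
    for url in RESOURCES:
        try:
            desc = json.load(urllib.request.urlopen(url, timeout=10))
        except OSError:
            continue                      # resource temporarily unreachable
        index[desc["name"]] = {"capabilities": desc.get("capabilities", []),
                               "harvested": time.time()}
    return index

central_index = harvest({})               # in DISCO this runs on a regular schedule
```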
Nakanishi, Rine; Sankaran, Sethuraman; Grady, Leo; Malpeso, Jenifer; Yousfi, Razik; Osawa, Kazuhiro; Ceponiene, Indre; Nazarat, Negin; Rahmani, Sina; Kissel, Kendall; Jayawardena, Eranthi; Dailing, Christopher; Zarins, Christopher; Koo, Bon-Kwon; Min, James K; Taylor, Charles A; Budoff, Matthew J
2018-03-23
Our goal was to evaluate the efficacy of a fully automated method for assessing the image quality (IQ) of coronary computed tomography angiography (CCTA). The machine learning method was trained using 75 CCTA studies by mapping features (noise, contrast, misregistration scores, and un-interpretability index) to an IQ score based on manual ground truth data. The automated method was validated on a set of 50 CCTA studies and subsequently tested on a new set of 172 CCTA studies against visual IQ scores on a 5-point Likert scale. The area under the curve in the validation set was 0.96. In the 172 CCTA studies, our method yielded a Cohen's kappa statistic for the agreement between automated and visual IQ assessment of 0.67 (p < 0.01). Of the studies graded as having good to excellent (n = 163), fair (n = 6), and poor (n = 3) visual IQ, 155, 5, and 2, respectively, received an automated IQ score > 50%. Fully automated assessment of the IQ of CCTA data sets by machine learning was reproducible and provided similar results compared with visual analysis within the limits of inter-operator variability. • The proposed method enables automated and reproducible image quality assessment. • Machine learning and visual assessments yielded comparable estimates of image quality. • Automated assessment potentially allows for more standardised image quality. • Image quality assessment enables standardization of clinical trial results across different datasets.
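The headline agreement statistic here is Cohen's kappa between the automated and visual Likert grades. For readers who want to reproduce that arithmetic, a two-line sketch with scikit-learn on synthetic grades (the values below are invented, not the study's data):

```python
from sklearn.metrics import cohen_kappa_score

# Synthetic 5-point Likert IQ grades for 172 studies (1 = poor ... 5 = excellent)
visual    = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4] * 17 + [5, 4]
automated = [5, 4, 3, 3, 5, 2, 4, 4, 3, 4] * 17 + [5, 5]

print("kappa =", round(cohen_kappa_score(visual, automated), 2))
```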
BioBlocks: Programming Protocols in Biology Made Easier.
Gupta, Vishal; Irimia, Jesús; Pau, Iván; Rodríguez-Patón, Alfonso
2017-07-21
The methods to execute biological experiments are evolving. Affordable fluid handling robots and on-demand biology enterprises are making automating entire experiments a reality. Automation offers the benefit of high-throughput experimentation, rapid prototyping, and improved reproducibility of results. However, learning to automate and codify experiments is a difficult task as it requires programming expertise. Here, we present a web-based visual development environment called BioBlocks for describing experimental protocols in biology. It is based on Google's Blockly and Scratch, and requires little or no experience in computer programming to automate the execution of experiments. The experiments can be specified, saved, modified, and shared between multiple users in an easy manner. BioBlocks is open-source and can be customized to execute protocols on local robotic platforms or remotely, that is, in the cloud. It aims to serve as a de facto open standard for programming protocols in Biology.
DOT National Transportation Integrated Search
2003-05-01
The Nebraska Department of Roads (NDOR) currently makes statewide travel information available via the Internet and 511 phone service. As NDOR moves forward with enhancing and automating its ATIS capabilities, the department desires to upgrade its cu...
Parmodel: a web server for automated comparative modeling of proteins.
Uchôa, Hugo Brandão; Jorge, Guilherme Eberhart; Freitas Da Silveira, Nelson José; Camera, João Carlos; Canduri, Fernanda; De Azevedo, Walter Filgueira
2004-12-24
Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users to perform modeling, assessment, visualization, and optimization of protein models, as well as crystallographers to evaluate structures solved experimentally. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization, and Parmodel Optimization. The main module is Parmodel Modeling, which allows the building of several models for the same protein in a reduced time through the distribution of modeling processes on a Beowulf cluster. Parmodel automates and integrates the main software packages used in comparative modeling, such as MODELLER, Whatcheck, Procheck, Raster3D, Molscript, and Gromacs. This web server is freely accessible at .
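The speed-up Parmodel gets from its Beowulf cluster comes from the fact that comparative-modeling runs for one target are independent. Below is a sketch of that distribution pattern with a placeholder build function; nothing here is the MODELLER API, and the pseudo scores are invented.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def build_model(seed):
    """Placeholder for one MODELLER run; returns (model id, pseudo energy score)."""
    rng = random.Random(seed)
    return f"model_{seed:03d}.pdb", rng.gauss(-35000, 500)

if __name__ == "__main__":
    # Farm N independent modeling jobs out to the available nodes/cores
    with ProcessPoolExecutor() as pool:
        models = list(pool.map(build_model, range(20)))
    best = min(models, key=lambda m: m[1])   # keep the best-scoring model
    print("best model:", best)
```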
NASA Astrophysics Data System (ADS)
Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-03-01
The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
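The leave-one-out arithmetic (every case auto-segmented once with each remaining case as the atlas, then percent error of mean organ dose against the expert contours) is easy to mock up. All numbers below are synthetic; only the array shapes mirror the study's 20 cases x 19 atlases x 9 regions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cases, n_regions = 20, 9
expert_dose = rng.uniform(5, 50, size=(n_cases, n_regions))   # mGy, expert contours

# 19 auto-segmentations per case (each remaining case used once as the atlas);
# modeled here as small multiplicative perturbations of the expert dose
auto_dose = expert_dose[:, None, :] * rng.normal(1.0, 0.03,
                                                 size=(n_cases, n_cases - 1, n_regions))

pct_err = 100 * np.abs(auto_dose - expert_dose[:, None, :]) / expert_dose[:, None, :]
print("median error per region (%):", np.round(np.median(pct_err, axis=(0, 1)), 2))
print("max error overall (%):", round(float(pct_err.max()), 1))
```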
Raterink, Robert-Jan; Witkam, Yoeri; Vreeken, Rob J; Ramautar, Rawi; Hankemeier, Thomas
2014-10-21
In the field of bioanalysis, there is an increasing demand for miniaturized, automated, robust sample pretreatment procedures that can be easily connected to direct-infusion mass spectrometry (DI-MS) in order to allow the high-throughput screening of drugs and/or their metabolites in complex body fluids like plasma. Liquid-liquid extraction (LLE) is a common sample pretreatment technique often used for complex aqueous samples in bioanalysis. Despite significant developments that have been made in automated and miniaturized LLE procedures, fully automated LLE techniques allowing high-throughput bioanalytical studies on small-volume samples using direct-infusion mass spectrometry have not yet matured. Here, we introduce a new fully automated micro-LLE technique based on gas-pressure assisted mixing followed by passive phase separation, coupled online to nanoelectrospray-DI-MS. Our method was characterized by varying the gas flow and its duration through the solvent mixture. For evaluation of the analytical performance, four drugs were spiked to human plasma, resulting in highly acceptable precision (RSD down to 9%) and linearity (R² ranging from 0.990 to 0.998). We demonstrate that our new method does not only allow the reliable extraction of analytes from small sample volumes of a few microliters in an automated and high-throughput manner, but also performs comparably to or better than conventional offline LLE, in which the handling of small volumes remains challenging. Finally, we demonstrate the applicability of our method for drug screening on dried blood spots, showing excellent linearity (R² of 0.998) and precision (RSD of 9%). In conclusion, we present the proof of principle of a new high-throughput screening platform for bioanalysis based on a new automated micro-LLE method, coupled online to a commercially available nano-ESI-DI-MS.
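For reference, the two figures of merit quoted here are computed as follows: RSD is the relative standard deviation of replicate responses, and linearity is the squared correlation of a calibration series. The numbers in this sketch are illustrative, not the paper's data.

```python
import numpy as np

# Replicate peak intensities for one spiked drug (illustrative numbers)
replicates = np.array([1.02e6, 1.10e6, 0.95e6, 1.08e6, 0.99e6])
rsd = 100 * replicates.std(ddof=1) / replicates.mean()

# Calibration line: spiked concentration vs. DI-MS response
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0])        # µg/mL
resp = np.array([0.11e6, 0.52e6, 1.05e6, 5.2e6, 10.4e6])
r2 = np.corrcoef(conc, resp)[0, 1] ** 2

print(f"RSD = {rsd:.1f} %, R^2 = {r2:.3f}")
```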
Wong, Kim; Navarro, José Fernández; Bergenstråhle, Ludvig; Ståhl, Patrik L; Lundeberg, Joakim
2018-06-01
Spatial Transcriptomics (ST) is a method that combines high-resolution tissue imaging with high-throughput transcriptome sequencing data. These data must be aligned with the images for correct visualization, a process that involves several manual steps. Here we present ST Spot Detector, a web tool that automates and facilitates this alignment through a user-friendly interface. jose.fernandez.navarro@scilifelab.se. Supplementary data are available at Bioinformatics online.
Buller, David B; Young, Walter F; Bettinghaus, Erwin P; Borland, Ron; Walther, Joseph B; Helme, Donald; Andersen, Peter A; Cutter, Gary R; Maloy, Julie A
2011-01-01
A state budget shortfall defunded 10 local tobacco coalitions during a randomized trial but defunded coalitions continued to have access to 2 technical assistance Web sites. To test the ability of Web-based technology to provide technical assistance to local tobacco control coalitions. Randomized 2-group trial with local tobacco control coalitions as the unit of randomization. Local communities (ie, counties) within the State of Colorado. Leaders and members in 34 local tobacco control coalitions funded by the state health department in Colorado. Two technical assistance Web sites: A Basic Web site with text-based information and a multimedia Enhanced Web site containing learning modules, resources, and communication features. Use of the Web sites in minutes, pages, and session and evaluations of coalition functioning on coalition development, conflict resolution, leadership satisfaction, decision-making satisfaction, shared mission, personal involvement, and organization involvement in survey of leaders and members. Coalitions that were defunded but had access to the multimedia Enhanced Web site during the Fully Funded period and after defunding continued to use it (treatment group × funding status × period, F(3,714) = 3.18, P = .0234). Coalitions with access to the Basic Web site had low Web site use throughout and use by defunded coalitions was nearly zero when funding ceased. Members in defunded Basic Web site coalitions reported that their coalitions functioned worse than defunded Enhanced Web site coalitions (coalition development: group × status, F(1,360) = 4.81, P = .029; conflict resolution: group × status, F(1,306) = 5.69, P = .018; leadership satisfaction: group × status, F(1,342) = 5.69, P = .023). The Enhanced Web site may have had a protective effect on defunded coalitions. Defunded coalitions may have increased their capacity by using the Enhanced Web site when fully funded or by continuing to use the available online resources after defunding. Web-based technical assistance with online training and resources may be a good investment when future funding is not ensured.
Does bacteriology laboratory automation reduce time to results and increase quality management?
Dauwalder, O; Landrieve, L; Laurent, F; de Montclos, M; Vandenesch, F; Lina, G
2016-03-01
Due to reductions in financial and human resources, many microbiological laboratories have merged to build very large clinical microbiology laboratories, which allow the use of fully automated laboratory instruments. For clinical chemistry and haematology, automation has reduced the time to results and improved the management of laboratory quality. The aim of this review was to examine whether fully automated laboratory instruments for microbiology can reduce time to results and impact quality management. This study focused on solutions that are currently available, including the BD Kiestra™ Work Cell Automation and Total Lab Automation and the Copan WASPLab®. Copyright © 2015 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Kume, Teruyoshi; Kim, Byeong-Keuk; Waseda, Katsuhisa; Sathyanarayana, Shashidhar; Li, Wenguang; Teo, Tat-Jin; Yock, Paul G; Fitzgerald, Peter J; Honda, Yasuhiro
2013-02-01
The aim of this study was to evaluate a new fully automated lumen border tracing system based on a novel multifrequency processing algorithm. We developed the multifrequency processing method to enhance arterial lumen detection by exploiting the differential scattering characteristics of blood and arterial tissue. The implementation of the method can be integrated into current intravascular ultrasound (IVUS) hardware. This study was performed in vivo with conventional 40-MHz IVUS catheters (Atlantis SR Pro™, Boston Scientific Corp, Natick, MA) in 43 clinical patients with coronary artery disease. A total of 522 frames were randomly selected, and lumen areas were measured after automatically tracing lumen borders with the new tracing system and a commercially available tracing system (TraceAssist™) referred to as the "conventional tracing system." The data assessed by the two automated systems were compared with the results of manual tracings by experienced IVUS analysts. New automated lumen measurements showed better agreement with manual lumen area tracings compared with those of the conventional tracing system (correlation coefficient: 0.819 vs. 0.509). When compared against manual tracings, the new algorithm also demonstrated improved systematic error (mean difference: 0.13 vs. -1.02 mm²) and random variability (standard deviation of difference: 2.21 vs. 4.02 mm²) compared with the conventional tracing system. This preliminary study showed that the novel fully automated tracing system based on the multifrequency processing algorithm can provide more accurate lumen border detection than current automated tracing systems and thus, offer a more reliable quantitative evaluation of lumen geometry. Copyright © 2011 Wiley Periodicals, Inc.
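The comparison statistics reported here (correlation against manual tracing, mean difference as systematic error, SD of differences as random variability) amount to a Bland-Altman-style summary. A numpy sketch on synthetic lumen areas whose parameters merely echo the reported values:

```python
import numpy as np

rng = np.random.default_rng(3)
manual = rng.uniform(4, 12, size=522)                 # manual lumen areas, mm^2
new_auto = manual + rng.normal(0.13, 2.21, size=522)  # simulated automated tracer

def agreement(auto, ref):
    """Correlation, systematic error (bias) and random variability vs. manual."""
    diff = auto - ref
    return np.corrcoef(auto, ref)[0, 1], diff.mean(), diff.std(ddof=1)

r, bias, sd = agreement(new_auto, manual)
print(f"r = {r:.3f}, bias = {bias:.2f} mm^2, SD of difference = {sd:.2f} mm^2")
```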
Automating Web Collection and Validation of GPS data for Longitudinal Urban Travel Studies
DOT National Transportation Integrated Search
2012-08-01
Traditional paper and phone travel surveys are expensive, time consuming, and have problems of missing trips, illogical trip sequences, and : imprecise travel time. GPS-based travel surveys can avoid many of these problems and are becoming increasing...
Haug, Severin; Sullivan, Robin; Schaub, Michael Patrick
2014-01-01
Background The relationship between tobacco and cannabis use is strong. When co-smokers try to quit only one substance, this relationship often leads to a substitution effect, that is, the increased use of the remaining substance. Stopping the use of both substances simultaneously is therefore a reasonable strategy, but co-smokers rarely report feeling ready for simultaneous cessation. Thus, the question of how co-smokers can be motivated to attempt a simultaneous cessation has arisen. To reach as many co-smokers as possible, we developed brief Web-based interventions aimed at enhancing the readiness to simultaneously quit tobacco and cannabis use. Objective Our aim was to analyze the efficacy of three different Web-based interventions designed to enhance co-smokers’ readiness to stop tobacco and cannabis use simultaneously. Methods Within a randomized trial, three brief Web-based and fully automated interventions were compared. The first intervention combined the assessment of cigarette dependence and problematic cannabis use with personalized, normative feedback. The second intervention was based on principles of motivational interviewing. As an active psychoeducational control group, the third intervention merely provided information on tobacco, cannabis, and the co-use of the two substances. The readiness to quit tobacco and cannabis simultaneously was measured before and after the intervention (both online) and 8 weeks later (online or over the phone). Secondary outcomes included the frequency of cigarette and cannabis use, as measured at baseline and after 8 weeks. Results A total of 2467 website users were assessed for eligibility based on their self-reported tobacco and cannabis co-use, and 325 participants were ultimately randomized and analyzed. For the post-intervention assessment, generalized estimating equations revealed a significant increase in the readiness to quit tobacco and cannabis in the total sample (B=.33, 95% CI 0.10-0.56, P=.006). However, this effect was not significant for the comparison between baseline and the 8-week follow-up assessment (P=.69). Furthermore, no differential effects between the interventions were found, nor were any significant intervention or time effects found on the frequency of tobacco or cannabis use. Conclusions In the new field of dual interventions for co-smokers of tobacco and cannabis, Web-based interventions can increase the short-term readiness to quit tobacco and cannabis simultaneously. The studied personalized techniques were no more effective than was psychoeducation. The analyzed brief interventions did not change the secondary outcomes, that is the frequency of tobacco and cannabis use. Trial Registration International Standard Randomized Controlled Trial Number (ISRCTN): 56326375; http://www.isrctn.com/ISRCTN56326375 (Archived by WebCite at http://www.webcitation.org/6UUWBh8u0). PMID:25486674
Zbikowski, Susan M; Jack, Lisa M; McClure, Jennifer B; Deprey, Mona; Javitz, Harold S; McAfee, Timothy A; Catz, Sheryl L; Richards, Julie; Bush, Terry; Swan, Gary E
2011-05-01
Phone counseling has become standard for behavioral smoking cessation treatment. Newer options include Web and integrated phone-Web treatment. No prior research, to our knowledge, has systematically compared the effectiveness of these three treatment modalities in a randomized trial. Understanding how utilization varies by mode, the impact of utilization on outcomes, and predictors of utilization across each mode could lead to improved treatments. One thousand two hundred and two participants were randomized to phone, Web, or combined phone-Web cessation treatment. Services varied by modality and were tracked using automated systems. All participants received 12 weeks of varenicline, printed guides, an orientation call, and access to a phone supportline. Self-report data were collected at baseline and 6-month follow-up. Overall, participants utilized phone services more often than the Web-based services. Among treatment groups with Web access, a significant proportion logged in only once (37% phone-Web, 41% Web), and those in the phone-Web group logged in less often than those in the Web group (mean = 2.4 vs. 3.7, p = .0001). Use of the phone also was correlated with increased use of the Web. In multivariate analyses, greater use of the phone- or Web-based services was associated with higher cessation rates. Finally, older age and the belief that certain treatments could improve success were consistent predictors of greater utilization across groups. Other predictors varied by treatment group. Opportunities for enhancing treatment utilization exist, particularly for Web-based programs. Increasing utilization more broadly could result in better overall treatment effectiveness for all intervention modalities.
Jacobs, Niels C L; Völlink, Trijntje; Dehue, Francine; Lechner, Lilian
2014-04-24
The purpose of this article is to give an integrative insight into the theoretical and empirical-based development of the Online Pestkoppenstoppen (Stop Bullies Online/Stop Online Bullies). This intervention aims to reduce the number of cyberbully victims and their symptoms of depression and anxiety (program goal) by teaching cyberbully victims how to cope in an adequate and effective manner with cyberbully incidents (program's outcomes). In developing the program, the different steps of the Intervention Mapping protocol were systematically used. In this article we describe each step of Intervention Mapping. Sources used for the development were a literature review, a Delphi study among experts, focus group interviews with the target group, and elements from a proven effective anti-bullying program. The result is a fully automated web-based tailored intervention for cyberbully victims (12-15 years) consisting of three web-based advice sessions delivered over three months. The first advice session aims to teach participants how behavior is influenced by the thoughts they have, how to recognize and dispute irrational thoughts, and how to form rational thoughts. In the second advice session, participants learn about the way bullying emerges, how their behavior influences bullying, and how they can use effective coping strategies in order to stop (online) bullying. In the third advice session, participants receive feedback and learn how to use the Internet and mobile phones in a safe manner. Each advice session is tailored to the participant's personal characteristics (e.g., personality, self-efficacy, coping strategies used, and (ir)rational thoughts). To ensure implementation of the program after testing it for effectiveness, the intervention was pretested in the target population and an implementation plan was designed. Finally, we elaborate on the planned randomized controlled trial in which the intervention will be compared to a general information group and a waiting list control group. This evaluation will provide insight into the intervention's efficacy to reduce cyberbullying and its negative effects. Intervention Mapping is a time-consuming but profound way to ensure that each step of developing an intervention is taken, and it resulted in three web-based tailored pieces of advice that teach adolescents how to cope more effectively with cyberbullying experiences. NTR3613, 14-09-2012.
CerebralWeb: a Cytoscape.js plug-in to visualize networks stratified by subcellular localization.
Frias, Silvia; Bryan, Kenneth; Brinkman, Fiona S L; Lynn, David J
2015-01-01
CerebralWeb is a light-weight JavaScript plug-in that extends Cytoscape.js to enable fast and interactive visualization of molecular interaction networks stratified based on subcellular localization or other user-supplied annotation. The application is designed to be easily integrated into any website and is configurable to support customized network visualization. CerebralWeb also supports the automatic retrieval of Cerebral-compatible localizations for human, mouse and bovine genes via a web service and enables the automated parsing of Cytoscape compatible XGMML network files. CerebralWeb currently supports embedded network visualization on the InnateDB (www.innatedb.com) and Allergy and Asthma Portal (allergen.innatedb.com) database and analysis resources. Database tool URL: http://www.innatedb.com/CerebralWeb © The Author(s) 2015. Published by Oxford University Press.
Web-Based Interface for Command and Control of Network Sensors
NASA Technical Reports Server (NTRS)
Wallick, Michael N.; Doubleday, Joshua R.; Shams, Khawaja S.
2010-01-01
This software allows for the visualization and control of a network of sensors through a Web browser interface. It is currently being deployed for a network of sensors monitoring Mt. Saint Helens volcano; however, this innovation is generic enough that it can be deployed for any type of sensor Web. From this interface, the user is able to fully control and monitor the sensor Web. This includes, but is not limited to, sending "test" commands to individual sensors in the network, monitoring for real-world events, and reacting to those events.
Improving Conceptual Design for Launch Vehicles
NASA Technical Reports Server (NTRS)
Olds, John R.
1998-01-01
This report summarizes activities performed during the second year of a three-year cooperative agreement between NASA - Langley Research Center and Georgia Tech. Year 1 of the project resulted in the creation of a new Cost and Business Assessment Model (CABAM) for estimating the economic performance of advanced reusable launch vehicles, including non-recurring costs, recurring costs, and revenue. The current year (second year) activities were focused on the evaluation of automated, collaborative design frameworks (computation architectures or computational frameworks) for automating the design process in advanced space vehicle design. Consistent with NASA's new thrust area in developing and understanding Intelligent Synthesis Environments (ISE), the goals of this year's research efforts were to develop and apply computer integration techniques and near-term computational frameworks for conducting advanced space vehicle design. NASA - Langley (VAB) has taken a lead role in developing a web-based computing architecture within which the designer can interact with disciplinary analysis tools through a flexible web interface. The advantages of this approach are: 1) flexible access to the designer interface through a simple web browser (e.g., Netscape Navigator), 2) the ability to include existing 'legacy' codes, and 3) the ability to include distributed analysis tools running on remote computers. To date, VAB's internal emphasis has been on developing this test system for the planetary entry mission under the joint Integrated Design System (IDS) program with NASA - Ames and JPL. Georgia Tech's complementary goals this year were to: 1) examine an alternate 'custom' computational architecture for the three-discipline IDS planetary entry problem to assess the advantages and disadvantages relative to the web-based approach, and 2) develop and examine a web-based interface and framework for a typical launch vehicle design problem.
NASA Astrophysics Data System (ADS)
Jacobsen, Jurma; Edlich, Stefan
2009-02-01
There is a broad range of potentially useful mobile location-based applications. One crucial point seems to be making them available to the public at large. This case illuminates the ability of Android - the operating system for mobile devices - to fulfill this demand in the mashup way, by using special geocoding web services and one integrated web service for retrieving data on the nearest cash machines. It shows an exemplary approach for building mobile location-based mashups for everyone: 1. As a basis for reaching as many people as possible, the open-source Android OS is assumed to spread widely. 2. Everyone also means that the handset does not have to be an expensive GPS device. This is realized by re-utilizing the existing GSM infrastructure with the Cell of Origin (COO) method, which looks up the CellID in one of the growing number of web-available CellID databases. Some of these databases are still undocumented and not yet published. Furthermore, the Google Maps API for Mobile (GMM) and the open-source counterpart OpenCellID are used. Localizing the user's current position via a lookup of the closest cell to which the handset is currently connected (COO) is not as precise as GPS, but appears to be sufficient for many applications. For this reason, the GPS user is the most pleased one - for this user the system is fully automated. Users without a GPS handset can instead refine their location with one click on the map inside the determined circular region. The users are then shown, and guided by a path to, the nearest cash machine through an overlay on the integrated Google Maps API. Additionally, the GPS user can keep track of him- or herself via a frequently updated view based on constantly requested precise GPS position data.
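The COO lookup itself is a single HTTP call against a CellID database. A sketch against OpenCellID follows; the endpoint and parameter names are written from memory of the OpenCellID documentation and should be treated as assumptions to verify, and an API key is required.

```python
import json, urllib.parse, urllib.request

def cell_to_position(mcc, mnc, lac, cellid, api_key):
    """Resolve a serving GSM cell to an approximate lat/lon via OpenCellID.

    Endpoint and parameter names are assumptions modeled on the OpenCellID
    docs; verify against the current API before relying on them.
    """
    params = urllib.parse.urlencode({"key": api_key, "mcc": mcc, "mnc": mnc,
                                     "lac": lac, "cellid": cellid,
                                     "format": "json"})
    with urllib.request.urlopen(f"https://opencellid.org/cell/get?{params}") as r:
        cell = json.load(r)
    # Centre of the circular region plus its radius in metres
    return cell["lat"], cell["lon"], cell.get("range")

# lat, lon, radius = cell_to_position(262, 2, 433, 3461, "YOUR_KEY")
```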
Bilimoria, Karl Y; Kmiecik, Thomas E; DaRosa, Debra A; Halverson, Amy; Eskandari, Mark K; Bell, Richard H; Soper, Nathaniel J; Wayne, Jeffrey D
2009-04-01
To design a Web-based system to track adverse and near-miss events, to establish an automated method to identify patterns of events, and to assess the adverse event reporting behavior of physicians. A Web-based system was designed to collect physician-reported adverse events, including weekly Morbidity and Mortality (M&M) entries and anonymous adverse/near-miss events. An automated system was set up to help identify event patterns. Adverse event frequency was compared with hospital databases to assess reporting completeness. The setting was a metropolitan tertiary care center, and the main outcome measures were identification of adverse event patterns and completeness of reporting. From September 2005 to August 2007, 15,524 surgical patients were reported, including 957 (6.2%) adverse events and 34 (0.2%) anonymous reports. The automated pattern recognition system helped identify 4 event patterns from M&M reports and 3 patterns from anonymous/near-miss reporting. After multidisciplinary meetings and expert reviews, the patterns were addressed with educational initiatives, correction of systems issues, and/or intensive quality monitoring. Only 25% of complications and 42% of inpatient deaths were reported. A total of 75.2% of adverse events resulting in permanent disability or death were attributed to the nature of the disease. Interventions to improve reporting were largely unsuccessful. We have developed a user-friendly Web-based system to track complications and identify patterns of adverse events. Underreporting of adverse events and attributing the complication to the nature of the disease represent a problem in the reporting culture among surgeons at our institution. Similar systems should be used by surgery departments, particularly those affiliated with teaching hospitals, to identify quality improvement opportunities.
Fully automated segmentation of callus by micro-CT compared to biomechanics.
Bissinger, Oliver; Götz, Carolin; Wolff, Klaus-Dietrich; Hapfelmeier, Alexander; Prodinger, Peter Michael; Tischer, Thomas
2017-07-11
A high percentage of closed femur fractures have slight comminution. Using micro-CT (μCT), multiple fragment segmentation is much more difficult than segmentation of unfractured or osteotomied bone. Manual or semi-automated segmentation has been performed to date. However, such segmentation is extremely laborious, time-consuming and error-prone. Our aim was therefore to apply a fully automated segmentation algorithm to determine μCT parameters and examine their association with biomechanics. The femora of 64 rats, randomised to medication that was either inhibitory or neutral with respect to fracture healing, or to control groups, were subjected to closed fracture after insertion of a Kirschner wire. After 21 days, μCT parameters were determined by the fully automated method and correlated with biomechanical parameters (Pearson's correlation). The fully automated segmentation algorithm automatically detected bone and simultaneously separated cortical bone from callus without requiring ROI selection for each single bony structure. We found an association of the structural callus parameters obtained by μCT with the biomechanical properties. However, results were only explicable by additionally considering the callus location. A large number of slightly comminuted fractures in combination with therapies that influence the callus qualitatively and/or quantitatively considerably affects the association between μCT and biomechanics. In the future, contrast-enhanced μCT imaging of the callus cartilage might provide more information to improve the non-destructive and non-invasive prediction of callus mechanical properties. As studies evaluating such important drugs increase, fully automated segmentation appears to be clinically important.
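The reported association analysis reduces to correlating each μCT-derived callus parameter with a biomechanical outcome. A minimal sketch of that step, with hypothetical per-animal values standing in for the study's measurements:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-animal values: a structural callus parameter from the
# fully automated uCT segmentation vs. a biomechanical outcome (e.g. peak torque).
callus_volume = np.array([21.3, 18.7, 25.1, 30.2, 16.9, 27.4])  # mm^3, illustrative
peak_torque   = np.array([0.41, 0.35, 0.52, 0.61, 0.30, 0.55])  # N*m, illustrative

r, p = pearsonr(callus_volume, peak_torque)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```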
Designs and concept reliance of a fully automated high-content screening platform.
Radu, Constantin; Adrar, Hosna Sana; Alamir, Ab; Hatherley, Ian; Trinh, Trung; Djaballah, Hakim
2012-10-01
High-content screening (HCS) is becoming an accepted platform in academic and industry screening labs and does require slightly different logistics for execution. To automate our stand-alone HCS microscopes, namely, an alpha IN Cell Analyzer 3000 (INCA3000), originally a Praelux unit hooked to a Hudson Plate Crane with a maximum capacity of 50 plates per run, and the IN Cell Analyzer 2000 (INCA2000), in which up to 320 plates could be fed per run using the Thermo Fisher Scientific Orbitor, we opted for a 4 m linear track system harboring both microscopes, plate washer, bulk dispensers, and a high-capacity incubator allowing us to perform both live and fixed cell-based assays while accessing both microscopes on deck. Considerations in design were given to the integration of the alpha INCA3000, a new gripper concept to access the onboard nest, and peripheral locations on deck to ensure a self-reliant system capable of achieving higher throughput. The resulting system, referred to as Hestia, has been fully operational since the new year, has an onboard capacity of 504 plates, and harbors the only fully automated alpha INCA3000 unit in the world.
Evaluating Web accessibility at different processing phases
NASA Astrophysics Data System (ADS)
Fernandes, N.; Lopes, R.; Carriço, L.
2012-09-01
Modern Web sites use several techniques (e.g. DOM manipulation) that allow for the injection of new content into their Web pages (e.g. AJAX), as well as manipulation of the HTML DOM tree. This has the consequence that the Web pages that are presented to users (i.e. after browser processing) are different from the original structure and content that is transmitted through HTTP communication (i.e. before browser processing). This poses a series of challenges for Web accessibility evaluation, especially for automated evaluation software. This article details an experimental study designed to understand the differences posed by accessibility evaluation after Web browser processing. We implemented a JavaScript-based evaluator, QualWeb, that can perform WCAG 2.0-based accessibility evaluations in the two phases of browser processing. Our study shows that, in fact, there are considerable differences between the HTML DOM trees in both phases, which consequently yield distinct evaluation results. We discuss the impact of these results in the light of the potential problems that these differences can pose to designers and developers who use accessibility evaluators that function before browser processing.
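The two evaluation phases can be made concrete by comparing the HTML delivered over HTTP with the DOM after the browser has executed scripts. The sketch below uses requests and Selenium with headless Chrome as an illustrative stand-in (QualWeb itself is JavaScript-based, and the URL is hypothetical):

```python
import requests
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

URL = "https://example.org"  # hypothetical page under evaluation

# Phase 1: the markup transmitted through HTTP (before browser processing).
raw_html = requests.get(URL, timeout=10).text

# Phase 2: the DOM after browser processing (scripts run, AJAX content injected).
opts = Options()
opts.add_argument("--headless")
driver = webdriver.Chrome(options=opts)
driver.get(URL)
rendered_html = driver.execute_script(
    "return document.documentElement.outerHTML")
driver.quit()

# An accessibility check run on raw_html alone may miss, or wrongly flag,
# content that only exists in rendered_html.
print(len(raw_html), "bytes before processing,",
      len(rendered_html), "bytes after processing")
```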
Kanera, Iris Maria; Willems, Roy A; Bolman, Catherine A W; Mesters, Ilse; Zambon, Victor; Gijsen, Brigitte Cm; Lechner, Lilian
2016-08-23
A fully automated computer-tailored Web-based self-management intervention, Kanker Nazorg Wijzer (KNW [Cancer Aftercare Guide]), was developed to support early cancer survivors to adequately cope with psychosocial complaints and to promote a healthy lifestyle. The KNW self-management training modules target the following topics: return to work, fatigue, anxiety and depression, relationships, physical activity, diet, and smoking cessation. Participants were guided to relevant modules by personalized module referral advice that was based on participants' current complaints and identified needs. The aim of this study was to evaluate the adherence to the module referral advice, examine the KNW module use and its predictors, and describe the appreciation of the KNW and its predictors. Additionally, we explored predictors of personal relevance. This process evaluation was conducted as part of a randomized controlled trial. Early cancer survivors with various types of cancer were recruited from 21 Dutch hospitals. Data from online self-report questionnaires and logging data were analyzed from participants allocated to the intervention condition. Chi-square tests were applied to assess the adherence to the module referral advice, negative binomial regression analysis was used to identify predictors of module use, multiple linear regression analysis was applied to identify predictors of the appreciation, and ordered logistic regression analysis was conducted to explore possible predictors of perceived personal relevance. Of the respondents (N=231; mean age 55.6, SD 11.5; 79.2% female [183/231]), 98.3% (227/231) were referred to one or more KNW modules (mean 2.9, SD 1.5), and 85.7% (198/231) of participants visited at least one module (mean 2.1, SD 1.6). Significant positive associations were found between the referral to specific modules (range 1-7) and the use of the corresponding modules. The likelihood of visiting modules was higher when respondents were referred to those modules by the module referral advice. A predictor of visiting a higher number of modules was a higher number of referrals by the module referral advice (β=.136, P=.009), and having a partner was significantly related with a lower number of modules used (β=-.256, P=.044). Overall appreciation was high (mean 7.5, SD 1.2; scale 1-10) and was significantly predicted by a higher perceived personal relevance (β=.623, P<.001). None of the demographic and cancer-related characteristics significantly predicted the perceived personal relevance. The KNW in general, and the KNW modules more specifically, were well used and highly appreciated by early cancer survivors. Indications were found that the module referral advice might be a meaningful intervention component to guide users toward a preferred selection of modules. These results indicate that the fully automated Web-based KNW provides personally relevant and valuable information and support for early cancer survivors. Therefore, this intervention can complement usual cancer aftercare and may serve as a first step in a stepped-care approach. Nederlands Trial Register: NTR3375; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=3375 (Archived by WebCite at http://www.webcitation.org/6jo4jO7kb).
A model-driven privacy compliance decision support for medical data sharing in Europe.
Boussi Rahmouni, H; Solomonides, T; Casassa Mont, M; Shiu, S; Rahmouni, M
2011-01-01
Clinical practitioners and medical researchers often have to share health data with other colleagues across Europe. Privacy compliance in this context is very important but challenging. Automated privacy guidelines are a practical way of increasing users' awareness of privacy obligations and help eliminate unintentional breaches of privacy. In this paper we present an ontology-plus-rules based approach to privacy decision support for the sharing of patient data across European platforms. We use ontologies to model the required domain and context information about data sharing and privacy requirements. In addition, we use a set of Semantic Web Rule Language rules to reason about legal privacy requirements that are applicable to a specific context of data disclosure. The complete set is made invocable through a semantic web application that acts as an interactive privacy guideline system and can invoke the full model to provide decision support. When asked, the system will generate privacy reports applicable to a specific case of data disclosure described by the user. Reports showing guidelines per Member State may also be obtained. The advantage of this approach lies in the expressiveness and extensibility of the modelling and inference languages adopted and the ability they confer to reason with complex requirements interpreted from high-level regulations. However, the system cannot at this stage fully simulate the role of an ethics committee or review board.
Deep machine learning provides state-of-the-art performance in image-based plant phenotyping.
Pound, Michael P; Atkinson, Jonathan A; Townsend, Alexandra J; Wilson, Michael H; Griffiths, Marcus; Jackson, Aaron S; Bulat, Adrian; Tzimiropoulos, Georgios; Wells, Darren M; Murchie, Erik H; Pridmore, Tony P; French, Andrew P
2017-10-01
In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets. © The Authors 2017. Published by Oxford University Press.
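At its core, a deep network for a task like root/shoot feature identification stacks convolutional layers and a classifier head. Below is a minimal PyTorch sketch of such a classifier; the architecture, patch size, and class count are illustrative assumptions, not the authors' network:

```python
import torch
import torch.nn as nn

class PhenotypeNet(nn.Module):
    """Tiny CNN classifier: several conv blocks, then a fully connected head."""
    def __init__(self, n_classes=2):  # e.g. root-tip vs. background patches
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):            # x: (batch, 3, 64, 64) image patches
        return self.head(self.features(x))

model = PhenotypeNet()
logits = model(torch.randn(4, 3, 64, 64))  # 4 hypothetical image patches
print(logits.shape)                        # torch.Size([4, 2])
```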
Massive stereo-based DTM production for Mars on cloud computers
NASA Astrophysics Data System (ADS)
Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Xiong, Si-Ting; Putri, A. R. D.; Walter, S. H. G.; Veitch-Michaelis, J.; Yershov, V.
2018-05-01
Digital Terrain Model (DTM) creation is essential to improving our understanding of the formation processes of the Martian surface. Although there have been previous demonstrations of open-source or commercial planetary 3D reconstruction software, planetary scientists are still struggling to create good quality DTMs that meet their science needs, especially when a large number of high quality DTMs must be produced using "free" software. In this paper, we describe a new open source system that overcomes many of these obstacles, demonstrating results in the context of issues found from experience with several planetary DTM pipelines. We introduce a new fully automated multi-resolution DTM processing chain for NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) stereo processing, called the Co-registration Ames Stereo Pipeline (ASP) Gotcha Optimised (CASP-GO), based on the open source NASA ASP. CASP-GO employs tie-point based multi-resolution image co-registration, and Gotcha sub-pixel refinement and densification. The CASP-GO pipeline is used to produce planet-wide CTX and HiRISE DTMs that guarantee global geo-referencing compliance with respect to the High Resolution Stereo Camera (HRSC), and thence to the Mars Orbiter Laser Altimeter (MOLA), providing refined stereo matching completeness and accuracy. All software and good quality products introduced in this paper are being made open-source to the planetary science community through collaboration with NASA Ames, the United States Geological Survey (USGS) and the Jet Propulsion Laboratory (JPL) Advanced Multi-Mission Operations System (AMMOS) Planetary Data System (PDS) Pipeline Service (APPS-PDS4), as well as browseable and visualisable through the iMars web-based Geographic Information System (webGIS).
From Science to e-Science to Semantic e-Science: A Heliophysics Case Study
NASA Technical Reports Server (NTRS)
Narock, Thomas; Fox, Peter
2011-01-01
The past few years have witnessed unparalleled efforts to make scientific data web accessible. The Semantic Web has proven invaluable in this effort; however, much of the literature is devoted to system design, ontology creation, and trials and tribulations of current technologies. In order to fully develop the nascent field of Semantic e-Science we must also evaluate systems in real-world settings. We describe a case study within the field of Heliophysics and provide a comparison of the evolutionary stages of data discovery, from manual to semantically enabled. We describe the socio-technical implications of moving toward automated and intelligent data discovery. In doing so, we highlight how this process enhances what is currently being done manually in various scientific disciplines. Our case study illustrates that Semantic e-Science is more than just semantic search. The integration of search with web services, relational databases, and other cyberinfrastructure is a central tenet of our case study and one that we believe has applicability as a generalized research area within Semantic e-Science. This case study illustrates a specific example of the benefits, and limitations, of semantically replicating data discovery. We show examples of significant reductions in time and effort enabled by Semantic e-Science; yet, we argue that a "complete" solution requires integrating semantic search with other research areas such as data provenance and web services.
Svetnik, Vladimir; Ma, Junshui; Soper, Keith A.; Doran, Scott; Renger, John J.; Deacon, Steve; Koblan, Ken S.
2007-01-01
Objective: To evaluate the performance of 2 automated systems, Morpheus and Somnolyzer24X7, with various levels of human review/editing, in scoring polysomnographic (PSG) recordings from a clinical trial using zolpidem in a model of transient insomnia. Methods: 164 all-night PSG recordings from 82 subjects collected during 2 nights of sleep, one under placebo and one under zolpidem (10 mg) treatment, were used. For each recording, 6 different methods were used to provide sleep stage scores based on Rechtschaffen & Kales criteria: 1) full manual scoring, 2) automated scoring by Morpheus, 3) automated scoring by Somnolyzer24X7, 4) automated scoring by Morpheus with full manual review, 5) automated scoring by Morpheus with partial manual review, 6) automated scoring by Somnolyzer24X7 with partial manual review. Ten traditional clinical efficacy measures of sleep initiation, maintenance, and architecture were calculated. Results: Pair-wise epoch-by-epoch agreements between fully automated and manual scores were in the range of intersite manual scoring agreements reported in the literature (70%-72%). Pair-wise epoch-by-epoch agreements between manually reviewed automated scores were higher (73%-76%). The direction and statistical significance of treatment effect sizes using traditional efficacy endpoints were essentially the same whichever method was used. As the degree of manual review increased, the magnitude of the effect size approached those estimated with fully manual scoring. Conclusion: Automated or semi-automated sleep PSG scoring offers a valuable alternative to manual scoring, which is costly, time-consuming, and subject to intrasite and intersite variability, especially in large multicenter clinical trials. Reduction in scoring variability may also reduce the sample size of a clinical trial. Citation: Svetnik V; Ma J; Soper KA; Doran S; Renger JJ; Deacon S; Koblan KS. Evaluation of automated and semi-automated scoring of polysomnographic recordings from a clinical trial using zolpidem in the treatment of insomnia. SLEEP 2007;30(11):1562-1574. PMID:18041489
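Epoch-by-epoch agreement of the kind reported here is the fraction of 30-second epochs on which two scorings assign the same stage; Cohen's kappa corrects that figure for chance agreement. A short sketch with hypothetical stage sequences:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical stage labels per 30-s epoch (R&K stages: W, 1, 2, 3, 4, REM).
manual    = np.array(["W", "1", "2", "2", "3", "2", "REM", "REM", "2", "W"])
automated = np.array(["W", "2", "2", "2", "3", "2", "REM", "2",   "2", "W"])

agreement = np.mean(manual == automated)        # epoch-by-epoch agreement
kappa = cohen_kappa_score(manual, automated)    # chance-corrected agreement
print(f"agreement = {agreement:.0%}, kappa = {kappa:.2f}")
```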
ERIC Educational Resources Information Center
Snyder, Robin M.
HTML provides a platform-independent way of creating and making multimedia presentations for classroom instruction and making that content available on the Internet. However, time in class is very valuable, so that any way to automate or otherwise assist the presenter in Web page navigation during class can save valuable seconds. This paper…
DoD Application Store: Enabling C2 Agility?
2014-06-01
The envisioned DoD Marketplace within the Ozone Widget Framework will include automated delivery of software patches, web applications, widgets, and mobile application packages. DoD has started to make inroads within this environment with several Programs of Record (PoR) embracing widgets and other mobile applications to meet current needs.
The automation of an inlet mass flow control system
NASA Technical Reports Server (NTRS)
Supplee, Frank; Tcheng, Ping; Weisenborn, Michael
1989-01-01
The automation of a closed-loop computer controlled system for the inlet mass flow system (IMFS) developed for a wind tunnel facility at Langley Research Center is presented. This new PC-based control system is intended to replace the manual control system presently in use in order to fully automate the plug positioning of the IMFS during wind tunnel testing. Provision is also made for communication between the PC and a host computer in order to allow total automation of the plug positioning and data acquisition during the complete sequence of predetermined plug locations. As extensive running time is programmed for the IMFS, this new automated system will save both manpower and tunnel running time.
NMR-based automated protein structure determination.
Würz, Julia M; Kazemi, Sina; Schmidt, Elena; Bagaria, Anurag; Güntert, Peter
2017-08-15
NMR spectra analysis for protein structure determination can now in many cases be performed by automated computational methods. This overview of the computational methods for NMR protein structure analysis presents recent automated methods for signal identification in multidimensional NMR spectra, sequence-specific resonance assignment, collection of conformational restraints, and structure calculation, as implemented in the CYANA software package. These algorithms are sufficiently reliable and integrated into one software package to enable the fully automated structure determination of proteins starting from NMR spectra without manual interventions or corrections at intermediate steps, with an accuracy of 1-2 Å backbone RMSD in comparison with manually solved reference structures. Copyright © 2017 Elsevier Inc. All rights reserved.
Automated Test Requirement Document Generation
1987-11-01
"Diagnostics Based on the Principles of Artificial Intelligence", 1984 International Test Conference, 01 Oct 84. Glossary of acronyms: AFSATCOM (Air Force Satellite Communication), AI (Artificial Intelligence), ASIC (Application Specific Integrated Circuit). Built-In-Test Equipment (BITE) and AI (Artificial Intelligence) Expert Systems need to be fully applied before a completely automated process can be achieved.
galaxie--CGI scripts for sequence identification through automated phylogenetic analysis.
Nilsson, R Henrik; Larsson, Karl-Henrik; Ursing, Björn M
2004-06-12
The prevalent use of similarity searches like BLAST to identify sequences and species implicitly assumes the reference database to be of extensive sequence sampling. This is often not the case, restraining the correctness of the outcome as a basis for sequence identification. Phylogenetic inference outperforms similarity searches in retrieving correct phylogenies and consequently sequence identities, and a project was initiated to design a freely available script package for sequence identification through automated Web-based phylogenetic analysis. Three CGI scripts were designed to facilitate qualified sequence identification from a Web interface. Query sequences are aligned to pre-made alignments or to alignments made by ClustalW with entries retrieved from a BLAST search. The subsequent phylogenetic analysis is based on the PHYLIP package for inferring neighbor-joining and parsimony trees. The scripts are highly configurable. A service installation and a version for local use are found at http://andromeda.botany.gu.se/galaxiewelcome.html and http://galaxie.cgb.ki.se
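The core of such a pipeline, alignment in, neighbor-joining tree out, can be reproduced in a few lines with Biopython; this is an illustrative stand-in (the galaxie scripts wrap ClustalW and PHYLIP rather than Biopython), and the alignment file name is hypothetical:

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# 'query_plus_refs.fasta' is a hypothetical alignment of the query sequence
# with reference entries retrieved via a BLAST search.
aln = AlignIO.read("query_plus_refs.fasta", "fasta")

calculator = DistanceCalculator("identity")              # pairwise distances
constructor = DistanceTreeConstructor(calculator, "nj")  # neighbor-joining
tree = constructor.build_tree(aln)

# The query's placement among named reference sequences suggests its identity.
Phylo.draw_ascii(tree)
```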
A conceptual model of the automated credibility assessment of the volunteered geographic information
NASA Astrophysics Data System (ADS)
Idris, N. H.; Jackson, M. J.; Ishak, M. H. I.
2014-02-01
The use of Volunteered Geographic Information (VGI) in collecting, sharing and disseminating geospatially referenced information on the Web is increasingly common. The potential of this localized and collective information has been seen to complement the maintenance process of authoritative mapping data sources and to help realize the development of Digital Earth. The main barrier to the use of this data in supporting this bottom-up approach is the credibility (trust), completeness, accuracy, and quality of both the data input and the outputs generated. The only feasible approach to assess these data is by relying on an automated process. This paper describes a conceptual model of indicators (parameters) and practical approaches to automatically assess the credibility of information contributed through VGI, including map mashups, Geo Web and crowd-sourced applications. There are two main components proposed to be assessed in the conceptual model - metadata and data. The metadata component comprises indicators for the hosting websites and the sources of the data/information. The data component comprises indicators to assess absolute and relative data positioning, attribute, thematic, temporal and geometric correctness and consistency. This paper suggests approaches to assess the components. To assess the metadata component, automated text categorization using supervised machine learning is proposed. To assess the correctness and consistency in the data component, we suggest a matching validation approach using current emerging technologies from Linked Data infrastructures and third-party review validation. This study contributes to the research domain that focuses on the credibility, trust and quality issues of data contributed by web citizen providers.
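The suggested metadata assessment, automated text categorization with supervised machine learning, maps directly onto a standard TF-IDF-plus-classifier pipeline. A sketch with hypothetical labels; the model and features are illustrative, not the authors' specification:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: textual metadata of hosting sites, labeled
# by a human assessor as credible (1) or not (0).
site_descriptions = [
    "national mapping agency open data portal, documented update cycle",
    "anonymous blog aggregating unverified user submissions",
    "university research group publishing surveyed GPS tracks",
    "link farm with scraped placenames and no source attribution",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(site_descriptions, labels)

print(clf.predict(["volunteer community mapping project with peer review"]))
```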
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Mathew; Bowen, Brian; Coles, Dwight
The Middleware Automated Deployment Utilities consists the these three components: MAD: Utility designed to automate the deployment of java applications to multiple java application servers. The product contains a front end web utility and backend deployment scripts. MAR: Web front end to maintain and update the components inside database. MWR-Encrypt: Web utility to convert a text string to an encrypted string that is used by the Oracle Weblogic application server. The encryption is done using the built in functions if the Oracle Weblogic product and is mainly used to create an encrypted version of a database password.
"Ordinary People Do This": Rhetorical Examinations of Novice Web Design
ERIC Educational Resources Information Center
Karper, Erin
2005-01-01
Even as weblogs, content management systems, and other forms of automated Web posting and journals are changing the way people create and place content on the Web, new Web pages mushroom overnight. However, many new Web designers produce Web pages that seem to ignore fundamental principles of "good design": full of colored backgrounds, animated…
Srinivasan, Pratul P.; Kim, Leo A.; Mettu, Priyatham S.; Cousins, Scott W.; Comer, Grant M.; Izatt, Joseph A.; Farsiu, Sina
2014-01-01
We present a novel fully automated algorithm for the detection of retinal diseases via optical coherence tomography (OCT) imaging. Our algorithm utilizes multiscale histograms of oriented gradient descriptors as feature vectors of a support vector machine based classifier. The spectral domain OCT data sets used for cross-validation consisted of volumetric scans acquired from 45 subjects: 15 normal subjects, 15 patients with dry age-related macular degeneration (AMD), and 15 patients with diabetic macular edema (DME). Our classifier correctly identified 100% of cases with AMD, 100% cases with DME, and 86.67% cases of normal subjects. This algorithm is a potentially impactful tool for the remote diagnosis of ophthalmic diseases. PMID:25360373
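The described classifier, multiscale HOG descriptors feeding an SVM, can be sketched with scikit-image and scikit-learn. Scales, HOG parameters, and the synthetic inputs below are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def multiscale_hog(img, scales=(128, 64)):
    """Concatenate HOG descriptors computed at several image scales."""
    feats = []
    for s in scales:
        small = resize(img, (s, s), anti_aliasing=True)
        feats.append(hog(small, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)))
    return np.concatenate(feats)

# Hypothetical stand-ins for OCT B-scans of the three classes
# (normal / AMD / DME); random arrays used here only for shape-checking.
rng = np.random.default_rng(0)
X = np.stack([multiscale_hog(rng.random((256, 256))) for _ in range(9)])
y = [0, 1, 2] * 3

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X[:3]))
```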
Software-supported USER cloning strategies for site-directed mutagenesis and DNA assembly.
Genee, Hans Jasper; Bonde, Mads Tvillinggaard; Bagger, Frederik Otzen; Jespersen, Jakob Berg; Sommer, Morten O A; Wernersson, Rasmus; Olsen, Lars Rønn
2015-03-20
USER cloning is a fast and versatile method for engineering of plasmid DNA. We have developed a user-friendly Web server tool that automates the design of optimal PCR primers for several distinct USER cloning-based applications. Our Web server, named AMUSER (Automated DNA Modifications with USER cloning), facilitates DNA assembly and introduction of virtually any type of site-directed mutagenesis by designing optimal PCR primers for the desired genetic changes. To demonstrate the utility, we designed primers for a simultaneous two-position site-directed mutagenesis of green fluorescent protein (GFP) to yellow fluorescent protein (YFP), which in a single-step reaction resulted in a 94% cloning efficiency. AMUSER also supports degenerate nucleotide primers, single insert combinatorial assembly, and flexible parameters for PCR amplification. AMUSER is freely available online at http://www.cbs.dtu.dk/services/AMUSER/.
Automated Rocket Propulsion Test Management
NASA Technical Reports Server (NTRS)
Walters, Ian; Nelson, Cheryl; Jones, Helene
2007-01-01
The Rocket Propulsion Test-Automated Management System provides a central location for managing activities associated with the Rocket Propulsion Test Management Board, the National Rocket Propulsion Test Alliance, and Senior Steering Group business management activities. A set of authorized users, both on-site and off-site with regard to Stennis Space Center (SSC), can access the system through a Web interface. Web-based forms are used for user input, with generation and electronic distribution of reports easily accessible. Major functions managed by this software include meeting agenda management, meeting minutes, action requests, action items, directives, and recommendations. Additional functions include electronic review, approval, and signatures. A repository/library of documents is available to users, and all items are tracked in the system by unique identification numbers and status (open, closed, percent complete, etc.). The system also provides queries and version control for input of all items.
NASA Technical Reports Server (NTRS)
Sheffner, E. J.; Hlavka, C. A.; Bauer, E. M.
1984-01-01
Two techniques have been developed for the mapping and area estimation of small grains in California from Landsat digital data. The two techniques are Band Ratio Thresholding, a semi-automated version of a manual procedure, and LCLS, a layered classification technique which can be fully automated and is based on established clustering and classification technology. Preliminary evaluation results indicate that the two techniques have potential for providing map products that can be incorporated into existing inventory procedures, and for providing automated alternatives to traditional inventory techniques, including those that currently employ Landsat imagery.
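Band Ratio Thresholding, as the name suggests, labels a pixel by comparing the ratio of two spectral bands against a cutoff. A minimal numpy sketch; the band choice and threshold are illustrative assumptions:

```python
import numpy as np

def band_ratio_mask(band_a, band_b, threshold=1.4, eps=1e-6):
    """Binary small-grains mask: ratio of two Landsat bands vs. a cutoff.

    band_a, band_b: 2-D arrays of digital numbers for two spectral bands.
    The threshold value here is purely illustrative.
    """
    ratio = band_a.astype(float) / (band_b.astype(float) + eps)
    return ratio > threshold

# Hypothetical 3x3 scene: near-infrared and red digital numbers.
nir = np.array([[80, 95, 40], [88, 92, 35], [30, 33, 37]], dtype=float)
red = np.array([[50, 52, 45], [51, 49, 44], [42, 41, 43]], dtype=float)
mask = band_ratio_mask(nir, red)
print(mask.astype(int))
# Area estimate: pixel count times per-pixel ground area.
print(mask.sum() * 0.09, "hectares (assuming 30 m Landsat pixels)")
```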
Web technologies for rapid assessment of pollution of the atmosphere of the industrial city
NASA Astrophysics Data System (ADS)
Shaparev, N.; Tokarev, A.; Yakubailik, O.; Soldatov, A.
2018-05-01
The functionality, architectural features, and user interface of the geoinformation web system for environmental monitoring of Krasnoyarsk are discussed. The system is built on a service-oriented architecture. Data collection from the automated atmospheric air monitoring stations has been implemented. An original device to measure the level of contamination of the atmosphere by fine dust PM2.5 has been developed. Assessment of the level of air pollution is based on the AQI air quality index.
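An AQI value for a pollutant such as PM2.5 is obtained by linear interpolation between breakpoint concentrations. The sketch below uses US EPA-style PM2.5 breakpoints; the abstract does not specify the Krasnoyarsk system's exact index definition, so these values are an assumption:

```python
# (low_conc, high_conc, low_index, high_index) per category, ug/m^3, 24-h PM2.5.
# Breakpoints follow the US EPA convention; illustrative, not the paper's table.
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50), (12.1, 35.4, 51, 100), (35.5, 55.4, 101, 150),
    (55.5, 150.4, 151, 200), (150.5, 250.4, 201, 300),
    (250.5, 350.4, 301, 400), (350.5, 500.4, 401, 500),
]

def pm25_aqi(conc):
    """Linear interpolation: AQI = (Ihi-Ilo)/(Chi-Clo)*(C-Clo) + Ilo."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    raise ValueError("concentration outside the breakpoint table")

print(pm25_aqi(8.0))    # falls in the "good" range
print(pm25_aqi(40.0))   # falls in the "unhealthy for sensitive groups" range
```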
Online Strategy Instruction for Integrating Dictionary Skills and Language Awareness
ERIC Educational Resources Information Center
Ranalli, Jim
2013-01-01
This paper explores the feasibility of an automated, online form of L2 strategy instruction (SI) as an alternative to conventional, classroom-based forms that rely primarily on teachers. Feasibility was evaluated by studying the effectiveness, both actual and perceived, of a five-week, online SI course designed to teach web-based dictionary skills…
Yousef Kalafi, Elham; Tan, Wooi Boon; Town, Christopher; Dhillon, Sarinder Kaur
2016-12-22
Monogeneans are flatworms (Platyhelminthes) that are primarily found on the gills and skin of fishes. Monogenean parasites have attachment appendages at their haptoral regions that help them to move about the body surface and feed on skin and gill debris. Haptoral attachment organs consist of sclerotized hard parts such as hooks, anchors and marginal hooks. Monogenean species are differentiated based on the morphological characters of their haptoral bars, anchors, marginal hooks and reproductive parts (male and female copulatory organs), as well as soft anatomical parts. The complex structure of these diagnostic organs and their overlapping in microscopic digital images are impediments to developing a fully automated identification system for monogeneans (LNCS 7666:256-263, 2012; ISDA 457-462, 2011; J Zoolog Syst Evol Res 52(2):95-99, 2013). In this study, images of hard parts of the haptoral organs such as bars and anchors are used to develop a fully automated identification technique for monogenean species by implementing image processing techniques and machine learning methods. Images of four monogenean species, namely Sinodiplectanotrema malayanus, Trianchoratus pahangensis, Metahaliotrema mizellei and Metahaliotrema sp. (undescribed), were used to develop the automated identification technique. K-nearest neighbour (KNN) was applied to classify the monogenean specimens based on the extracted features. 50% of the dataset was used for training and the other 50% for testing in system evaluation. Our approach demonstrated an overall classification accuracy of 90%. Leave-one-out (LOO) cross-validation was also used to validate the system, yielding an accuracy of 91.25%. The methods presented in this study facilitate fast and accurate fully automated classification of monogeneans at the species level. In future studies, more classes will be included in the model, the time to capture the monogenean images will be reduced, and improvements in extraction and selection of features will be implemented.
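The classification step described, k-nearest neighbour on features extracted from the haptoral hard parts, validated with leave-one-out, looks roughly like this in scikit-learn; the feature vectors are hypothetical placeholders for the image-derived shape features:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical shape features extracted from anchor/bar images
# (e.g. lengths, angles, moment invariants); four species as classes 0-3.
rng = np.random.default_rng(1)
X = rng.random((80, 12)) + np.repeat(np.arange(4), 20)[:, None] * 0.5
y = np.repeat(np.arange(4), 20)

knn = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(knn, X, y, cv=LeaveOneOut())  # one fold per specimen
print(f"LOO accuracy: {scores.mean():.1%}")
```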
Solvepol: A Reduction Pipeline for Imaging Polarimetry Data
NASA Astrophysics Data System (ADS)
Ramírez, Edgar A.; Magalhães, Antônio M.; Davidson, James W., Jr.; Pereyra, Antonio; Rubinho, Marcelo
2017-05-01
We present a new, fully automated data pipeline, Solvepol, designed to reduce and analyze polarimetric data. It has been optimized for imaging data from the Instituto de Astronomía, Geofísica e Ciências Atmosféricas (IAG) of the University of São Paulo (USP), calcite Savart prism plate-based IAGPOL polarimeter. Solvepol is also the basis of a reduction pipeline for the wide-field optical polarimeter that will execute SOUTH POL, a survey of the polarized southern sky. Solvepol was written in the Interactive Data Language (IDL) and is based on the Image Reduction and Analysis Facility (IRAF) task PCCDPACK, developed by our polarimetry group. We present and discuss reduced data from standard stars and other fields and compare these results with those obtained in the IRAF environment. Our analysis shows that Solvepol, in addition to being a fully automated pipeline, produces results consistent with those reduced by PCCDPACK and reported in the literature.
NASA Technical Reports Server (NTRS)
Stefanov, William L.
2017-01-01
The NASA Earth observations dataset obtained by humans in orbit using handheld film and digital cameras is freely accessible to the global community through the online searchable database at https://eol.jsc.nasa.gov, and offers a useful complement to traditional ground-commanded sensor data. The dataset includes imagery from the NASA Mercury (1961) through present-day International Space Station (ISS) programs, and currently totals over 2.6 million individual frames. Geographic coverage of the dataset includes land and ocean areas between approximately 52 degrees North and South latitudes, but is spatially and temporally discontinuous. The photographic dataset presents some significant impediments to immediate research, applied, and educational use: commercial RGB films and camera systems with overlapping bandpasses; use of different focal length lenses, unconstrained look angles, and variable spacecraft altitudes; and no native geolocation information. Such factors led to this dataset being underutilized by the community, but recent advances in automated and semi-automated image geolocation, image feature classification, and web-based services are adding new value to the astronaut-acquired imagery. A coupled ground software and on-orbit hardware system for the ISS is in development for planned deployment in mid-2017; this system will capture camera pose information for each astronaut photograph to allow automated, full georegistration of the data. The ground system component is currently in use to fully georeference imagery collected in response to International Disaster Charter activations, and the auto-registration procedures are being applied to the extensive historical database of imagery to add value for research and educational purposes. In parallel, machine learning techniques are being applied to automate feature identification and classification throughout the dataset, in order to build descriptive metadata that will improve search capabilities. It is expected that these value additions will increase interest in and use of the dataset by the global community.
NASA Astrophysics Data System (ADS)
Fritzsche, Klaus H.; Giesel, Frederik L.; Heimann, Tobias; Thomann, Philipp A.; Hahn, Horst K.; Pantel, Johannes; Schröder, Johannes; Essig, Marco; Meinzer, Hans-Peter
2008-03-01
Objective quantification of disease-specific neurodegenerative changes can facilitate diagnosis and therapeutic monitoring in several neuropsychiatric disorders. Reproducibility and easy-to-perform assessment are essential to ensure applicability in clinical environments. The aim of this comparative study is the evaluation of a fully automated approach that assesses atrophic changes in Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI). 21 healthy volunteers (mean age 66.2), 21 patients with MCI (66.6), and 10 patients with AD (65.1) were enrolled. Subjects underwent extensive neuropsychological testing and MRI was conducted on a 1.5 Tesla clinical scanner. Atrophic changes were measured automatically by a series of image processing steps including state-of-the-art brain mapping techniques. Results were compared with two reference approaches: a manual segmentation of the hippocampal formation and a semi-automated estimation of temporal horn volume, which is based upon interactive selection of two to six landmarks in the ventricular system. All approaches separated controls and AD patients significantly (10^-5 < p < 10^-4) and showed a slight but not significant increase of neurodegeneration for subjects with MCI compared to volunteers. The automated approach correlated significantly with the manual (r = -0.65, p < 10^-6) and semi-automated (r = -0.83, p < 10^-13) measurements. It showed high accuracy and at the same time maximized observer independence and time savings, and thus usefulness for clinical routine.
Jiang, Jiyang; Liu, Tao; Zhu, Wanlin; Koncz, Rebecca; Liu, Hao; Lee, Teresa; Sachdev, Perminder S; Wen, Wei
2018-07-01
We present 'UBO Detector', a cluster-based, fully automated pipeline for extracting and calculating variables for regions of white matter hyperintensities (WMH) (available for download at https://cheba.unsw.edu.au/group/neuroimaging-pipeline). It takes T1-weighted and fluid attenuated inversion recovery (FLAIR) scans as input, and SPM12 and FSL functions are utilised for pre-processing. The candidate clusters are then generated by FMRIB's Automated Segmentation Tool (FAST). A supervised machine learning algorithm, k-nearest neighbor (k-NN), is applied to determine whether the candidate clusters are WMH or non-WMH. UBO Detector generates both image and text (volumes and the number of WMH clusters) outputs for whole brain, periventricular, deep, and lobar WMH, as well as WMH in arterial territories. The computation time for each brain is approximately 15 min. We validated the performance of UBO Detector by showing a) high segmentation (similarity index (SI) = 0.848) and volumetric (intraclass correlation coefficient (ICC) = 0.985) agreement between the UBO Detector-derived and manually traced WMH; b) highly correlated (r^2 > 0.9) WMH volumes that increased steadily over time; and c) significant associations of periventricular (t = 22.591, p < 0.001) and deep (t = 14.523, p < 0.001) WMH volumes generated by UBO Detector with Fazekas rating scores. With parallel computing enabled in UBO Detector, the processing can take advantage of the multi-core CPUs that are commonly available on workstations. In conclusion, UBO Detector is a reliable, efficient and fully automated WMH segmentation pipeline. Copyright © 2018 Elsevier Inc. All rights reserved.
StructRNAfinder: an automated pipeline and web server for RNA families prediction.
Arias-Carrasco, Raúl; Vásquez-Morán, Yessenia; Nakaya, Helder I; Maracaja-Coutinho, Vinicius
2018-02-17
The function of many noncoding RNAs (ncRNAs) depends upon their secondary structures. Over the last decades, several methodologies have been developed to predict such structures or to use them to functionally annotate RNAs into RNA families. However, to fully perform this analysis, researchers must utilize multiple tools, which requires the constant parsing and processing of several intermediate files. This makes the large-scale prediction and annotation of RNAs a daunting task even for researchers with good computational or bioinformatics skills. We present an automated pipeline named StructRNAfinder that predicts and annotates RNA families in transcript or genome sequences. This single tool not only displays the sequence/structural consensus alignments for each RNA family according to the Rfam database, but also provides a taxonomic overview for each assigned functional RNA. Moreover, we implemented a user-friendly web service that allows researchers to upload their own nucleotide sequences in order to perform the whole analysis. Finally, we provide a stand-alone version of StructRNAfinder for use in large-scale projects. The tool was developed under the GNU General Public License (GPLv3) and is freely available at http://structrnafinder.integrativebioinformatics.me . The main advantage of StructRNAfinder lies in its large-scale processing and its integration of the data obtained by each tool and database employed along the workflow; the several files generated are displayed in user-friendly reports, useful for downstream analyses and data exploration.
Gokce, Sertan Kutal; Guo, Samuel X.; Ghorashian, Navid; Everett, W. Neil; Jarrell, Travis; Kottek, Aubri; Bovik, Alan C.; Ben-Yakar, Adela
2014-01-01
Femtosecond laser nanosurgery has been widely accepted as an axonal injury model, enabling nerve regeneration studies in the small model organism, Caenorhabditis elegans. To overcome the time limitations of manual worm handling techniques, automation and new immobilization technologies must be adopted to improve throughput in these studies. While new microfluidic immobilization techniques have been developed that promise to reduce the time required for axotomies, there is a need for automated procedures to minimize the required amount of human intervention and accelerate the axotomy processes crucial for high-throughput. Here, we report a fully automated microfluidic platform for performing laser axotomies of fluorescently tagged neurons in living Caenorhabditis elegans. The presented automation process reduces the time required to perform axotomies within individual worms to ∼17 s/worm, at least one order of magnitude faster than manual approaches. The full automation is achieved with a unique chip design and an operation sequence that is fully computer controlled and synchronized with efficient and accurate image processing algorithms. The microfluidic device includes a T-shaped architecture and three-dimensional microfluidic interconnects to serially transport, position, and immobilize worms. The image processing algorithms can identify and precisely position axons targeted for ablation. There were no statistically significant differences observed in reconnection probabilities between axotomies carried out with the automated system and those performed manually with anesthetics. The overall success rate of automated axotomies was 67.4±3.2% of the cases (236/350) at an average processing rate of 17.0±2.4 s. This fully automated platform establishes a promising methodology for prospective genome-wide screening of nerve regeneration in C. elegans in a truly high-throughput manner. PMID:25470130
Fully printable, strain-engineered electronic wrap for customizable soft electronics.
Byun, Junghwan; Lee, Byeongmoon; Oh, Eunho; Kim, Hyunjong; Kim, Sangwoo; Lee, Seunghwan; Hong, Yongtaek
2017-03-24
Rapid growth of stretchable electronics stimulates broad uses in multidisciplinary fields as well as industrial applications. However, existing technologies are unsuitable for implementing versatile applications involving adaptable system design and functions in a cost/time-effective way because of vacuum-conditioned, lithographically-predefined processes. Here, we present a methodology for a fully printable, strain-engineered electronic wrap as a universal strategy which makes it more feasible to implement various stretchable electronic systems with customizable layouts and functions. The key aspects involve inkjet-printed rigid island (PRI)-based stretchable platform technology and corresponding printing-based automated electronic functionalization methodology, the combination of which provides fully printed, customized layouts of stretchable electronic systems with simplified process. Specifically, well-controlled contact line pinning effect of printed polymer solution enables the formation of PRIs with tunable thickness; and surface strain analysis on those PRIs leads to the optimized stability and device-to-island fill factor of strain-engineered electronic wraps. Moreover, core techniques of image-based automated pinpointing, surface-mountable device based electronic functionalizing, and one-step interconnection networking of PRIs enable customized circuit design and adaptable functionalities. To exhibit the universality of our approach, multiple types of practical applications ranging from self-computable digital logics to display and sensor system are demonstrated on skin in a customized form.
Huang, Jianyan; Maram, Jyotsna; Tepelus, Tudor C; Modak, Cristina; Marion, Ken; Sadda, SriniVas R; Chopra, Vikas; Lee, Olivia L
2017-08-07
To determine the reliability of corneal endothelial cell density (ECD) obtained by automated specular microscopy versus that of validated manual methods, and the factors that predict such reliability. Sharp central images from 94 control and 106 glaucomatous eyes were captured with a Konan specular microscope NSP-9900. All images were analyzed by trained graders using Konan CellChek Software, employing the fully- and semi-automated methods as well as the Center Method. Images with low cell count (input cell number <100) and/or guttata were compared with the Center and Flex-Center Methods. ECDs were compared and absolute error was used to assess variation. The effect on ECD of age, cell count, cell size, and cell size variation was evaluated. No significant difference was observed between the Center and Flex-Center Methods in corneas with guttata (p=0.48) or low ECD (p=0.11). No difference (p=0.32) was observed in the ECD of normal controls <40 yrs old between the fully-automated method and the manual Center Method. However, in older controls and glaucomatous eyes, ECD was overestimated by the fully-automated method (p=0.034) and the semi-automated method (p=0.025) as compared to the manual method. Our findings show that automated analysis significantly overestimates ECD in eyes with high polymegathism and/or large cell size, compared to the manual method. Therefore, we discourage reliance upon the fully-automated method alone to perform specular microscopy analysis, particularly if an accurate ECD value is imperative. Copyright © 2017. Published by Elsevier España, S.L.U.
A highly scalable information system as extendable framework solution for medical R&D projects.
Holzmüller-Laue, Silke; Göde, Bernd; Stoll, Regina; Thurow, Kerstin
2009-01-01
For research projects in preventive medicine, flexible information management is needed that allows free planning and documentation of project-specific examinations. The system should allow simple, preferably automated data acquisition from several distributed sources (e.g., mobile sensors, stationary diagnostic systems, questionnaires, manual inputs) as well as effective data management, data use and analysis. An information system fulfilling these requirements has been developed at the Center for Life Science Automation (celisca). This system combines data from multiple investigations and multiple devices and displays them on a single screen. The integration of mobile sensor systems for comfortable, location-independent capture of time-based physiological parameters and the possibility of observing these measurements directly in this system allow new scenarios. The web-based information system presented in this paper is configurable via user interfaces. It covers medical process descriptions, operative process data visualizations, user-friendly process data processing, modern online interfaces (databases, web services, XML), and comfortable support of extended data analysis with third-party applications.
DOT National Transportation Integrated Search
2009-02-01
The Office of Special Investigations at Iowa Department of Transportation (DOT) collects FWD data on regular basis to evaluate pavement structural conditions. The primary objective of this study was to develop a fully-automated software system for ra...
A Web of applicant attraction: person-organization fit in the context of Web-based recruitment.
Dineen, Brian R; Ash, Steven R; Noe, Raymond A
2002-08-01
Applicant attraction was examined in the context of Web-based recruitment. A person-organization (P-O) fit framework was adopted to examine how the provision of feedback to individuals regarding their potential P-O fit with an organization related to attraction. Objective and subjective P-O fit, agreement with fit feedback, and self-esteem also were examined in relation to attraction. Results of an experiment that manipulated fit feedback level after a self-assessment provided by a fictitious company Web site found that both feedback level and objective P-O fit were positively related to attraction. These relationships were fully mediated by subjective P-O fit. In addition, attraction was related to the interaction of objective fit, feedback, and agreement and objective fit, feedback, and self-esteem. Implications and future Web-based recruitment research directions are discussed.
Ihlow, Alexander; Schweizer, Patrick; Seiffert, Udo
2008-01-23
To find candidate genes that potentially influence the susceptibility or resistance of crop plants to powdery mildew fungi, an assay system based on transient-induced gene silencing (TIGS) as well as transient over-expression in single epidermal cells of barley has been developed. However, this system relies on quantitative microscopic analysis of the barley/powdery mildew interaction and will only become a high-throughput tool of phenomics upon automation of the most time-consuming steps. We have developed a high-throughput screening system based on a motorized microscope which evaluates the specimens fully automatically. A large-scale double-blind verification of the system showed an excellent agreement of manual and automated analysis and proved the system to work dependably. Furthermore, in a series of bombardment experiments an RNAi construct targeting the Mlo gene was included, which is expected to phenocopy resistance mediated by recessive loss-of-function alleles such as mlo5. In most cases, the automated analysis system recorded a shift towards resistance upon RNAi of Mlo, thus providing proof of concept for its usefulness in detecting gene-target effects. Besides saving labor and enabling a screening of thousands of candidate genes, this system offers continuous operation of expensive laboratory equipment and provides a less subjective analysis as well as a complete and enduring documentation of the experimental raw data in terms of digital images. In general, it proves the concept of enabling available microscope hardware to handle challenging screening tasks fully automatically.
Ercan, Ertuğrul; Kırılmaz, Bahadır; Kahraman, İsmail; Bayram, Vildan; Doğan, Hüseyin
2012-11-01
Flow-mediated dilation (FMD) is used to evaluate endothelial function. Computer-assisted analysis utilizing edge detection permits continuous measurement along the vessel wall. We have developed a new fully automated software program to allow accurate and reproducible measurement. FMD was measured and analyzed in 18 coronary artery disease (CAD) patients and 17 controls, both manually and with the newly developed (computer-supported) software. The agreement between methods was assessed by Bland-Altman analysis. Mean age, body mass index, and cardiovascular risk factors were higher in the CAD group. Automated FMD% was 18.3±8.5 for the control subjects and 6.8±6.5 for the CAD group (p=0.0001). The intraobserver and interobserver correlations for the automated measurement were high (r=0.974, r=0.981, r=0.937, and r=0.918, respectively). Manual FMD% at the 60th second was correlated with automated FMD% (r=0.471, p=0.004). The new fully automated software can be used for precise measurement of FMD, with lower intra- and interobserver variability than manual assessment.
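FMD% is the percent change in arterial diameter from baseline to peak dilation, and Bland-Altman agreement summarizes the paired manual-automated differences as a bias with limits of agreement. A minimal sketch with invented numbers, not the study's measurements:

```python
# Hedged sketch: FMD% from baseline and peak diameters, plus Bland-Altman
# agreement between manual and automated readings. Illustrative values only.
import numpy as np

def fmd_percent(baseline_mm, peak_mm):
    """Flow-mediated dilation as percent change from baseline diameter."""
    return 100.0 * (peak_mm - baseline_mm) / baseline_mm

manual = np.array([18.1, 7.2, 15.5, 6.0, 19.3])      # FMD% by manual reading
automated = np.array([18.6, 6.5, 16.2, 6.9, 18.8])   # FMD% by software

diff = automated - manual
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)        # Bland-Altman limits of agreement
print(f"FMD example: {fmd_percent(4.0, 4.3):.1f}%")   # 7.5%
print(f"bias {bias:+.2f}%, LoA {bias - loa:.2f} to {bias + loa:.2f}")
```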
A web-based approach for electrocardiogram monitoring in the home.
Magrabi, F; Lovell, N H; Celler, B G
1999-05-01
A Web-based electrocardiogram (ECG) monitoring service, in which a longitudinal clinical record is used for the management of patients, is described. The Web application is used to collect clinical data from the patient's home. A database on the server acts as a central repository where this clinical information is stored. A Web browser provides access to the patient's records and ECG data. We discuss the technologies used to automate the retrieval and storage of clinical data from a patient database, and the recording and reviewing of clinical measurement data. On the client's Web browser, ActiveX controls embedded in the Web pages provide a link between the various components, including the Web server, Web page, the specialised client-side ECG review and acquisition software, and the local file system. The ActiveX controls also implement FTP functions to retrieve and submit clinical data to and from the server. An intelligent software agent on the server is activated whenever new ECG data are sent from the home. The agent compares historical data with newly acquired data. Using this method, an optimum patient care strategy can be evaluated, and a summarised report, along with reminders and suggestions for action, is sent to the doctor and patient by email.
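The server-side agent amounts to comparing a newly received ECG summary against the patient's historical baseline and drafting suggestions when it deviates. A minimal sketch of that pattern; the field name, threshold, and messages are invented for illustration:

```python
# Hedged sketch of a history-vs-new-data review agent. Not the paper's
# actual agent: 'mean_hr' and the alert threshold are invented.
def review_new_ecg(history, new_record, hr_delta_alert=15):
    """history/new_record: dicts with a 'mean_hr' summary in beats/min."""
    notes = []
    if history:
        baseline = sum(r["mean_hr"] for r in history) / len(history)
        if abs(new_record["mean_hr"] - baseline) > hr_delta_alert:
            notes.append(
                f"Mean HR {new_record['mean_hr']} deviates from baseline "
                f"{baseline:.0f}; suggest clinical review."
            )
    return notes or ["No significant change; routine summary sent."]

history = [{"mean_hr": 72}, {"mean_hr": 75}, {"mean_hr": 71}]
print(review_new_ecg(history, {"mean_hr": 96}))
```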
Web-Based Collaborative Publications System: R&Tserve
NASA Technical Reports Server (NTRS)
Abrams, Steve
1997-01-01
R&Tserve is a publications system based on 'commercial, off-the-shelf' (COTS) software that provides a persistent, collaborative workspace for authors and editors to support the entire publication development process from initial submission, through iterative editing in a hierarchical approval structure, and on to 'publication' on the WWW. It requires no specific knowledge of the WWW (beyond basic use) or HyperText Markup Language (HTML). Graphics and URLs are automatically supported. The system includes a transaction archive, a comments utility, help functionality, automated graphics conversion, automated table generation, and an email-based notification system. It may be configured and administered via the WWW and can support publications ranging from single page documents to multiple-volume 'tomes'.
NASA Astrophysics Data System (ADS)
Samson, Arnaud; Thibaudeau, Christian; Bouchard, Jonathan; Gaudin, Émilie; Paulin, Caroline; Lecomte, Roger; Fontaine, Réjean
2018-05-01
A fully automated time alignment method based on a positron timing probe was developed to correct the channel-to-channel coincidence time dispersion of the LabPET II avalanche photodiode-based positron emission tomography (PET) scanners. The timing probe was designed to directly detect positrons and generate an absolute time reference. The probe-to-channel coincidences are recorded and processed using firmware embedded in the scanner hardware to compute the time differences between detector channels. The time corrections are then applied in real time to each event in every channel during PET data acquisition to align all coincidence time spectra, thus enhancing the scanner time resolution. When applied to the mouse version of the LabPET II scanner, the calibration of 6144 channels was performed in less than 15 min and showed a 47% improvement in the overall time resolution of the scanner, which decreased from 7 ns to 3.7 ns full width at half maximum (FWHM).
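One plausible reading of the per-channel correction is that each channel's offset is the mean probe-to-channel time difference, later subtracted from event timestamps. A minimal sketch under that assumption; the data layout is invented and this is not the scanner's firmware:

```python
# Hedged sketch: estimate one timing offset per channel from probe
# coincidences, then subtract it from subsequent event timestamps.
from collections import defaultdict

def compute_offsets(coincidences):
    """coincidences: iterable of (channel_id, t_channel - t_probe) in ns."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for channel, dt in coincidences:
        sums[channel] += dt
        counts[channel] += 1
    # Mean time difference per channel is used as its correction offset.
    return {ch: sums[ch] / counts[ch] for ch in sums}

def apply_correction(events, offsets):
    """Subtract each channel's offset so coincidence spectra align."""
    return [(ch, t - offsets.get(ch, 0.0)) for ch, t in events]

offsets = compute_offsets([(0, 1.2), (0, 1.4), (1, -0.8), (1, -0.6)])
print(apply_correction([(0, 100.0), (1, 100.0)], offsets))
```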
Students' Performance and Satisfaction with Web vs. Paper-Based Practice Quizzes and Lecture Notes
ERIC Educational Resources Information Center
Macedo-Rouet, Monica; Ney, Muriel; Charles, Sandrine; Lallich-Boidin, Genevieve
2009-01-01
The use of computers to deliver course-related materials is rapidly expanding in most universities. Yet the effects of computer vs. printed delivery modes on students' performance and motivation are not yet fully known. We compared the impacts of Web vs. paper to deliver practice quizzes that require information search in lecture notes. Hundred…
Semantic Services in e-Learning: An Argumentation Case Study
ERIC Educational Resources Information Center
Moreale, Emanuela; Vargas-Vera, Maria
2004-01-01
This paper outlines an e-Learning services architecture offering semantic-based services to students and tutors, in particular ways to browse and obtain information through web services. Services could include registration, authentication, tutoring systems, smart question answering for students' queries, automated marking systems and a student…
2015-01-01
Background: The Internet has greatly enhanced health care, helping patients stay up-to-date on medical issues and general knowledge. Many cancer patients use the Internet for cancer diagnosis and related information. Recently, cloud computing has emerged as a new way of delivering health services, but there is currently no generic, fully automated cloud-based self-management intervention for breast cancer patients, as practical guidelines are lacking. Objective: We investigated the prevalence and predictors of cloud use for medical diagnosis among women with breast cancer, to gain insight into meaningful usage parameters for evaluating a generic, fully automated cloud-based self-intervention, by assessing how breast cancer survivors use a generic self-management model. The study was implemented and evaluated with a new prototype called "CIMIDx", based on representative association rules that support the diagnosis of medical images (mammograms). Methods: The proposed Cloud-Based System Support Intelligent Medical Image Diagnosis (CIMIDx) prototype includes two modules. The first is the design and development of the CIMIDx training and test cloud services. Deployed in the cloud, the prototype can be used for diagnosis and screening mammography by assessing the cancers detected, tumor sizes, histology, and stage classification accuracy. To analyze the prototype's classification accuracy, we conducted an experiment with data provided by clients. Second, by monitoring cloud server requests, CIMIDx usage statistics were recorded for the cloud-based self-intervention groups. We evaluated CIMIDx cloud service usage, assessing browsing functionalities from the end-user's perspective. Results: We performed several experiments to validate the CIMIDx prototype for breast health issues. The first set of experiments evaluated the diagnostic performance of the CIMIDx framework. We collected medical information from 150 breast cancer survivors from hospitals and health centers. The CIMIDx prototype achieved sensitivity of up to 99.29% and accuracy of up to 98%. The second set of experiments evaluated CIMIDx use for breast health issues, using t tests and Pearson chi-square tests to assess differences, and binary logistic regression to estimate odds ratios (OR) for predictors of CIMIDx use. To gather prototype usage statistics for the same 150 breast cancer survivors, we interviewed 114 (76.0%) through self-report questionnaires on the CIMIDx blogs. The frequency of log-ins per person ranged from 0 to 30, and total duration per person from 0 to 1500 minutes (25 hours). The 114 participants continued logging in through all phases, giving an intervention adherence rate of 44.3% (95% CI 33.2-55.9). Participants rated the overall performance of the prototype in the good category: reported usefulness (P=.77), ease of navigation (P=.89), user friendliness (P=.31), and overall satisfaction (P=.31). Positive evaluations given by 100 participants via a Web-based questionnaire supported our hypothesis. Conclusions: The present study shows that women felt favorably about the use of a generic, fully automated cloud-based self-management prototype. The study also demonstrated that the CIMIDx prototype resulted in the detection of more cancers when screening and diagnosing patients, with an increased accuracy rate. PMID:25830608
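The quoted sensitivity and accuracy are plain confusion-matrix arithmetic. With illustrative counts chosen to be consistent with 150 cases (not the study's actual tallies), the figures can be reproduced as follows:

```python
# Hedged sketch: sensitivity and accuracy from a confusion matrix.
# Counts are invented to approximate the quoted 99.29% / 98% figures.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

tp, tn, fp, fn = 140, 7, 2, 1
print(f"sensitivity = {sensitivity(tp, fn):.4f}")       # 140/141 ~ 0.9929
print(f"accuracy    = {accuracy(tp, tn, fp, fn):.2f}")  # 147/150 = 0.98
```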
Web-based visual analysis for high-throughput genomics
2013-01-01
Background: Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. Results: We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Conclusions: Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy. Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics experiments. PMID:23758618
2018-01-01
Background: Smartphone apps that provide women with information about their daily fertility status during their menstrual cycles can contribute to the contraceptive method mix. However, if these apps claim to help a user prevent pregnancy, they must undergo rigorous research similar to that required for other contraceptive methods. Georgetown University's Institute for Reproductive Health is conducting a prospective longitudinal efficacy trial on Dot (Dynamic Optimal Timing), an algorithm-based fertility app designed to help women prevent pregnancy. Objective: The aim of this paper was to highlight decision points during the recruitment-enrollment process and the effect of modifications on enrollment numbers and demographics. Recruiting eligible research participants for a contraceptive efficacy study, and enrolling enough of them to statistically assess the effectiveness of Dot, is critical. Recruiting and enrolling participants for the Dot study involved making decisions based on research and analytic data, constant process modification, and close monitoring and evaluation of the effect of these modifications. Methods: Originally, the only option for women to enroll in the study was to do so over the phone with a study representative. On noticing low enrollment numbers, we examined the 7 steps from the time a woman received the recruitment message until she completed enrollment and made modifications accordingly. In modification 1, we added call-back and voicemail procedures to increase the number of completed calls. Modification 2 involved using chat and instant message (IM) features to facilitate study enrollment. In modification 3, the process was fully automated to allow participants to enroll in the study without the aid of study representatives. Results: After these modifications were implemented, 719 women were enrolled in the study over a 6-month period. The majority of participants (494/719, 68.7%) were enrolled during modification 3, in which they had the option to enroll via phone, chat, or the fully automated process. Overall, 29.2% (210/719) of the participants were enrolled via a phone call, 19.9% (143/719) via chat/IM, and 50.9% (366/719) directly through the fully automated process. With respect to the demographic profile of our study sample, we found a statistically significant difference in education level across the modifications (P<.05) but not in age or race/ethnicity (P>.05). Conclusions: Our findings show that agile and consistent modifications to the recruitment and enrollment process were necessary to yield an appropriate sample size. An automated process resulted in significantly higher enrollment rates than one that required phone interaction with study representatives. Although there were some differences in demographic characteristics of enrollees as the process was modified, in general, our study population is diverse and reflects the overall United States population in terms of race/ethnicity, age, and education. Additional research is proposed to identify how differences in mode of enrollment and demographic characteristics may affect participants' performance in the study. Trial Registration: ClinicalTrials.gov NCT02833922; http://clinicaltrials.gov/ct2/show/NCT02833922 (Archived by WebCite at http://www.webcitation.org/6yj5FHrBh) PMID:29678802
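The demographic comparison across modifications is a Pearson chi-square test on a contingency table of modification by category. A minimal sketch with invented counts, since the raw table is not reproduced here:

```python
# Hedged sketch: chi-square test of education level across the three
# enrollment modifications. Counts are invented for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: modifications 1-3; columns: education categories
# (e.g., <=high school, some college, >=bachelor's).
table = np.array([
    [20, 45, 17],
    [25, 80, 38],
    [60, 250, 184],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")  # small p => distributions differ
```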
Objectively Optimized Observation Direction System Providing Situational Awareness for a Sensor Web
NASA Astrophysics Data System (ADS)
Aulov, O.; Lary, D. J.
2010-12-01
There is great utility in having a flexible and automated objective observation direction system for the decadal survey missions and beyond. Such a system allows us to optimize the observations made by a suite of sensors to address specific goals, from long-term monitoring to rapid response. We have developed such a prototype using a network of communicating software elements to control a heterogeneous network of sensor systems, which can have multiple modes and flexible viewing geometries. Our system makes sensor systems intelligent and situationally aware. Together they form a sensor web of multiple sensors working together and capable of automated target selection, i.e. the sensors “know” where they are, what they are able to observe, and what targets, with what priorities, they should observe. This system is implemented in three components. The first component is a Sensor Web simulator. The Sensor Web simulator describes the capabilities and locations of each sensor as a function of time, whether orbital, sub-orbital, or ground based. The simulator has been implemented using AGI's Satellite Tool Kit (STK). STK makes it easy to analyze and visualize optimal solutions for complex space scenarios, to perform complex analysis of land, sea, air, and space assets, and to share results in one integrated solution. The second component is the target scheduler, implemented with STK Scheduler. STK Scheduler is powered by a scheduling engine that finds better solutions in a shorter amount of time than traditional heuristic algorithms. The global search algorithm within this engine is based on neural network technology capable of finding solutions to larger and more complex problems and maximizing the value of limited resources. The third component is a modeling and data assimilation system. It provides situational awareness by supplying the time evolution of uncertainty and information content metrics that tell us what we need to observe and the priority we should give to the observations. A prototype of this component was implemented with AutoChem. AutoChem is NASA release software constituting an automatic code generation, symbolic differentiation, analysis, documentation, and web site creation tool for atmospheric chemical modeling and data assimilation. Its model is explicit and uses an adaptive time-step, error-monitoring time integration scheme for stiff systems of equations. AutoChem was the first model ever to have the facility to perform 4D-Var data assimilation and Kalman filtering. The project developed a control system with three main accomplishments. First, fully multivariate observational and theoretical information with associated uncertainties was combined using a full Kalman filter data assimilation system. Second, an optimal distribution of the computations and of data queries was achieved by utilizing high-performance computers/load balancing and a set of automatically mirrored databases. Third, inter-instrument bias correction was performed using machine learning. The PI for this project was Dr. David Lary of the UMBC Joint Center for Earth Systems Technology at NASA/Goddard Space Flight Center.
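At the heart of the data assimilation component is the Kalman update, which blends a model forecast with an observation in proportion to their uncertainties. The project used a full multivariate Kalman filter; the scalar form below is only an illustrative sketch:

```python
# Hedged sketch of the scalar Kalman-filter update: a model forecast and an
# observation are merged, weighted by their variances. Values are illustrative.
def kalman_update(x_forecast, p_forecast, y_obs, r_obs):
    """Return analysis state and variance for a scalar state with H = 1."""
    k = p_forecast / (p_forecast + r_obs)       # Kalman gain
    x_analysis = x_forecast + k * (y_obs - x_forecast)
    p_analysis = (1.0 - k) * p_forecast
    return x_analysis, p_analysis

x, p = kalman_update(x_forecast=310.0, p_forecast=4.0, y_obs=318.0, r_obs=1.0)
print(x, p)  # analysis pulled toward the more certain observation: 316.4, 0.8
```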
A fully automated FTIR system for remote sensing of greenhouse gases in the tropics
NASA Astrophysics Data System (ADS)
Geibel, M. C.; Gerbig, C.; Feist, D. G.
2010-07-01
This article introduces a new fully automated FTIR system that is part of the Total Carbon Column Observing Network. It will provide continuous ground-based measurements of the column-averaged volume mixing ratio of CO2, CH4, and several other greenhouse gases in the tropics. Housed in a 20-foot shipping container, it was developed as a transportable system that can be deployed almost anywhere in the world. We describe the automation concept, which relies on three autonomous subsystems and their interaction. Crucial components, such as a sturdy and reliable solar tracker dome, are described in detail. First results of total column measurements at Jena, Germany, show that the instrument works well and can capture the diurnal as well as the seasonal cycle of CO2. Instrument line shape measurements with an HCl cell suggest that the instrument stays well aligned over several months. After a short test campaign for side-by-side intercomparison with an existing TCCON instrument in Australia, the system will be transported to its final destination, Ascension Island.
Jack, Lisa M.; McClure, Jennifer B.; Deprey, Mona; Javitz, Harold S.; McAfee, Timothy A.; Catz, Sheryl L.; Richards, Julie; Bush, Terry; Swan, Gary E.
2011-01-01
Introduction: Phone counseling has become standard for behavioral smoking cessation treatment. Newer options include Web and integrated phone–Web treatment. No prior research, to our knowledge, has systematically compared the effectiveness of these three treatment modalities in a randomized trial. Understanding how utilization varies by mode, the impact of utilization on outcomes, and predictors of utilization across each mode could lead to improved treatments. Methods: One thousand two hundred and two participants were randomized to phone, Web, or combined phone–Web cessation treatment. Services varied by modality and were tracked using automated systems. All participants received 12 weeks of varenicline, printed guides, an orientation call, and access to a phone supportline. Self-report data were collected at baseline and 6-month follow-up. Results: Overall, participants utilized phone services more often than the Web-based services. Among treatment groups with Web access, a significant proportion logged in only once (37% phone–Web, 41% Web), and those in the phone–Web group logged in less often than those in the Web group (mean = 2.4 vs. 3.7, p = .0001). Use of the phone also was correlated with increased use of the Web. In multivariate analyses, greater use of the phone- or Web-based services was associated with higher cessation rates. Finally, older age and the belief that certain treatments could improve success were consistent predictors of greater utilization across groups. Other predictors varied by treatment group. Conclusions: Opportunities for enhancing treatment utilization exist, particularly for Web-based programs. Increasing utilization more broadly could result in better overall treatment effectiveness for all intervention modalities. PMID:21330267
21 CFR 864.5200 - Automated cell counter.
Code of Federal Regulations, 2010 CFR
2010-04-01
(a) Identification. An automated cell counter is a fully-automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the...
21 CFR 864.5240 - Automated blood cell diluting apparatus.
Code of Federal Regulations, 2010 CFR
2010-04-01
(a) Identification. An automated blood cell diluting apparatus is a fully automated or semi-automated device used to make appropriate dilutions of a blood sample...
21 CFR 864.5240 - Automated blood cell diluting apparatus.
Code of Federal Regulations, 2011 CFR
2011-04-01
(a) Identification. An automated blood cell diluting apparatus is a fully automated or semi-automated device used to make appropriate dilutions of a blood sample...
21 CFR 866.1645 - Fully automated short-term incubation cycle antimicrobial susceptibility system.
Code of Federal Regulations, 2011 CFR
2011-04-01
§ 866.1645 Fully automated short-term incubation cycle antimicrobial susceptibility system — Food and Drug Administration, Department of Health and Human Services; Medical Devices; Immunology and Microbiology Devices...
21 CFR 866.1645 - Fully automated short-term incubation cycle antimicrobial susceptibility system.
Code of Federal Regulations, 2013 CFR
2013-04-01
§ 866.1645 Fully automated short-term incubation cycle antimicrobial susceptibility system — Food and Drug Administration, Department of Health and Human Services; Medical Devices; Immunology and Microbiology Devices...
21 CFR 866.1645 - Fully automated short-term incubation cycle antimicrobial susceptibility system.
Code of Federal Regulations, 2014 CFR
2014-04-01
§ 866.1645 Fully automated short-term incubation cycle antimicrobial susceptibility system — Food and Drug Administration, Department of Health and Human Services; Medical Devices; Immunology and Microbiology Devices...
21 CFR 866.1645 - Fully automated short-term incubation cycle antimicrobial susceptibility system.
Code of Federal Regulations, 2012 CFR
2012-04-01
§ 866.1645 Fully automated short-term incubation cycle antimicrobial susceptibility system — Food and Drug Administration, Department of Health and Human Services; Medical Devices; Immunology and Microbiology Devices...
Costa, Susana P F; Pinto, Paula C A G; Lapa, Rui A S; Saraiva, M Lúcia M F S
2015-03-02
A fully automated Vibrio fischeri methodology based on sequential injection analysis (SIA) has been developed. The methodology was based on the aspiration of 75 μL of bacteria and 50 μL of inhibitor, followed by measurement of the luminescence of the bacteria. The assays were conducted at contact times of 5, 15, and 30 min, by means of three mixing chambers that ensured adequate mixing conditions. The optimized methodology provided precise control of the reaction conditions, which is an asset for the analysis of a large number of samples. The developed methodology was applied to the evaluation of the impact of a set of ionic liquids (ILs) on V. fischeri, and the results were compared with those provided by a conventional assay kit (Biotox(®)). The collected data evidenced the influence of different cation head groups and anion moieties on the toxicity of ILs. Generally, aromatic cations and fluorine-containing anions displayed a higher impact on V. fischeri, evidenced by lower EC50 values. The proposed methodology was validated through statistical analysis, which demonstrated a strong positive correlation (P>0.98) between assays. It is expected that the automated methodology can be tested for more classes of compounds and used as an alternative to microplate-based V. fischeri assay kits. Copyright © 2014 Elsevier B.V. All rights reserved.
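EC50 values such as those compared above are conventionally obtained by fitting a log-logistic (Hill) curve to inhibition-versus-concentration data; the abstract does not state the exact model used, so the sketch below is a generic assumption with invented data points:

```python
# Hedged sketch: estimate EC50 by fitting a Hill curve to luminescence
# inhibition data. Model choice and data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, slope):
    """Fraction of inhibition as a function of concentration."""
    return conc**slope / (ec50**slope + conc**slope)

conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])       # mg/L, illustrative
inhib = np.array([0.02, 0.10, 0.22, 0.55, 0.70, 0.95])  # fraction inhibited

(ec50, slope), _ = curve_fit(hill, conc, inhib, p0=[5.0, 1.0])
print(f"EC50 ~ {ec50:.2f} mg/L, Hill slope ~ {slope:.2f}")
```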
Faded-example as a Tool to Acquire and Automate Mathematics Knowledge
NASA Astrophysics Data System (ADS)
Retnowati, E.
2017-04-01
Knowledge acquisition and automation are accomplished by students themselves. The teacher acts as a facilitator by creating mathematics tasks that help students build knowledge efficiently and effectively. The cognitive load imposed by the learning material presented by teachers should be considered a critical factor. While intrinsic cognitive load relates to the degree of complexity of the learning material itself, extraneous cognitive load is caused directly by how the material is presented. Strategies for presenting learning material in computational domains like mathematics include the worked example (a fully guided task) and problem solving (a discovery task with no guidance). According to the empirical evidence, learning based on problem solving may impose a high extraneous cognitive load on students who have limited prior knowledge; conversely, learning from worked examples may impose a high extraneous cognitive load on students who have already mastered the knowledge base. An alternative is the faded example, a partly completed task. Learning from faded examples can help students who have already acquired some knowledge of the to-be-learned material but still need practice to automate that knowledge further. This instructional strategy provides a smooth transition from fully guided tasks to independent problem solving. Designs of faded examples for learning trigonometry are discussed.
Enhancing surveillance for hepatitis C through public health informatics.
Heisey-Grove, Dawn M; Church, Daniel R; Haney, Gillian A; Demaria, Alfred
2011-01-01
Disease surveillance for hepatitis C in the United States is limited by the occult nature of many of these infections, the large volume of cases, and limited public health resources. Through a series of discrete processes, the Massachusetts Department of Public Health modified its surveillance system in an attempt to improve timeliness and completeness of reporting and case follow-up of hepatitis C. These processes included clinician-based reporting, electronic laboratory reporting, deployment of a Web-based disease surveillance system, automated triage of pertinent data, and automated character recognition software for case-report processing. These changes have resulted in an increase in the timeliness of reporting.
Review and evaluation of innovative technologies for measuring diet in nutritional epidemiology.
Illner, A-K; Freisling, H; Boeing, H; Huybrechts, I; Crispim, S P; Slimani, N
2012-08-01
The use of innovative technologies is deemed to improve dietary assessment in various research settings. However, their relative merits in nutritional epidemiological studies, which require accurate quantitative estimates of the usual intake at individual level, still need to be evaluated. To report on the inventory of available innovative technologies for dietary assessment and to critically evaluate their strengths and weaknesses as compared with the conventional methodologies (i.e. Food Frequency Questionnaires, food records, 24-hour dietary recalls) used in epidemiological studies. A list of currently available technologies was identified from English-language journals, using PubMed and Web of Science. The search criteria were principally based on the date of publication (between 1995 and 2011) and pre-defined search keywords. Six main groups of innovative technologies were identified ('Personal Digital Assistant-', 'Mobile-phone-', 'Interactive computer-', 'Web-', 'Camera- and tape-recorder-' and 'Scan- and sensor-based' technologies). Compared with the conventional food records, Personal Digital Assistant and mobile phone devices seem to improve the recording through the possibility for 'real-time' recording at eating events, but their validity to estimate individual dietary intakes was low to moderate. In 24-hour dietary recalls, there is still limited knowledge regarding the accuracy of fully automated approaches; and methodological problems, such as the inaccuracy in self-reported portion sizes might be more critical than in interview-based applications. In contrast, measurement errors in innovative web-based and in conventional paper-based Food Frequency Questionnaires are most likely similar, suggesting that the underlying methodology is unchanged by the technology. Most of the new technologies in dietary assessment were seen to have overlapping methodological features with the conventional methods predominantly used for nutritional epidemiology. Their main potential to enhance dietary assessment is through more cost- and time-effective, less laborious ways of data collection and higher subject acceptance, though their integration in epidemiological studies would need additional considerations, such as the study objectives, the target population and the financial resources available. However, even in innovative technologies, the inherent individual bias related to self-reported dietary intake will not be resolved. More research is therefore crucial to investigate the validity of innovative dietary assessment technologies.
Automated position control of a surface array relative to a liquid microjunction surface sampler
Van Berkel, Gary J.; Kertesz, Vilmos; Ford, Michael James
2007-11-13
A system and method utilize an image-analysis approach to control the probe-to-surface distance of a liquid-junction-based surface sampling system for use with mass spectrometric detection. Such an approach enables hands-free formation of the liquid microjunction used to sample solution composition from the surface, and re-optimization of the microjunction thickness as necessary during a surface scan, yielding a fully automated surface sampling system.
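The closed-loop idea is that an image-derived microjunction thickness drives a correction of the probe height. A minimal proportional-control sketch; the target thickness, gain, units, and sign convention are all invented for illustration:

```python
# Hedged sketch: proportional correction of probe height from an
# image-derived junction thickness. Units, target, and gain are invented.
def height_correction(measured_um, target_um=50.0, gain=0.5):
    """Return a probe z-move (um) that nudges thickness toward target."""
    error = target_um - measured_um
    return gain * error

print(height_correction(62.0))  # thickness too large -> negative move (-6.0)
```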
Synchronous Distance Education: Using Web-Conferencing in an MBA Accounting Course
ERIC Educational Resources Information Center
Ellingson, Dee Ann; Notbohm, Matthew
2012-01-01
Online distance education can take many forms, from a correspondence course with materials online to fully synchronous, live instruction. This paper describes a fully synchronous, live format using web-conferencing. Some useful features of web-conferencing and the way they are employed in this course are described. Instructor observations and…
Gardeux, Vincent; David, Fabrice P. A.; Shajkofci, Adrian; Schwalie, Petra C.; Deplancke, Bart
2017-01-01
Motivation: Single-cell RNA-sequencing (scRNA-seq) allows whole transcriptome profiling of thousands of individual cells, enabling the molecular exploration of tissues at the cellular level. Such analytical capacity is of great interest to many research groups in the world, yet these groups often lack the expertise to handle complex scRNA-seq datasets. Results: We developed a fully integrated, web-based platform aimed at the complete analysis of scRNA-seq data post genome alignment: from the parsing, filtering and normalization of the input count data files, to the visual representation of the data, identification of cell clusters, differentially expressed genes (including cluster-specific marker genes), and functional gene set enrichment. This Automated Single-cell Analysis Pipeline (ASAP) combines a wide range of commonly used algorithms with sophisticated visualization tools. Compared with existing scRNA-seq analysis platforms, researchers (including those lacking computational expertise) are able to interact with the data in a straightforward fashion and in real time. Furthermore, given the overlap between scRNA-seq and bulk RNA-seq analysis workflows, ASAP should conceptually be broadly applicable to any RNA-seq dataset. As a validation, we demonstrate how we can use ASAP to simply reproduce the results from a single-cell study of 91 mouse cells involving five distinct cell types. Availability and implementation: The tool is freely available at asap.epfl.ch and R/Python scripts are available at github.com/DeplanckeLab/ASAP. Contact: bart.deplancke@epfl.ch. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28541377
NASA Astrophysics Data System (ADS)
MA, S.; Huang, Y.; Stacy, M.; Jiang, J.; Sundi, N.; Ricciuto, D. M.; Hanson, P. J.; Luo, Y.; Saruta, V.
2017-12-01
Ecological forecasting is critical in various aspects of our coupled human-nature systems, such as disaster risk reduction, natural resource management, and climate change mitigation. Novel advancements are urgently needed to deepen our understanding of ecosystem dynamics, boost the predictive capacity of ecology, and provide timely and effective information for decision-makers in a rapidly changing world. Our study presents a smart system - the Ecological Platform for Assimilation of Data (EcoPAD) - which streamlines web request-response, data management, model execution, result storage, and visualization. EcoPAD allows users to (i) estimate model parameters or state variables, (ii) quantify uncertainty of estimated parameters and projected states of ecosystems, (iii) evaluate model structures, (iv) assess sampling strategies, (v) conduct ecological forecasting, and (vi) detect ecosystem acclimation to climate change. One of the key innovations of the web-based EcoPAD is automated near- or real-time forecasting of ecosystem dynamics with uncertainty fully quantified. The user-friendly webpage enables non-modelers to explore their data for simulation and data assimilation. As a case study, we applied EcoPAD to the Spruce and Peatland Responses Under Climatic and Environmental Change Experiment (SPRUCE), a whole-ecosystem warming and CO2 enrichment treatment project in the northern peatland. We assimilated multiple data streams into a process-based ecosystem model, enhanced timely feedback between modelers and experimenters, and ultimately improved ecosystem forecasting and made better use of current knowledge. Built on a framework with a flexible API, EcoPAD is easily portable and will benefit scientific communities, policy makers, and the general public.
Happy ending: a randomized controlled trial of a digital multi-media smoking cessation intervention.
Brendryen, Håvar; Kraft, Pål
2008-03-01
To assess the long-term efficacy of a fully automated digital multi-media smoking cessation intervention. Design: two-arm randomized controlled trial (RCT). Setting: World Wide Web (WWW) study based in Norway. Subjects (n = 396) were recruited via internet advertisements and assigned randomly to conditions. Inclusion criteria were willingness to quit smoking and being aged 18 years or older. The treatment group received the internet- and cell-phone-based Happy Ending intervention. The intervention programme lasted 54 weeks and consisted of more than 400 contacts by e-mail, web pages, interactive voice response (IVR) and short message service (SMS) technology. The control group received a self-help booklet. Additionally, both groups were offered free nicotine replacement therapy (NRT). Abstinence was defined as 'not even a puff of smoke, for the last 7 days', and assessed by means of internet surveys or telephone interviews. The main outcome was repeated point abstinence at 1, 3, 6 and 12 months following cessation. Participants in the treatment group reported clinically and statistically significantly higher repeated point abstinence rates than control participants [22.3% versus 13.1%; odds ratio (OR) = 1.91, 95% confidence interval (CI): 1.12-3.26, P = 0.02; intent-to-treat]. Improved adherence to NRT and a higher level of post-cessation self-efficacy were observed in the treatment group compared with the control group. As the first RCT documenting the long-term treatment effects of such an intervention, this study adds to the promise of digital media in supporting behaviour change.
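The reported odds ratio follows directly from the two abstinence proportions; a quick arithmetic check:

```python
# Hedged check: the reported OR = 1.91 follows from the two abstinence
# rates (22.3% treatment vs 13.1% control), up to rounding.
def odds_ratio(p_treatment, p_control):
    """Odds ratio for two proportions."""
    return (p_treatment / (1 - p_treatment)) / (p_control / (1 - p_control))

print(f"OR = {odds_ratio(0.223, 0.131):.2f}")  # ~1.90
```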
Reaction time effects in lab- versus Web-based research: Experimental evidence.
Hilbig, Benjamin E
2016-12-01
Although Web-based research is now commonplace, it continues to spur skepticism from reviewers and editors, especially whenever reaction times are of primary interest. Such persistent preconceptions are based on arguments referring to increased variation, the limits of certain software and technologies, and a noteworthy lack of comparisons (between Web and lab) in fully randomized experiments. To provide a critical test, participants were randomly assigned to complete a lexical decision task either (a) in the lab using standard experimental software (E-Prime), (b) in the lab using a browser-based version (written in HTML and JavaScript), or (c) via the Web using the same browser-based version. The classical word frequency effect was typical in size and corresponded to a very large effect in all three conditions. There was no indication that the Web- or browser-based data collection was in any way inferior. In fact, if anything, a larger effect was obtained in the browser-based conditions than in the condition relying on standard experimental software. No differences between Web and lab (within the browser-based conditions) could be observed, thus disconfirming any substantial influence of increased technical or situational variation. In summary, the present experiment contradicts the still common preconception that reaction time effects of only a few hundred milliseconds cannot be detected in Web experiments.
Fully automated gynecomastia quantification from low-dose chest CT
NASA Astrophysics Data System (ADS)
Liu, Shuang; Sonnenblick, Emily B.; Azour, Lea; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.
2018-02-01
Gynecomastia is characterized by the enlargement of male breasts, a common and sometimes distressing condition found in over half of adult men over the age of 44. Although the majority of gynecomastia is physiologic or idiopathic, its occurrence may also be associated with an extensive variety of underlying systemic diseases or drug toxicity. With the recent large-scale implementation of annual lung cancer screening using low-dose chest CT (LDCT), gynecomastia is believed to be a frequent incidental finding on LDCT. A fully automated system for gynecomastia quantification from LDCT is presented in this paper. The whole breast region is first segmented using an anatomy-orientated approach based on the propagation of pectoral muscle fronts in the vertical direction. The subareolar region is then localized, and the fibroglandular tissue within it is measured for the assessment of gynecomastia. The presented system was validated using 454 breast regions from non-contrast LDCT scans of 227 adult men. The ground truth was established by an experienced radiologist, who classified each breast into one of five categorical scores. The automated measurements achieved promising performance for gynecomastia diagnosis, with an area under the ROC curve (AUC) of 0.86, and a statistically significant Spearman correlation of r=0.70 (p < 0.001) with the reference categorical grades. The encouraging results demonstrate the feasibility of fully automated gynecomastia quantification from LDCT, which may aid the early detection as well as the treatment of both gynecomastia and the underlying medical problems, if any, that cause it.
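The two reported statistics, ROC AUC against a binarized reference and Spearman correlation with the five categorical grades, can be computed as sketched below with invented grades and measurements:

```python
# Hedged sketch: Spearman correlation and ROC AUC of an automated score
# against categorical reference grades. All values are illustrative.
import numpy as np
from scipy.stats import spearmanr

grades = np.array([0, 1, 0, 2, 3, 1, 4, 0, 2, 3])   # reference grades 0-4
measure = np.array([0.1, 0.9, 0.8, 1.8, 2.6, 0.7, 3.9, 0.3, 1.2, 3.1])

rho, p = spearmanr(measure, grades)
print(f"Spearman r = {rho:.2f} (p = {p:.3g})")

# AUC for the binary task "gynecomastia present" (grade >= 1), computed as
# the probability that a positive case scores higher than a negative one.
labels = grades >= 1
pos, neg = measure[labels], measure[~labels]
auc = np.mean([(a > b) + 0.5 * (a == b) for a in pos for b in neg])
print(f"AUC = {auc:.2f}")
```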
Space station automation: the role of robotics and artificial intelligence (Invited Paper)
NASA Astrophysics Data System (ADS)
Park, W. T.; Firschein, O.
1985-12-01
Automation of the space station is necessary to make more effective use of the crew, to carry out repairs that are impractical or dangerous, and to monitor and control the many space station subsystems. Intelligent robotics and expert systems play a strong role in automation, and both disciplines are highly dependent on a common artificial intelligence (AI) technology base. The AI technology base provides the reasoning and planning capabilities needed in robotic tasks, such as perception of the environment and planning a path to a goal, and in expert systems tasks, such as control of subsystems and maintenance of equipment. This paper describes automation concepts for the space station, the specific robotic and expert systems required to attain this automation, and the research and development required. It also presents an evolutionary development plan that leads to fully automatic mobile robots for servicing satellites. Finally, we indicate the sequence of demonstrations and the research and development needed to confirm the automation capabilities. We emphasize that advanced robotics requires AI, and that to advance, AI needs the "real-world" problems provided by robotics.
21 CFR 864.5200 - Automated cell counter.
Code of Federal Regulations, 2014 CFR
2014-04-01
(a) Identification. An automated cell counter is a fully-automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the patient's peripheral blood (blood circulating in one of the body's extremities, such as the arm). These...
21 CFR 864.5200 - Automated cell counter.
Code of Federal Regulations, 2011 CFR
2011-04-01
(a) Identification. An automated cell counter is a fully-automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the patient's peripheral blood (blood circulating in one of the body's extremities, such as the arm). These...
21 CFR 864.5200 - Automated cell counter.
Code of Federal Regulations, 2012 CFR
2012-04-01
(a) Identification. An automated cell counter is a fully-automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the patient's peripheral blood (blood circulating in one of the body's extremities, such as the arm). These...
21 CFR 864.5200 - Automated cell counter.
Code of Federal Regulations, 2013 CFR
2013-04-01
(a) Identification. An automated cell counter is a fully-automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the patient's peripheral blood (blood circulating in one of the body's extremities, such as the arm). These...
NASA Technical Reports Server (NTRS)
Cabrall, C.; Gomez, A.; Homola, J.; Hunt, S.; Martin, L.; Mercer, J.; Prevot, T.
2013-01-01
As part of an ongoing research effort on separation assurance and functional allocation in NextGen, a controller-in-the-loop study with ground-based automation was conducted at NASA Ames' Airspace Operations Laboratory in August 2012 to investigate the potential impact of introducing self-separating aircraft in progressively advanced NextGen timeframes. From this larger study, the current exploratory analysis of controller-automation interaction styles focuses on the last and most far-term timeframe. Measurements were recorded that first verified the continued operational validity of this iteration of the ground-based functional allocation automation concept at forecast traffic densities up to 2x that of current-day high-altitude en-route sectors. Additionally, with greater levels of fully automated conflict detection and resolution as well as the introduction of intervention functionality, objective and subjective analyses showed a range of passive to active controller-automation interaction styles among the participants. Not only did the controllers work with the automation to meet their safety and capacity goals in the simulated future NextGen timeframe, they did so in different ways and with different attitudes of trust in and use of the automation. Taken as a whole, the results showed that the prototyped controller-automation functional allocation framework was very flexible and successful overall.
Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.
Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart
2017-01-01
Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built-in web cameras are a standard feature of most smart devices (e.g., laptops, tablets, smart phones) and can be effectively employed to track eye movements on decisional tasks with high accuracy and minimal cost.
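A VPC novelty preference score is typically the fraction of on-stimulus gaze samples directed at the novel image, which is what both the 60 FPS automated and 3 FPS manual scoring procedures estimate. A minimal sketch with invented frame labels:

```python
# Hedged sketch: novelty preference from per-frame gaze labels. The label
# scheme and frames are invented for illustration.
def novelty_preference(frames):
    """frames: iterable of 'novel', 'familiar', or 'off' gaze labels."""
    novel = sum(1 for f in frames if f == "novel")
    familiar = sum(1 for f in frames if f == "familiar")
    on_stimulus = novel + familiar
    return novel / on_stimulus if on_stimulus else float("nan")

frames_3fps = ["novel", "novel", "familiar", "off", "novel"]
print(f"novelty preference = {novelty_preference(frames_3fps):.2f}")  # 0.75
```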
ERIC Educational Resources Information Center
Wagner, Erica; Enders, Jeanne; Pirie, Melissa Shaquid; Thomas, Domanic
2016-01-01
Since 2012, we have used synchronous, web-based video conferences in our fully-online degree completion program. Students are required to participate in four live video conferences with their professor and a small group of peers in all upper division online courses as a minimum requirement for passing the class. While these synchronous video…
Blending Technology with Camp Tradition: Technology Can Simplify Camp Operations.
ERIC Educational Resources Information Center
Salzman, Jeff
2000-01-01
Discusses uses of technology appropriate for camps, which are service organizations based on building relationships. Describes relationship marketing and how it can be enhanced through use of Web sites, interactive brochures, and client databases. Outlines other technology uses at camp: automated dispensing of medications, satellite tracking of…
Automated Workflow: Balancing Reduced Personnel with Web-Based Systems
ERIC Educational Resources Information Center
Armbruster, Stephanie; Strasburger, Tom
2011-01-01
The economic crisis has caused districts throughout the United States to cut resources and limit spending, particularly with regard to staff. However, the requirements for maintaining safe, efficiently functioning schools and for remaining in compliance with state and federal regulations have not decreased; in many cases, those responsibilities…
Automated homogeneous liposome immunoassay systems for anticonvulsant drugs.
Kubotsu, K; Goto, S; Fujita, M; Tuchiya, H; Kida, M; Takano, S; Matsuura, S; Sakurabayashi, I
1992-06-01
We developed automated homogeneous immunoassays, based on immunolysis of liposomes, for measuring phenytoin, phenobarbital, and carbamazepine in serum. Liposome lysis was detected spectrophotometrically from entrapped glucose-6-phosphate dehydrogenase activity. The procedure was fully automated on a routine automated clinical analyzer. Within-run, between-run, dilution, and recovery tests showed good accuracy and reproducibility. Bilirubin, hemoglobin, triglycerides, and Intrafat did not affect assay results. The results obtained by liposome immunoassays for phenytoin, phenobarbital, and carbamazepine correlated well with those obtained by enzyme-multiplied immunoassay (Syva EMIT) kits (r = 0.995, 0.986, and 0.988, respectively) and fluorescence polarization immunoassay (Abbott TDx) kits (r = 0.990, 0.991, and 0.975, respectively). The proposed method should be useful for monitoring anticonvulsant drug concentrations in blood.
Fully Automated Sample Preparation for Ultrafast N-Glycosylation Analysis of Antibody Therapeutics.
Szigeti, Marton; Lew, Clarence; Roby, Keith; Guttman, Andras
2016-04-01
There is a growing demand in the biopharmaceutical industry for high-throughput, large-scale N-glycosylation profiling of therapeutic antibodies in all phases of product development, but especially during clone selection when hundreds of samples should be analyzed in a short period of time to assure their glycosylation-based biological activity. Our group has recently developed a magnetic bead-based protocol for N-glycosylation analysis of glycoproteins to alleviate the hard-to-automate centrifugation and vacuum-centrifugation steps of the currently used protocols. Glycan release, fluorophore labeling, and cleanup were all optimized, resulting in a <4 h magnetic bead-based process with excellent yield and good repeatability. This article demonstrates the next level of this work by automating all steps of the optimized magnetic bead-based protocol from endoglycosidase digestion, through fluorophore labeling and cleanup with high-throughput sample processing in 96-well plate format, using an automated laboratory workstation. Capillary electrophoresis analysis of the fluorophore-labeled glycans was also optimized for rapid (<3 min) separation to accommodate the high-throughput processing of the automated sample preparation workflow. Ultrafast N-glycosylation analyses of several commercially relevant antibody therapeutics are also shown and compared to their biosimilar counterparts, addressing the biological significance of the differences. © 2015 Society for Laboratory Automation and Screening.
FoldMiner and LOCK 2: protein structure comparison and motif discovery on the web.
Shapiro, Jessica; Brutlag, Douglas
2004-07-01
The FoldMiner web server (http://foldminer.stanford.edu/) provides remote access to methods for protein structure alignment and unsupervised motif discovery. FoldMiner is unique among such algorithms in that it improves both the motif definition and the sensitivity of a structural similarity search by combining the search and motif discovery methods and using information from each process to enhance the other. In a typical run, a query structure is aligned to all structures in one of several databases of single domain targets in order to identify its structural neighbors and to discover a motif that is the basis for the similarity among the query and statistically significant targets. This process is fully automated, but options for manual refinement of the results are available as well. The server uses the Chime plugin and customized controls to allow for visualization of the motif and of structural superpositions. In addition, we provide an interface to the LOCK 2 algorithm for rapid alignments of a query structure to smaller numbers of user-specified targets.
USDA-ARS?s Scientific Manuscript database
The Food Intake Recording Software System, version 4 (FIRSSt4), is a web-based 24-h dietary recall (24 hdr) self-administered by children based on the Automated Self-Administered 24-h recall (ASA24) (a self-administered 24 hdr for adults). The food choices in FIRSSt4 are abbreviated to include only ...
Automating the Processing of Earth Observation Data
NASA Technical Reports Server (NTRS)
Golden, Keith; Pang, Wan-Lin; Nemani, Ramakrishna; Votava, Petr
2003-01-01
NASA's vision for Earth science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we are developing a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products.
Localization-based super-resolution imaging meets high-content screening.
Beghin, Anne; Kechkar, Adel; Butler, Corey; Levet, Florian; Cabillic, Marine; Rossier, Olivier; Giannone, Gregory; Galland, Rémi; Choquet, Daniel; Sibarita, Jean-Baptiste
2017-12-01
Single-molecule localization microscopy techniques have proven to be essential tools for quantitatively monitoring biological processes at unprecedented spatial resolution. However, these techniques are very low throughput and are not yet compatible with fully automated, multiparametric cellular assays. This shortcoming is primarily due to the huge amount of data generated during imaging and the lack of software for automation and dedicated data mining. We describe an automated quantitative single-molecule-based super-resolution methodology that operates in standard multiwell plates and uses analysis based on high-content screening and data-mining software. The workflow is compatible with fixed- and live-cell imaging and allows extraction of quantitative data like fluorophore photophysics, protein clustering or dynamic behavior of biomolecules. We demonstrate that the method is compatible with high-content screening using 3D dSTORM and DNA-PAINT based super-resolution microscopy as well as single-particle tracking.
He, Ji; Dai, Xinbin; Zhao, Xuechun
2007-02-09
BLAST searches are widely used for sequence alignment. The search results are commonly adopted for various functional and comparative genomics tasks such as annotating unknown sequences, investigating gene models and comparing two sequence sets. Advances in sequencing technologies pose challenges for high-throughput analysis of large-scale sequence data. A number of programs and hardware solutions exist for efficient BLAST searching, but there is a lack of generic software solutions for mining and personalized management of the results. Systematically reviewing the results and identifying information of interest remains tedious and time-consuming. Personal BLAST Navigator (PLAN) is a versatile web platform that helps users to carry out various personalized pre- and post-BLAST tasks, including: (1) query and target sequence database management, (2) automated high-throughput BLAST searching, (3) indexing and searching of results, (4) filtering results online, (5) managing results of personal interest in favorite categories, (6) automated sequence annotation (such as NCBI NR and ontology-based annotation). PLAN integrates, by default, the Decypher hardware-based BLAST solution provided by Active Motif Inc. with a greatly improved efficiency over conventional BLAST software. BLAST results are visualized by spreadsheets and graphs and are full-text searchable. BLAST results and sequence annotations can be exported, in part or in full, in various formats including Microsoft Excel and FASTA. Sequences and BLAST results are organized in projects, the data publication levels of which are controlled by the registered project owners. In addition, all analytical functions are provided to public users without registration. PLAN has proved a valuable addition to the community for automated high-throughput BLAST searches, and, more importantly, for knowledge discovery, management and sharing based on sequence alignment results. The PLAN web interface is platform-independent, easily configurable, extensible, and user-intuitive. PLAN is freely available to academic users at http://bioinfo.noble.org/plan/. The source code for local deployment is provided under a free license. Full support for system utilization, installation, configuration, and customization is provided to academic users.
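As an illustration of the kind of automated batch BLAST searching PLAN manages (task 2 above), here is a hedged sketch using Biopython's public NCBI interface. PLAN itself integrates the Decypher hardware back end, whose API is not described in the abstract, so this is a stand-in, not PLAN's actual mechanism.

```python
# Hedged sketch of automated batch BLAST searching using Biopython's
# public NCBI interface (requires biopython and network access).
from Bio.Blast import NCBIWWW, NCBIXML

def batch_blast(fasta_records, program="blastn", database="nt"):
    """Run one BLAST search per (name, sequence) pair and yield top hits."""
    for name, seq in fasta_records:
        handle = NCBIWWW.qblast(program, database, seq)
        record = NCBIXML.read(handle)
        top = record.alignments[0].title if record.alignments else "no hit"
        yield name, top

if __name__ == "__main__":
    queries = [("demo", "AGCTTAGCTAGCTACGGAGCTTATTACGGA")]
    for name, hit in batch_blast(queries):
        print(name, "->", hit)
```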
Web-Accessible Scientific Workflow System for Performance Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roelof Versteeg; Trevor Rowe
2006-03-01
We describe the design and implementation of a web accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition with server side data management and information visualization through flexible browser based data access tools. Component technologies include a rich browser-based client (using dynamic Javascript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third party applications which are invoked by the back-end using webservices. This environment allows for reproducible, transparent result generation by a diverse user base. It has been implemented for several monitoring systems with different degrees of complexity.
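A reduced sketch of the three-tier pattern described above, with a Python stand-in for the PHP back end: the browser client requests a monitoring series, the server applies a simple processing step, and returns JSON for browser-side visualization. The endpoint and field names are hypothetical.

```python
# Minimal stand-in for the described back end: serve a processed
# monitoring series as JSON. Series name and fields are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SERIES = {"well_3_temp": [11.2, 11.4, 11.9, 12.3, 12.1]}  # stand-in data store

class DataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        name = self.path.strip("/")
        data = SERIES.get(name)
        body = json.dumps({"series": name, "values": data,
                           "mean": sum(data) / len(data) if data else None})
        self.send_response(200 if data else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DataHandler).serve_forever()
```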
Low-Bandwidth and Non-Compute Intensive Remote Identification of Microbes from Raw Sequencing Reads
Gautier, Laurent; Lund, Ole
2013-01-01
Cheap DNA sequencing may soon become routine not only for human genomes but also for practically anything requiring the identification of living organisms from their DNA: tracking of infectious agents, control of food products, bioreactors, or environmental samples. We propose a novel general approach to the analysis of sequencing data where a reference genome does not have to be specified. Using a distributed architecture we are able to query a remote server for hints about what the reference might be, transferring a relatively small amount of data. Our system consists of a server with known reference DNA indexed, and a client with raw sequencing reads. The client sends a sample of unidentified reads, and in return receives a list of matching references. Sequences for the references can be retrieved and used for exhaustive computation on the reads, such as alignment. To demonstrate this approach we have implemented a web server, indexing tens of thousands of publicly available genomes and genomic regions from various organisms and returning lists of matching hits from query sequencing reads. We have also implemented two clients: one running in a web browser, and one as a python script. Both are able to handle a large number of sequencing reads, perform their task within seconds even from portable devices (the browser-based client running on a tablet), and consume an amount of bandwidth compatible with mobile broadband networks. Such client-server approaches could develop in the future, allowing a fully automated processing of sequencing data and routine instant quality check of sequencing runs from desktop sequencers. A web access is available at http://tapir.cbs.dtu.dk. The source code for a python command-line client, a server, and supplementary data are available at http://bit.ly/1aURxkc. PMID:24391826
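A sketch of the client side of this approach under stated assumptions: reservoir-sample reads from a FASTQ file and post them to an identification server. The endpoint path and payload format are hypothetical, as the abstract does not specify the wire protocol.

```python
# Sketch of a client: sample reads from FASTQ, query a remote server.
# The /query endpoint and JSON payload are hypothetical assumptions.
import random
import requests

def sample_reads(fastq_path, n=100, seed=42):
    """Reservoir-sample n read sequences from a FASTQ file."""
    random.seed(seed)
    sample = []
    with open(fastq_path) as fh:
        for i, line in enumerate(fh):
            if i % 4 == 1:                      # sequence lines in FASTQ
                k = i // 4                      # 0-based read index
                if len(sample) < n:
                    sample.append(line.strip())
                elif random.randint(0, k) < n:  # keep with prob n/(k+1)
                    sample[random.randint(0, n - 1)] = line.strip()
    return sample

def identify(reads, server="http://tapir.cbs.dtu.dk/query"):  # hypothetical path
    resp = requests.post(server, json={"reads": reads}, timeout=30)
    resp.raise_for_status()
    return resp.json()                          # list of candidate references
```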
Standardized Automated CO2/H2O Flux Systems for Individual Research Groups and Flux Networks
NASA Astrophysics Data System (ADS)
Burba, George; Begashaw, Israel; Fratini, Gerardo; Griessbaum, Frank; Kathilankal, James; Xu, Liukang; Franz, Daniela; Joseph, Everette; Larmanou, Eric; Miller, Scott; Papale, Dario; Sabbatini, Simone; Sachs, Torsten; Sakai, Ricardo; McDermitt, Dayle
2017-04-01
In recent years, spatial and temporal flux data coverage has improved significantly, and on multiple scales, from a single station to continental networks, due to standardization, automation, and management of data collection, and better handling of the extensive amounts of generated data. With more stations and networks, larger data flows from each station, and smaller operating budgets, modern tools are required to effectively and efficiently handle the entire process. Such tools are needed to maximize time dedicated to authoring publications and answering research questions, and to minimize time and expenses spent on data acquisition, processing, and quality control. Thus, these tools should produce standardized verifiable datasets and provide a way to cross-share the standardized data with external collaborators to leverage available funding and promote data analyses and publications. LI-COR gas analyzers are widely used in past and present flux networks such as AmeriFlux, ICOS, AsiaFlux, OzFlux, NEON, CarboEurope, and FluxNet-Canada. These analyzers have gone through several major improvements over the past 30 years. However, in 2016, a three-pronged development was completed to create an automated flux system which can accept multiple sonic anemometer and datalogger models, compute final and complete fluxes on-site, merge final fluxes with supporting weather, soil, and radiation data, monitor station outputs and send automated alerts to researchers, and allow secure sharing and cross-sharing of the station and data access. Two types of these research systems were developed: open-path (LI-7500RS) and enclosed-path (LI-7200RS). Key developments included: improvement of gas analyzer performance; standardization and automation of final flux calculations on-site, in real time; and seamless integration with the latest site management and data-sharing tools. In terms of the gas analyzer performance, the RS analyzers are based on established LI-7500/A and LI-7200 models, and the improvements focused on increased stability in the presence of contamination, refining temperature control and compensation, and providing more accurate fast gas concentration measurements. In terms of the flux calculations, improvements focused on automating the on-site flux calculations using EddyPro® software run by a weatherized fully digital microcomputer, SmartFlux2. In terms of site management and data sharing, the development focused on web-based software, FluxSuite, which allows real-time station monitoring and data access by multiple users. The presentation will describe details for the key developments and will include results from field tests of the RS gas analyzer models in comparison with older models and control reference instruments.
Automated Assessment of the Quality of Depression Websites
Tang, Thanh Tin; Hawking, David; Christensen, Helen
2005-01-01
Background Since health information on the World Wide Web is of variable quality, methods are needed to assist consumers to identify health websites containing evidence-based information. Manual assessment tools may assist consumers to evaluate the quality of sites. However, these tools are poorly validated and often impractical. There is a need to develop better consumer tools, and in particular to explore the potential of automated procedures for evaluating the quality of health information on the web. Objective This study (1) describes the development of an automated quality assessment procedure (AQA) designed to automatically rank depression websites according to their evidence-based quality; (2) evaluates the validity of the AQA relative to human rated evidence-based quality scores; and (3) compares the validity of Google PageRank and the AQA as indicators of evidence-based quality. Method The AQA was developed using a quality feedback technique and a set of training websites previously rated manually according to their concordance with statements in the Oxford University Centre for Evidence-Based Mental Health’s guidelines for treating depression. The validation phase involved 30 websites compiled from the DMOZ, Yahoo! and LookSmart Depression Directories by randomly selecting six sites from each of the Google PageRank bands of 0, 1-2, 3-4, 5-6 and 7-8. Evidence-based ratings from two independent raters (based on concordance with the Oxford guidelines) were then compared with scores derived from the automated AQA and Google algorithms. There was no overlap in the websites used in the training and validation phases of the study. Results The correlation between the AQA score and the evidence-based ratings was high and significant (r=0.85, P<.001). Addition of a quadratic component improved the fit, the combined linear and quadratic model explaining 82 percent of the variance. The correlation between Google PageRank and the evidence-based score was lower than that for the AQA. When sites with zero PageRanks were included the association was weak and non-significant (r=0.23, P=.22). When sites with zero PageRanks were excluded, the correlation was moderate (r=.61, P=.002). Conclusions Depression websites of different evidence-based quality can be differentiated using an automated system. If replicable, generalizable to other health conditions and deployed in a consumer-friendly form, the automated procedure described here could represent an important advance for consumers of Internet medical information. PMID:16403723
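The validation arithmetic reported above can be reproduced on any paired scores; the following is a small sketch with made-up data, computing the Pearson correlation and the variance explained when a quadratic term is added.

```python
# Sketch of the reported validation arithmetic: Pearson r between
# automated scores and expert ratings, plus R^2 of a linear+quadratic
# fit. The ten score pairs below are synthetic, for illustration only.
import numpy as np
from scipy.stats import pearsonr

aqa = np.array([12, 25, 31, 44, 52, 60, 71, 80, 88, 95], float)
expert = np.array([5, 11, 13, 22, 24, 33, 38, 41, 47, 50], float)

r, p = pearsonr(aqa, expert)
coeffs = np.polyfit(aqa, expert, 2)            # linear + quadratic model
pred = np.polyval(coeffs, aqa)
ss_res = np.sum((expert - pred) ** 2)
ss_tot = np.sum((expert - expert.mean()) ** 2)
print(f"r = {r:.2f} (P = {p:.3g}), quadratic R^2 = {1 - ss_res/ss_tot:.2f}")
```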
AutoDrug: fully automated macromolecular crystallography workflows for fragment-based drug discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, Yingssu; McPhillips, Scott E.
New software has been developed for automating the experimental and data-processing stages of fragment-based drug discovery at a macromolecular crystallography beamline. A new workflow-automation framework orchestrates beamline-control and data-analysis software while organizing results from multiple samples. AutoDrug is software based upon the scientific workflow paradigm that integrates the Stanford Synchrotron Radiation Lightsource macromolecular crystallography beamlines and third-party processing software to automate the crystallography steps of the fragment-based drug-discovery process. AutoDrug screens a cassette of fragment-soaked crystals, selects crystals for data collection based on screening results and user-specified criteria and determines optimal data-collection strategies. It then collects and processes diffraction data, performs molecular replacement using provided models and detects electron density that is likely to arise from bound fragments. All processes are fully automated, i.e. are performed without user interaction or supervision. Samples can be screened in groups corresponding to particular proteins, crystal forms and/or soaking conditions. A single AutoDrug run is only limited by the capacity of the sample-storage dewar at the beamline: currently 288 samples. AutoDrug was developed in conjunction with RestFlow, a new scientific workflow-automation framework. RestFlow simplifies the design of AutoDrug by managing the flow of data and the organization of results and by orchestrating the execution of computational pipeline steps. It also simplifies the execution and interaction of third-party programs and the beamline-control system. Modeling AutoDrug as a scientific workflow enables multiple variants that meet the requirements of different user groups to be developed and supported. A workflow tailored to mimic the crystallography stages comprising the drug-discovery pipeline of CoCrystal Discovery Inc. has been deployed and successfully demonstrated. This workflow was run once on the same 96 samples that the group had examined manually and the workflow cycled successfully through all of the samples, collected data from the same samples that were selected manually and located the same peaks of unmodeled density in the resulting difference Fourier maps.
Web-Based Computational Chemistry Education with CHARMMing I: Lessons and Tutorial
Miller, Benjamin T.; Singh, Rishi P.; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S.; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R.; Woodcock, H. Lee
2014-01-01
This article describes the development, implementation, and use of web-based “lessons” to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that “point and click” simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance. PMID:25057988
Application of a minicomputer-based system in measuring intraocular fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronzino, J.D.; D'Amato, D.P.; O'Rourke, J.
A complete, computerized system has been developed to automate and display radionuclide clearance studies in an ophthalmology clinical laboratory. The system is based on a PDP-8E computer with a 16-k core memory and includes a dual-drive Decassette system and an interactive display terminal. The software controls the acquisition of data from an NIM scaler, times the procedures, and analyzes and simultaneously displays logarithmically converted data on a fully annotated graph. Animal studies and clinical experiments are presented to illustrate the nature of these displays and the results obtained using this automated eye physiometer.
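The log-converted clearance analysis such a system automates reduces, at its core, to fitting a single-exponential washout; below is a sketch on synthetic count data.

```python
# Sketch: fit a single-exponential radionuclide washout by linear
# regression on log counts. The count-rate data here are synthetic.
import numpy as np

t = np.arange(0, 20, 2.0)                      # minutes
rng = np.random.default_rng(0)
counts = 5000 * np.exp(-0.12 * t) + rng.normal(0, 20, t.size)

slope, intercept = np.polyfit(t, np.log(counts), 1)
half_time = np.log(2) / -slope
print(f"clearance rate = {-slope:.3f} /min, half-time = {half_time:.1f} min")
```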
A Program Certification Assistant Based on Fully Automated Theorem Provers
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2005-01-01
We describe a certification assistant to support formal safety proofs for programs. It is based on a graphical user interface that hides the low-level details of first-order automated theorem provers while supporting limited interactivity: it allows users to customize and control the proof process on a high level, manages the auxiliary artifacts produced during this process, and provides traceability between the proof obligations and the relevant parts of the program. The certification assistant is part of a larger program synthesis system and is intended to support the deployment of automatically generated code in safety-critical applications.
Developing Access Control Model of Web OLAP over Trusted and Collaborative Data Warehouses
NASA Astrophysics Data System (ADS)
Fugkeaw, Somchart; Mitrpanont, Jarernsri L.; Manpanpanich, Piyawit; Juntapremjitt, Sekpon
This paper proposes the design and development of a Role-based Access Control (RBAC) model for the Single Sign-On (SSO) Web-OLAP query spanning over multiple data warehouses (DWs). The model is based on PKI Authentication and Privilege Management Infrastructure (PMI); it presents a binding model of RBAC authorization based on dimension privilege specified in attribute certificate (AC) and user identification. Particularly, the way of attribute mapping between DW user authentication and privilege of dimensional access is illustrated. In our approach, we apply the multi-agent system to automate flexible and effective management of user authentication, role delegation as well as system accountability. Finally, the paper culminates in the prototype system A-COLD (Access Control of web-OLAP over multiple DWs) that incorporates the OLAP features and authentication and authorization enforcement in the multi-user and multi-data warehouse environment.
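A minimal sketch of an RBAC-style dimension-privilege check in the spirit of this model; the role names and privileges are invented, and the real system derives them from PKI attribute certificates rather than a hard-coded table.

```python
# Sketch: role -> set of (warehouse, dimension) privileges, then a
# check at query time. Roles and privileges below are invented.
ROLE_PRIVILEGES = {
    "regional_analyst": {("sales_dw", "region"), ("sales_dw", "time")},
    "auditor": {("sales_dw", "region"), ("finance_dw", "account")},
}

def can_query(role, warehouse, dimension):
    """Return True if the role holds the privilege for this dimension."""
    return (warehouse, dimension) in ROLE_PRIVILEGES.get(role, set())

assert can_query("auditor", "finance_dw", "account")
assert not can_query("regional_analyst", "finance_dw", "account")
```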
Silicon web process development. [for low cost solar cells
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Hopkins, R. H.; Seidensticker, R. G.; Mchugh, J. P.; Hill, F. E.; Heimlich, M. E.; Driggers, J. M.
1979-01-01
Silicon dendritic web, a single crystal ribbon shaped during growth by crystallographic forces and surface tension (rather than dies), is a highly promising base material for efficient low cost solar cells. The form of the product, smooth, flexible strips 100 to 200 microns thick, conserves expensive silicon and facilitates automation of crystal growth and the subsequent manufacturing of solar cells. These characteristics, coupled with the highest demonstrated ribbon solar cell efficiency (15.5%), make silicon web a leading candidate to achieve, or better, the 1986 Low Cost Solar Array (LSA) Project cost objective of 50 cents per peak watt of photovoltaic output power. The main objectives of the Web Program, technology development to significantly increase web output rate and demonstration of the feasibility of simultaneous melt replenishment and growth, have largely been accomplished. Recently, web output rates of 23.6 sq cm/min, nearly three times the 8 sq cm/min maximum rate of a year ago, were achieved. Webs 4 cm wide or greater were grown on a number of occasions.
The value of the Semantic Web in the laboratory.
Frey, Jeremy G
2009-06-01
The Semantic Web is beginning to have an impact on the wider chemical and physical sciences, beyond bioinformatics, an earlier adopter. While useful in large-scale, data-driven science with automated processing, these technologies can also help integrate the work of smaller-scale laboratories producing diverse data. The semantics aid discovery and reliable re-use of data, provide improved provenance, and facilitate automated processing through increased resilience to changes in presentation and reduced ambiguity. The Semantic Web, its tools, and its collections are not yet competitive with well-established solutions to current problems. It is in the reduced cost of instituting solutions to new problems that the versatility of Semantic Web-enabled data and resources will make its mark once the more general-purpose tools are more widely available.
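As a concrete example of the kind of semantic annotation discussed, here is a sketch describing a laboratory measurement as RDF triples with rdflib; the vocabulary terms are invented for illustration.

```python
# Sketch: a lab measurement as RDF triples so provenance travels with
# the data. The LAB vocabulary below is hypothetical. Requires rdflib.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

LAB = Namespace("http://example.org/lab#")     # hypothetical vocabulary
g = Graph()
obs = URIRef("http://example.org/run/42")
g.add((obs, RDF.type, LAB.Measurement))
g.add((obs, LAB.analyte, Literal("phenytoin")))
g.add((obs, LAB.concentration, Literal("12.3", datatype=XSD.decimal)))
g.add((obs, LAB.instrument, Literal("clinical analyzer")))
print(g.serialize(format="turtle"))
```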
NASA Astrophysics Data System (ADS)
Ekenes, K.
2017-12-01
This presentation will outline the process of creating a web application for exploring large amounts of scientific geospatial data using modern automated cartographic techniques. Traditional cartographic methods, including data classification, may inadvertently hide geospatial and statistical patterns in the underlying data. This presentation demonstrates how to use smart web APIs that analyze the data as it loads and suggest the most appropriate visualizations based on the statistics of the data. Since there are only a few ways to visualize any given dataset well, and since many users never go beyond default values, it is imperative to provide smart default color schemes tailored to the dataset rather than static defaults. Multiple functions for automating visualizations are available in the smart APIs, along with UI elements allowing users to create more than one visualization for a dataset, since there isn't a single best way to visualize a given dataset. Because bivariate and multivariate visualizations are particularly difficult to create effectively, this automated approach takes the guesswork out of the process and provides a number of ways to generate multivariate visualizations for the same variables, allowing the user to choose which visualization is most appropriate for their presentation. The methods used in these APIs and the renderers generated by them are not available elsewhere. The presentation will show how statistics can be used as the basis for automating default visualizations of data along continuous ramps, creating more refined visualizations while revealing the spread and outliers of the data. Adding interactive components that instantaneously alter visualizations allows users to unearth spatial patterns previously unknown among one or more variables. These applications may focus on a single dataset that is frequently updated, or be configurable for a variety of datasets from multiple sources.
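A sketch of the statistics-driven defaults described above: deriving a continuous color-ramp domain from the data's mean and standard deviation rather than fixed class breaks. The one-sigma stops are an illustrative assumption, not the cited APIs' actual rule.

```python
# Sketch: data-driven color-ramp stops from summary statistics, so
# spread and outliers stay visible. One-sigma stops are an assumption.
import statistics

def smart_ramp(values, sigmas=1.0):
    """Return (low, mid, high) stops for a continuous color ramp."""
    mu = statistics.fmean(values)
    sd = statistics.stdev(values)
    return mu - sigmas * sd, mu, mu + sigmas * sd

population_density = [3, 8, 15, 22, 40, 95, 120, 310, 2200]
print(smart_ramp(population_density))
```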
SU-E-T-11: A Cloud Based CT and LINAC QA Data Management System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiersma, R; Grelewicz, Z; Belcher, A
Purpose: The current status quo of QA data management consists of a mixture of paper-based forms and spreadsheets for recording the results of daily, monthly, and yearly QA tests for both CT scanners and LINACs. Unfortunately, such systems suffer from a host of problems: (1) records can be easily lost or destroyed, (2) data is difficult to access; one must physically hunt down records, (3) poor or no means of historical data analysis, and (4) no remote monitoring of machine performance off-site. To address these issues, a cloud based QA data management system was developed and implemented. Methods: A responsive tablet interface that optimizes clinic workflow with an easy-to-navigate interface accessible from any web browser was implemented in HTML/javascript/CSS to allow user mobility when entering QA data. Automated image QA was performed using a phantom QA kit developed in Python that is applicable to any phantom and is currently being used with the Gammex ACR, Las Vegas, Leeds, and Catphan phantoms for performing automated CT, MV, kV, and CBCT QAs, respectively. A Python based resource management system was used to distribute and manage intensive CPU tasks such as QA phantom image analysis or LaTeX-to-PDF QA report generation to independent process threads or different servers such that website performance is not affected. Results: To date the cloud QA system has performed approximately 185 QA procedures. Approximately 200 QA parameters are being actively tracked by the system on a monthly basis. Electronic access to historical QA parameter information was successful in proactively identifying a Linac CBCT scanner’s performance degradation. Conclusion: A fully comprehensive cloud based QA data management system was successfully implemented for the first time. Potential machine performance issues were proactively identified that would have been otherwise missed by a paper or spreadsheet based QA system.
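A hedged sketch of the kind of proactive trend monitoring that flagged the CBCT degradation: alert when the recent mean of a tracked QA parameter drifts beyond a tolerance of its baseline. The parameter name, values, and thresholds are hypothetical.

```python
# Sketch: flag drift of a tracked QA parameter from baseline.
# Parameter name, history values, and 5% tolerance are hypothetical.
def drift_alert(history, baseline, tolerance_pct=5.0, window=5):
    """True if the mean of the last `window` values drifts more than
    tolerance_pct from baseline."""
    recent = history[-window:]
    mean = sum(recent) / len(recent)
    return abs(mean - baseline) / baseline * 100.0 > tolerance_pct

cbct_uniformity = [99.8, 99.5, 99.0, 97.0, 95.0, 93.5, 92.5, 91.5]
print(drift_alert(cbct_uniformity, baseline=100.0))  # True: drift > 5%
```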
Harati, Vida; Khayati, Rasoul; Farzan, Abdolreza
2011-07-01
Uncontrollable and unlimited cell growth leads to tumorigenesis in the brain. If brain tumors are not diagnosed early and treated properly, they can cause permanent brain damage or even death. As in all methods of treatment, information about tumor position and size is important for successful treatment; hence, an accurate, fully automated method for providing this information to physicians is needed. A fully automatic and accurate method for tumor region detection and segmentation in brain magnetic resonance (MR) images is suggested. The presented approach is an improved, scale-based fuzzy connectedness (FC) algorithm in which the seed point is selected automatically. The algorithm is independent of the tumor type in terms of pixel intensity. Tumor segmentation evaluation results based on similarity criteria (similarity index (SI), overlap fraction (OF), and extra fraction (EF) of 92.89%, 91.75%, and 3.95%, respectively) indicate higher performance of the proposed approach compared to conventional methods, especially in MR images of tumor regions with low contrast. Thus, the suggested method is useful for automatic estimation of tumor size and position in brain tissues, which provides more accurate planning of the required surgery, chemotherapy, and radiotherapy procedures.
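The three similarity criteria reported above are simple set overlaps; here is a sketch computing SI, OF, and EF from binary masks, with the manual tracing as the reference. The toy masks are synthetic.

```python
# Sketch: similarity index (SI), overlap fraction (OF), and extra
# fraction (EF) from binary masks; the reference is the manual tracing.
import numpy as np

def segmentation_scores(auto, ref):
    auto, ref = auto.astype(bool), ref.astype(bool)
    tp = np.logical_and(auto, ref).sum()       # correctly segmented pixels
    fp = np.logical_and(auto, ~ref).sum()      # extra pixels
    si = 2.0 * tp / (auto.sum() + ref.sum())   # Dice-style similarity
    of = tp / ref.sum()                        # overlap fraction
    ef = fp / ref.sum()                        # extra fraction
    return si, of, ef

ref = np.zeros((8, 8), int); ref[2:6, 2:6] = 1     # synthetic reference
auto = np.zeros((8, 8), int); auto[2:6, 3:7] = 1   # synthetic result
print(["%.2f" % v for v in segmentation_scores(auto, ref)])
```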
Kovacevic, Sanja; Rafii, Michael S.; Brewer, James B.
2008-01-01
Medial temporal lobe (MTL) atrophy is associated with increased risk for conversion to Alzheimer's disease (AD), but manual tracing techniques and even semi-automated techniques for volumetric assessment are not practical in the clinical setting. In addition, most studies that examined MTL atrophy in AD have focused only on the hippocampus. It is unknown the extent to which volumes of amygdala and temporal horn of the lateral ventricle predict subsequent clinical decline. This study examined whether measures of hippocampus, amygdala, and temporal horn volume predict clinical decline over the following 6-month period in patients with mild cognitive impairment (MCI). Fully-automated volume measurements were performed in 269 MCI patients. Baseline volumes of the hippocampus, amygdala, and temporal horn were evaluated as predictors of change in Mini-mental State Exam (MMSE) and Clinical Dementia Rating Sum of Boxes (CDR SB) over a 6-month interval. Fully-automated measurements of baseline hippocampus and amygdala volumes correlated with baseline delayed recall scores. Patients with smaller baseline volumes of the hippocampus and amygdala or larger baseline volumes of the temporal horn had more rapid subsequent clinical decline on MMSE and CDR SB. Fully-automated and rapid measurement of segmental MTL volumes may help clinicians predict clinical decline in MCI patients. PMID:19474571
Automated, per pixel Cloud Detection from High-Resolution VNIR Data
NASA Technical Reports Server (NTRS)
Varlyguin, Dmitry L.
2007-01-01
CASA is a fully automated software program for the per-pixel detection of clouds and cloud shadows from medium- (e.g., Landsat, SPOT, AWiFS) and high- (e.g., IKONOS, QuickBird, OrbView) resolution imagery without the use of thermal data. CASA is an object-based feature extraction program which utilizes a complex combination of spectral, spatial, and contextual information available in the imagery and the hierarchical self-learning logic for accurate detection of clouds and their shadows.
Automated sensor networks to advance ocean science
NASA Astrophysics Data System (ADS)
Schofield, O.; Orcutt, J. A.; Arrott, M.; Vernon, F. L.; Peach, C. L.; Meisinger, M.; Krueger, I.; Kleinert, J.; Chao, Y.; Chien, S.; Thompson, D. R.; Chave, A. D.; Balasuriya, A.
2010-12-01
The National Science Foundation has funded the Ocean Observatories Initiative (OOI), which over the next five years will deploy infrastructure to expand scientists' ability to remotely study the ocean. The deployed infrastructure will be linked by a robust cyberinfrastructure (CI) that will integrate marine observatories into a coherent system-of-systems. OOI is committed to engaging the ocean sciences community during the construction phase. For the CI, this is being enabled by using a “spiral design strategy” allowing for input throughout the construction phase. In Fall 2009, the OOI CI development team used an existing ocean observing network in the Mid-Atlantic Bight (MAB) to test OOI CI software. The objective of this CI test was to aggregate data from ships, autonomous underwater vehicles (AUVs), shore-based radars, and satellites and make it available to five different data-assimilating ocean forecast models. Scientists used these multi-model forecasts to automate future glider missions in order to demonstrate the feasibility of two-way interactivity between the sensor web and predictive models. The CI software coordinated and prioritized the shared resources that allowed for the semi-automated reconfiguration of asset tasking, and thus enabled autonomous execution of observation plans for the fixed and mobile observation platforms. Efforts were coordinated through a web portal that provided an access point for the observational data and model forecasts. Researchers could use the CI software in tandem with the web data portal to assess the performance of individual numerical model results, or multi-model ensembles, through real-time comparisons with satellite, shore-based radar, and in situ robotic measurements. The resulting sensor net will enable a new means to explore and study the world’s oceans by providing scientists a responsive observing network that can be accessed via any wireless connection.
De Cocker, Katrien; De Bourdeaudhuij, Ilse; Cardon, Greet; Vandelanotte, Corneel
2016-05-31
Effective interventions to influence workplace sitting are needed, as office-based workers demonstrate high levels of continued sitting, and sitting too much is associated with adverse health effects. Therefore, we developed a theory-driven, Web-based, interactive, computer-tailored intervention aimed at reducing and interrupting sitting at work. The objective of our study was to investigate the effects of this intervention on objectively measured sitting time, standing time, and breaks from sitting, as well as self-reported context-specific sitting among Flemish employees in a field-based approach. Employees (n=213) participated in a 3-group randomized controlled trial that assessed outcomes at baseline, 1-month follow-up, and 3-month follow-up through self-reports. A subsample (n=122) were willing to wear an activity monitor (activPAL) from Monday to Friday. The tailored group received an automated Web-based, computer-tailored intervention including personalized feedback and tips on how to reduce or interrupt workplace sitting. The generic group received an automated Web-based generic advice with tips. The control group was a wait-list control condition, initially receiving no intervention. Intervention effects were tested with repeated-measures multivariate analysis of variance. The tailored intervention was successful in decreasing self-reported total workday sitting (time × group: P<.001), sitting at work (time × group: P<.001), and leisure time sitting (time × group: P=.03), and in increasing objectively measured breaks at work (time × group: P=.07); this was not the case in the other conditions. The changes in self-reported total nonworkday sitting, sitting during transport, television viewing, and personal computer use, objectively measured total sitting time, and sitting and standing time at work did not differ between conditions. Our results point out the significance of computer tailoring for sedentary behavior and its potential use in public health promotion, as the effects of the tailored condition were superior to the generic and control conditions. Clinicaltrials.gov NCT02672215; http://clinicaltrials.gov/ct2/show/NCT02672215 (Archived by WebCite at http://www.webcitation.org/6glPFBLWv).
Hegarty, Kelsey; Tarzia, Laura; Murray, Elizabeth; Valpied, Jodie; Humphreys, Cathy; Taft, Angela; Gold, Lisa; Glass, Nancy
2015-08-01
Domestic violence is a serious problem affecting the health and wellbeing of women globally. Interventions in health care settings have primarily focused on screening and referral; however, women often may not disclose abuse to health practitioners. The internet offers a confidential space in which women can assess the health of their relationships and make a plan for safety and wellbeing for themselves and their children. This randomised controlled trial is testing the effectiveness of a web-based healthy relationship tool and safety decision aid (I-DECIDE). Based broadly on the IRIS trial in the United States, it has been adapted for the Australian context where it is conducted entirely online and uses the Psychosocial Readiness Model as the basis for the intervention. In this two arm, pragmatic randomised controlled trial, women who have experienced abuse or fear of a partner in the previous 6 months will be computer randomised to receive either the I-DECIDE website or a comparator website (basic relationship and safety advice). The intervention includes self-directed reflection exercises on their relationship, danger level, priority setting, and results in an individualised, tailored action plan. Primary self-reported outcomes are: self-efficacy (General Self-Efficacy Scale) immediately after completion, 6 and 12 months post-baseline; and depressive symptoms (Centre for Epidemiologic Studies Depression Scale, Revised, 6 and 12 months post-baseline). Secondary outcomes include mean number of helpful actions for safety and wellbeing, mean level of fear of partner and cost-effectiveness. This fully-automated trial will evaluate a web-based self-information, self-reflection and self-management tool for domestic violence. We hypothesise that the improvement in self-efficacy and mental health will be mediated by increased perceived support and awareness encouraging positive change. If shown to be effective, I-DECIDE could be easily incorporated into the community sector and health care settings, providing an alternative to formal services for women not ready or able to acknowledge abuse and access specialised services. Trial registered on 15 December 2014 with the Australian New Zealand Clinical Trials Registry ACTRN12614001306606.
Web Audio/Video Streaming Tool
NASA Technical Reports Server (NTRS)
Guruvadoo, Eranna K.
2003-01-01
In order to promote a NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project presents a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets reside in a separate repository. The prototype tool is designed using ColdFusion 5.0.
Automation of 3D cell culture using chemically defined hydrogels.
Rimann, Markus; Angres, Brigitte; Patocchi-Tenzer, Isabel; Braum, Susanne; Graf-Hausner, Ursula
2014-04-01
Drug development relies on high-throughput screening involving cell-based assays. Most of the assays are still based on cells grown in monolayer rather than in three-dimensional (3D) formats, although cells behave more in vivo-like in 3D. To exemplify the adoption of 3D techniques in drug development, this project investigated the automation of a hydrogel-based 3D cell culture system using a liquid-handling robot. The hydrogel technology used offers high flexibility of gel design due to a modular composition of a polymer network and bioactive components. The cell inert degradation of the gel at the end of the culture period guaranteed the harmless isolation of live cells for further downstream processing. Human colon carcinoma cells HCT-116 were encapsulated and grown in these dextran-based hydrogels, thereby forming 3D multicellular spheroids. Viability and DNA content of the cells were shown to be similar in automated and manually produced hydrogels. Furthermore, cell treatment with toxic Taxol concentrations (100 nM) had the same effect on HCT-116 cell viability in manually and automated hydrogel preparations. Finally, a fully automated dose-response curve with the reference compound Taxol showed the potential of this hydrogel-based 3D cell culture system in advanced drug development.
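The automated dose-response measurement described can be summarized by a four-parameter logistic fit; here is a sketch on synthetic viability data, assuming SciPy is available. The Taxol concentrations and viability values are made up.

```python
# Sketch: four-parameter logistic (4PL) dose-response fit.
# Dose and viability values below are synthetic illustrations.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

dose_nM = np.array([0.1, 0.3, 1, 3, 10, 30, 100], float)
viability = np.array([0.98, 0.95, 0.88, 0.70, 0.45, 0.25, 0.12])

params, _ = curve_fit(four_pl, dose_nM, viability,
                      p0=[0.1, 1.0, 5.0, 1.0], maxfev=10000)
print(f"EC50 ~ {params[2]:.1f} nM")
```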
21 CFR 864.5620 - Automated hemoglobin system.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated hemoglobin system. 864.5620 Section 864....5620 Automated hemoglobin system. (a) Identification. An automated hemoglobin system is a fully... hemoglobin content of human blood. (b) Classification. Class II (performance standards). [45 FR 60601, Sept...
21 CFR 864.5620 - Automated hemoglobin system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated hemoglobin system. 864.5620 Section 864....5620 Automated hemoglobin system. (a) Identification. An automated hemoglobin system is a fully... hemoglobin content of human blood. (b) Classification. Class II (performance standards). [45 FR 60601, Sept...
Harvey, Matthew J; Mason, Nicholas J; McLean, Andrew; Rzepa, Henry S
2015-01-01
We describe three different procedures based on metadata standards for enabling automated retrieval of scientific data from digital repositories utilising the persistent identifier of the dataset with optional specification of the attributes of the data document such as filename or media type. The procedures are demonstrated using the JSmol molecular visualizer as a component of a web page and Avogadro as a stand-alone modelling program. We compare our methods for automated retrieval of data from a standards-compliant data repository with those currently in operation for a selection of existing molecular databases and repositories. Our methods illustrate the importance of adopting a standards-based approach of using metadata declarations to increase access to and discoverability of repository-based data.
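One established pattern of identifier-based automated retrieval in the spirit of this paper is HTTP content negotiation on a DOI, asking the resolver for machine-readable metadata; here is a sketch with a placeholder identifier.

```python
# Sketch: resolve a DOI with content negotiation to get CSL JSON
# metadata. The DOI below is a placeholder, not a real dataset.
import requests

def fetch_metadata(doi, media_type="application/vnd.citationstyles.csl+json"):
    url = f"https://doi.org/{doi}"
    resp = requests.get(url, headers={"Accept": media_type}, timeout=30)
    resp.raise_for_status()
    return resp.json()

meta = fetch_metadata("10.1000/example")       # placeholder DOI
print(meta.get("title"))
```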
Development of processes for the production of low cost silicon dendritic web for solar cells
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Skutch, M. E.; Driggers, J. M.; Hill, F. E.
1980-01-01
High area output rates and continuous, automated growth are two key technical requirements for the growth of low-cost silicon ribbons for solar cells. By means of computer-aided furnace design, silicon dendritic web output rates as high as 27 sq cm/min have been achieved, a value in excess of that projected to meet a $0.50 per peak watt solar array manufacturing cost. The feasibility of simultaneous web growth while the melt is replenished with pelletized silicon has also been demonstrated. This step is an important precursor to the development of an automated growth system. Solar cells made on the replenished material were just as efficient as devices fabricated on typical webs grown without replenishment. Moreover, web cells made on a less-refined, pelletized polycrystalline silicon synthesized by the Battelle process yielded efficiencies up to 13% (AM1).
Comparison of a web-based versus traditional diet recall among children
USDA-ARS?s Scientific Manuscript database
Self-administered instruments offer a low-cost diet assessment method for use in adult and pediatric populations. This study tested whether 8- to 13-year-old children could complete an early version of the Automated Self Administered 24 (ASA24) hour dietary recall and how this compared to an intervi...
A New Internet Tool for Automatic Evaluation in Control Systems and Programming
ERIC Educational Resources Information Center
Munoz de la Pena, D.; Gomez-Estern, F.; Dormido, S.
2012-01-01
In this paper we present a web-based innovative education tool designed for automating the collection, evaluation and error detection in practical exercises assigned to computer programming and control engineering students. By using a student/instructor code-fusion architecture, the conceptual limits of multiple-choice tests are overcome by far.…
Automating individualized coaching and authentic role-play practice for brief intervention training.
Hayes-Roth, B; Saker, R; Amano, K
2010-01-01
Brief intervention helps to reduce alcohol abuse, but there is a need for accessible, cost-effective training of clinicians. This study evaluated STAR Workshop, a web-based training system that automates efficacious techniques for individualized coaching and authentic role-play practice. We compared STAR Workshop to a web-based, self-guided e-book and a no-treatment control, for training the Engage for Change (E4C) brief intervention protocol. Subjects were medical and nursing students. Brief written skill probes tested subjects' performance of individual protocol steps, in different clinical scenarios, at three test times: pre-training, post-training, and post-delay (two weeks). Subjects also did live phone interviews with a standardized patient, post-delay. STAR subjects performed significantly better than both other groups. They showed significantly greater improvement from pre-training probes to post-training and post-delay probes. They scored significantly higher on post-delay phone interviews. STAR Workshop appears to be an accessible, cost-effective approach for training students to use the E4C protocol for brief intervention in alcohol abuse. It may also be useful for training other clinical interviewing protocols.
How much can a single webcam tell to the operation of a water system?
NASA Astrophysics Data System (ADS)
Giuliani, Matteo; Castelletti, Andrea; Fedorov, Roman; Fraternali, Piero
2017-04-01
Recent advances in environmental monitoring are making a wide range of hydro-meteorological data available with a great potential to enhance understanding, modelling and management of environmental processes. Despite this progress, continuous monitoring of highly spatiotemporal heterogeneous processes is not well established yet, especially in inaccessible sites. In this context, the unprecedented availability of user-generated data on the web might open new opportunities for enhancing real-time monitoring and modeling of environmental systems based on data that are public, low-cost, and spatiotemporally dense. In this work, we focus on snow and contribute a novel crowdsourcing procedure for extracting snow-related information from public web images, either produced by users or generated by touristic webcams. A fully automated process fetches mountain images from multiple sources, identifies the peaks present therein, and estimates virtual snow indexes representing a proxy of the snow-covered area. The operational value of the obtained virtual snow indexes is then assessed for a real-world water-management problem, where we use these indexes for informing the daily control of a regulated lake supplying water for multiple purposes. Numerical results show that such information is effective in extending the anticipation capacity of the lake operations, ultimately improving the system performance. Our procedure has the potential for complementing traditional snow-related information, minimizing costs and efforts for obtaining the virtual snow indexes and, at the same time, maximizing the portability of the procedure to several locations where such public images are available.
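A heavily reduced sketch of a virtual snow index: the fraction of bright, low-saturation pixels in a webcam frame. The real pipeline also detects peaks and masks out sky; the thresholds and filename here are illustrative assumptions. Requires Pillow and NumPy.

```python
# Sketch: proxy for snow-covered area as the fraction of bright,
# nearly gray pixels. Thresholds and filename are assumptions.
import numpy as np
from PIL import Image

def virtual_snow_index(path, v_min=200, s_max=40):
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=np.int32)
    s, v = hsv[..., 1], hsv[..., 2]
    snow = (v >= v_min) & (s <= s_max)         # bright and low saturation
    return snow.mean()

print(f"snow index = {virtual_snow_index('mountain_cam.jpg'):.2f}")
```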
Viangteeravat, Teeradache; Anyanwu, Matthew N; Ra Nagisetty, Venkateswara; Kuscu, Emin
2011-07-15
Massive datasets comprising high-resolution images, generated in neuro-imaging studies and in clinical imaging research, are increasingly challenging our ability to analyze, share, and filter such images in clinical and basic translational research. Pivot collection exploratory analysis lets each user interact fully with these massive visual datasets, providing the sorting flexibility and speed needed to fluidly access, explore, and analyze large sets of high-resolution images and their associated meta information, such as neuro-imaging databases from the Allen Brain Atlas. It supports clustering, filtering, data sharing, and classification of the visual data into deep-zoom levels and meta-information categories to detect hidden patterns within the dataset. We deployed prototype Pivot collections using Linux CentOS running the Apache web server. We also tested the prototype Pivot collections on other operating systems such as Windows (the most common variants) and UNIX. The approach yields very good results when compared with other approaches used for the generation, creation, and clustering of massive image collections, such as the coronal and horizontal sections of the mouse brain from the Allen Brain Atlas. Pivot visual analytics was used to analyze a prototype dataset of Dab2 co-expressed genes from the Allen Brain Atlas. The metadata, along with high-resolution images, were automatically extracted using the Allen Brain Atlas API and then used to identify hidden information based on the categories and conditions applied through options generated from the automated collection. A metadata category such as chromosome, as well as data for individual cases such as sex, age, and plane attributes of a particular gene, is used to filter, sort, and determine whether other genes share characteristics with Dab2. Online access to the mouse brain Pivot collection is available at http://edtech-dev.uthsc.edu/CTSI/teeDev1/unittest/PaPa/collection.html (user name: tviangte and password: demome). Our proposed algorithm automates the creation of large image Pivot collections; this will enable investigators of clinical research projects to easily and quickly analyze image collections through a perspective that is useful for making critical decisions about the image patterns discovered.
A web access script language to support clinical application development.
O'Kane, K C; McColligan, E E
1998-02-01
This paper describes the development of a script language to support the implementation of decentralized, clinical information applications on the World Wide Web (Web). The goal of this work is to facilitate construction of low-overhead, fully functional clinical information systems that can be accessed anywhere by low-cost Web browsers to search, retrieve and analyze stored patient data. The Web provides a model of network access to databases on a global scale. Although it was originally conceived as a means to exchange scientific documents, Web browsers and servers currently support access to a wide variety of audio, video, graphical and text-based data for a rapidly growing community. Access to these services is via inexpensive client software browsers that connect to servers by means of the open architecture of the Internet. In this paper, the design and implementation of a script language that supports the development of low-cost, Web-based, distributed clinical information systems for both Internet and intranet use is presented. The language is based on the Mumps language and, consequently, supports many legacy applications with few modifications. Several enhancements, however, have been made to support modern programming practices and the Web interface. The interpreter for the language also supports standalone program execution on Unix, MS-Windows, OS/2 and other operating systems.
An automated and integrated framework for dust storm detection based on ogc web processing services
NASA Astrophysics Data System (ADS)
Xiao, F.; Shea, G. Y. K.; Wong, M. S.; Campbell, J.
2014-11-01
Dust storms are known to have adverse effects on public health. Atmospheric dust loading is also one of the major uncertainties in global climate modelling, as it has a significant impact on the radiation budget and atmospheric stability. The complexity of building scientific dust storm models is compounded by advances in scientific computation, ongoing computing platform development, and the growth of heterogeneous Earth Observation (EO) networks. It is a challenging task to develop an integrated and automated scheme for dust storm detection that combines geo-processing frameworks, scientific models and EO data to enable dust storm detection and tracking in a dynamic and timely manner. This study develops an automated and integrated framework for dust storm detection and tracking based on the Web Processing Service (WPS) standard initiated by the Open Geospatial Consortium (OGC). The presented WPS framework consists of EO data retrieval components, a dust storm detection and tracking component, and a service chain orchestration engine. The EO data processing component is implemented based on the OPeNDAP standard. The dust storm detection and tracking component combines three Earth science models: the SBDART model (for computing the aerosol optical depth (AOT) of dust particles), the WRF model (for simulating meteorological parameters) and the HYSPLIT model (for simulating dust storm transport processes). The service chain orchestration engine is implemented based on the Business Process Execution Language for Web Services (BPEL4WS) using open-source software. The output results, including the horizontal and vertical AOT distributions of dust particles as well as their transport paths, were represented in KML/XML and displayed in Google Earth. A severe dust storm that occurred over East Asia from 26 to 28 April 2012 is used to test the applicability of the proposed WPS framework. Our aim here is to solve a specific instance of a complex EO data and scientific model integration problem by using a framework and scientific workflow approach together. The experimental results show that this newly automated and integrated framework can give advance, near real-time warning of dust storms to both environmental authorities and the public. The methods presented in this paper might also be generalized to other types of Earth system models, leading to improved ease of use and flexibility.
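For readers unfamiliar with WPS, one step of such a service chain might be invoked as in the sketch below using OWSLib; the endpoint URL, process identifier, and input names are hypothetical placeholders, not the paper's actual services.

```python
# Hedged sketch of invoking one step of a WPS service chain with OWSLib.
# The endpoint, process id, and input names are invented for illustration.
from owslib.wps import WebProcessingService, monitorExecution

wps = WebProcessingService("http://example.org/wps")  # hypothetical endpoint
execution = wps.execute(
    "DustStormDetection",                    # hypothetical process identifier
    inputs=[("startTime", "2012-04-26T00:00:00Z"),
            ("endTime",   "2012-04-28T23:59:59Z"),
            ("region",    "EastAsia")],
)
monitorExecution(execution)                  # poll the server until the job finishes
for output in execution.processOutputs:      # e.g., a KML of the AOT distribution
    print(output.identifier, output.reference)
```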
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnowski, Thomas Paul; Giancardo, Luca; Li, Yaquin
2013-01-01
Automated retina image analysis has reached a high level of maturity in recent years, and thus the question of how validation is performed in these systems is growing in importance. One application of retina image analysis is in telemedicine, where an automated system could enable the automated detection of diabetic retinopathy and other eye diseases as a low-cost method for broad-based screening. In this work we discuss our experiences in developing a telemedical network for retina image analysis, including our progression from a manual diagnosis network to a more fully automated one. We pay special attention to how validations of our algorithm steps are performed, using both data from the telemedicine network and other public databases.
ddPCRclust - An R package and Shiny app for automated analysis of multiplexed ddPCR data.
Brink, Benedikt G; Meskas, Justin; Brinkman, Ryan R
2018-03-09
Droplet digital PCR (ddPCR) is an emerging technology for quantifying DNA. By partitioning the target DNA into ∼20000 droplets, each serving as its own PCR reaction compartment, a very high sensitivity of DNA quantification can be achieved. However, manual analysis of the data is time-consuming and algorithms for automated analysis of non-orthogonal, multiplexed ddPCR data are unavailable, presenting a major bottleneck for the advancement of ddPCR transitioning from low-throughput to high-throughput. ddPCRclust is an R package for automated analysis of data from Bio-Rad's droplet digital PCR systems (QX100 and QX200). It can automatically analyse and visualise multiplexed ddPCR experiments with up to four targets per reaction. Results are on par with manual analysis, but only take minutes to compute instead of hours. The accompanying Shiny app ddPCRvis provides easy access to the functionalities of ddPCRclust through a web-browser based GUI. R package: https://github.com/bgbrink/ddPCRclust; Interface: https://github.com/bgbrink/ddPCRvis/; Web: https://bibiserv.cebitec.uni-bielefeld.de/ddPCRvis/. bbrink@cebitec.uni-bielefeld.de.
Towards Automated Screening of Two-dimensional Crystals
Cheng, Anchi; Leung, Albert; Fellmann, Denis; Quispe, Joel; Suloway, Christian; Pulokas, James; Carragher, Bridget; Potter, Clinton S.
2007-01-01
Screening trials to determine the presence of two-dimensional (2D) protein crystals suitable for three-dimensional structure determination by electron crystallography are very labor-intensive. Methods compatible with fully automated screening have been developed for the process of crystal production by dialysis and for producing negatively stained grids of the resulting trials. Further automation via robotic handling of the EM grids, and semi-automated transmission electron microscopic imaging and evaluation of the trial grids, is also possible. We, and others, have developed working prototypes for several of these tools and have tested and evaluated them in a simple screen of 24 crystallization conditions. While further development of these tools is certainly required for a turn-key system, the goal of fully automated screening appears to be within reach. PMID:17977016
A continuously growing web-based interface structure databank
NASA Astrophysics Data System (ADS)
Erwin, N. A.; Wang, E. I.; Osysko, A.; Warner, D. H.
2012-07-01
The macroscopic properties of materials can be significantly influenced by the presence of microscopic interfaces. The complexity of these interfaces coupled with the vast configurational space in which they reside has been a long-standing obstacle to the advancement of true bottom-up material behavior predictions. In this vein, atomistic simulations have proven to be a valuable tool for investigating interface behavior. However, before atomistic simulations can be utilized to model interface behavior, meaningful interface atomic structures must be generated. The generation of structures has historically been carried out disjointly by individual research groups, and thus, has constituted an overlap in effort across the broad research community. To address this overlap and to lower the barrier for new researchers to explore interface modeling, we introduce a web-based interface structure databank (www.isdb.cee.cornell.edu) where users can search, download and share interface structures. The databank is intended to grow via two mechanisms: (1) interface structure donations from individual research groups and (2) an automated structure generation algorithm which continuously creates equilibrium interface structures. In this paper, we describe the databank, the automated interface generation algorithm, and compare a subset of the autonomously generated structures to structures currently available in the literature. To date, the automated generation algorithm has been directed toward aluminum grain boundary structures, which can be compared with experimentally measured population densities of aluminum polycrystals.
Itri, Jason N; Jones, Lisa P; Kim, Woojin; Boonn, William W; Kolansky, Ana S; Hilton, Susan; Zafar, Hanna M
2014-04-01
Monitoring complications and diagnostic yield for image-guided procedures is an important component of maintaining high quality patient care promoted by professional societies in radiology and accreditation organizations such as the American College of Radiology (ACR) and Joint Commission. These outcome metrics can be used as part of a comprehensive quality assurance/quality improvement program to reduce variation in clinical practice, provide opportunities to engage in practice quality improvement, and contribute to developing national benchmarks and standards. The purpose of this article is to describe the development and successful implementation of an automated web-based software application to monitor procedural outcomes for US- and CT-guided procedures in an academic radiology department. The open source tools PHP: Hypertext Preprocessor (PHP) and MySQL were used to extract relevant procedural information from the Radiology Information System (RIS), auto-populate the procedure log database, and develop a user interface that generates real-time reports of complication rates and diagnostic yield by site and by operator. Utilizing structured radiology report templates resulted in significantly improved accuracy of information auto-populated from radiology reports, as well as greater compliance with manual data entry. An automated web-based procedure log database is an effective tool to reliably track complication rates and diagnostic yield for US- and CT-guided procedures performed in a radiology department.
MR efficiency using automated MRI-desktop eProtocol
NASA Astrophysics Data System (ADS)
Gao, Fei; Xu, Yanzhe; Panda, Anshuman; Zhang, Min; Hanson, James; Su, Congzhe; Wu, Teresa; Pavlicek, William; James, Judy R.
2017-03-01
MRI protocols are instruction sheets that radiology technologists use in routine clinical practice for guidance (e.g., slice position, acquisition parameters). At Mayo Clinic Arizona (MCA), there are over 900 MR protocols (spanning neuro, body, cardiac, breast, and other exams), which makes maintaining and updating the protocol instructions a labor-intensive effort. The task is even more challenging given the different vendors (Siemens, GE, etc.). This is a universal problem faced by hospitals and medical research institutions. To increase the efficiency of MR practice, we designed and implemented a web-based platform (eProtocol) to automate the management of MRI protocols. It is built upon a database that automatically extracts protocol information from DICOM-compliant images and provides a user-friendly interface for technologists to create, edit and update protocols. Advanced operations such as protocol migration from scanner to scanner and the ability to upload multimedia content were also implemented. To the best of our knowledge, eProtocol is the first automated MR protocol management tool used clinically. We expect this platform to significantly improve radiology operations efficiency, including better image quality and exam consistency, fewer repeat examinations and fewer acquisition errors. The protocol instructions are readily available to technologists during scans. In addition, this web-based platform can be extended to other imaging modalities such as CT, mammography, and interventional radiology, and to different vendors, for imaging protocol management.
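A minimal sketch of the extraction step, assuming the DICOM headers carry the protocol parameters of interest; the tag set shown is a common MR example, not necessarily the fields eProtocol actually stores.

```python
# Sketch of harvesting protocol parameters from DICOM headers, in the spirit
# of eProtocol's automatic extraction. The selected tags are common MR
# examples; the real system's field list is an assumption here.
import pydicom

def extract_protocol(dicom_path):
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    return {
        "scanner":       ds.get("ManufacturerModelName", ""),
        "protocol_name": ds.get("ProtocolName", ""),
        "series":        ds.get("SeriesDescription", ""),
        "TR_ms":         ds.get("RepetitionTime", None),
        "TE_ms":         ds.get("EchoTime", None),
        "slice_mm":      ds.get("SliceThickness", None),
    }

# print(extract_protocol("exam/series1/img0001.dcm"))  # hypothetical path
```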
Towards fully automated structure-based function prediction in structural genomics: a case study.
Watson, James D; Sanderson, Steve; Ezersky, Alexandra; Savchenko, Alexei; Edwards, Aled; Orengo, Christine; Joachimiak, Andrzej; Laskowski, Roman A; Thornton, Janet M
2007-04-13
As the global Structural Genomics projects have picked up pace, the number of structures annotated in the Protein Data Bank as hypothetical protein or unknown function has grown significantly. A major challenge now involves the development of computational methods to assign functions to these proteins accurately and automatically. As part of the Midwest Center for Structural Genomics (MCSG) we have developed a fully automated functional analysis server, ProFunc, which performs a battery of analyses on a submitted structure. The analyses combine a number of sequence-based and structure-based methods to identify functional clues. After the first stage of the Protein Structure Initiative (PSI), we review the success of the pipeline and the importance of structure-based function prediction. As a dataset, we have chosen all structures solved by the MCSG during the 5 years of the first PSI. Our analysis suggests that two of the structure-based methods are particularly successful and provide examples of local similarity that is difficult to identify using current sequence-based methods. No one method is successful in all cases, so, through the use of a number of complementary sequence and structural approaches, the ProFunc server increases the chances that at least one method will find a significant hit that can help elucidate function. Manual assessment of the results is a time-consuming process and subject to individual interpretation and human error. We present a method based on the Gene Ontology (GO) schema using GO-slims that can allow the automated assessment of hits with a success rate approaching that of expert manual assessment.
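The GO-slim assessment can be pictured as a set comparison after mapping fine-grained terms to slim terms; the mapping below is a tiny hypothetical stand-in for the real GO hierarchy, and the success criterion is our simplification of the paper's scheme.

```python
# Sketch of GO-slim-based automated assessment, assuming a precomputed map
# from fine-grained GO terms to slim terms. The mapping and the success
# criterion are simplified illustrations, not the paper's exact procedure.
GO_TO_SLIM = {
    "GO:0004672": "GO:0016301",  # protein kinase activity -> kinase activity
    "GO:0016301": "GO:0016301",  # kinase activity maps to itself
    "GO:0005524": "GO:0005488",  # ATP binding -> binding
}

def slim(terms):
    return {GO_TO_SLIM[t] for t in terms if t in GO_TO_SLIM}

def assessment(predicted, expert):
    """Call a prediction correct if it shares any slim term with the expert
    annotation, a coarse but automatable proxy for manual assessment."""
    return bool(slim(predicted) & slim(expert))

print(assessment({"GO:0004672"}, {"GO:0016301"}))  # True: same slim category
```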
Innovative technology for web-based data management during an outbreak
Mukhi, Shamir N; Chester, Tammy L Stuart; Klaver-Kibria, Justine DA; Nowicki, Deborah L; Whitlock, Mandy L; Mahmud, Salah M; Louie, Marie; Lee, Bonita E
2011-01-01
Lack of automated and integrated data collection and management, and poor linkage of clinical, epidemiological and laboratory data during an outbreak can inhibit effective and timely outbreak investigation and response. This paper describes an innovative web-based technology, referred to as Web Data, developed for the rapid set-up and provision of interactive and adaptive data management during outbreak situations. We also describe the benefits and limitations of the Web Data technology identified through a questionnaire that was developed to evaluate the use of Web Data implementation and application during the 2009 H1N1 pandemic by Winnipeg Regional Health Authority and Provincial Laboratory for Public Health of Alberta. Some of the main benefits include: improved and secure data access, increased efficiency and reduced error, enhanced electronic collection and transfer of data, rapid creation and modification of the database, conversion of specimen-level to case-level data, and user-defined data extraction and query capabilities. Areas requiring improvement include: better understanding of privacy policies, increased capability for data sharing and linkages between jurisdictions to alleviate data entry duplication. PMID:23569597
Combining Domain-driven Design and Mashups for Service Development
NASA Astrophysics Data System (ADS)
Iglesias, Carlos A.; Fernández-Villamor, José Ignacio; Del Pozo, David; Garulli, Luca; García, Boni
This chapter presents the Romulus project approach to Service Development using Java-based web technologies. Romulus aims at improving productivity of service development by providing a tool-supported model to conceive Java-based web applications. This model follows a Domain Driven Design approach, which states that the primary focus of software projects should be the core domain and domain logic. Romulus proposes a tool-supported model, Roma Metaframework, that provides an abstraction layer on top of existing web frameworks and automates the application generation from the domain model. This metaframework follows an object centric approach, and complements Domain Driven Design by identifying the most common cross-cutting concerns (security, service, view, ...) of web applications. The metaframework uses annotations for enriching the domain model with these cross-cutting concerns, so-called aspects. In addition, the chapter presents the usage of mashup technology in the metaframework for service composition, using the web mashup editor MyCocktail. This approach is applied to a scenario of the Mobile Phone Service Portability case study for the development of a new service.
Neff, Michael; Rauhut, Guntram
2014-02-05
Multidimensional potential energy surfaces obtained from explicitly correlated coupled-cluster calculations, with further corrections for high-order correlation contributions, scalar relativistic effects and core-correlation energy contributions, were generated in a fully automated fashion for the double-minimum benchmark systems OH3(+) and NH3. The black-box generation of the potentials is based on normal coordinates, which were used in the underlying multimode expansions of the potentials and of the μ-tensor within the Watson operator. Normal coordinates are not the optimal choice for describing double-minimum potentials, and the question remains whether they can be used for accurate calculations at all. However, their unique definition is an appealing feature, which removes remaining errors in truncated potential expansions arising from different choices of curvilinear coordinate systems. Fully automated calculations are presented which demonstrate that the proposed scheme allows for the determination of energy levels and tunneling splittings as a routine application. Copyright © 2013 Elsevier B.V. All rights reserved.
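For context, a multimode (n-mode) expansion of the potential in normal coordinates q_i typically takes the form (notation ours):

```latex
V(q_1,\dots,q_N) \;=\; \sum_{i} V_i(q_i)
 \;+\; \sum_{i<j} V_{ij}(q_i,q_j)
 \;+\; \sum_{i<j<k} V_{ijk}(q_i,q_j,q_k) \;+\; \cdots
```

Truncating this series after a few coupling orders is what makes the choice of coordinates matter, which is why the unique definition of normal coordinates is attractive despite their awkwardness for double-minimum potentials.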
The effect of JPEG compression on automated detection of microaneurysms in retinal images
NASA Astrophysics Data System (ADS)
Cree, M. J.; Jelinek, H. F.
2008-02-01
As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts it introduces are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various JPEG compression qualities, and its ability to predict the presence of diabetic retinopathy from the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes; this may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.
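The evaluation logic can be sketched with synthetic data: compute the ROC AUC for predicting retinopathy from microaneurysm counts at each compression level. The detector and the effect sizes below are simulated; only the methodology mirrors the study.

```python
# Sketch of the evaluation: ROC AUC for predicting retinopathy from
# microaneurysm (MA) counts at each JPEG quality. Labels and counts are
# synthetic; the MA detector itself is outside the scope of this sketch.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
has_dr = rng.integers(0, 2, size=200)        # ground-truth retinopathy labels

for quality in (100, 75, 50):
    # Pretend compression adds block-artefact false positives; in the study
    # these counts come from the automated MA detector at each quality.
    noise = (100 - quality) / 25.0
    ma_count = has_dr * rng.poisson(5, 200) + rng.poisson(1 + noise, 200)
    print(quality, round(roc_auc_score(has_dr, ma_count), 3))
```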
Investigating Factors Affecting the Uptake of Automated Assessment Technology
ERIC Educational Resources Information Center
Dreher, Carl; Reiners, Torsten; Dreher, Heinz
2011-01-01
Automated assessment is an emerging innovation in educational praxis; however, its pedagogical potential is not fully utilised in Australia, particularly regarding automated essay grading. The rationale for this research is that the usage of automated assessment currently lags behind the capacity that the technology provides, thus restricting the…
ERIC Educational Resources Information Center
Protopapas, Athanassios; Skaloumbakas, Christos; Bali, Persefoni
2008-01-01
After reviewing past efforts related to computer-based reading disability (RD) assessment, we present a fully automated screening battery that evaluates critical skills relevant for RD diagnosis designed for unsupervised application in the Greek educational system. Psychometric validation in 301 children, 8-10 years old (grades 3 and 4; including…
NASA Technical Reports Server (NTRS)
Steinberg, R.
1978-01-01
A low-cost communications system to provide meteorological data from commercial aircraft, in near real-time, on a fully automated basis has been developed. The complete system, including the low-profile antenna and all installation hardware, weighs 34 kg. The prototype system was installed on a B-747 aircraft and provided meteorological data (wind angle and velocity, temperature, altitude and position as a function of time) on a fully automated basis. The results were exceptional. This concept is expected to have important implications for operational meteorology and airline route forecasting.
NASA Astrophysics Data System (ADS)
Xie, Dengling; Xie, Yanjun; Liu, Peng; Tong, Lieshu; Chu, Kaiqin; Smith, Zachary J.
2017-02-01
Current flow-based blood counting devices require expensive and centralized medical infrastructure and are not appropriate for field use. In this paper we report a method to count red blood cells, white blood cells and platelets with a low-cost, fully automated blood counting system. The approach consists of using a compact, custom-built microscope with a large field-of-view to record bright-field and fluorescence images of samples that are diluted with a single, stable reagent mixture and counted using automatic algorithms. Sample collection is performed manually using a spring-loaded lancet and volume-metering capillary tubes. The capillaries are then dropped into a tube of pre-measured reagents and gently shaken for 10-30 seconds. The sample is loaded into a measurement chamber and placed on a custom 3D-printed platform. Sample translation and focusing are fully automated, and a user has only to press a button for the measurement and analysis to commence. The cost of the system is minimized through the use of custom-designed motorized components. We performed a series of comparative experiments with trained and untrained users on blood from adults and children. We compare the performance of our system, as operated by trained and untrained users, to the clinical gold standard using a Bland-Altman analysis, demonstrating good agreement of our system with the clinical standard. The system's low cost, complete automation, and good field performance indicate that it can be successfully translated for use in low-resource settings where central hematology laboratories are not accessible.
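The counting stage can be illustrated with a toy version: threshold the fluorescence channel and count connected components above a minimum size. The thresholds are illustrative; the actual system also uses the bright-field channel and per-cell-type gating.

```python
# Minimal sketch of the counting step: threshold a fluorescence image and
# count connected components as cells. Real WBC/platelet discrimination
# uses size gating and the bright-field channel; thresholds here are toys.
import numpy as np
from scipy import ndimage

def count_cells(fluor, threshold=0.5, min_pixels=4):
    """fluor: 2-D float image in [0, 1]. Returns the number of detected cells."""
    binary = fluor > threshold
    labels, n = ndimage.label(binary)                     # connected components
    sizes = ndimage.sum(binary, labels, np.arange(1, n + 1))
    return int(np.sum(sizes >= min_pixels))               # reject tiny specks

# Synthetic frame with two bright nuclei-like blobs.
img = np.zeros((64, 64))
img[10:14, 10:14] = 1.0
img[40:45, 30:35] = 0.9
print(count_cells(img))  # 2
```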
Automated determination of arterial input function for DCE-MRI of the prostate
NASA Astrophysics Data System (ADS)
Zhu, Yingxuan; Chang, Ming-Ching; Gupta, Sandeep
2011-03-01
Prostate cancer is one of the commonest cancers in the world. Dynamic contrast-enhanced MRI (DCE-MRI) provides an opportunity for non-invasive diagnosis, staging, and treatment monitoring. Quantitative analysis of DCE-MRI relies on determination of an accurate arterial input function (AIF). Although several methods for automated AIF detection have been proposed in the literature, none is optimized for use in prostate DCE-MRI, which is particularly challenging due to large spatial signal inhomogeneity. In this paper, we propose a fully automated method for determining the AIF from prostate DCE-MRI. Our method is based on modeling pixel uptake curves as gamma variate functions (GVFs). First, we analytically compute bounds on the GVF parameters for more robust fitting. Next, we approximate a GVF for each pixel based on local time-domain information and eliminate pixels with falsely estimated AIFs using the deduced upper and lower bounds. This makes the algorithm robust to signal inhomogeneity. After that, using spatial information such as similarity and distance between pixels, we formulate the global AIF selection as an energy minimization problem and solve it using a message passing algorithm to further rule out weak pixels and optimize the detected AIF. Our method is fully automated, without training or a priori setting of parameters. Experimental results on clinical data show that our method obtained promising detection accuracy (all detected pixels inside major arteries) and a very good match with expert-traced manual AIFs.
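For reference, a gamma variate uptake model of the kind fitted per pixel can be written as (notation ours):

```latex
C(t) \;=\;
\begin{cases}
  K\,(t - t_0)^{\alpha}\, e^{-(t - t_0)/\beta}, & t > t_0,\\
  0, & t \le t_0,
\end{cases}
```

where K scales the curve, t_0 is the bolus arrival time, and α and β control the rise and decay; these are the parameters for which analytic bounds are derived to stabilize the fitting.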
Peters, Sonja; Kaal, Erwin; Horsting, Iwan; Janssen, Hans-Gerd
2012-02-24
A new method is presented for the analysis of phenolic acids in plasma based on ion-pairing 'Micro-extraction in packed sorbent' (MEPS) coupled on-line to in-liner derivatisation-gas chromatography-mass spectrometry (GC-MS). The ion-pairing reagent served a dual purpose. It was used both to improve extraction yields of the more polar analytes and as the methyl donor in the automated in-liner derivatisation method. In this way, a fully automated procedure for the extraction, derivatisation and injection of a wide range of phenolic acids in plasma samples has been obtained. An extensive optimisation of the extraction and derivatisation procedure has been performed. The entire method showed excellent repeatabilities of under 10% and linearities of 0.99 or better for all phenolic acids. The limits of detection of the optimised method for the majority of phenolic acids were 10 ng/mL or lower, with three phenolic acids having less-favourable detection limits of around 100 ng/mL. Finally, the newly developed method has been applied in a human intervention trial in which the bioavailability of polyphenols from wine and tea was studied. Forty plasma samples could be analysed within 24 h in a fully automated method including sample extraction, derivatisation and gas chromatographic analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
SIFTER search: a web server for accurate phylogeny-based protein function prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.
2015-05-15
We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.
Relax with CouchDB - Into the non-relational DBMS era of Bioinformatics
Manyam, Ganiraju; Payton, Michelle A.; Roth, Jack A.; Abruzzo, Lynne V.; Coombes, Kevin R.
2012-01-01
With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. PMID:22609849
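The appeal of a schema-free store for gene-centric annotation can be seen in a few lines against CouchDB's HTTP API; the database name, document id, and fields below are illustrative, not geneSmash's actual schema.

```python
# Sketch of the document-oriented model behind such tools: a gene-centric
# record stored and fetched over CouchDB's HTTP API via `requests`. The
# database name, document id, and fields are illustrative assumptions.
import requests

BASE = "http://localhost:5984"                    # assumes a local CouchDB
requests.put(f"{BASE}/genesmash")                 # create database (409/412 if it exists)

doc = {
    "symbol": "DAB2",
    "aliases": ["DOC-2"],
    "annotations": {"chromosome": "5", "go_terms": ["GO:0005488"]},
}
requests.put(f"{BASE}/genesmash/DAB2", json=doc)  # schema-free insert

print(requests.get(f"{BASE}/genesmash/DAB2").json()["annotations"]["chromosome"])
```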
CASAS: A tool for composing automatically and semantically astrophysical services
NASA Astrophysics Data System (ADS)
Louge, T.; Karray, M. H.; Archimède, B.; Knödlseder, J.
2017-07-01
Multiple astronomical datasets are available through the internet and the astrophysical Distributed Computing Infrastructure (DCI) called the Virtual Observatory (VO). Scientific workflow technologies exist for retrieving and combining data from those sources. However, the selection of relevant services, the automation of workflow composition, and the lack of user-friendly platforms remain concerns. This paper presents CASAS, a tool for semantic web service composition in astrophysics. CASAS provides semantics-based, automatic composition of astrophysical web services into workflows; it widens the choice of services and eases the use of heterogeneous services. Semantic web service composition relies on ontologies to elaborate the composition; this work is based on the Astrophysical Services ONtology (ASON). ASON's structure is mostly inherited from VO service capabilities. Nevertheless, our approach is not limited to the VO and brings VO and non-VO services together without the need for premade recipes. CASAS is available for use through a simple web interface.
Azadmanjir, Zahra; Safdari, Reza; Ghazisaeedi, Marjan; Mokhtaran, Mehrshad; Kameli, Mohammad Esmail
2017-06-01
Accurately coded data are critical in healthcare. Computer-Assisted Coding (CAC) is an effective tool for improving clinical coding, in particular when a new classification is being developed and implemented, but determining the appropriate development method requires considering the specifications of existing CAC systems, the requirements of each type, the available infrastructure and the classification scheme. The aim of this study was to develop a decision model for determining the accurate code of each medical intervention in the Iranian Classification of Health Interventions (IRCHI) that can be implemented as a suitable CAC system. First, a sample of existing CAC systems was reviewed. Then the feasibility of each CAC type was examined with regard to its implementation prerequisites. Next, a model was proposed according to the structure of the classification scheme and implemented as an interactive system. There is a significant relationship between the level of assistance of a CAC system and its integration with electronic medical documents. Implementing fully automated CAC systems is currently impossible due to the immature development of electronic medical records and problems in using natural language for medical documentation. We therefore propose a model for a semi-automated CAC system based on the hierarchical relationships between entities in the classification scheme and on decision-making logic that specifies the characters of a code step by step through a web-based interactive user interface. It is composed of three phases, selecting the Target, Action and Means, respectively, for an intervention. The proposed model suits the current status of clinical documentation and coding in Iran as well as the structure of the new classification scheme, and our results show it is practical. However, the model needs to be evaluated in the next stage of the research.
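The three-phase, stepwise construction of a code can be sketched as a walk down a small hierarchy; the entities and code characters below are invented for illustration and are not actual IRCHI content.

```python
# Illustrative sketch of the three-phase, stepwise code construction the
# model describes (Target -> Action -> Means). Hierarchy contents and code
# characters are hypothetical, not actual IRCHI codes.
SCHEME = {
    "knee joint": {"code": "N", "actions": {
        "repair": {"code": "R", "means": {
            "open approach": "1",
            "arthroscopic":  "2",
        }},
    }},
}

def build_code(target, action, means):
    t = SCHEME[target]
    a = t["actions"][action]
    return t["code"] + a["code"] + a["means"][means]   # one character per phase

print(build_code("knee joint", "repair", "arthroscopic"))  # NR2
```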
Moreau, Michel; Gagnon, Marie-Pierre; Boudreau, François
2015-02-17
Type 2 diabetes is a major challenge for Canadian public health authorities, and regular physical activity is a key factor in the management of this disease. Given that fewer than half of people with type 2 diabetes in Canada are sufficiently active to meet the recommendations, effective programs targeting the adoption of regular physical activity (PA) are in demand for this population. Many researchers argue that Web-based, tailored interventions targeting PA are a promising and effective avenue for sedentary populations like Canadians with type 2 diabetes, but few have described the detailed development of this kind of intervention. This paper aims to describe the systematic development of the Web-based, tailored intervention, Diabète en Forme, promoting regular aerobic PA among adult Canadian francophones with type 2 diabetes. This paper can be used as a reference for health professionals interested in developing similar interventions. We also explored the integration of theoretical components derived from the I-Change Model, Self-Determination Theory, and Motivational Interviewing, which is a potential path for enhancing the effectiveness of tailored interventions on PA adoption and maintenance. The intervention development was based on the program-planning model for tailored interventions of Kreuter et al. An additional step was added to the model to evaluate the intervention's usability prior to the implementation phase. An 8-week intervention was developed. The key components of the intervention include a self-monitoring tool for PA behavior, a weekly action planning tool, and eight tailored motivational sessions based on attitude, self-efficacy, intention, type of motivation, PA behavior, and other constructs and techniques. Usability evaluation, a step added to the program-planning model, helped to make several improvements to the intervention prior to the implementation phase. The intervention development cost was about CDN $59,700 and took approximately 54 full-time weeks. The intervention officially started on September 29, 2014. Out of 2300 potential participants targeted for the tailored intervention, approximately 530 people visited the website, 170 people completed the registration process, and 83 corresponded to the selection criteria and were enrolled in the intervention. Usability evaluation is an essential step in the development of a Web-based tailored intervention in order to make pre-implementation improvements. The effectiveness and relevance of the theoretical framework used for the intervention will be analyzed following the process and impact evaluation. Implications for future research are discussed.
CSHM: Web-based safety and health monitoring system for construction management.
Cheung, Sai On; Cheung, Kevin K W; Suen, Henry C H
2004-01-01
This paper describes a web-based system for monitoring and assessing construction safety and health performance, entitled the Construction Safety and Health Monitoring (CSHM) system. The design and development of CSHM integrates internet and database systems, with the intent to create a fully automated safety and health management tool. A list of safety and health performance parameters was devised for the management of safety and health in construction. A conceptual framework of the four key components of CSHM is presented: (a) Web-based Interface (templates); (b) Knowledge Base; (c) Output Data; and (d) Benchmark Group. The combined effect of these components is a system that enables speedy performance assessment of safety and health activities on construction sites. With CSHM's built-in functions, important management decisions can theoretically be made and corrective actions can be taken before potential hazards turn into fatal or injurious occupational accidents. As such, the CSHM system will accelerate the monitoring and assessment of safety and health performance management tasks.
Godfrey, Alexander G; Masquelin, Thierry; Hemmerle, Horst
2013-09-01
This article describes our experiences in creating a fully integrated, globally accessible, automated chemical synthesis laboratory. The goal of the project was to establish a fully integrated automated synthesis solution that was initially focused on minimizing the burden of repetitive, routine, rules-based operations that characterize more established chemistry workflows. The architecture was crafted to allow for the expansion of synthetic capabilities while also providing for a flexible interface that permits the synthesis objective to be introduced and manipulated as needed under the judicious direction of a remote user in real-time. This innovative central synthesis suite is herein described along with some case studies to illustrate the impact such a system is having in expanding drug discovery capabilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
Effectiveness of a web-based automated cell distribution system.
Niland, Joyce C; Stiller, Tracey; Cravens, James; Sowinski, Janice; Kaddis, John; Qian, Dajun
2010-01-01
In recent years, industries have turned to the field of operations research to help improve the efficiency of production and distribution processes. Largely absent is the application of this methodology to biological materials, such as the complex and costly procedure of human pancreas procurement and islet isolation. Pancreatic islets are used for basic science research and in a promising form of cell replacement therapy for a subset of patients afflicted with severe type 1 diabetes mellitus. Having an accurate and reliable system for cell distribution is therefore crucial. The Islet Cell Resource Center Consortium was formed in 2001 as the first and largest cooperative group of islet production and distribution facilities in the world. We previously reported on the development of a Matching Algorithm for Islet Distribution (MAID), an automated web-based tool used to optimize the distribution of human pancreatic islets by matching investigator requests to islet characteristics. This article presents an assessment of that algorithm and compares it to the manual distribution process used prior to MAID. A comparison was done using an investigator's ratio of the number of islets received divided by the number requested pre- and post-MAID. Although the supply of islets increased between the pre- versus post-MAID period, the median received-to-requested ratio remained around 60% due to an increase in demand post-MAID. A significantly smaller variation in the received-to-requested ratio was achieved in the post- versus pre-MAID period. In particular, the undesirable outcome of providing users with more islets than requested, ranging up to four times their request, was greatly reduced through the algorithm. In conclusion, this analysis demonstrates, for the first time, the effectiveness of using an automated web-based cell distribution system to facilitate efficient and consistent delivery of human pancreatic islets by enhancing the islet matching process.
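The matching problem MAID addresses can be sketched as a greedy allocation that never exceeds the requested amount; the fields and rules below are simplified stand-ins for the algorithm's actual scoring and priority logic.

```python
# Sketch of the matching idea behind MAID: greedily fill each request from
# available islet lots whose characteristics satisfy the request's criteria,
# capping allocation at the requested amount. The fields and rules are
# simplified illustrations, not the published algorithm.
request_list = [
    {"lab": "A", "islets": 100_000, "min_purity": 0.80},
    {"lab": "B", "islets": 50_000,  "min_purity": 0.90},
]
lots = [
    {"id": 1, "islets": 80_000, "purity": 0.85},
    {"id": 2, "islets": 90_000, "purity": 0.92},
]

for req in request_list:
    remaining = req["islets"]
    for lot in lots:
        if remaining <= 0 or lot["purity"] < req["min_purity"] or lot["islets"] == 0:
            continue
        take = min(lot["islets"], remaining)   # never over-allocate a request
        lot["islets"] -= take
        remaining -= take
        print(f"lab {req['lab']}: {take} islets from lot {lot['id']}")
```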
Visualizing multiattribute Web transactions using a freeze technique
NASA Astrophysics Data System (ADS)
Hao, Ming C.; Cotting, Daniel; Dayal, Umeshwar; Machiraju, Vijay; Garg, Pankaj
2003-05-01
Web transactions are multidimensional and have a number of attributes: client, URL, response times, and numbers of messages. One of the key questions is how to simultaneously lay out in a graph the multiple relationships, such as the relationships between the web client response times and URLs in a web access application. In this paper, we describe a freeze technique to enhance a physics-based visualization system for web transactions. The idea is to freeze one set of objects before laying out the next set of objects during the construction of the graph. As a result, we substantially reduce the force computation time. This technique consists of three steps: automated classification, a freeze operation, and a graph layout. These three steps are iterated until the final graph is generated. This iterated-freeze technique has been prototyped in several e-service applications at Hewlett Packard Laboratories. It has been used to visually analyze large volumes of service and sales transactions at online web sites.
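The freeze idea can be sketched in a toy force-directed layout: once one class of nodes is placed, their positions are pinned and excluded from force updates while the next class settles around them. The constants and forces below are arbitrary illustrations, not the prototype's tuned physics.

```python
# Toy force-directed layout with a freeze step: frozen nodes receive no
# force updates, so later classes of nodes settle around them. Constants
# are arbitrary; this is an illustration of the idea, not the HP system.
import numpy as np

def layout(pos, frozen, iters=200, k_rep=0.05, k_spr=0.01, edges=()):
    pos = pos.copy()
    for _ in range(iters):
        force = np.zeros_like(pos)
        diff = pos[:, None, :] - pos[None, :, :]           # pairwise offsets
        dist2 = (diff ** 2).sum(-1) + 1e-9
        force += k_rep * (diff / dist2[..., None]).sum(1)  # repulsion
        for i, j in edges:                                 # spring attraction
            force[i] += k_spr * (pos[j] - pos[i])
            force[j] += k_spr * (pos[i] - pos[j])
        force[frozen] = 0.0                                # frozen nodes stay put
        pos += force
    return pos

rng = np.random.default_rng(1)
pos = rng.normal(size=(6, 2))
frozen = np.array([True, True, True, False, False, False])  # first class frozen
print(layout(pos, frozen, edges=[(0, 3), (1, 4), (2, 5)]))
```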
NASA Astrophysics Data System (ADS)
Jiang, Luan; Ling, Shan; Li, Qiang
2016-03-01
Cardiovascular diseases are becoming a leading cause of death all over the world. Cardiac function can be evaluated by global and regional parameters of the left ventricle (LV) of the heart. The purpose of this study is to develop and evaluate a fully automated scheme for segmentation of the LV in short-axis cardiac cine MR images. Our fully automated method consists of three major steps, i.e., LV localization, LV segmentation at the end-diastolic phase, and propagation of the LV segmentation to the other phases. First, the maximum intensity projection image along the time phases of the midventricular slice, located at the center of the image, was calculated to locate the region of interest of the LV. Based on the mean intensity of the roughly segmented blood pool in the midventricular slice at each phase, the end-diastolic (ED) and end-systolic (ES) phases were determined. Second, the endocardial and epicardial boundaries of the LV in each slice at the ED phase were synchronously delineated by use of a dual dynamic programming technique. The external costs of the endocardial and epicardial boundaries were defined with the gradient values obtained from the original and enhanced images, respectively. Finally, taking advantage of the continuity of the LV boundaries across adjacent phases, we propagated the LV segmentation from the ED phase to the other phases by use of the dual dynamic programming technique. Preliminary results on 9 clinical cardiac cine MR cases show that the proposed method can obtain accurate segmentation of the LV based on subjective evaluation.
DAVID-WS: a stateful web service to facilitate gene/protein list analysis
Jiao, Xiaoli; Sherman, Brad T.; Huang, Da Wei; Stephens, Robert; Baseler, Michael W.; Lane, H. Clifford; Lempicki, Richard A.
2012-01-01
Summary: The database for annotation, visualization and integrated discovery (DAVID), which can be freely accessed at http://david.abcc.ncifcrf.gov/, is a web-based online bioinformatics resource that aims to provide tools for the functional interpretation of large lists of genes/proteins. It has been used by researchers from more than 5000 institutes worldwide, with a daily submission rate of ∼1200 gene lists from ∼400 unique researchers, and has been cited by more than 6000 scientific publications. However, the current web interface does not support programmatic access to DAVID, and the uniform resource locator (URL)-based application programming interface (API) has a limit on URL size and is stateless in nature as it uses URL request and response messages to communicate with the server, without keeping any state-related details. DAVID-WS (web service) has been developed to automate user tasks by providing stateful web services to access DAVID programmatically without the need for human interactions. Availability: The web service and sample clients (written in Java, Perl, Python and Matlab) are made freely available under the DAVID License at http://david.abcc.ncifcrf.gov/content.jsp?file=WS.html. Contact: xiaoli.jiao@nih.gov; rlempicki@nih.gov PMID:22543366
j5 DNA assembly design automation.
Hillson, Nathan J
2014-01-01
Modern standardized methodologies, described in detail in the previous chapters of this book, have enabled the software-automated design of optimized DNA construction protocols. This chapter describes how to design (combinatorial) scar-less DNA assembly protocols using the web-based software j5. j5 assists biomedical and biotechnological researchers construct DNA by automating the design of optimized protocols for flanking homology sequence as well as type IIS endonuclease-mediated DNA assembly methodologies. Unlike any other software tool available today, j5 designs scar-less combinatorial DNA assembly protocols, performs a cost-benefit analysis to identify which portions of an assembly process would be less expensive to outsource to a DNA synthesis service provider, and designs hierarchical DNA assembly strategies to mitigate anticipated poor assembly junction sequence performance. Software integrated with j5 add significant value to the j5 design process through graphical user-interface enhancement and downstream liquid-handling robotic laboratory automation.
Automated frame selection process for high-resolution microendoscopy
NASA Astrophysics Data System (ADS)
Ishijima, Ayumu; Schwarz, Richard A.; Shin, Dongsuk; Mondrik, Sharon; Vigneswaran, Nadarajah; Gillenwater, Ann M.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2015-04-01
We developed an automated frame selection algorithm for high-resolution microendoscopy video sequences. The algorithm rapidly selects a representative frame with minimal motion artifact from a short video sequence, enabling fully automated image analysis at the point-of-care. The algorithm was evaluated by quantitative comparison of diagnostically relevant image features and diagnostic classification results obtained using automated frame selection versus manual frame selection. A data set consisting of video sequences collected in vivo from 100 oral sites and 167 esophageal sites was used in the analysis. The area under the receiver operating characteristic curve was 0.78 (automated selection) versus 0.82 (manual selection) for oral sites, and 0.93 (automated selection) versus 0.92 (manual selection) for esophageal sites. The implementation of fully automated high-resolution microendoscopy at the point-of-care has the potential to reduce the number of biopsies needed for accurate diagnosis of precancer and cancer in low-resource settings where there may be limited infrastructure and personnel for standard histologic analysis.
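A toy version of the selection criterion: score each frame by mean absolute difference to its neighbors and keep the frame with the least motion. The published algorithm's exact scoring may differ; this only captures the "minimal motion artifact" idea.

```python
# Sketch of automated frame selection: score each frame by inter-frame
# motion (mean absolute difference to its temporal neighbors) and return
# the frame with the least motion. The published criterion may differ.
import numpy as np

def select_frame(video):
    """video: array (T, H, W) of grayscale frames. Returns index of best frame."""
    motion = np.full(len(video), np.inf)
    for t in range(1, len(video) - 1):
        motion[t] = 0.5 * (np.abs(video[t] - video[t - 1]).mean() +
                           np.abs(video[t + 1] - video[t]).mean())
    return int(np.argmin(motion))

rng = np.random.default_rng(2)
video = rng.random((10, 32, 32))
video[4] = video[3]          # make frames 3-5 static, as if motion paused
video[5] = video[3]
print(select_frame(video))   # 4
```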
2005-06-01
need for user-defined dashboard; automated monitoring of web data sources; task-driven data aggregation and display; working toward automated processing of task, resource, and intelligence updates
Berry, Donna L; Blonquist, Traci M; Patel, Rupa A; Halpenny, Barbara; McReynolds, Justin
2015-06-03
Effective eHealth interventions can benefit a large number of patients with content intended to support self-care and management of both chronic and acute conditions. Even though usage statistics are easily logged in most eHealth interventions, usage or exposure has rarely been reported in trials, let alone studied in relationship to effectiveness. The intent of the study was to evaluate use of a fully automated, Web-based program, the Electronic Self Report Assessment-Cancer (ESRA-C), and how delivery and total use of the intervention may have affected cancer symptom distress. Patients at two cancer centers used ESRA-C to self-report symptom and quality of life (SxQOL) issues during therapy. Participants were randomized to ESRA-C assessment only (control) or the ESRA-C intervention delivered via the Internet to patients' homes or to a tablet at the clinic. The intervention enabled participants to self-monitor SxQOL and receive self-care education and customized coaching on how to report concerns to clinicians. Overall and voluntary intervention use were defined as having ≥2 exposures, and one non-prompted exposure to the intervention, respectively. Factors associated with intervention use were explored with Fisher's exact test. Propensity score matching was used to select a sample of control participants similar to intervention participants who used the intervention. Analysis of covariance (ANCOVA) was used to compare change in Symptom Distress Scale (SDS-15) scores from pre-treatment to end-of-study by groups in the matched sample. Radiation oncology participants used the intervention, overall and voluntarily, more than medical oncology and transplant participants. Participants who were working and had more than a high school education voluntarily used the intervention more. The SDS-15 score was reduced by an estimated 1.53 points (P=.01) in the intervention group users compared to the matched control group. The intended effects of a Web-based, patient-centered intervention on cancer symptom distress were modified by intervention use frequency. Clinical and personal demographics influenced voluntary use. Clinicaltrials.gov NCT00852852; http://clinicaltrials.gov/ct2/show/NCT00852852 (Archived by WebCite at http://www.webcitation.org/6YwAfwWl7).
NASA Astrophysics Data System (ADS)
Wang, Tong-Hong; Chen, Tse-Ching; Teng, Xiao; Liang, Kung-Hao; Yeh, Chau-Ting
2015-08-01
Liver fibrosis assessment by biopsy and conventional staining scores is based on histopathological criteria. Variations in sample preparation and the use of semi-quantitative histopathological methods commonly result in discrepancies between medical centers. Thus, minor changes in liver fibrosis might be overlooked in multi-center clinical trials, leading to statistically non-significant data. Here, we developed a computer-assisted, fully automated, staining-free method for hepatitis B-related liver fibrosis assessment. In total, 175 liver biopsies were divided into training (n = 105) and verification (n = 70) cohorts. Collagen was observed using second harmonic generation (SHG) microscopy without prior staining, and hepatocyte morphology was recorded using two-photon excitation fluorescence (TPEF) microscopy. The training cohort was utilized to establish a quantification algorithm. Eleven of 19 computer-recognizable SHG/TPEF microscopic morphological features were significantly correlated with the ISHAK fibrosis stages (P < 0.001). A biphasic scoring method was applied, combining support vector machine and multivariate generalized linear models to assess the early and late stages of fibrosis, respectively, based on these parameters. The verification cohort was used to verify the scoring method, and the area under the receiver operating characteristic curve was >0.82 for liver cirrhosis detection. Since no subjective gradings are needed, interobserver discrepancies could be avoided using this fully automated method.
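The biphasic scoring can be sketched with off-the-shelf models: a support vector machine separates early from late stages, and a linear model (standing in for the paper's multivariate generalized linear model) scores the late stages. The features and labels below are synthetic; the 11 real morphological features are described in the paper.

```python
# Sketch of the biphasic idea: an SVM separates early- from late-stage
# fibrosis on the SHG/TPEF features, then a linear model (a stand-in for
# the paper's multivariate GLM) refines the score within late stages.
# Features and stage labels are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(105, 11))            # 105 training biopsies, 11 features
stage = rng.integers(0, 7, size=105)      # ISHAK stages 0-6 (synthetic)
is_late = (stage >= 3).astype(int)

phase1 = SVC().fit(X, is_late)                                     # early vs late
phase2 = LinearRegression().fit(X[stage >= 3], stage[stage >= 3])  # score late stages

def biphasic_score(x):
    x = x.reshape(1, -1)
    if phase1.predict(x)[0] == 0:
        return "early"
    return f"late, predicted stage {phase2.predict(x)[0]:.1f}"

print(biphasic_score(X[0]))
```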
NASA Astrophysics Data System (ADS)
Liu, Robin H.; Lodes, Mike; Fuji, H. Sho; Danley, David; McShea, Andrew
Microarray assays typically involve multistage sample processing and fluidic handling, which are generally labor-intensive and time-consuming. Automation of these processes would improve robustness, reduce run-to-run and operator-to-operator variation, and reduce costs. In this chapter, a fully integrated and self-contained microfluidic biochip device that has been developed to automate the fluidic handling steps for microarray-based gene expression or genotyping analysis is presented. The device consists of a semiconductor-based CustomArray® chip with 12,000 features and a microfluidic cartridge. The CustomArray was manufactured using a semiconductor-based in situ synthesis technology. The microfluidic cartridge consists of microfluidic pumps, mixers, valves, fluid channels, and reagent storage chambers. Microarray hybridization and subsequent fluidic handling and reactions (including a number of washing and labeling steps) were performed in this fully automated and miniature device before fluorescent image scanning of the microarray chip. Electrochemical micropumps were integrated in the cartridge to provide pumping of liquid solutions. A micromixing technique based on gas bubbling generated by electrochemical micropumps was developed. Low-cost check valves were implemented in the cartridge to prevent cross-talk of the stored reagents. Gene expression study of the human leukemia cell line (K562) and genotyping detection and sequencing of influenza A subtypes have been demonstrated using this integrated biochip platform. For gene expression assays, the microfluidic CustomArray device detected sample RNAs at concentrations as low as 0.375 pM. Detection was quantitative over more than three orders of magnitude. Experiments also showed that chip-to-chip variability was low, indicating that the integrated microfluidic devices eliminate manual fluidic handling steps that can be a significant source of variability in genomic analysis. The genotyping results showed that the device identified influenza A hemagglutinin and neuraminidase subtypes and sequenced portions of both genes, demonstrating the potential of integrated microfluidic and microarray technology for multiple virus detection. The device provides a cost-effective solution that eliminates labor-intensive and time-consuming fluidic handling steps and allows microarray-based DNA analysis in a rapid and automated fashion.
Turning a remotely controllable observatory into a fully autonomous system
NASA Astrophysics Data System (ADS)
Swindell, Scott; Johnson, Chris; Gabor, Paul; Zareba, Grzegorz; Kubánek, Petr; Prouza, Michael
2014-08-01
We describe the complex process needed to turn an existing, aging, operational observatory - the Steward Observatory's 61" Kuiper Telescope - into a fully autonomous system that observes without an observer. For this purpose, we employed RTS2, an open-source, Linux-based observatory control system, together with other open-source programs and tools (GNU compilers, the Python language for scripting, jQuery UI for the Web user interface). This presentation provides a guide, with time estimates, for newcomers to the field undertaking such challenging tasks as fully autonomous observatory operation.
NASA Astrophysics Data System (ADS)
Harms, Justin D.; Bachmann, Charles M.; Ambeau, Brittany L.; Faulring, Jason W.; Ruiz Torres, Andres J.; Badura, Gregory; Myers, Emily
2017-10-01
Field-portable goniometers are built for a wide variety of applications. Many of these applications require specific instruments and measurement schemes and must operate in challenging environments; designs are therefore driven by requirements specific to the application. We present a field-portable goniometer that was designed for measuring the hemispherical-conical reflectance factor (HCRF) of various soils and low-growing vegetation in austere coastal and desert environments, and biconical reflectance factors in laboratory settings. Unlike some goniometers, this system was designed for "target-plane tracking," ensuring that measurements can be collected on sloped surfaces without compromising angular accuracy. The system also features a second, upward-looking spectrometer to measure the spatially dependent incoming illumination, an integrated software package to provide full automation, an automated leveling system to ensure a standard frame of reference, a design that minimizes obscuration due to self-shading in order to measure the opposition effect, and the ability to record a digital elevation model of the target region. This fully automated and highly mobile system obtains accurate and precise measurements of HCRF in a wide variety of terrain and in less time than most other systems, without sacrificing consistency or repeatability in laboratory environments.
A Semantic Grid Oriented to E-Tourism
NASA Astrophysics Data System (ADS)
Zhang, Xiao Ming
With the increasing complexity of tourism business models and tasks, there is a clear need for a next-generation e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Several enabling technologies, such as the semantic Web, Web services, agents, and grid computing, have been applied in different e-Tourism applications; however, there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, in which a number of key design issues are discussed, including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization, and intelligent agents. The paper finally describes an implementation of the framework.
A Web-based system for the intelligent management of diabetic patients.
Riva, A; Bellazzi, R; Stefanelli, M
1997-01-01
We describe the design and implementation of a distributed computer-based system for the management of insulin-dependent diabetes mellitus. The goal of the system is to support the normal activities of the physicians and patients involved in the care of diabetes by providing them with a set of automated services ranging from data collection and transmission to data analysis and decision support. The system is highly integrated with current practices in the management of diabetes, and it uses Internet technology to achieve high availability and ease of use. In particular, the user interaction takes place through dynamically generated World Wide Web pages, so that all the system's functions share an intuitive graphic user interface.
Detection And Classification Of Web Robots With Honeypots
2016-03-01
Master's thesis by Sean F. McKenna, March 2016 (Thesis Advisor: Neil Rowe; Second Reader: Justin P. Rohrer). From the abstract: Web robots are automated programs that systematically browse the Web, collecting information. Although…
NASA Astrophysics Data System (ADS)
Nuzhnaya, Tatyana; Bakic, Predrag; Kontos, Despina; Megalooikonomou, Vasileios; Ling, Haibin
2012-02-01
This work is part of our ongoing study aimed at understanding the relation between the topology of anatomical branching structures and the underlying image texture. Morphological variability of the breast ductal network is associated with subsequent development of abnormalities, such as papilloma, breast cancer, and atypia, in patients with nipple discharge. In this work, we investigate complex dependence among ductal components to perform segmentation, the first step in analyzing the topology of ductal lobes. Our automated framework is based on incorporating a conditional random field (CRF) with texture descriptors of skewness, coarseness, contrast, energy and fractal dimension. These features are selected to capture the architectural variability of the enhanced ducts by encoding spatial variations between pixel patches in galactographic images. The segmentation algorithm was applied to a dataset of 20 x-ray galactograms obtained at the Hospital of the University of Pennsylvania. We compared the performance of the proposed approach with fully and semi-automated segmentation algorithms based on neural network classification, fuzzy-connectedness, the vesselness filter and graph cuts. Global consistency error and confusion matrix analysis were used as accuracy measurements. For the proposed approach, the true positive rate was higher and the false negative rate was significantly lower compared to other fully automated methods. This indicates that segmentation based on a CRF incorporating texture descriptors has the potential to efficiently support the analysis of the complex topology of the ducts and aid in the development of realistic breast anatomy phantoms.
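A sketch of how the named texture descriptors might be computed per image patch (coarseness is omitted), assuming an 8-bit grayscale patch and scikit-image's gray-level co-occurrence utilities; these feature vectors would then feed the conditional random field:

```python
# Per-patch texture descriptors; assumes an 8-bit grayscale patch and
# scikit-image's gray-level co-occurrence matrix (GLCM) utilities.
import numpy as np
from scipy.stats import skew
from skimage.feature import graycomatrix, graycoprops

def box_counting_dimension(mask):
    """Estimate the fractal dimension of a binary mask by box counting."""
    sizes, counts = [2, 4, 8, 16], []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(max(int(boxes.sum()), 1))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def patch_features(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return {"skewness": skew(patch.ravel()),
            "contrast": graycoprops(glcm, "contrast")[0, 0],
            "energy": graycoprops(glcm, "energy")[0, 0],
            "fractal_dim": box_counting_dimension(patch > patch.mean())}

rng = np.random.default_rng(0)
print(patch_features(rng.integers(0, 256, (64, 64), dtype=np.uint8)))
```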
Automatic and continuous landslide monitoring: the Rotolon Web-based platform
NASA Astrophysics Data System (ADS)
Frigerio, Simone; Schenato, Luca; Mantovani, Matteo; Bossi, Giulia; Marcato, Gianluca; Cavalli, Marco; Pasuto, Alessandro
2013-04-01
Mount Rotolon (Eastern Italian Alps) is affected by a complex landslide that, since 1985, has been threatening the nearby village of Recoaro Terme. The first written record of a landslide occurrence dates back to 1798. After the last re-activation in November 2010 (637 mm of intense rainfall recorded in the 12 days prior to the event), a mass of approximately 320,000 m3 detached from the south flank of Mount Rotolon and evolved into a fast debris flow that ran for about 3 km along the stream bed. A real-time monitoring system was required to detect early indications of rapid movement, potentially saving lives and property. A web-based platform for automatic and continuous monitoring was designed as a first step in the implementation of an early-warning system. Measurements collected by the automated geotechnical and topographic instrumentation, deployed over the landslide body, are gathered in a central box station. After the calibration process, they are transmitted by web services to a local server, where graphs, maps, reports and alert announcements are automatically generated and updated. All the processed information is available via web browser with different access rights. The web environment provides the following advantages: 1) data is collected from different data sources and matched in a single server-side frame; 2) a remote user interface allows regular technical maintenance and direct access to the instruments; 3) the data management system is synchronized and automatically tested; 4) a graphical user interface in the browser provides a user-friendly tool for decision-makers to interact with a continuously updated system. Two monitoring systems are currently in operation on this site: 1) a GB-InSAR radar interferometer (University of Florence - Department of Earth Science) and 2) an Automated Total Station (ATS) combined with an extensometer network in a Web-based solution (CNR-IRPI Padova). This work details the methodology, services and techniques adopted for the second monitoring solution. The activity directly interfaces with the local Civil Protection agency, the Regional Geological Service and local authorities with integrated roles and aims.
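A minimal sketch of the server-side ingestion-and-alert pattern such a platform implements, not the project's actual code; Flask, the JSON payload shape, and the 5 mm/h alert threshold are illustrative assumptions:

```python
# Illustrative ingestion-and-alert endpoint; Flask, the payload shape
# and the 5 mm/h threshold are assumptions, not the project's code.
from flask import Flask, jsonify, request

app = Flask(__name__)
measurements = []            # in production: a synchronized database
ALERT_MM_PER_H = 5.0         # hypothetical displacement-rate threshold

def notify_civil_protection(sensor, rate):
    print(f"ALERT: {sensor} moving at {rate:.1f} mm/h")   # placeholder hook

@app.route("/measurements", methods=["POST"])
def ingest():
    m = request.get_json()   # e.g. {"sensor": "ext-03", "mm": 1.2, "hours": 0.5}
    measurements.append(m)
    rate = m["mm"] / m["hours"]
    if rate > ALERT_MM_PER_H:
        notify_civil_protection(m["sensor"], rate)
    return jsonify(status="stored", rate_mm_per_h=rate)

# app.run() would then serve the graphs/reports built from `measurements`.
```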
Rebooting the East: Automation in University Libraries of the Former German Democratic Republic.
ERIC Educational Resources Information Center
Seadle, Michael
1996-01-01
Provides a history of the automation efforts at former East German libraries that have made their resources available for the first time. Highlights include World Wide Web home page addresses; library problems, including censorship; automation guidelines, funding, and cooperation; online catalogs; and specific examples of university libraries'…
RootGraph: a graphic optimization tool for automated image analysis of plant roots
Cai, Jinhai; Zeng, Zhanghui; Connor, Jason N.; Huang, Chun Yuan; Melino, Vanessa; Kumar, Pankaj; Miklavcic, Stanley J.
2015-01-01
This paper outlines a numerical scheme for accurate, detailed, and high-throughput image analysis of plant roots. In contrast to existing root image analysis tools that focus on root system-average traits, a novel, fully automated and robust approach for the detailed characterization of root traits, based on a graph optimization process, is presented. The scheme, firstly, distinguishes primary roots from lateral roots and, secondly, quantifies a broad spectrum of root traits for each identified primary and lateral root. Thirdly, it associates lateral roots and their properties with the specific primary root from which the laterals emerge. The performance of this approach was evaluated through comparisons with other automated and semi-automated software solutions as well as against results based on manual measurements. The comparisons and subsequent application of the algorithm to an array of experimental data demonstrate that this method outperforms existing methods in terms of accuracy, robustness, and the ability to process root images under high-throughput conditions. PMID:26224880
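A toy sketch of the graph-optimization idea behind this kind of root tracing: pixels become graph nodes, edge costs reward root-like (bright) pixels, and a root centerline is recovered as a minimum-cost path. The cost function and the collar/tip endpoints are illustrative assumptions, not RootGraph's actual scheme:

```python
# Minimum-cost path tracing over a pixel lattice with networkx.
import numpy as np
import networkx as nx

def trace_root(intensity, collar, tip):
    """intensity: 2-D array in [0, 1] where bright pixels are likely root."""
    h, w = intensity.shape
    g = nx.grid_2d_graph(h, w)                 # 4-connected pixel lattice
    for u, v in g.edges:
        # Cheap to travel through bright (root-like) pixels.
        g.edges[u, v]["cost"] = 2.0 - intensity[u] - intensity[v]
    return nx.shortest_path(g, source=collar, target=tip, weight="cost")

img = np.zeros((50, 50)); img[:, 25] = 1.0     # toy vertical "root"
path = trace_root(img, collar=(0, 25), tip=(49, 25))
print(len(path))                               # 50 pixels along the root
```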
21st Century Recruiting: Automated, Digital, Electronic.
ERIC Educational Resources Information Center
Patterson, Valerie
1997-01-01
Examines ways in which technology is changing staffing office practices. Discusses features of the World Wide Web, some of the potential problems in establishing a web site, and the importance of carefully planning a web site. Looks at digital resume warehouses and the increased power such warehouses offer recruiters. (RJM)
Followup Audit: Enterprise Blood Management System Not Ready for Full Deployment
2014-10-23
The Component Acquisition Executive (CAE) for DHA considered these two systems as a single Defense acquisition category III automated information system. According to the… …the improved integration of blood products inventory management and shipment availability. The CAE for DHA is the milestone decision authority for… …to officials, the capability is a web-based IT product used to track blood products in theater. Specifically, theater-based medical treatment…
Shattuck, Dominick; Haile, Liya T; Simmons, Rebecca G
2018-04-20
Smartphone apps that provide women with information about their daily fertility status during their menstrual cycles can contribute to the contraceptive method mix. However, if these apps claim to help a user prevent pregnancy, they must undergo rigorous research similar to that required for other contraceptive methods. Georgetown University's Institute for Reproductive Health is conducting a prospective longitudinal efficacy trial of Dot (Dynamic Optimal Timing), an algorithm-based fertility app designed to help women prevent pregnancy. The aim of this paper was to highlight decision points during the recruitment-enrollment process and the effect of modifications on enrollment numbers and demographics. Recruiting eligible research participants for a contraceptive efficacy study, and enrolling an adequate number to statistically assess the effectiveness of Dot, is critical. Recruiting and enrolling participants for the Dot study involved making decisions based on research and analytic data, constant process modification, and close monitoring and evaluation of the effect of these modifications. Originally, the only option for women to enroll in the study was to do so over the phone with a study representative. On noticing low enrollment numbers, we examined the 7 steps from the time a woman received the recruitment message until she completed enrollment and made modifications accordingly. In modification 1, we added call-back and voicemail procedures to increase the number of completed calls. Modification 2 involved using chat and instant message (IM) features to facilitate study enrollment. In modification 3, the process was fully automated to allow participants to enroll in the study without the aid of study representatives. After these modifications were implemented, 719 women were enrolled in the study over a 6-month period. The majority of participants (494/719, 68.7%) were enrolled during modification 3, in which they had the option to enroll via phone, chat, or the fully automated process. Overall, 29.2% (210/719) of the participants were enrolled via a phone call, 19.9% (143/719) via chat/IM, and 50.9% (366/719) directly through the fully automated process. With respect to the demographic profile of our study sample, we found a statistically significant difference in education level across the modifications (P<.05) but not in age, race, or ethnicity (P>.05). Our findings show that agile and consistent modifications to the recruitment and enrollment process were necessary to yield an appropriate sample size. An automated process resulted in significantly higher enrollment rates than one that required phone interaction with study representatives. Although there were some differences in demographic characteristics of enrollees as the process was modified, in general, our study population is diverse and reflects the overall United States population in terms of race/ethnicity, age, and education. Additional research is proposed to identify how differences in mode of enrollment and demographic characteristics may affect participants' performance in the study. ClinicalTrials.gov NCT02833922; http://clinicaltrials.gov/ct2/show/NCT02833922 (Archived by WebCite at http://www.webcitation.org/6yj5FHrBh). ©Dominick Shattuck, Liya T Haile, Rebecca G Simmons. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 20.04.2018.
ConfocalCheck - A Software Tool for the Automated Monitoring of Confocal Microscope Performance
Hng, Keng Imm; Dormann, Dirk
2013-01-01
Laser scanning confocal microscopy has become an invaluable tool in biomedical research, but regular quality testing is vital to maintain the system's performance for diagnostic and research purposes. Although many methods have been devised over the years to characterise specific aspects of a confocal microscope, such as measuring the optical point spread function or the field illumination, only very few analysis tools are available. Our aim was to develop a comprehensive quality assurance framework ranging from image acquisition to automated analysis and documentation. We created standardised test data to assess the performance of the lasers, the objective lenses and other key components required for optimum confocal operation. The ConfocalCheck software presented here analyses the data fully automatically. It creates numerous visual outputs indicating potential issues requiring further investigation. By storing results in a web-browser-compatible file format, the software greatly simplifies record keeping, allowing the operator to quickly compare old and new data and to spot developing trends. We demonstrate that the systematic monitoring of confocal performance is essential in a core facility environment and show how the quantitative measurements obtained can be used for the detailed characterisation of system components as well as for comparisons across multiple instruments. PMID:24224017
Bonekamp, S; Ghosh, P; Crawford, S; Solga, S F; Horska, A; Brancati, F L; Diehl, A M; Smith, S; Clark, J M
2008-01-01
To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Feature evaluation and test-retest reliability of software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. A random sample of 15 obese adults with type 2 diabetes. Axial T1-weighted spin echo images centered at vertebral bodies of L2-L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. The intra-class correlation coefficient was used to obtain test-retest reliability. Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test-retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages.
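The reliability statistic used above can be made concrete with a short sketch: a two-way random-effects intra-class correlation, ICC(2,1) in the Shrout-Fleiss notation, computed from a subjects-by-measurements matrix; the VAT areas below are made-up test-retest values:

```python
# ICC(2,1): two-way random effects, absolute agreement, single measure.
import numpy as np

def icc_2_1(x):
    """x: n subjects (rows) by k repeated measurements (columns)."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    sse = ((x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

vat = np.array([[150.2, 151.0], [98.5, 97.9], [210.3, 212.1],
                [120.0, 119.4], [175.8, 174.9]])   # test-retest VAT, cm^2
print(icc_2_1(vat))    # close to 1.0 = highly reliable measurements
```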
Implementing a distributed intranet-based information system.
O'Kane, K C; McColligan, E E; Davis, G A
1996-11-01
The article discusses Internet and intranet technologies and describes how to install an intranet-based information system using the Merle language facility and other readily available components. Merle is a script language designed to support decentralized medical record information retrieval applications on the World Wide Web. The goal of this work is to provide a script language tool to facilitate construction of efficient, fully functional, multipoint medical record information systems that can be accessed anywhere by low-cost Web browsers to search, retrieve, and analyze patient information. The language allows legacy MUMPS applications to function in a Web environment and to make use of the Web graphical, sound, and video presentation services. It also permits downloading of script applets for execution on client browsers, and it can be used in standalone mode with the Unix, Windows 95, Windows NT, and OS/2 operating systems.
Education problems and Web-based teaching: how it impacts dental educators?
Clark, G T
2001-01-01
This article looks at six problems that vex educators and how web-based teaching might help solve them. These problems include: (1) limited access to educational content, (2) need for asynchronous access to educational content, (3) depth and diversity of educational content, (4) training in complex problem solving, (5) promotion of lifelong learning behaviors, and (6) achieving excellence in education. The advantages and disadvantages of web-based educational content for each problem are discussed. The article suggests that when a poorly organized course with inaccurate and irrelevant content is placed online, it solves no problems. However, some of the above issues can be partially or fully solved by hosting well-constructed teaching modules on the web. This article also reviews the literature investigating the efficacy of off-site education compared to that provided on-site. The conclusion of this review is that teleconference-based and web-based delivery of educational content can be as effective as traditional classroom-based teaching, assuming the technologic problems sometimes associated with delivering teaching content to off-site locations do not interfere with the learning process. A suggested hierarchy for rating and comparing e-learning concepts and methods is presented for consideration.
Hwang, Dennis
2016-06-01
Technology is changing the way health care is delivered and how patients are approaching their own health. Given the challenge within sleep medicine of optimizing adherence to continuous positive airway pressure (CPAP) therapy in patients with obstructive sleep apnea (OSA), implementation of telemedicine-based mechanisms is a critical component toward developing a comprehensive and cost-effective solution for OSA management. Key elements include the use of electronic messaging, remote monitoring, automated care mechanisms, and patient self-management platforms. Current practical sleep-related telemedicine platforms include Web-based educational programs, automated CPAP follow-up platforms that promote self-management, and peer-based patient-driven Internet support forums. Copyright © 2016 Elsevier Inc. All rights reserved.
Fish Ontology framework for taxonomy-based fish recognition
Ali, Najib M.; Khan, Haris A.; Then, Amy Y-Hui; Ving Ching, Chong; Gaur, Manas
2017-01-01
Life science ontologies play an important role in the Semantic Web. Given the diversity of fish species and the associated wealth of information, it is imperative to develop an ontology capable of linking and integrating this information in an automated fashion. As such, we introduce the Fish Ontology (FO), an automated classification architecture of existing fish taxa, which provides taxonomic information on unknown fish based on metadata restrictions. It is designed to support knowledge discovery, provide semantic annotation of fish and fisheries resources, data integration, and information retrieval. Automated classification of unknown specimens is a unique feature that currently does not appear to exist in other known ontologies. Examples of automated classification for major groups of fish are demonstrated, showing the information inferred by introducing several restrictions at the species or specimen level. The current version of FO has 1,830 classes, includes widely used fisheries terminology, and models major aspects of fish taxonomy, grouping, and character. With more than 30,000 known fish species globally, the FO will be an indispensable tool for fish scientists and other interested users. PMID:28929028
Automated Verification of Specifications with Typestates and Access Permissions
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.; Catano, Nestor
2011-01-01
We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).
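As a toy illustration of the exhaustive exploration idea (not the paper's tool or the Plural language), the sketch below enumerates every order-preserving interleaving of two abstract threads' actions and reports any action that is illegal in the current typestate:

```python
# Toy typestate protocol for a file-like resource: legal transitions only.
PROTOCOL = {("closed", "open"): "open",
            ("open", "read"): "open",
            ("open", "close"): "closed"}

def interleavings(a, b):
    """All order-preserving interleavings of two action sequences."""
    if not a:
        yield b
    elif not b:
        yield a
    else:
        for rest in interleavings(a[1:], b):
            yield (a[0],) + rest
        for rest in interleavings(a, b[1:]):
            yield (b[0],) + rest

def first_violation(trace, state="closed"):
    for action in trace:
        nxt = PROTOCOL.get((state, action))
        if nxt is None:
            return f"illegal {action!r} in typestate {state!r} along {trace}"
        state = nxt
    return None

for t in interleavings(("open", "read"), ("close",)):
    err = first_violation(t)
    if err:
        print(err)   # e.g. 'read' arriving after the other thread's 'close'
```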
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-29
...] Electronic Filing of Import Inspection Applications for Meat, Poultry, and Egg Products: Availability of..., and egg products through the Automated Commercial Environment (ACE). ACE is the Web-based portal for... products (21 U.S.C. 620, 466). The Egg Products Inspection Act (EPIA) (21 U.S.C. 1031 et seq.) prohibits...
Impact of Voluntary Accreditation on Deficiency Citations in U.S. Nursing Homes
ERIC Educational Resources Information Center
Wagner, Laura M.; McDonald, Shawna M.; Castle, Nicholas G.
2012-01-01
Purpose of the Study: This study examines the association between nursing home accreditation and deficiency citations. Design and Methods: Data originated from a web-based search of The Joint Commission (TJC) accreditation and On-line Survey Certification of Automated Records from 2002 to 2010. Deficiency citations were divided into 4 categories:…
Improving Critical Thinking Using Web Based Argument Mapping Exercises with Automated Feedback
ERIC Educational Resources Information Center
Butchart, Sam; Forster, Daniella; Gold, Ian; Bigelow, John; Korb, Kevin; Oppy, Graham; Serrenti, Alexandra
2009-01-01
In this paper we describe a simple software system that allows students to practise their critical thinking skills by constructing argument maps of natural language arguments. As the students construct their maps of an argument, the system provides automatic, real time feedback on their progress. We outline the background and theoretical framework…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-09
... waivers are met, as described in 7 CFR 210.17. The form is an intrinsic part of the accounting system... adequate recordkeeping. The FNS-13 form is provided to States through a web-based Federal reporting system... reporting burden hours as a result of automation and the advancement of State systems technology. The...
Web-based expert system for foundry pollution prevention
NASA Astrophysics Data System (ADS)
Moynihan, Gary P.
2004-02-01
Pollution prevention is a complex task. Many small foundries lack the in-house expertise to perform these tasks. Expert systems are a type of computer information system that incorporates artificial intelligence. As noted in the literature, they provide a means of automating specialized expertise. This approach may be further leveraged by implementing the expert system on the internet (or world-wide web). This will allow distribution of the expertise to a variety of geographically-dispersed foundries. The purpose of this research is to develop a prototype web-based expert system to support pollution prevention for the foundry industry. The prototype system identifies potential emissions for a specified process, and also provides recommendations for the prevention of these contaminants. The system is viewed as an initial step toward assisting the foundry industry in better meeting government pollution regulations, as well as improving operating efficiencies within these companies.
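A minimal sketch of the forward-chaining inference such an expert system performs; the process, emission, and recommendation rules below are illustrative placeholders, not the prototype's actual knowledge base:

```python
# Forward-chaining over illustrative foundry rules: process facts imply
# emissions, and emissions imply prevention recommendations.
RULES = [
    ({"process:green_sand_molding"}, "emission:particulate_matter"),
    ({"process:core_making", "binder:phenolic_urethane"}, "emission:VOCs"),
    ({"emission:VOCs"}, "recommend:switch_to_low_VOC_binder"),
    ({"emission:particulate_matter"}, "recommend:install_baghouse_filter"),
]

def forward_chain(facts):
    """Fire rules until no new emission or recommendation is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return {f for f in facts if f.startswith("recommend:")}

print(forward_chain({"process:core_making", "binder:phenolic_urethane"}))
# {'recommend:switch_to_low_VOC_binder'}
```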
Vandelanotte, Corneel; Duncan, Mitch J; Plotnikoff, Ronald C; Mummery, W Kerry
2012-01-01
Background In randomized controlled trials, participants cannot choose their preferred intervention delivery mode and thus might refuse to participate or not engage fully if assigned to a nonpreferred group. This might underestimate the true effectiveness of behavior-change interventions. Objective To examine whether receiving interventions either matched or mismatched with participants' preferred delivery mode would influence effectiveness of a Web-based physical activity intervention. Methods Adults (n = 863), recruited via email, were randomly assigned to one of three intervention delivery modes (text based, video based, or combined) and received fully automated, Internet-delivered personal advice about physical activity. Personalized intervention content, based on the theory of planned behavior and stages of change concept, was identical across groups. Online, self-assessed questionnaires measuring physical activity were completed at baseline, 1 week, and 1 month. Physical activity advice acceptability and website usability were assessed at 1 week. Before randomization, participants were asked which delivery mode they preferred, to categorize them as matched or mismatched. Time spent on the website was measured throughout the intervention. We applied intention-to-treat, repeated-measures analyses of covariance to assess group differences. Results Attrition was high (575/863, 66.6%), though equal between groups (t(863) = 1.31, P = .19). At 1-month follow-up, 93 participants were categorized as matched and 195 as mismatched. They preferred text mode (493/803, 61.4%) over combined (216/803, 26.9%) and video modes (94/803, 11.7%). After the intervention, 20% (26/132) of matched-group participants and 34% (96/282) in the mismatched group changed their delivery mode preference. Time effects were significant for all physical activity outcomes (total physical activity: F(2,801) = 5.07, P = .009; number of activity sessions: F(2,801) = 7.52, P < .001; walking: F(2,801) = 8.32, P < .001; moderate physical activity: F(2,801) = 9.53, P < .001; and vigorous physical activity: F(2,801) = 6.04, P = .002), indicating that physical activity increased over time for both matched and mismatched groups. Matched-group participants improved physical activity outcomes slightly more than those in the mismatched group, but interaction effects were not significant. Physical activity advice acceptability (content scale: t(368) = .10, P = .92; layout scale: t(368) = 1.53, P = .12) and website usability (layout scale: t(426) = .05, P = .96; ease of use scale: t(426) = .21, P = .83) were generally high and did not differ between the matched and mismatched groups. The only significant difference (t(621) = 2.16, P = .03) was in relation to total time spent on the website: the mismatched group spent significantly more time on the website (14.4 minutes) than the matched group (12.1 minutes). Conclusion Participants' preference regarding delivery mode may not significantly influence intervention outcomes. Consequently, allowing participants to choose their preferred delivery mode may not increase effectiveness of Web-based interventions. PMID:22377834
TermGenie - a web-application for pattern-based ontology class generation.
Dietze, Heiko; Berardini, Tanya Z; Foulger, Rebecca E; Hill, David P; Lomax, Jane; Osumi-Sutherland, David; Roncaglia, Paola; Mungall, Christopher J
2014-01-01
Biological ontologies are continually growing and improving from requests for new classes (terms) by biocurators. These ontology requests can frequently create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing these requests and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness and relationships to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and, in some cases, inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.
Bae, Jeongyee
2013-04-01
The purpose of this project was to develop an international web-based expert system, using principles of artificial intelligence and user-centered design, for the management of mental health by Korean emigrants. Anyone with Web access can use the system. Our design process utilized principles of user-centered design with 4 phases: needs assessment, analysis, design/development/testing, and application release. A survey was conducted with 3,235 Korean emigrants. Focus group interviews were also conducted. Survey and analysis results guided the design of the web-based expert system. With this system, anyone can check their mental health status by themselves using a personal computer. The system analyzes facts based on answers to automated questions, and suggests solutions accordingly. A history tracking mechanism enables monitoring and future analysis. In addition, this system will include intervention programs to promote mental health status. This system is interactive and accessible to anyone in the world. It is expected that this management system will contribute to Korean emigrants' mental health promotion and allow researchers and professionals to share information on mental health.
Queralt-Rosinach, Núria; Piñero, Janet; Bravo, Àlex; Sanz, Ferran; Furlong, Laura I
2016-07-15
DisGeNET-RDF makes available knowledge on the genetic basis of human diseases in the Semantic Web. Gene-disease associations (GDAs) and their provenance metadata are published as human-readable and machine-processable web resources. The information on GDAs included in DisGeNET-RDF is interlinked to other biomedical databases to support the development of bioinformatics approaches for translational research through evidence-based exploitation of rich and fully interconnected linked open data. http://rdf.disgenet.org/ support@disgenet.org. © The Author 2016. Published by Oxford University Press.
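A short sketch of querying the resource programmatically; the /sparql endpoint path and the generic triple pattern are assumptions for illustration, since the abstract gives only the base URL:

```python
# Query a SPARQL endpoint with SPARQLWrapper and print a few triples.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://rdf.disgenet.org/sparql/")  # assumed endpoint path
sparql.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```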
Automatic geospatial information Web service composition based on ontology interface matching
NASA Astrophysics Data System (ADS)
Xu, Xianbin; Wu, Qunyong; Wang, Qinmin
2008-10-01
With Web services technology, the functions of a WebGIS can be presented as a kind of geospatial information service, helping to overcome the isolation of information in the geospatial information sharing field. Geospatial information Web service composition, which combines outsourced services working in tandem to offer value-added services, therefore plays the key role in fully exploiting geospatial information services. This paper proposes an automatic geospatial information web service composition algorithm that employs the ontology dictionary WordNet to analyze semantic distances among service interfaces. By matching input/output parameters according to the semantic meaning of pairs of service interfaces, a geospatial information web service chain can be created from a number of candidate services. A practical application of the algorithm is also presented, and its results show the feasibility of this algorithm and its great promise for the emerging demand for geospatial information web service composition.
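A sketch of the WordNet-based interface matching step, assuming NLTK's WordNet corpus (installed via nltk.download('wordnet')) and an arbitrary 0.2 chaining threshold; the parameter names are illustrative:

```python
# Score how well one service's output parameter matches another's input
# using WordNet path similarity, so compatible services can be chained.
from nltk.corpus import wordnet as wn

def interface_similarity(output_param, input_param):
    """Best path similarity between any senses of the two parameter names."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(output_param)
              for s2 in wn.synsets(input_param)]
    return max(scores, default=0.0)

# Can a geocoding service's "coordinate" output feed a map service's
# "location" input?  Chain the services if similarity clears a threshold.
score = interface_similarity("coordinate", "location")
print(score, "-> chain services" if score > 0.2 else "-> no match")
```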
Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems.
Zerrouki, Taha; Balla, Amar
2017-04-01
Arabic diacritics are often omitted in Arabic script. This is a handicap for new learners reading Arabic, for text-to-speech conversion systems, and for reading and semantic analysis of Arabic texts. Automatic diacritization systems are the best solution to handle this issue, but such automation needs resources, namely diacritized texts, to train and evaluate such systems. In this paper, we describe our corpus of Arabic diacritized texts, called Tashkeela. It can be used as a linguistic resource for natural language processing tasks such as automatic diacritization systems, disambiguation mechanisms, and feature and data extraction. The corpus is freely available; it contains 75 million fully vocalized words drawn mainly from 97 books in classical and modern Arabic. The corpus was collected from manually vocalized texts using a web crawling process.
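A sketch of how such a corpus is typically turned into training pairs for a diacritization system: strip the Arabic diacritic code points (U+064B-U+0652) to produce the undiacritized input, keeping the vocalized text as the target:

```python
# Build (undiacritized input, vocalized target) pairs from corpus text.
import re

DIACRITICS = re.compile(r"[\u064B-\u0652]")   # tanween, vowels, shadda, sukun

def make_pair(vocalized):
    return DIACRITICS.sub("", vocalized), vocalized

undiacritized, target = make_pair("كَتَبَ")   # "he wrote", fully vocalized
print(undiacritized, "->", target)            # كتب -> كَتَبَ
```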
bpRNA: large-scale automated annotation and analysis of RNA secondary structure.
Danaee, Padideh; Rouches, Mason; Wiley, Michelle; Deng, Dezhong; Huang, Liang; Hendrix, David
2018-05-09
While RNA secondary structure prediction from sequence data has made remarkable progress, there is a need for improved strategies for annotating the features of RNA secondary structures. Here, we present bpRNA, a novel annotation tool capable of parsing RNA structures, including complex pseudoknot-containing RNAs, to yield an objective, precise, compact, unambiguous, easily interpretable description of all loops, stems, and pseudoknots, along with the positions, sequence, and flanking base pairs of each such structural feature. We also introduce several new informative representations of RNA structure types to improve structure visualization and interpretation. We have further used bpRNA to generate a web-accessible meta-database, 'bpRNA-1m', of over 100,000 single-molecule, known secondary structures; this is both more fully and accurately annotated and over 20 times larger than existing databases. We use a subset of the database with highly similar (≥90% identical) sequences filtered out to report on statistical trends in sequence, flanking base pairs, and length. Both the bpRNA method and the bpRNA-1m database will be valuable resources both for specific analysis of individual RNA molecules and for large-scale analyses such as updating RNA energy parameters for computational thermodynamic predictions, improving machine learning models for structure prediction, and benchmarking structure-prediction algorithms.
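A reduced sketch of the kind of structural parsing bpRNA performs, limited to plain dot-bracket input: a stack pairs the brackets, and consecutive nested pairs are grouped into stems; pseudoknots, bulges, and multiloops, which bpRNA also handles, are omitted:

```python
# Pair brackets with a stack, then group nested pairs into stems.
def parse_dot_bracket(db):
    stack, pairs = [], {}
    for i, c in enumerate(db):
        if c == "(":
            stack.append(i)
        elif c == ")":
            pairs[stack.pop()] = i
    stems, stem = [], []
    for j in sorted(pairs):
        # A stem continues while pairs nest immediately inside each other.
        if stem and not (j == stem[-1][0] + 1 and pairs[j] == stem[-1][1] - 1):
            stems.append(stem)
            stem = []
        stem.append((j, pairs[j]))
    if stem:
        stems.append(stem)
    return stems

# Hairpin with a 4-nt loop: one stem of three base pairs.
print(parse_dot_bracket("(((....)))"))   # [[(0, 9), (1, 8), (2, 7)]]
```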
On improving the communication between models and data.
Dietze, Michael C; Lebauer, David S; Kooper, Rob
2013-09-01
The potential for model-data synthesis is growing in importance as we enter an era of 'big data', greater connectivity and faster computation. Realizing this potential requires that the research community broaden its perspective about how and why they interact with models. Models can be viewed as scaffolds that allow data at different scales to inform each other through our understanding of underlying processes. Perceptions of relevance, accessibility and informatics are presented as the primary barriers to broader adoption of models by the community, while an inability to fully utilize the breadth of expertise and data from the community is a primary barrier to model improvement. Overall, we promote a community-based paradigm to model-data synthesis and highlight some of the tools and techniques that facilitate this approach. Scientific workflows address critical informatics issues in transparency, repeatability and automation, while intuitive, flexible web-based interfaces make running and visualizing models more accessible. Bayesian statistics provides powerful tools for assimilating a diversity of data types and for the analysis of uncertainty. Uncertainty analyses enable new measurements to target those processes most limiting our predictive ability. Moving forward, tools for information management and data assimilation need to be improved and made more accessible. © 2013 John Wiley & Sons Ltd.
Espie, Colin A.; Kyle, Simon D.; Williams, Chris; Ong, Jason C.; Douglas, Neil J.; Hames, Peter; Brown, June S.L.
2012-01-01
Study Objectives: The internet provides a pervasive milieu for healthcare delivery. The purpose of this study was to determine the effectiveness of a novel web-based cognitive behavioral therapy (CBT) course delivered by an automated virtual therapist, when compared with a credible placebo; an approach required because web products may be intrinsically engaging and vulnerable to placebo response. Design: Randomized, placebo-controlled trial comprising 3 arms: CBT, imagery relief therapy (IRT: placebo), treatment as usual (TAU). Setting: Online community of participants in the UK. Participants: One hundred sixty-four adults (120 F; mean age 49 y, range 18-78 y) meeting proposed DSM-5 criteria for Insomnia Disorder, randomly assigned to CBT (n = 55; 40 F), IRT placebo (n = 55; 42 F) or TAU (n = 54; 38 F). Interventions: CBT and IRT each comprised 6 online sessions delivered by an animated personal therapist, with automated web and email support. Participants also had access to a video library/back catalogue of session content and Wikipedia-style articles. Online CBT users had access to a moderated social network/community of users. TAU comprised no restrictions on usual care and access to an online sleep diary. Measurements and Results: Major assessments were at baseline, post-treatment, and follow-up 8 weeks post-treatment; outcomes were appraised by online sleep diaries and clinical status. On the primary endpoint of sleep efficiency (SE; total time asleep expressed as a percentage of the total time spent in bed), online CBT was associated with sustained improvement at post-treatment (+20%) relative to both TAU (+6%; d = 0.95) and IRT (+6%: d = 1.06), and at 8 weeks (+20%) relative to IRT (+7%: d = 1.00) and TAU (+9%: d = 0.69). These findings were mirrored across a range of sleep diary measures. Clinical benefits of CBT were evidenced by modest superiority over placebo on daytime outcomes (d = 0.23-0.37) and by substantially improved sleep-wake functioning on the Sleep Condition Indicator (range of d = 0.77-1.20). Three-quarters of CBT participants (76% [CBT] vs. 29% [IRT] and 18% [TAU]) completed treatment with SE > 80%, more than half (55% [CBT] vs. 17% [IRT] and 8% [TAU]) with SE > 85%, and over one-third (38% [CBT] vs. 6% [IRT] and 0% [TAU]) with SE > 90%; these improvements were largely maintained during follow-up. Conclusion: CBT delivered using a media-rich web application with automated support and a community forum is effective in improving the sleep and associated daytime functioning of adults with insomnia disorder. Clinical Trial Registration: ISRCTN 44615689. Citation: Espie CA; Kyle SD; Williams C; Ong JC; Douglas NJ; Hames P; Brown JSL. A randomized, placebo-controlled trial of online cognitive behavioral therapy for chronic insomnia disorder delivered via an automated media-rich web application. SLEEP 2012;35(6):769-781. PMID:22654196
A Fully Automated High-Throughput Zebrafish Behavioral Ototoxicity Assay.
Todd, Douglas W; Philip, Rohit C; Niihori, Maki; Ringle, Ryan A; Coyle, Kelsey R; Zehri, Sobia F; Zabala, Leanne; Mudery, Jordan A; Francis, Ross H; Rodriguez, Jeffrey J; Jacob, Abraham
2017-08-01
Zebrafish animal models lend themselves to behavioral assays that can facilitate rapid screening of ototoxic, otoprotective, and otoregenerative drugs. Structurally similar to human inner ear hair cells, the mechanosensory hair cells on their lateral line allow the zebrafish to sense water flow and orient head-to-current in a behavior called rheotaxis. This rheotaxis behavior deteriorates in a dose-dependent manner with increased exposure to the ototoxin cisplatin, thereby establishing itself as an excellent biomarker for anatomic damage to lateral line hair cells. Building on work by our group and others, we have developed a new, fully automated high-throughput behavioral assay system that uses automated image analysis techniques to quantify rheotaxis behavior. This novel system consists of a custom-designed swimming apparatus and an imaging system consisting of network-controlled Raspberry Pi microcomputers capturing infrared video. Automated analysis techniques detect individual zebrafish, compute their orientation, and quantify the rheotaxis behavior of a zebrafish test population, producing a powerful, high-throughput behavioral assay. Using our fully automated biological assay to test a standardized ototoxic dose of cisplatin against varying doses of compounds that protect or regenerate hair cells may facilitate rapid translation of candidate drugs into preclinical mammalian models of hearing loss.
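A sketch of the orientation measurement at the core of such an assay: the principal axis of a segmented fish silhouette is computed from second-order image moments, and a population rheotaxis score is the fraction of fish aligned with the flow. The alignment tolerance is an illustrative assumption, and moment-based axes are head/tail ambiguous (modulo pi), so a real pipeline would add head detection:

```python
# Principal-axis orientation from image moments, plus a population score.
import numpy as np

def blob_orientation(mask):
    """Principal-axis angle (radians) of a binary fish silhouette."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

def rheotaxis_fraction(masks, flow_angle=0.0, tol=np.deg2rad(30)):
    angles = np.array([blob_orientation(m) for m in masks])
    # Axis angles are modulo pi; wrap the difference into [-pi/2, pi/2).
    diff = np.abs((angles - flow_angle + np.pi / 2) % np.pi - np.pi / 2)
    return float((diff < tol).mean())

fish = np.zeros((20, 20), bool); fish[9:11, 2:18] = True   # horizontal blob
print(blob_orientation(fish))          # ~0.0 rad: aligned with the x axis
print(rheotaxis_fraction([fish]))      # 1.0
```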
Seahawk: moving beyond HTML in Web-based bioinformatics analysis.
Gordon, Paul M K; Sensen, Christoph W
2007-06-18
Traditional HTML interfaces for input to and output from Bioinformatics analysis on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in Bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analysis. We have developed a program (Seahawk) that allows biologists to intuitively and seamlessly chain together Web Services using a data-centric, rather than the customary service-centric approach. The approach is illustrated with a ferredoxin mutation analysis. Seahawk concentrates on lowering entry barriers for biologists: no prior knowledge of the data ontology, or relevant services is required. In stark contrast to other MOBY-S clients, in Seahawk users simply load Web pages and text files they already work with. Underlying the familiar Web-browser interaction is an XML data engine based on extensible XSLT style sheets, regular expressions, and XPath statements which import existing user data into the MOBY-S format. As an easily accessible applet, Seahawk moves beyond standard Web browser interaction, providing mechanisms for the biologist to concentrate on the analytical task rather than on the technical details of data formats and Web forms. As the MOBY-S protocol nears a 1.0 specification, we expect more biologists to adopt these new semantic-oriented ways of doing Web-based analysis, which empower them to do more complicated, ad hoc analysis workflow creation without the assistance of a programmer.
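A sketch of the data-centric import idea described above, assuming lxml for the XPath step and a GenBank-style accession regex as the illustrative pattern (Seahawk's actual engine uses XSLT style sheets and the MOBY-S ontology):

```python
# Lift identifiers out of a Web page the user already works with,
# turning free text into typed records a service registry can consume.
import re
from lxml import html

page = "<html><body><p>See sequence AB123456 and NM_001256.</p></body></html>"
text = " ".join(html.fromstring(page).xpath("//text()"))

ACCESSION = re.compile(r"\b(?:[A-Z]{2}\d{6}|NM_\d+)\b")  # illustrative pattern
records = [{"namespace": "accession", "id": m} for m in ACCESSION.findall(text)]
print(records)   # typed objects that could now be offered to matching services
```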
NASA Astrophysics Data System (ADS)
Nadeem, Syed Ahmed; Hoffman, Eric A.; Sieren, Jered P.; Saha, Punam K.
2018-03-01
Numerous large multi-center studies are incorporating the use of computed tomography (CT)-based characterization of the lung parenchyma and bronchial tree to understand chronic obstructive pulmonary disease status and progression. To the best of our knowledge, there are no fully automated airway tree segmentation methods free of the need for user review. A failure in even a fraction of segmentation results necessitates manual revision of all segmentation masks, which is laborious considering the thousands of image data sets evaluated in large studies. In this paper, we present a novel CT-based airway tree segmentation algorithm using topological leakage detection and freeze-and-grow propagation. The method is fully automated, requiring no manual inputs or post-segmentation editing. It uses simple intensity-based connectivity and a freeze-and-grow propagation algorithm to iteratively grow the airway tree starting from an initial seed inside the trachea. It begins with a conservative parameter and then gradually shifts toward more generous parameter values. The method was applied to chest CT scans of fifteen subjects at total lung capacity. Airway segmentation results were qualitatively assessed and performed comparably to an established airway segmentation method, with no major visual leakages.
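A cartoon of the freeze-and-grow idea follows: grow a seeded, intensity-connected component while relaxing the threshold, and freeze the previous result when the volume jumps abruptly, a crude stand-in for the paper's topological leakage detection. The HU thresholds, step size, and 2x jump rule are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy import ndimage

def freeze_and_grow(ct, seed, t_start=-1000, t_stop=-800, step=25, jump=2.0):
    """Toy freeze-and-grow: relax an upper HU threshold; if the seeded
    component's volume grows by more than `jump`x in one step, treat it as
    leakage into the lung and return the last frozen (pre-leak) mask."""
    frozen, prev_vol = None, 0
    for t in range(t_start, t_stop + 1, step):
        mask = ct <= t                          # air-like voxels at this threshold
        labels, _ = ndimage.label(mask)
        region = labels == labels[seed]         # component containing the seed
        vol = int(region.sum())
        if prev_vol and vol > jump * prev_vol:  # sudden volume jump => leak
            break
        frozen, prev_vol = region, vol          # freeze this result and continue
    return frozen

ct = np.random.randint(-1024, 200, (64, 64, 64))  # synthetic HU volume
ct[30:34, 30:34, :] = -1000                        # fake "airway" tube (seed inside)
print(freeze_and_grow(ct, (32, 32, 32)).sum())
```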
Methodological Approaches to Online Scoring of Essays.
ERIC Educational Resources Information Center
Chung, Gregory K. W. K.; O'Neil, Harold F., Jr.
This report examines the feasibility of scoring essays using computer-based techniques. Essays have been incorporated into many standardized testing programs. Issues of validity and reliability must be addressed before automated scoring approaches can be fully deployed. Two approaches that have been used to classify documents, surface- and word-based…
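For context, a word-based approach of the kind alluded to above can be prototyped in a few lines: represent essays as TF-IDF word vectors and fit a classifier against human ratings. This is a generic stand-in, not the report's method; the toy essays and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: essays paired with human ratings (1=proficient, 0=not).
essays = ["The experiment clearly supports the stated hypothesis.",
          "stuff happened and it was good",
          "The evidence is weighed carefully before a conclusion is drawn.",
          "i dunno it was fine i guess"]
ratings = [1, 0, 1, 0]

scorer = make_pipeline(TfidfVectorizer(), LogisticRegression())
scorer.fit(essays, ratings)                       # learn word-based scoring
print(scorer.predict(["The data strongly supports the claim."]))
```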
Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2006-09-01
This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frames was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction, was less robust, but showed high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.
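The dynamic programming component can be pictured as a minimal-cost path search through a cost image built from intensity and motion features. The sketch below finds such a path with a one-pixel-per-row smoothness constraint; the cost matrix and the constraint are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def min_cost_path(cost):
    """Dynamic programming: minimal-cost top-to-bottom path through a cost
    image, allowing the column to shift by at most one pixel per row."""
    rows, cols = cost.shape
    acc = cost.copy()                       # accumulated cost table
    back = np.zeros((rows, cols), dtype=int)
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(c - 1, 0), min(c + 2, cols)
            j = int(np.argmin(acc[r - 1, lo:hi])) + lo
            acc[r, c] += acc[r - 1, j]      # extend cheapest predecessor
            back[r, c] = j
    path = [int(np.argmin(acc[-1]))]        # cheapest end column
    for r in range(rows - 1, 0, -1):
        path.append(int(back[r, path[-1]])) # trace predecessors upward
    return path[::-1]

cost = np.random.rand(50, 40)               # stand-in intensity/motion cost image
print(min_cost_path(cost)[:5])              # first few border column positions
```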
Validation of automated white matter hyperintensity segmentation.
Smart, Sean D; Firbank, Michael J; O'Brien, John T
2011-01-01
Introduction. White matter hyperintensities (WMHs) are a common finding on MRI scans of older people and are associated with vascular disease. We compared 3 methods for automatically segmenting WMHs from MRI scans. Method. An operator manually segmented WMHs on MRI images from a 3T scanner. The scans were also segmented in a fully automated fashion by three different programmes. The voxel overlap between manual and automated segmentation was compared. Results. The between-observer overlap ratio was 63%. Using our previously described in-house software, we had an overlap of 62.2%. We investigated the use of a modified version of SPM segmentation; however, this was not successful, with only 14% overlap. Discussion. Using our previously reported software, we demonstrated good segmentation of WMHs in a fully automated fashion.
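The voxel-overlap comparison can be made concrete with a similarity index; the sketch below computes the Dice coefficient, a common choice for such studies, though the abstract does not state which overlap measure was used.

```python
import numpy as np

def dice(a, b):
    """Voxel overlap (Dice similarity) between two binary masks:
    2*|A & B| / (|A| + |B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.random.rand(64, 64, 40) > 0.9     # stand-in manual WMH mask
auto = np.random.rand(64, 64, 40) > 0.9       # stand-in automated WMH mask
print(f"overlap: {100 * dice(manual, auto):.1f}%")
```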
Automated retinofugal visual pathway reconstruction with multi-shell HARDI and FOD-based analysis.
Kammen, Alexandra; Law, Meng; Tjan, Bosco S; Toga, Arthur W; Shi, Yonggang
2016-01-15
Diffusion MRI tractography provides a non-invasive modality to examine the human retinofugal projection, which consists of the optic nerves, optic chiasm, optic tracts, the lateral geniculate nuclei (LGN) and the optic radiations. However, the pathway has several anatomic features that make it particularly challenging to study with tractography, including its location near blood vessels and the bone-air interface at the base of the cerebrum, crossing fibers at the chiasm, a somewhat tortuous course around the temporal horn via Meyer's loop, and multiple closely neighboring fiber bundles. To date, these unique complexities of the visual pathway have impeded the development of a robust and automated reconstruction method using tractography. To overcome these challenges, we develop a novel, fully automated system to reconstruct the retinofugal visual pathway from high-resolution diffusion imaging data. Using multi-shell, high angular resolution diffusion imaging (HARDI) data, we reconstruct precise fiber orientation distributions (FODs) with high order spherical harmonics (SPHARM) to resolve fiber crossings, which allows the tractography algorithm to successfully navigate the complicated anatomy surrounding the retinofugal pathway. We also develop automated algorithms for the identification of ROIs used for fiber bundle reconstruction. In particular, we develop a novel approach to extract the LGN region of interest (ROI) based on intrinsic shape analysis of a fiber bundle computed from a seed region at the optic chiasm to a target at the primary visual cortex. By combining automatically identified ROIs and FOD-based tractography, we obtain a fully automated system to compute the main components of the retinofugal pathway, including the optic tract and the optic radiation. We apply our method to the multi-shell HARDI data of 215 subjects from the Human Connectome Project (HCP). Through comparisons with post-mortem dissection measurements, we demonstrate the retinotopic organization of the optic radiation, including a successful reconstruction of Meyer's loop. Then, using the reconstructed optic radiation bundle from the HCP cohort, we construct a probabilistic atlas and demonstrate its consistency with a post-mortem atlas. Finally, we generate a shape-based representation of the optic radiation for morphometry analysis. Copyright © 2015 Elsevier Inc. All rights reserved.
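To sketch the FOD step: HARDI signals are commonly fit with a real, even-order spherical harmonic basis by least squares. The snippet below shows that generic pattern with SciPy; the basis convention, order 8, and random gradient directions are assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.special import sph_harm  # newer SciPy releases rename this sph_harm_y

def sh_basis(order, theta, phi):
    """Real, even-order spherical harmonic basis, a common choice for
    antipodally symmetric FODs (assumed convention, not the paper's)."""
    cols = []
    for n in range(0, order + 1, 2):          # even degrees only
        for m in range(-n, n + 1):
            y = sph_harm(abs(m), n, theta, phi)
            if m < 0:
                cols.append(np.sqrt(2) * y.imag)
            elif m == 0:
                cols.append(y.real)
            else:
                cols.append(np.sqrt(2) * y.real)
    return np.stack(cols, axis=-1)

# One voxel's diffusion signal sampled on 60 gradient directions
# (theta: azimuth, phi: polar angle), fit by linear least squares.
theta = np.random.uniform(0, 2 * np.pi, 60)
phi = np.arccos(np.random.uniform(-1, 1, 60))
signal = np.random.rand(60)
B = sh_basis(8, theta, phi)                   # order-8 SPHARM, typical in HARDI work
coeffs, *_ = np.linalg.lstsq(B, signal, rcond=None)
print(coeffs.shape)                           # (45,) coefficients for order 8
```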
Dera, Dimah; Bouaynaya, Nidhal; Fathallah-Shaykh, Hassan M
2016-07-01
We address the problem of fully automated region discovery and robust image segmentation by devising a new deformable model based on the level set method (LSM) and probabilistic nonnegative matrix factorization (NMF). We describe the use of NMF to calculate the number of distinct regions in the image and to derive the local distribution of the regions, which is incorporated into the energy functional of the LSM. The results demonstrate that our NMF-LSM method is superior to other approaches when applied to synthetic binary and gray-scale images and to clinical magnetic resonance images (MRI) of the human brain with and without a malignant brain tumor, glioblastoma multiforme. In particular, the NMF-LSM method is fully automated, highly accurate, less sensitive to the initial selection of the contour(s) or initial conditions, more robust to noise and model parameters, and able to detect distinct regions as small as desired. These advantages stem from the fact that the proposed method relies on histogram information instead of intensity values and does not introduce nuisance model parameters. These properties provide a general approach for automated robust region discovery and segmentation in heterogeneous images. Compared with the retrospective radiological diagnoses of two patients with non-enhancing grade 2 and 3 oligodendroglioma, the NMF-LSM detects earlier progression times and appears suitable for monitoring tumor response. The NMF-LSM method fills an important need for automated segmentation of clinical MRI.
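A toy version of the histogram-driven NMF step: collect local gray-level histograms, factorize them at several candidate ranks, and read off the number of regions from the reconstruction-error elbow. Patch size, candidate ranks, and the elbow heuristic are illustrative assumptions, a simplification of the paper's formulation.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (128, 128))      # synthetic gray-scale image

# Tile the image into 16x16 = 256 patches of 32x32 pixels each,
# then build one gray-level histogram per patch (rows of H).
patches = image.reshape(4, 32, 4, 32).swapaxes(1, 2).reshape(-1, 32 * 32)
H = np.stack([np.bincount(p, minlength=256) for p in patches]).astype(float)

for k in (2, 3, 4, 5):                        # candidate numbers of regions
    model = NMF(n_components=k, init="nndsvda", max_iter=500)
    W = model.fit_transform(H)                # patch-to-region weights
    print(k, round(model.reconstruction_err_, 1))  # pick k at the error "elbow"
```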
LiFi based automated shopping assistance application in IoT
NASA Astrophysics Data System (ADS)
Akter, Sharmin; Olanrewaju, Rashidah Funke; Islam, Thouhedul; Salma
2018-05-01
Urban people minimize shopping time in daily life due to time constraints. From that point of view, the supermarket concept has become popular because consumers can buy different items in the same place. However, a customer may spend hours finding desired items in a large supermarket and must also queue to pay at the counter, which is likewise time consuming. As a result, a customer can spend 2-3 hours shopping in a large superstore. This paper proposes an Internet of Things and Li-Fi based automated application for smart phone and web that helps customers find items easily during shopping, saving consumers' time as well as reducing manpower requirements in the supermarket.
Paul, Lorna; Coulter, Elaine H; Miller, Linda; McFadyen, Angus; Dorfman, Joe; Mattison, Paul George G
2014-09-01
To explore the effectiveness and participant experience of web-based physiotherapy for people moderately affected by Multiple Sclerosis (MS) and to provide data to establish the sample size required for a fully powered, definitive randomized controlled study. A randomized controlled pilot study. Rehabilitation centre and participants' homes. Thirty community-dwelling adults moderately affected by MS (Expanded Disability Status Scale 5-6.5). Twelve weeks of individualised web-based physiotherapy completed twice per week, or usual care (control). Online exercise diaries were monitored; participants were telephoned weekly by the physiotherapist and exercise programmes were altered remotely by the physiotherapist as required. The following outcomes were completed at baseline and after 12 weeks: 25 Foot Walk, Berg Balance Scale, Timed Up and Go, Multiple Sclerosis Impact Scale, Leeds MS Quality of Life Scale, MS-Related Symptom Checklist and Hospital Anxiety and Depression Scale. The intervention group also completed a website evaluation questionnaire and interviews. Participants reported that the website was easy to use, convenient and motivating, and that they would be happy to use it in the future. There was no statistically significant difference in the primary outcome measure, the timed 25 Foot Walk, in the intervention group (P=0.170), or in the other secondary outcome measures, except the Multiple Sclerosis Impact Scale (P=0.048). Effect sizes were generally small to moderate. People with MS were very positive about web-based physiotherapy. The results suggested that 80 participants, 40 in each group, would be sufficient for a fully powered, definitive randomized controlled trial. © The Author(s) 2014.
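For readers wanting to reproduce the sample-size reasoning, a standard two-sample power calculation is sketched below with statsmodels. The effect size is an assumed Cohen's d (the paper reports only the resulting group size), so the output only roughly matches the suggested 40 per group.

```python
from statsmodels.stats.power import TTestIndPower

# Two-sample t-test power calculation of the kind used to size a
# definitive trial. effect_size=0.65 is an assumption for illustration.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.65,          # assumed standardized (Cohen's d) effect
    alpha=0.05,                # two-sided significance level
    power=0.80,                # desired power
    alternative="two-sided",
)
print(round(n_per_group))      # about 38 per group under these assumptions
```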
Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard
2018-04-01
To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using a deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high-resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirably smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set to compare with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance, with segmentation accuracy superior to most state-of-the-art methods on the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
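As a schematic of the pixel-wise multi-class classification stage, the sketch below defines a toy encoder-decoder in PyTorch that produces per-pixel class scores. It is far smaller than SegNet (no pooling-index unpooling, no pretrained encoder), and the channel sizes and class count are placeholders.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy encoder-decoder for per-pixel multi-class tissue labels."""
    def __init__(self, in_ch=1, n_classes=5):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # downsample to 1/2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # downsample to 1/4
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(16, n_classes, 3, padding=1),  # per-pixel class scores
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = TinySegNet()
scores = model(torch.randn(1, 1, 64, 64))    # (1, n_classes, 64, 64)
labels = scores.argmax(dim=1)                # per-pixel tissue label map
```

In the paper's pipeline, a label map like this would then be refined by the 3D simplex deformable model to enforce shape and surface smoothness.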
Classification of Automated Search Traffic
NASA Astrophysics Data System (ADS)
Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.
As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third-party systems interact with search engines for a variety of reasons, such as monitoring a web site's rank, augmenting online games, or possibly maliciously altering click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information and those generated by automated processes. We categorize these features into two classes: either an interpretation of the physical model of human interactions, or behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is then developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.
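The pipeline can be caricatured in a few lines of scikit-learn: train a multiclass classifier on a small labeled pool, then repeatedly request labels for the lowest-margin sessions. The feature columns, class names, and batch sizes are invented for illustration; the paper's actual features and learner differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.random((500, 3))              # toy features: [query rate, click rate, entropy]
y = rng.integers(0, 3, 500)           # oracle labels: 0=human, 1=bot, 2=scraper

labeled = list(range(30))             # small initial labeled pool
for _ in range(5):                    # active-learning rounds
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = np.sort(clf.predict_proba(X), axis=1)
    margin = probs[:, -1] - probs[:, -2]          # top-two class margin
    candidates = [i for i in np.argsort(margin) if i not in labeled]
    labeled.extend(candidates[:10])   # ask the oracle to label the most uncertain

print(clf.score(X, y))                # resubstitution accuracy of the toy model
```

Uncertainty (margin) sampling is only one possible criterion; the paper's active learner also tries to surface previously unseen traffic classes, which this sketch does not attempt.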
STITCHER 2.0: primer design for overlapping PCR applications.
O'Halloran, Damien M; Uriagereka-Herburger, Isabel; Bode, Katrin
2017-03-30
Overlapping polymerase chain reaction (PCR) is a common technique used by researchers in very diverse fields that enables the user to 'stitch' individual pieces of DNA together. Previously, we reported a web-based tool called STITCHER that provides a platform for researchers to automate the design of primers for overlapping PCR applications. Here we present STITCHER 2.0, which represents a substantial update to STITCHER. STITCHER 2.0 is a newly designed web tool that automates the design of primers for overlapping PCR. Unlike STITCHER, STITCHER 2.0 considers diverse algorithmic parameters, and returns multiple result files that include a facility for the user to draw their own primers as well as comprehensive visual guides to the user's input, output, and designed primers. These result files provide greater control and insight during experimental design and troubleshooting. STITCHER 2.0 is freely available to all users without signup or login requirements and can be accessed at the following webpage: www.ohalloranlab.net/STITCHER2.html.
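A flavor of the underlying primer arithmetic (not STITCHER's actual algorithm): take the junction sequence shared by two fragments, build a primer that spans it, and sanity-check the overlap's melting temperature with the simple Wallace rule. The fragments, overlap length, and primer lengths are toy values.

```python
def wallace_tm(seq):
    """Tm (°C) ~ 2*(A+T) + 4*(G+C): the classic Wallace rule for short primers."""
    s = seq.upper()
    return 2 * (s.count("A") + s.count("T")) + 4 * (s.count("G") + s.count("C"))

def revcomp(seq):
    """Reverse complement, for primers annealing to the opposite strand."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

# Two toy fragments that share a 9-nt junction ("GGATCCAGT").
frag_a = "ATGGCTAGCTTGACGGATCCAGT"
frag_b = "GGATCCAGTTTACCGGTAAGCTT"
junction = frag_a[-9:]                    # the shared overlap between fragments
stitch_fwd = junction + frag_b[9:24]      # forward primer spanning the junction
rev_a = revcomp(frag_a[-20:])             # reverse primer for the upstream fragment
print(stitch_fwd, rev_a, wallace_tm(junction))  # junction Tm = 28 by this rule
```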
Implementation and Challenges of Direct Acoustic Dosing into Cell-Based Assays.
Roberts, Karen; Callis, Rowena; Ikeda, Tim; Paunovic, Amalia; Simpson, Carly; Tang, Eric; Turton, Nick; Walker, Graeme
2016-02-01
Since the adoption of Labcyte Echo Acoustic Droplet Ejection (ADE) technology by AstraZeneca in 2005, ADE has become the preferred method for compound dosing into both biochemical and cell-based assays across AstraZeneca research and development globally. The initial implementation of Echos and the direct dosing workflow provided AstraZeneca with a unique set of challenges. In this article, we outline how direct Echo dosing has evolved over the past decade in AstraZeneca. We describe the practical challenges of applying ADE technology to 96-well, 384-well, and 1536-well assays and how AstraZeneca developed and applied software and robotic solutions to generate fully automated and effective cell-based assay workflows. © 2015 Society for Laboratory Automation and Screening.
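The dosing arithmetic behind such workflows is straightforward to sketch. In the toy calculation below, the 2.5 nL droplet volume, well volume, and concentrations are assumptions for illustration, not quoted specifications from the article.

```python
# How many acoustic droplets of compound stock are needed to reach a
# target assay concentration in one well?
droplet_nl = 2.5       # assumed volume per acoustic droplet (nL)
stock_mM = 10.0        # compound stock concentration (mM)
well_ul = 40.0         # assumed assay volume per well (µL)
target_uM = 1.0        # desired final concentration (µM)

# µM * µL / mM conveniently yields nL of stock required.
needed_nl = target_uM * well_ul / stock_mM
n_droplets = round(needed_nl / droplet_nl)
print(needed_nl, n_droplets)   # 4.0 nL -> 2 droplets after rounding
```

In practice the transfer volume is quantized to whole droplets, so dosing software rounds the droplet count and reports the resulting concentration error.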
2002-06-01
[Report fragment: table-of-contents and figure text from a study of the SWORD database web application. In the current version, the Web Server is on the same server as the SWORD database, with dynamic HTML pages generated from SQL result sets. The prototype could still be supported by Access, but SQL Server would be a more viable tool for a fully developed application based on the number of potential users and …]